
Garbage in, SQL out

Introduction

Enterprises are rapidly exploring how to integrate Generative AI (GenAI) into core operations, with large language models (LLMs) at the center of this shift. One area gaining traction is using LLMs to make data more accessible to non-technical users.

Business users often need on-demand access to data for tasks like tracking marketing campaign performance, analyzing sales by region or customer segment, or monitoring operational KPIs. Enabling this self-service access allows enterprises to make faster, more informed decisions without relying on IT or data teams. This democratization of data accelerates productivity, enhances operational efficiency, and enables personalized customer experiences. By empowering users across departments to explore and analyze data independently, organizations foster collaboration, improve transparency, and build a truly data-driven culture.

There are off-the-shelf solutions that convert natural language into executable queries, such as Databricks AI/BI Genie and Snowflake Cortex. Enterprises can also choose to build custom text-to-SQL solutions. Both approaches reduce the dependency on IT and data teams by allowing users to ask data questions directly in natural language.

To be effective, these systems must go beyond language understanding:

  • Data context: Interpret metadata, table relationships, and naming conventions.
  • Query accuracy: Generate valid SQL that reflects user intent and executes correctly.
  • Performance: Ensure queries are efficient and scalable across large datasets.
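The first of these requirements, data context, comes down to what the model actually sees in its prompt. As a minimal sketch (the schema dictionary, table names, and prompt wording below are illustrative, not any product's API), supplying metadata and relationships might look like this:

```python
# Minimal sketch of giving an LLM the data context it needs for Text-to-SQL.
# The schema structure and prompt wording are hypothetical examples.

def build_context_prompt(question: str, tables: dict) -> str:
    """Assemble schema metadata into a prompt so the model sees table
    names, column types, and relationships, not just the question."""
    lines = ["You are a SQL assistant. Use only the tables below.", ""]
    for name, info in tables.items():
        cols = ", ".join(f"{c} {t}" for c, t in info["columns"].items())
        lines.append(f"TABLE {name} ({cols})")
        for col, target in info.get("foreign_keys", []):
            lines.append(f"  -- {name}.{col} references {target}")
    lines += ["", f"Question: {question}", "Answer with a single SQL query."]
    return "\n".join(lines)

schema = {
    "sales": {"columns": {"id": "INT", "region_id": "INT", "amount": "DECIMAL"},
              "foreign_keys": [("region_id", "regions.id")]},
    "regions": {"columns": {"id": "INT", "name": "STRING"}},
}
prompt = build_context_prompt("Total sales by region?", schema)
```

The same skeleton can be enriched with naming-convention notes or business definitions as the data layer matures.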

While custom text-to-SQL solutions require effort to incorporate some of these features, enterprises still choose them for the control and transparency they offer over key aspects such as LLM governance, data transfer, security, and privacy.

When building these custom solutions, we set out to understand how data quality issues affect them.

Business leaders recognize that poor data quality – missing values, inconsistent formats, and outdated dictionaries – can undermine any analytics initiative. The key question: Can LLMs generate useful, accurate SQL when the underlying data is messy or poorly documented? Or does the “Garbage In, Garbage Out” principle hold, even for state-of-the-art GenAI?

We investigate whether large language models (LLMs) can still generate useful, accurate SQL queries when the underlying data lacks structure or documentation. In other words, can LLMs infer enough context from imperfect data to support effective self-service analytics?

Methodology

To investigate whether garbage in really does entail garbage out, or whether we can engineer our way to an efficient SQL solution with properly contextualized LLMs, we evaluated three scenarios, each representing a different level of data readiness:

  • Bronze (Raw, minimally processed data): Poor metadata, little to no documentation.
  • Silver (Cleansed, validated data): Improved structure and metadata.
  • Gold (Curated, analysis-ready data): Optimized for AI and BI, with rich context.

For each scenario, we assessed:

  • Data awareness: How much schema/business context is provided to the LLM.
  • Prompt engineering: Use of few-shot examples and business logic.
  • Latency: Time to generate and execute queries.
  • Accuracy: Percentage of correct, business-intent-aligned SQL queries.

To facilitate this comparison, we designed and implemented a custom architecture that served as our baseline. This architecture provided the flexibility to control the amount of information supplied to the LLM. In the following sections, we will outline the key components of our architecture.

Architecture

Our architecture integrates several components that collectively support the overall workflow. The process begins with a user query, which enters the Text-to-SQL conversion phase. The generated SQL statements are then executed against a database, and the correctness of their results serves as our primary performance measure.

Since SQL queries typically return tabular, numerical results, we incorporated an additional LLM to contextualize and explain the output. This “Explainer” model takes the initial user query along with the SQL results and generates a meaningful response. The final output is delivered to the user interface (UI) via a Databricks Serving Endpoint. To evaluate the output, we use MLflow evaluation (`mlflow.evaluate`).
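The two-model workflow described above can be sketched as follows. The `llm` callables stand in for real model endpoints (e.g. a Databricks serving endpoint); all names and the stub models are hypothetical, shown only to make the flow concrete and runnable:

```python
# Skeleton of the Text-to-SQL + "Explainer" pipeline, with stub models.
import sqlite3

def text_to_sql(question: str, schema_hint: str, llm) -> str:
    return llm(f"Schema: {schema_hint}\nWrite SQL for: {question}")

def explain_results(question: str, rows, llm) -> str:
    return llm(f"Question: {question}\nSQL results: {rows}\n"
               "Summarize for a business user.")

def answer(question, schema_hint, conn, sql_llm, explainer_llm):
    sql = text_to_sql(question, schema_hint, sql_llm)
    rows = conn.execute(sql).fetchall()  # run generated SQL on the database
    return sql, explain_results(question, rows, explainer_llm)

# Stub models and data so the flow runs without a serving endpoint.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kpis (metric TEXT, value INT)")
conn.execute("INSERT INTO kpis VALUES ('signups', 42)")
sql_llm = lambda prompt: "SELECT value FROM kpis WHERE metric = 'signups'"
explainer_llm = lambda prompt: ("Summary: "
                                + prompt.split("SQL results: ")[1].split("\n")[0])

sql, summary = answer("How many signups?", "kpis(metric, value)",
                      conn, sql_llm, explainer_llm)
```

In the real system, both stubs would be replaced by calls to served models, and the summary string is what reaches the UI.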

Test Scenarios

We iterated through multiple stages, organized into three key scenarios:

  1. Using Bronze Tables
  2. Using Bronze Tables with Advanced Prompting
  3. Using Gold Layer Tables with minimal prompting: leveraging streamlined, engineered tables built from the streaming datasets

Each scenario varied the input quality to study the impact on the output, i.e., the generated SQL statement.

The first scenario uses raw bronze tables with minimal data awareness and business context. While this requires little upfront effort, it results in high latency and low accuracy, making it a poor choice for production use.

The second approach still relies on bronze tables but adds advanced prompting. When only surface-level metadata is used, with no few-shot examples, accuracy remains low despite the high effort. However, when the prompt includes rich metadata, few-shot examples, and embedded business logic, accuracy improves significantly to 80%, though latency remains high (over 45 seconds).
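To make the "advanced prompting" variant concrete, here is a sketch of a prompt builder that layers richer metadata, few-shot question/SQL pairs, and embedded business rules. The table names, rules, and example queries are hypothetical, not taken from our actual test set:

```python
# Illustrative few-shot Text-to-SQL prompt with embedded business logic.

FEW_SHOTS = [
    ("Top 5 customers by spend?",
     "SELECT customer_id, SUM(amount) AS spend FROM orders "
     "GROUP BY customer_id ORDER BY spend DESC LIMIT 5"),
    ("How many active orders?",
     "SELECT COUNT(*) FROM orders WHERE status <> 'cancelled'"),
]

BUSINESS_RULES = [
    "'Revenue' means SUM(orders.amount) excluding status = 'cancelled'.",
    "Fiscal quarters start in February.",
]

def build_few_shot_prompt(question: str, metadata: str) -> str:
    parts = ["Schema:", metadata, "", "Business rules:"]
    parts += [f"- {rule}" for rule in BUSINESS_RULES]
    parts.append("\nExamples:")
    for q, sql in FEW_SHOTS:
        parts += [f"Q: {q}", f"SQL: {sql}"]
    parts += [f"Q: {question}", "SQL:"]
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Monthly revenue by region?",
    "orders(customer_id, region, amount, status, order_date)")
```

Each added layer (rules, examples) grows the prompt, which is one reason this variant improves accuracy at the cost of latency.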

The final and most efficient setup uses well-structured gold tables with minimal prompting. By leveraging curated data and embedding only light business logic in the prompt, this approach achieves high accuracy (95%) with low latency (~10 seconds) and reduced engineering effort.

 

Key Takeaways

  • Data quality is king: Even the most advanced LLMs cannot fully compensate for missing or inconsistent metadata. Clean, curated data (gold layer) is essential for reliable, high-accuracy Text-to-SQL solutions.
  • Prompt engineering helps, but has limits: More examples and richer context improve performance, but add latency and complexity. There’s a point of diminishing returns if the underlying data remains poor.
  • Semantic layers and ontologies matter: Solutions that enrich schema context, whether via semantic models, ontologies, or curated dictionaries, consistently outperform those relying on raw data alone.
  • Performance trade-offs are real: Efforts to boost accuracy (via prompt engineering or metadata enrichment) can increase latency and operational complexity. Striking the right balance is crucial for production use.

“Garbage In, Garbage Out” still applies, even in the GenAI era. LLMs can abstract some complexity, but they are not a substitute for clean, well-structured data.

Generative AI is transforming how businesses innovate, automate, and create value. Whether you’re exploring use cases or scaling production-ready solutions, our GenAI consultants are here to guide you every step of the way. Reach out to our experts to learn how we can help you design, implement, and optimize GenAI solutions tailored to your unique needs. Let’s build the future, together.

