Schema on the Inside: A Two-Phase Fine-Tuning Method for High-Efficiency Text-to-SQL at Scale
DOI:
https://doi.org/10.1609/aaai.v40i47.41446
Abstract
Applying large, proprietary API-based language models to text-to-SQL tasks poses a significant industry challenge: reliance on massive, schema-heavy prompts results in prohibitive per-token API costs and high latency, hindering scalable production deployment. We present a specialized, self-hosted 8B-parameter model designed for a conversational bot in CriQ, a sister app to Dream11—India’s largest fantasy sports platform with over 250 million users—that answers user queries about cricket statistics. Our novel two-phase supervised fine-tuning approach enables the model to internalize the entire database schema, eliminating the need for long-context prompts. This reduces input tokens by over 99%, from a 17k-token baseline to fewer than 100, and replaces costly external API calls with efficient local inference. The resulting system achieves 98.4% execution success and 92.5% semantic accuracy, substantially outperforming a prompt-engineered baseline using Google’s Gemini Flash 2.0 (95.6% execution, 89.4% semantic accuracy). These results demonstrate a practical path toward high-precision, low-latency text-to-SQL applications using domain-specialized, self-hosted language models in large-scale production environments.
Published
2026-03-14
How to Cite
Soni, C., Chourasia, S., Kumar, G., & Kapoor, H. (2026). Schema on the Inside: A Two-Phase Fine-Tuning Method for High-Efficiency Text-to-SQL at Scale. Proceedings of the AAAI Conference on Artificial Intelligence, 40(47), 40110-40117. https://doi.org/10.1609/aaai.v40i47.41446
Section
IAAI Technical Track on Deployed Highly Innovative Applications of AI