Volume 18, No. 12
Panel on Neural Relational Data: Tabular Foundation Models, LLMs... or both?
Abstract
Recent breakthroughs in artificial intelligence have produced Large Language Models (LLMs) and a new wave of Tabular Foundation Models (TFMs). Both promise to redefine how we query, integrate, and reason over relational data, yet they embody opposing philosophies: LLMs pursue broad generality through massive text-centric pre-training, whereas TFMs embed inductive biases that mirror table structure and relational semantics. This panel assembles researchers and practitioners from academia and industry to debate which path will most effectively power the next generation of data management systems: specialized TFMs, ever-stronger general-purpose LLMs, or a hybrid of the two. Panelists will confront questions of generality, accuracy, scalability, robustness, cost, and usability across core data management tasks such as Text-to-SQL translation, schema understanding, and entity resolution. The discussion aims to surface critical research challenges and guide the community's investment of effort and resources over the coming years.
PVLDB is part of the VLDB Endowment Inc.