Where and when: Monday, August 31st; location TBA

Cortex AI SQL
Abstract: TBA
Recent advances in large language models (LLMs) have enabled a new generation of AI-powered systems and data management architectures, where reasoning, semantics, and learning are first-class components of data processing. The NOVAS workshop asks: How should system architectures, execution models, and optimization techniques evolve when LLM inference becomes a core system primitive? What are the right abstractions, optimization strategies, and benchmarks for serving LLM-powered workloads efficiently at scale? How do we trade off performance, cost, energy, and accuracy when data systems integrate reasoning, retrieval, and multi-agent execution?
We invite work and early ideas that address these questions through system design, optimization, or theoretical analysis, including contributions that may fall outside traditional database or ML categories but offer clear system-level insights.
Topics of particular interest for the workshop include, but are not limited to:
Declarative systems to compose AI agents and “multi-agent” systems for data processing.
Implementation and optimization of semantic operations, including semantic joins, aggregations, and filters.
Multimodal embeddings and semantic question answering over multiple modalities.
DB-inspired techniques to optimize the forward pass in transformer-based architectures.
Scheduling and sharing of large batch workloads of hybrid relational-AI queries.
System-level methods for efficient LLM serving: performance, energy, and cost trade-offs.
Strategies for generative AI-based data processing, e.g., RAG, chain-of-thought reasoning.
Benchmarks for data processing tasks using LLMs.
Submission website: TBA
Postdoctoral Associate, MIT
Postdoctoral Associate, MIT
5th-year PhD Student, Stanford
Postdoctoral Fellow, Harvard
Professor, TU Nuremberg
Professor, EURECOM
Reviewing will be single-blind: reviewers can see author names, but authors cannot see reviewer names. We use OpenReview to host papers, and the reviewing process will be public: reviewers' comments are visible to all, both during the review period and, for accepted papers, after decisions, although reviewer identities remain anonymous.
Conflicts of interest (COIs) are handled using the same rules as VLDB 2026.
The use of LLMs is allowed as a general-purpose assistive tool. Authors and reviewers take full responsibility for the content written under their name, including LLM-generated content that could be construed as plagiarism or scientific misconduct (e.g., fabrication of facts). LLMs are not eligible for authorship.