Where and when:
Monday 31st of August, location TBA

Cortex AI SQL
Abstract: TBA
Recent advances in large language models (LLMs) have enabled a new generation of AI-powered systems and data management architectures, where reasoning, semantics, and learning are first-class components of data processing. The NOVAS workshop asks: How should system architectures, execution models, and optimization techniques evolve when LLM inference becomes a core system primitive? What are the right abstractions, optimization strategies, and benchmarks for serving LLM-powered workloads efficiently at scale? How do we trade off performance, cost, energy, and accuracy when data systems integrate reasoning, retrieval, and multi-agent execution?
We invite work and early ideas that address these questions through system design, optimization, or theoretical analysis, including contributions that may fall outside traditional database or ML categories but offer clear system-level insights.
Topics of particular interest for the workshop include, but are not limited to:
Declarative and "multi-agent" systems for large-scale, agentic data processing.
Implementation and optimization of semantic operations, including semantic joins, semantic aggregations, and semantic filters.
Multimodal question answering and data processing.
DB-inspired techniques to optimize workloads of hybrid relational-AI queries.
System-level methods for efficient LLM serving: performance, energy, and cost trade-offs.
New model architectures for relational data processing (e.g., relational transformers).
Vector databases for embeddings in RAG systems.
Benchmarks for data processing tasks using LLMs.
Submission website: https://openreview.net/group?id=VLDB.org/2026/Workshop/NOVAS
Postdoctoral Associate, MIT
Assistant Professor, Purdue University
PhD Student, Stanford
Postdoctoral Fellow, Harvard
Professor, University of Technology Nuremberg
Professor, EURECOM
Submissions will be single blind: authors cannot see reviewer names, but reviewers can see author names. We use OpenReview to host papers, and the reviewing process will be public: reviewers' comments are visible to all during the review period and, for accepted papers, after decisions, although reviewers' identities remain anonymous.
Conflicts of interest (COIs) are handled according to the same rules as VLDB 2026.
The use of LLMs is permitted as a general-purpose assistive tool. Authors and reviewers take full responsibility for the content written under their names, including LLM-generated content that could be construed as plagiarism or scientific misconduct (e.g., fabrication of facts). LLMs are not eligible for authorship.