VLDB 2027: Call for Contributions - Research Track
PVLDB Volume 20 — Contributions
New for PVLDB Volume 20
- Updated Topics of Interest
- Check the Submission Guidelines for updated expectations regarding the submission of supplementary materials for transparency and reproducibility.
Overview
The Proceedings of the VLDB Endowment (PVLDB), established in 2008, is a scholarly journal for short and timely research papers that follows a strict quality-assurance process. PVLDB is distinguished by a monthly submission process with rapid reviews. PVLDB issues are published regularly throughout the year. A paper will appear in PVLDB soon after acceptance, and possibly in advance of the VLDB Conference. All papers accepted for Volume 20 by July 1, 2027 will form the Research Track of the VLDB 2027 Conference, together with any rollover papers from Volume 19. Papers accepted to Volume 20 after July 1, 2027 will be rolled over to the VLDB 2028 Conference. At least one author of each accepted paper must attend the VLDB 2027 Conference. PVLDB is the only submission channel for research papers to appear in the VLDB 2027 Conference. Please see the Submission Guidelines for paper submission instructions. The submission process for other VLDB 2027 tracks, such as demonstrations or tutorials, is different and is described in their respective calls for papers.
Scope of PVLDB
PVLDB welcomes original research papers on a broad range of research topics related to all aspects of data management in which systems issues play a significant role, such as data management system technology and information management infrastructures, including very-large-scale experimentation, novel architectures, and demanding applications, as well as their underpinning theory. The scope of a submission for PVLDB is also described by the subject areas given below. Moreover, the scope of PVLDB is restricted to scientific areas covered by the combined expertise of the journal's editorial board on the submission's topic. Finally, the contributions in the submission should build on work already published in data management outlets, e.g., PVLDB, VLDB Journal, ACM SIGMOD, IEEE ICDE, EDBT, ACM TODS, IEEE TKDE, and the relationship to this prior work should go beyond a merely syntactic citation.
Four Paper Categories
There are four equally important categories of papers in the research track:
- Regular Research Papers
- Experiment, Analysis & Benchmark Papers (EA&B)
- Scalable Data Science Papers (SDS)
- Vision Papers
See Submission Guidelines for page limits for these categories.
Regular Research Papers
PVLDB invites regular research papers presenting original work in different flavors, which are reviewed with different expectations regarding novelty and the coverage of the experimental evaluation:
- Foundations and Algorithms Papers: The primary contribution of foundations and algorithms papers lies in their formal underpinnings or novel algorithms expressed through theoretical formalism and/or precise pseudocode. Unlike at pure theory venues, authors of such papers are still encouraged to include a prototype implementation and an experimental evaluation, even though the core contribution is conceptual and algorithmic in nature.
- Systems Papers: The primary contribution of systems papers lies in the development of novel and practical approaches. These papers typically have no theoretical formalism or proofs, but include a principled system design, a solid prototype implementation, and empirical evaluation of a working end-to-end system. The novelty of these papers often lies in the design, innovative system architecture, new abstractions, or interesting and effective combination of existing techniques.
Experiment, Analysis & Benchmark (EA&B) Papers
EA&B papers focus on the extensive evaluation of algorithms, data structures, and systems that are of wide interest. The scientific contribution of an EA&B paper lies in providing: (i) fundamentally new insights into the strengths and weaknesses of existing methods, (ii) new ways to evaluate existing methods and systems, or (iii) characterization of workloads in real-world deployments. We solicit EA&B papers of the following flavors:
- Experimental Survey Papers: Experimental surveys compare multiple existing solutions (including open-source solutions) to a problem and, through extensive experiments, provide a new comprehensive perspective on their strengths and weaknesses. The core contribution of such experimental surveys lies in the gathered insights as well as in reusable artifacts that enable reproducibility.
- Workload Characterization Papers: Since workload characteristics largely influence the design and implementation of new algorithms and systems, workload characterization papers describe new benchmarks, real-world workload characteristics and phenomena, the behavior of existing methods and systems under such workloads, or other empirical studies. The core contributions of such papers are often insights into workloads and reusable artifacts such as benchmark suites or traces.
Scalable Data Science (SDS) Papers
SDS papers bridge the gap between the Regular Research papers and the Industrial Track papers, especially for the fast-evolving areas of data science, data engineering, and applied machine learning. We solicit papers describing the design, implementation, or deployment of systems in the real world, with a special focus on different dimensions of scalability, such as data size, number of sources and models, number of concurrent users and requests, or degree of parallelism. We solicit SDS papers of the following flavors:
- Papers about Deployed Solutions describe the implementation of a system that solves a substantial real-world problem and is (or was) in use for an extended period of time in industry, science, medicine, education, government, non-profit organizations, or as open source. The paper should present the problem, its significance to the application domain, the design choices for the solution, the implementation challenges, and the lessons learned from successes and failures, including post-launch performance analysis.
- Papers about Enabling Infrastructure for deployment of applied machine learning also fall into the SDS category. An example may be an open-source, general-purpose entity linkage tool that takes data from a large number of data sources and links records that refer to the same real-world entity. Another example is a low-latency system for monitoring online model predictions on streaming data at scale to detect concept drift and recommend how to react.
Vision Papers
Vision papers describe novel system architectures, research directions, or systems that show great promise for high impact in the future. To this end, vision papers go beyond the typical scope of a research paper and present a convincing motivation and new ideas for principled technology. Such vision papers are evaluated based on the novelty of the ideas and preliminary results rather than on a full prototype and experimental evaluation.
Topics of Interest
PVLDB welcomes original research papers on a broad range of topics related to all aspects of data management. The themes and topics listed below are intended to serve primarily as indicators of the kinds of data-centric subjects that are of interest to PVLDB – they do not represent an exhaustive list.
Data Mining and Analytics
- Data mining algorithms for various data types
- Data warehousing and OLAP
- Data stream mining
- Parallel and distributed data mining
Data Privacy and Security
- Access control and privacy
- Blockchain
- Privacy-enhancing technologies
Database Performance and Manageability
- Administration and manageability
- Tuning, benchmarking, and performance measurement
DBMS Internals
- Access methods
- Concurrency control, recovery, and transactions
- Memory and storage management
- Multi-core processing and hardware acceleration
- Query processing and optimization
- Views, indexing, and search
Distributed Database Systems
- Cloud data management, resource management, database as a service
- Data networking and content delivery
- Distributed analytics
- Distributed transactions
- Key-value databases
Graph Data Management
- Graph data models, schemas, and query languages
- Graph database systems (storage, indexing, query optimization, etc.)
- Graph schemas and interoperability
- Knowledge graphs and knowledge management
- Web data management and Semantic Web
Information Integration
- Data cleaning, data quality, and data preparation
- Data discovery and search
- Data lakes and data governance
- Heterogeneous and federated DBMS
- Metadata management
- Schema matching and mapping
Network Data
- Graph algorithms for large-scale analysis
- Graph mining and pattern discovery
- Graph-based inference and application analytics
- Network data analysis (social networks, road networks, hypergraphs, etc.)
Schema and Languages
- Data models and query languages
- Schema management and design
ML/AI for Data Management
- Learned query processing and optimization
- Learned index structures and storage layouts
- Learned algorithms for sorting, compressing, encoding data
- Self-tuning and instance-optimized database systems
Data Management for ML/AI
- Data engineering and model management for ML
- Embeddings and vector databases
- Compilation and optimization in ML systems
- Runtime strategies and data access in ML systems
- New data system infrastructures and tools for applied ML
Novel Database Architectures
- Data management on novel hardware
- Embedded and mobile databases
- Energy-efficient and sustainable data systems
- Video management and analytics systems
Provenance and Workflows
- Debugging and explainable AI
- Process mining
- Profile-based and context-aware data management
- Provenance management and analysis
Specialized and Domain-Specific Data Management
- Crowdsourcing
- Fuzzy, probabilistic, and approximate data
- Image and multimedia databases
- Quantum data management
- Responsible data management
- Scientific and medical data management
- Spatial and temporal databases
Text and Semi-Structured Data
- Data extraction and processing
- Information retrieval
- Text in databases
Time Series Data
- Real-time databases, sensors and IoT, stream databases
- Time series data management and systems
- Time series analytics (forecasting, anomaly detection, imputation, classification, clustering, similarity search, etc.)
User Interfaces
- Data exploration
- Database support for visual analytics
- Database usability
- Interactive querying and visualization for large data
- Natural language interfaces to data
For details, please visit the PVLDB website: www.vldb.org/pvldb/volumes/20/contributions.
