Volume 18, No. 7

SimRN: Trajectory Similarity Learning in Road Networks based on Distributed Deep Reinforcement Learning

Authors:
Danlei Hu, Yilin Li, Lu Chen, Ziquan Fang, Yushuai Li, Yunjun Gao, Tianyi Li

Abstract

Trajectory similarity computation in road networks is crucial for data analytics. However, both non-learning-based and learning-based methods face challenges. First, they suffer from low accuracy due to manual parameter selection for model training and the omission of key spatio-temporal features in road networks. Second, they have low efficiency, stemming from the high time complexity of similarity computation and the time-consuming training process. Third, learning-based methods struggle with poor model generality due to the small size of available training samples. To address these challenges, we propose an effective and efficient trajectory similarity learning framework for road networks, called SimRN. To our knowledge, SimRN is the first deep reinforcement learning (DRL) approach for trajectory similarity computation. Specifically, SimRN consists of three key modules: the spatio-temporal prompt information extraction (STP) module, the trajectory representation based on DRL (TrajRL) module, and the graph contrastive learning (GCL) module. The STP module captures spatio-temporal features from road networks to improve the training of the trajectory representation. The TrajRL module automatically selects optimal parameters and enables parallel training, improving both the quality of trajectory representations and the efficiency of similarity computation. The GCL module employs a self-supervised contrastive learning paradigm to generate sufficient training samples while preserving the spatial constraints and temporal dependencies of trajectories. Extensive experiments on two real-world datasets, compared with three state-of-the-art methods, show that SimRN: (i) improves accuracy by 20%–40%, (ii) achieves speedups of 2–4x, and (iii) demonstrates strong generality, enabling effective similarity learning even with very small training sample sizes.
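
The abstract gives no implementation details; purely as a rough illustration of the self-supervised contrastive paradigm mentioned for the GCL module, the sketch below shows an InfoNCE-style loss over trajectory embeddings. The encoder, the loss form, and the temperature value are illustrative assumptions and are not taken from the paper.

    import torch
    import torch.nn.functional as F

    def contrastive_loss(anchor, positive, temperature=0.1):
        # anchor, positive: (batch, dim) embeddings of two augmented views of the
        # same trajectories; other in-batch trajectories serve as negatives.
        anchor = F.normalize(anchor, dim=1)
        positive = F.normalize(positive, dim=1)
        logits = anchor @ positive.t() / temperature   # pairwise cosine similarities
        labels = torch.arange(anchor.size(0), device=anchor.device)
        return F.cross_entropy(logits, labels)         # matched pairs act as positives

    # Usage with a hypothetical trajectory encoder and two spatio-temporal
    # augmentations of each trajectory:
    #   loss = contrastive_loss(encoder(view_a), encoder(view_b))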
