
Volume 18, No. 8

TMLKD: Few-shot Trajectory Metric Learning via Knowledge Distillation

Authors:
Danling Lai, Jiajie Xu, Jianfeng Qu, Pingfu Chao, Junhua Fang, Chengfei Liu

Abstract

Trajectory metric learning, which supports trajectory similarity search, is one of the most fundamental tasks in spatial-temporal data analysis. However, existing trajectory metric learning methods rely on massive labels of pairwise trajectory distances, and thus cannot be applied to the few-shot scenarios that frequently occur in real-world applications. Although the performance drop caused by insufficient labels can be alleviated by knowledge distillation, we demonstrate that existing distillation methods cannot be directly applied to few-shot trajectory metric learning due to the domain shift problem. To this end, this paper proposes TMLKD, an invariant and relaxed learning enhanced knowledge distillation method for few-shot trajectory metric learning, such that domain-invariant representation and rank knowledge can be distilled. Specifically, in the representation learning phase, it first employs an adversarial sub-network to distinguish domain-specific from domain-invariant information, so as to distill transferable representation knowledge from teacher models. To mitigate the few-shot problem in student model training, we further enrich the sparse labels of the target domain by utilizing the rank knowledge revealed in the teachers' predictions. In particular, TMLKD employs a list-wise learning-to-rank approach to learn relaxed trajectory ranking orders instead of inefficiently focusing on all samples. Finally, to guide accurate distillation, we adaptively assign reliability to teacher predictions based on the ground-truth labels, so that low-quality teacher predictions do not mislead the student model. Extensive experiments on three real-world datasets demonstrate the superiority of our model.
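To make the distillation ideas in the abstract concrete, the sketch below shows one way a list-wise ranking-distillation loss with reliability-weighted teacher predictions could look. It is a minimal illustration only, not the authors' implementation: the function names (`listwise_distillation_loss`, `teacher_reliability`), the ListNet-style top-one softmax formulation, and the MSE-based reliability weighting are assumptions, and the paper's relaxed ranking (restricting attention to a subset of informative samples) is not modeled here.

```python
# Hypothetical sketch of reliability-weighted list-wise ranking distillation
# (not the TMLKD paper's actual code or loss definitions).
import torch
import torch.nn.functional as F


def listwise_distillation_loss(student_dists, teacher_dists, reliability):
    """Cross-entropy between teacher and student top-one rank distributions.

    student_dists: (B, K) student-predicted distances from each anchor to K candidates
    teacher_dists: (T, B, K) distances predicted by T teacher models
    reliability:   (T, B) adaptive weight of each teacher's prediction per anchor
    """
    # Smaller distance means higher rank, so negate before the softmax.
    student_rank = F.log_softmax(-student_dists, dim=-1)           # (B, K)
    teacher_rank = F.softmax(-teacher_dists, dim=-1)                # (T, B, K)
    # Per-teacher, per-anchor list-wise cross-entropy.
    ce = -(teacher_rank * student_rank.unsqueeze(0)).sum(dim=-1)    # (T, B)
    # Down-weight unreliable teacher predictions before averaging.
    return (reliability * ce).sum() / reliability.sum().clamp_min(1e-8)


def teacher_reliability(teacher_dists, true_dists):
    """Hypothetical reliability: agreement of each teacher with sparse ground truth."""
    t = F.normalize(teacher_dists, dim=-1)
    g = F.normalize(true_dists, dim=-1).unsqueeze(0)
    mse = ((t - g) ** 2).mean(dim=-1)      # (T, B): disagreement per teacher/anchor
    return F.softmax(-mse, dim=0)          # weights over teachers sum to 1


# Toy usage: two teachers, four anchors, eight candidate trajectories each.
T, B, K = 2, 4, 8
teachers = torch.rand(T, B, K)
student = torch.rand(B, K, requires_grad=True)
truth = torch.rand(B, K)
loss = listwise_distillation_loss(student, teachers, teacher_reliability(teachers, truth))
loss.backward()
```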
