RGBDT500: Collaborating Vision, Depth, and Thermal Signals for Multi-Modal Tracking

1 Jiangnan University 2 Zhejiang University 3 Tsinghua University 4 University of Surrey

News

  1. 05/11/2025: The RGBDT500 dataset is released at Google Drive.
  2. 05/11/2025: A baseline method RDTTrack for tri-modal tracking is released at GitHub Page.
  3. 07/29/2025: The Evaluation Toolkit and results of 17 trackers are released at GitHub Page.

Highlights

  1. More modalities: RGB, Depth, and Thermal Infrared
  2. Large-scale dataset: 500 tri-modal sequences, 203.7K RGB-D-T image triplets
  3. Comprehensive evaluation: visual tracking, RGB-T tracking, RGB-D tracking, RGB-D-T tracking
  4. Generic scenes and diverse object categories: >=66 object classes; >100 scenes

Abstract


Existing multi-modal object tracking approaches primarily focus on dual-modal paradigms, such as RGB-Depth or RGB-Thermal, yet they remain challenged in complex scenarios due to limited input modalities. To address this gap, this work introduces a novel multi-modal tracking task that leverages three complementary modalities, namely visible RGB, Depth (D), and Thermal Infrared (TIR), aiming to enhance robustness in complex scenarios. To support this task, we construct a new multi-modal tracking dataset, coined RGBDT500, which consists of 500 videos with synchronised frames across the three modalities. Each frame provides spatially aligned RGB, depth, and thermal infrared images with precise object bounding box annotations. Furthermore, we propose a novel multi-modal tracker, dubbed RDTTrack. RDTTrack integrates tri-modal information for robust tracking by leveraging a pretrained RGB-only tracking model and prompt learning techniques. Specifically, RDTTrack fuses the thermal infrared and depth modalities under a proposed orthogonal projection constraint, then integrates them with RGB signals as prompts for the pretrained foundation tracking model, effectively harmonising tri-modal complementary cues. The experimental results demonstrate the effectiveness and advantages of the proposed method, showing significant improvements over existing dual-modal approaches in terms of tracking accuracy and robustness in complex scenarios.
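The fusion scheme described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the feature shapes, the concrete form of the orthogonal projection constraint (here a penalty on the inner products of normalised per-token embeddings), the `alpha` weight, and the additive prompt injection are all assumptions made for the sketch.

```python
import numpy as np

def orthogonal_projection_loss(f_tir, f_depth):
    """Penalty encouraging TIR and depth embeddings to be near-orthogonal.

    f_tir, f_depth: (N, D) arrays of per-token modality features.
    Returns the mean squared cosine similarity between paired tokens,
    so complementary (orthogonal) features incur zero penalty.
    """
    f_t = f_tir / (np.linalg.norm(f_tir, axis=-1, keepdims=True) + 1e-8)
    f_d = f_depth / (np.linalg.norm(f_depth, axis=-1, keepdims=True) + 1e-8)
    return float(np.mean(np.sum(f_t * f_d, axis=-1) ** 2))

def fuse_as_prompt(f_rgb, f_tir, f_depth, alpha=0.1):
    """Fuse the auxiliary modalities and inject them as an additive
    prompt into the RGB feature stream of a frozen RGB tracker."""
    prompt = 0.5 * (f_tir + f_depth)   # simple average fusion (assumed)
    return f_rgb + alpha * prompt
```

In this sketch the orthogonality penalty would be added to the tracker's training loss, steering the two auxiliary branches toward encoding complementary rather than redundant cues before they are injected as prompts.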

Download

Type           Baidu Disk    Google Drive
Full Dataset   link          link
Training Set   link          link
Test Set       link          link

Evaluation & Results

For evaluation, the Area Under Curve (AUC) of the success plot and the Distance Precision (DP) of the precision plot are adopted. The Evaluation Toolkit and results of 17 trackers are released at GitHub Page.
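For reference, a minimal sketch of how these one-pass evaluation metrics are commonly computed. The 21 overlap thresholds for AUC and the 20-pixel DP threshold follow the usual tracking-benchmark convention; the released toolkit's exact settings may differ.

```python
import numpy as np

def iou(b1, b2):
    # boxes as [x, y, w, h]
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2 = min(b1[0] + b1[2], b2[0] + b2[2])
    y2 = min(b1[1] + b1[3], b2[1] + b2[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = b1[2] * b1[3] + b2[2] * b2[3] - inter
    return inter / union if union > 0 else 0.0

def auc_success(pred, gt):
    # success rate over 21 overlap thresholds in [0, 1], averaged (AUC)
    ious = np.array([iou(p, g) for p, g in zip(pred, gt)])
    thresholds = np.linspace(0.0, 1.0, 21)
    return float(np.mean([(ious > t).mean() for t in thresholds]))

def dp_at_20(pred, gt):
    # fraction of frames whose centre error is within 20 pixels
    cp = np.array([[p[0] + p[2] / 2, p[1] + p[3] / 2] for p in pred])
    cg = np.array([[g[0] + g[2] / 2, g[1] + g[3] / 2] for g in gt])
    dist = np.linalg.norm(cp - cg, axis=1)
    return float((dist <= 20).mean())
```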



Citation

@InProceedings{Zhu_RGBDT500,
  author = {Xue-Feng Zhu and Tianyang Xu and Yifan Pan and Jinjie Gu and Xi Li and Jiwen Lu and Xiao-Jun Wu and Josef Kittler},
  title = {Collaborating Vision, Depth, and Thermal Signals for Multi-Modal Tracking: Dataset and Algorithm},
  year = {2025}
}
          

Contact

If you have any questions, please contact Xue-Feng Zhu at xuefeng_zhu95@163.com.