Exploiting Scene Depth for Object Detection with Multimodal Transformers (BMVC 2021)

Abstract

We propose MEDUSA (Multimodal Estimated-Depth Unification with Self-Attention), a generic framework for fusing RGB and depth information with multimodal transformers in the context of object detection. Unlike previous methods that rely on depth measured by physical sensors such as Kinect and Lidar, we show that depth maps inferred by a monocular depth estimator can play an important role in enhancing the performance of modern object detectors. To make use of the estimated depth, MEDUSA comprises a robust feature extraction phase followed by multimodal transformers for RGB-D fusion. The main strength of MEDUSA lies in its broad applicability to any existing large-scale RGB dataset, including PASCAL VOC and Microsoft COCO. Extensive experiments on three datasets show that MEDUSA achieves higher precision than several strong baselines.
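The abstract describes a pipeline of per-modality feature extraction followed by transformer-based RGB-D fusion. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes shallow convolutional encoders in place of the real backbones, a single cross-attention block in which RGB tokens attend to depth tokens, and a precomputed monocular depth map as input; all module and parameter names are hypothetical.

```python
import torch
import torch.nn as nn


class RGBDFusionSketch(nn.Module):
    """Toy RGB-D fusion via cross-attention (illustrative only)."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        # Shallow per-modality encoders stand in for the real feature extractors.
        self.rgb_enc = nn.Conv2d(3, dim, kernel_size=3, stride=2, padding=1)
        self.depth_enc = nn.Conv2d(1, dim, kernel_size=3, stride=2, padding=1)
        # Multimodal transformer layer: RGB tokens query depth tokens.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W); depth: (B, 1, H, W), e.g. from a monocular depth estimator.
        q = self.rgb_enc(rgb).flatten(2).transpose(1, 2)       # (B, N, dim) RGB tokens
        kv = self.depth_enc(depth).flatten(2).transpose(1, 2)  # (B, N, dim) depth tokens
        fused, _ = self.cross_attn(q, kv, kv)
        # Residual connection; the result would feed a detection head.
        return self.norm(q + fused)


if __name__ == "__main__":
    model = RGBDFusionSketch()
    rgb = torch.randn(2, 3, 64, 64)
    depth = torch.randn(2, 1, 64, 64)
    print(model(rgb, depth).shape)  # torch.Size([2, 1024, 256])
```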

Publication
Proceedings of the 32nd British Machine Vision Conference