Research Article

Multi-Sensor Data Simulation and Object Detection: Integrating Cameras, LiDAR, Radar, and Depth Estimation for Enhanced 3D Analysis

Authors

  • Deshpande, Spriha (Santa Clara, USA)

Abstract

The integration of data from cameras, Light Detection and Ranging (LiDAR) sensors, and radars provides a highly robust mechanism for detecting and tracking objects in autonomous systems. Each sensor offers unique advantages: cameras provide rich visual data, LiDAR supplies accurate depth information, and radar remains effective under adverse weather conditions. This project combines these data sources through multi-sensor fusion techniques to achieve superior object detection and distance estimation. Using a YOLO-based object detection model alongside stereo vision for depth estimation, the system simulates multi-sensor data and offers real-time 3D visualization. The approach significantly improves detection accuracy and spatial interpretation over single-sensor methods, paving the way for safer and more efficient autonomous vehicles and robotic systems.
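As a rough illustration of the detection-plus-depth pipeline the abstract summarizes, the sketch below pairs a YOLO detector with stereo disparity matching. It is a minimal sketch under assumptions, not the paper's implementation: the Ultralytics YOLO API, OpenCV's StereoSGBM matcher, the focal length, baseline, weight file, and image names are all illustrative choices not taken from the article.

# Minimal sketch (not the author's code): YOLO detection on the left image,
# per-object distance from a stereo disparity map via Z = f * B / d.
import cv2
import numpy as np
from ultralytics import YOLO  # assumed detector; any YOLO-style API works

FOCAL_PX = 700.0    # hypothetical focal length in pixels (from calibration)
BASELINE_M = 0.12   # hypothetical stereo baseline in meters

def disparity_map(left_gray, right_gray):
    """Dense disparity via OpenCV semi-global block matching."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96,
                                    blockSize=7)
    # SGBM returns fixed-point disparities scaled by 16
    return matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

def detect_with_depth(left_bgr, right_bgr, model):
    disp = disparity_map(cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY),
                         cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY))
    results = model(left_bgr)[0]  # run YOLO on the left camera frame
    detections = []
    for box in results.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        region = disp[y1:y2, x1:x2]
        valid = region[region > 0]           # drop invalid disparities
        d = np.median(valid) if valid.size else 0.0
        # Pinhole stereo: depth Z = f * B / disparity
        depth_m = FOCAL_PX * BASELINE_M / d if d > 0 else float("inf")
        detections.append((int(box.cls[0]), (x1, y1, x2, y2), depth_m))
    return detections

if __name__ == "__main__":
    model = YOLO("yolov8n.pt")               # hypothetical pretrained weights
    left, right = cv2.imread("left.png"), cv2.imread("right.png")
    for cls_id, bbox, depth in detect_with_depth(left, right, model):
        print(f"class={cls_id} bbox={bbox} depth={depth:.2f} m")

Taking the median disparity inside each bounding box is one simple way to make the distance estimate robust to background pixels at the box edges; the paper's fusion with LiDAR and radar would refine these camera-only estimates further.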

Article information

Journal: Journal of Computer Science and Technology Studies
Volume (Issue): 5 (1)
Pages: 57-73
Published: 2023-03-25

How to Cite

Deshpande, Spriha. (2023). Multi-Sensor Data Simulation and Object Detection: Integrating Cameras, LiDAR, Radar, and Depth Estimation for Enhanced 3D Analysis. Journal of Computer Science and Technology Studies, 5(1), 57-73. https://doi.org/10.32996/jcsts.2023.5.1.8


Keywords:

multi-sensor fusion, cameras, LiDAR, radar, object detection, YOLO, stereo vision, depth estimation, autonomous systems, real-time 3D visualization, detection accuracy, spatial interpretation, autonomous vehicles, robotic systems, sensor integration