
Sensor Fusion: Object Detection
 

In this part, a deep-learning approach is used to detect vehicles in LiDAR data from a birds-eye-view (BEV) representation of the 3D point cloud. A series of performance measures is then used to evaluate the detection results.

My Tasks

  • Compute Lidar Point-Cloud from Range Image

    • Visualize the range-image channels

    • Visualize lidar point-cloud

  • Create Birds-Eye View from Lidar PCL

    • Convert sensor coordinates to BEV-map coordinates

    • Compute intensity layer of the BEV map

    • Compute height layer of the BEV map

  • Model-based Object Detection in BEV Image

    • Add a second model from a GitHub repo

    • Extract 3D bounding boxes from model response

  • Performance Evaluation for Object Detection

    • Compute intersection-over-union between labels and detections

    • Compute false negatives and false positives

    • Compute precision and recall

Compute Lidar Point-Cloud from Range Image

Range Image 1

Upper: distance image

Lower: intensity image


Range Image 2

Upper: distance image

Lower: intensity image
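Visualizations like the two range images above can be produced by scaling the distance and intensity channels to 8 bit and stacking them vertically. A minimal sketch, assuming the two channels have already been extracted as NumPy arrays (the function name and the 1st/99th-percentile contrast adjustment are illustrative choices):

```python
import numpy as np

def range_image_to_8bit(range_channel, intensity_channel):
    """Convert the range and intensity channels of a range image into one
    stacked 8-bit image for visualization."""
    # negative entries mark "no return" and are floored to zero
    ri_range = np.maximum(range_channel, 0.0)
    scale = np.amax(ri_range)
    img_range = (ri_range * (255.0 / scale if scale > 0 else 0.0)).astype(np.uint8)

    # contrast-adjust intensity with the 1st/99th percentiles to tame outliers
    ri_int = np.maximum(intensity_channel, 0.0)
    lo, hi = np.percentile(ri_int, 1), np.percentile(ri_int, 99)
    span = hi - lo
    img_int = ((np.clip(ri_int, lo, hi) - lo)
               * (255.0 / span if span > 0 else 0.0)).astype(np.uint8)

    # distance on top, intensity below, as in the figures
    return np.vstack((img_range, img_int))
```

The percentile clipping matters mainly for the intensity channel, where a few highly retro-reflective surfaces would otherwise dominate the scale and turn the rest of the image nearly black.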


Point-Cloud 1

The vehicle's features are clearly visible in the point cloud.

Point-Cloud 2

Point-cloud view at the intersection.

Create Birds-Eye View from Lidar Point Cloud

Birds-Eye View1

The intensity layer and the height layer are overlaid.

Birds-Eye View2

Birds-eye view at the intersection.
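The BEV map is built by discretizing the sensor's x/y plane into a fixed pixel grid and, for each cell, keeping the top-most point's intensity and height. A sketch of that conversion, assuming a (N, 4) point array of [x, y, z, intensity] in sensor coordinates; the detection-area limits and the 608-pixel grid are illustrative defaults, not the project's exact configuration:

```python
import numpy as np

def pcl_to_bev(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0),
               z_range=(-1.0, 3.0), bev_size=608):
    """Convert a lidar point cloud into intensity and height BEV layers."""
    res = (x_range[1] - x_range[0]) / bev_size  # metres per BEV cell
    # keep only points inside the configured detection area
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
            (points[:, 2] >= z_range[0]) & (points[:, 2] <= z_range[1]))
    pts = points[mask]
    # sensor x (forward) -> BEV row, sensor y (lateral) -> BEV column
    rows = np.floor((pts[:, 0] - x_range[0]) / res).astype(int)
    cols = np.floor((pts[:, 1] - y_range[0]) / res).astype(int)

    intensity_map = np.zeros((bev_size, bev_size), dtype=np.float32)
    height_map = np.zeros((bev_size, bev_size), dtype=np.float32)

    # sort by cell, then by descending height, so the top point per cell comes first
    order = np.lexsort((-pts[:, 2], cols, rows))
    rows, cols, pts = rows[order], cols[order], pts[order]
    _, first = np.unique(np.stack((rows, cols), axis=1), axis=0, return_index=True)
    r, c, top = rows[first], cols[first], pts[first]

    intensity_map[r, c] = np.clip(top[:, 3], 0.0, 1.0)                       # intensity layer
    height_map[r, c] = (top[:, 2] - z_range[0]) / (z_range[1] - z_range[0])  # height layer
    return intensity_map, height_map
```

Normalizing the height layer by the configured z-span keeps both layers in [0, 1], which makes the overlay shown above straightforward.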


Model-based Object Detection in BEV Image

Object Detection1

Darknet is used for object detection.

Object Detection2

Object detection at the intersection.
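The model returns boxes in BEV pixel coordinates, so extracting 3D bounding boxes requires rescaling them into metric sensor coordinates before they can be compared with ground-truth labels. A minimal sketch of that back-conversion, assuming a square BEV grid matching the limits used when the map was built (the function name and box layout are illustrative):

```python
def bev_box_to_sensor(row, col, w_px, l_px, yaw,
                      x_range=(0.0, 50.0), y_range=(-25.0, 25.0), bev_size=608):
    """Rescale a detected BEV box (pixel units) into metric sensor coordinates."""
    res = (x_range[1] - x_range[0]) / bev_size  # metres per BEV cell
    x = row * res + x_range[0]   # BEV row -> forward distance
    y = col * res + y_range[0]   # BEV column -> lateral offset
    # width/length scale by the same factor; yaw is already an angle
    return x, y, w_px * res, l_px * res, yaw
```

The box height and vertical center, in contrast, come directly from the model's regressed z and h outputs and need no pixel-to-metre rescaling.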

Performance Evaluation for Object Detection

Performance Metrics


Number of Frames: 100

Number of Vehicles Detected: 270

Detection Precision: 0.978

  • High precision indicates that most detections correspond to actual vehicles, i.e. the model produces few false positives.

Detection Recall: 0.882

  • The detection model crops the BEV map, so vehicles far from the sensor fall outside the detection area and cannot be detected. As a consequence, the false-negative count is somewhat elevated, which lowers recall.
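With true positives, false positives, and false negatives accumulated over all frames, precision and recall follow directly (a minimal sketch; the variable names are illustrative):

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Precision: fraction of detections that match a label.
    Recall: fraction of labels that were detected."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall
```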

Intersection over Union: distribution of IoU values between ground-truth labels and detections

Position Errors in X: deviation along the x-axis between ground-truth labels and detections

Position Errors in Y: deviation along the y-axis between ground-truth labels and detections

Position Errors in Z: deviation along the z-axis between ground-truth labels and detections
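A detection counts as a true positive when its IoU with a label exceeds a threshold; the position errors are then simply the per-axis differences between the matched label and detection centers. The project's BEV boxes are rotated, but the principle is easiest to see with axis-aligned boxes; a sketch under that simplification:

```python
def iou_2d(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x_min, y_min, x_max, y_max).
    Rotated-box IoU replaces the rectangle overlap with a polygon intersection,
    but the area ratio is computed the same way."""
    # overlap extents along each axis (zero if the boxes are disjoint)
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```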
