Advancements in 3D LiDAR Object Detection

January 9, 2025

Introduction

In the rapidly evolving fields of robotics and autonomous systems, object detection is crucial for enabling machines to understand and interact with their environments. This understanding is essential for tasks like collision avoidance, path planning, and interaction with dynamic surroundings. Unlike traditional cameras that capture 2D images, LiDAR generates point cloud data with detailed depth information, making it ideal for detecting objects at both close and far ranges, and in lighting conditions where cameras may struggle. In this blog, we introduce two object detection advancements developed in LidarView and Kitware’s SLAM library, highlighting different use cases based on whether the sensor is fixed or moving.

Object detection with fixed LiDAR

We have recently enhanced the Motion Detector filter in LidarView. This filter targets object detection in environments where the LiDAR is static. It operates in two phases: motion detection and clustering of the motion points.

Motion Detection 

At the core of our motion detection system is the Gaussian Mixture Model (GMM). This statistical approach effectively distinguishes between stationary and moving points by modeling the distribution of point cloud data across frames. The key idea is to build a spherical map around the LiDAR sensor, where each bin contains a GMM that models the depth values observed in the region of space defined by the spherical angular coordinates (azimuth and elevation) of the laser.
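To make the spherical map concrete, here is a minimal sketch of how incoming points might be assigned to angular bins before each bin's GMM is updated. The angular resolutions are illustrative assumptions, not LidarView's actual parameters:

```python
import numpy as np

def spherical_bin_indices(points, az_res_deg=1.0, el_res_deg=1.0):
    """Map Cartesian points to (azimuth, elevation) bin indices.

    Each bin of the spherical map would hold its own GMM over the
    depth (range) values observed in that angular sector.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    azimuth = np.degrees(np.arctan2(y, x)) % 360.0                   # [0, 360)
    elevation = np.degrees(np.arcsin(z / np.maximum(depth, 1e-9)))   # [-90, 90]
    az_bin = (azimuth // az_res_deg).astype(int)
    el_bin = ((elevation + 90.0) // el_res_deg).astype(int)
    return az_bin, el_bin, depth
```

With the bin indices in hand, each frame's depths can be routed to the matching per-bin mixture model.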

Figure 1. The spherical map with Gaussian Mixture Model in each bin

The number of Gaussian distributions in each GMM is dynamic, allowing the model to adapt to varying environmental complexities. Each distribution is classified as either background (static) or foreground (moving object). The underlying assumption is that points belonging to the background appear more frequently over time than those from moving objects. For each point in a frame, the model finds the Gaussian distribution it most likely belongs to by selecting the one with the highest probability. If the point fits a foreground distribution, it is identified as part of a moving object.
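The per-bin logic can be sketched in the spirit of classic GMM background subtraction. The mode limit, matching threshold, weight threshold, and update rule below are illustrative assumptions, not LidarView's actual implementation:

```python
import numpy as np

class BinGMM:
    """Simplified per-bin mixture over depth values (illustrative only)."""

    def __init__(self, max_modes=3, match_sigmas=2.5,
                 bg_weight_thresh=0.6, learning_rate=0.05):
        self.means, self.vars, self.weights = [], [], []
        self.max_modes = max_modes
        self.match_sigmas = match_sigmas
        self.bg_weight_thresh = bg_weight_thresh
        self.lr = learning_rate

    def update(self, depth):
        """Update the mixture with one depth sample; return True if foreground."""
        for i, (m, v) in enumerate(zip(self.means, self.vars)):
            if abs(depth - m) < self.match_sigmas * np.sqrt(v):
                # Matched an existing mode: nudge it toward the sample.
                self.weights[i] += self.lr * (1.0 - self.weights[i])
                self.means[i] += self.lr * (depth - m)
                self.vars[i] += self.lr * ((depth - m) ** 2 - v)
                self._normalize()
                # Modes with enough accumulated weight count as background.
                return self.weights[i] < self.bg_weight_thresh
        # No match: spawn a new low-weight mode -> foreground (moving) point.
        if len(self.means) >= self.max_modes:
            worst = int(np.argmin(self.weights))
            del self.means[worst]; del self.vars[worst]; del self.weights[worst]
        self.means.append(depth); self.vars.append(0.25); self.weights.append(self.lr)
        self._normalize()
        return True

    def _normalize(self):
        total = sum(self.weights)
        self.weights = [w / total for w in self.weights]
```

Feeding a bin a stable wall depth makes that mode background; a sudden closer return (someone walking in front of the wall) fails to match and is flagged as foreground.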

Clustering Motion Points 

Once motion points are detected, they need to be clustered for further analysis. Our filter offers three distinct clustering methods, each with specific advantages depending on the use case:

  • Euclidean Clustering: This method groups points based on their spatial proximity, making it ideal for identifying compact motion areas. It is particularly useful when a fast clustering approach is needed, as it quickly groups points that are close to each other in 3D space.
  • Gaussian Mixture Model Clustering: In this phase, GMM is utilized again but focuses on the clustering of motion points in each frame. By leveraging the probabilistic nature of GMM, this method enhances the accuracy of clustering, especially in complex environments where simple proximity-based clustering might struggle.
  • Region Growing: A voxel grid is built to represent motion points. This technique starts from a set of seed voxels and gradually expands regions in the grid. It is particularly useful when motion points are scattered but related, as it effectively merges adjacent or nearby clusters that belong to the same moving object.
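As an illustration of the last method, here is a minimal region-growing sketch over a voxel grid of motion points, assuming 26-connectivity between voxels. The voxel size and minimum cluster size are illustrative assumptions, and this is a simplification of what the filter actually does:

```python
from collections import deque
import numpy as np

def region_growing_clusters(points, voxel_size=0.2, min_cluster_size=5):
    """Cluster motion points by growing regions over occupied voxels.

    Neighboring occupied voxels (26-connectivity) merge into one cluster,
    so nearby fragments of the same moving object end up together.
    Returns one label per point (-1 for points in rejected small clusters).
    """
    voxels = {}
    for i, p in enumerate(points):
        key = tuple((np.asarray(p) // voxel_size).astype(int))
        voxels.setdefault(key, []).append(i)

    labels = np.full(len(points), -1, dtype=int)
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    visited, cluster_id = set(), 0

    for seed in voxels:
        if seed in visited:
            continue
        # Grow a region from this seed voxel via breadth-first search.
        queue, region = deque([seed]), []
        visited.add(seed)
        while queue:
            v = queue.popleft()
            region.append(v)
            for off in offsets:
                n = (v[0] + off[0], v[1] + off[1], v[2] + off[2])
                if n in voxels and n not in visited:
                    visited.add(n)
                    queue.append(n)
        members = [i for v in region for i in voxels[v]]
        if len(members) >= min_cluster_size:
            labels[members] = cluster_id
            cluster_id += 1
    return labels
```

Two well-separated blobs of motion points come out with two distinct labels, while each blob's internal fragments are merged into a single cluster.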
Motion detection and clustering in an indoor scene.
Note: The cluster ID increases with an internal counter that also accounts for cluster seeds which do not meet the criteria to be included in the filter’s output. As a result, the displayed cluster ID may exceed the number of visible output clusters.
Motion detection and clustering at a crossroad.

Both the GMM-based and Region Growing clustering methods offer the potential for rough tracking, as they can provide insight into the movement patterns of detected objects by grouping motion points over time. More elaborate tracking (based, for instance, on Kalman filtering) may be added as a post-processing step to enhance this.

This filter is especially useful for applications like surveillance, traffic monitoring, and industrial automation, where detecting and tracking motion in a static scene is critical. Its ability to quickly cluster motion points and offer rough tracking insights makes it a valuable component in systems that require precise object detection and analysis.

Obstacle Detection in SLAM

Obstacle detection is now integrated into the ROS wrapping of our SLAM library, providing a critical tool for ensuring that robotic systems navigate safely and efficiently. 

Our method for obstacle detection begins with the use of a prior reference map, which serves to differentiate between known and unknown points in the environment. Points that do not exist in the reference map are classified as potential obstacles. Once these obstacle points are identified, they are projected onto the ground to generate an occupancy grid. This grid serves as an effective representation of the environment, indicating which areas are free and which are occupied by obstacles.

To further refine the obstacle detection process, we employ the region growing method for clustering points on the occupancy grid. This technique groups adjacent obstacle points, improving the handling of moving objects and reducing occlusion effects. The clustering allows for more efficient management of detected obstacles by providing information such as the position, size, and orientation of each obstacle.
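To illustrate the projection step, here is a minimal sketch of turning 3D obstacle points into a 2D occupancy grid. The cell size, grid extents, and hit threshold are illustrative assumptions, not the SLAM wrapping's actual parameters:

```python
import numpy as np

def occupancy_grid(obstacle_points, cell_size=0.25, min_hits=2,
                   x_range=(-20.0, 20.0), y_range=(-20.0, 20.0)):
    """Project 3D obstacle points onto the ground plane as an occupancy grid.

    A cell is marked occupied once enough points fall into it, which
    filters out isolated spurious returns.
    """
    nx = int((x_range[1] - x_range[0]) / cell_size)
    ny = int((y_range[1] - y_range[0]) / cell_size)
    hits = np.zeros((nx, ny), dtype=int)
    pts = np.asarray(obstacle_points)
    ix = ((pts[:, 0] - x_range[0]) / cell_size).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / cell_size).astype(int)
    inside = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    np.add.at(hits, (ix[inside], iy[inside]), 1)
    return hits >= min_hits
```

A region-growing pass such as the one shown for motion clustering could then run over the occupied cells to extract per-obstacle position and extent.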

Conclusion

The motion detection filter in LidarView and obstacle detection in SLAM provide practical tools for addressing specific object detection challenges in robotics. These methods improve the ability to detect and cluster motion in static environments and to navigate dynamic spaces safely using a reference map. By offering flexible and customizable solutions, Kitware’s tools empower developers and researchers to implement these techniques in various applications, from surveillance to autonomous navigation, ensuring their systems operate safely and efficiently in real-world conditions. We invite you to explore Kitware’s tools and technologies, contribute to our open source platforms, and contact us if you see areas where we could support you in enriching or adapting these technologies to your needs.
