- Document Number:
20240404064
- Appl. No:
18/675567
- Application Filed:
May 28, 2024
- Abstract:
Systems and methods for performing semantic segmentation of LiDAR point clouds are provided. LiDAR point data are generated by aerial vehicles equipped with LiDAR transceivers. Tiles representing sampled areas of the point cloud are augmented by rotation or division, allowing a neural network to perform multiple semantic segmentations that yield a smoother segmentation of higher quality. To increase quality further, overlap buffers can be placed around tiles, the scan angle of each LiDAR point can be supplied as an input to the neural network, and points of a single object can be segmented into different classes. The resulting segmentations are then aggregated into a final semantic segmentation, which can be used to create a digital terrain map, a building footprint, or a vegetation map.
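As an illustration of the pipeline the abstract describes, the following is a minimal sketch, assuming a trained `model` with a `predict(points) -> labels` method and tiles stored as `(N, F)` NumPy arrays (both the interface and the column layout are hypothetical): a tile is rotated several times about the z axis, each rotated copy is segmented, and the per-point votes are aggregated into one smoother result.

```python
import numpy as np

def segment_with_augmentation(tile_points, model, n_inferences=4, rng=None):
    """Segment one tile several times under random z-rotations and aggregate
    the results into a single per-point class by majority vote."""
    rng = np.random.default_rng(rng)
    n_points = tile_points.shape[0]
    votes = []
    for _ in range(n_inferences):
        angle = rng.uniform(0.0, 2.0 * np.pi)           # random rotation about the z axis
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        transformed = tile_points.copy()
        transformed[:, :3] = transformed[:, :3] @ rot.T  # rotate the x, y, z coordinates
        votes.append(model.predict(transformed))         # one segmented transformed tile
    votes = np.stack(votes, axis=0)                      # (n_inferences, n_points)
    # Majority vote per point gives the final, smoother segmentation.
    n_classes = int(votes.max()) + 1
    counts = np.zeros((n_points, n_classes), dtype=int)
    for v in votes:
        counts[np.arange(n_points), v] += 1
    return counts.argmax(axis=1)
```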
- Claim:
1. A system for performing a semantic segmentation of a LiDAR point cloud, the system comprising: an aerial collection vehicle equipped with a LiDAR transceiver, the LiDAR transceiver comprising at least a transmitter configured to send a laser beam towards an object and a receiver configured to detect a reflection of the laser beam on the object, the LiDAR transceiver configured to create point data based on the reflection of the beam, the point data corresponding to the LiDAR point cloud; and a classification subsystem, comprising at least: an acquisition module, configured to receive the LiDAR point cloud, a partitioning module, configured to partition the LiDAR point cloud into tiles, each tile representing a sampled area of the LiDAR point cloud, an augmentation module, configured to create a plurality of transformed tiles associated with each tile using at least one of: rotating the corresponding tile about an axis by a plurality of random angles, and dividing the corresponding tile using a specific column size randomly chosen from a predefined range of optimal column sizes; a classification module, configured to implement multiple semantic segmentation, the classification module comprising a neural network trained to input one of the plurality of transformed tiles and output a corresponding segmented transformed tile; and an aggregation module, configured to aggregate a plurality of segmented transformed tiles output by the neural network and create a final semantic segmentation result defining a segmented point cloud including a class for each point of the LiDAR point cloud.
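A sketch of the two augmentation operations named in claim 1, rotation by random angles and division into columns of a randomly chosen size, is given below. The column layout `x, y, z, ...` and metric column sizes are assumptions made for illustration, not taken from the claim.

```python
import numpy as np

def rotate_tile(points, angle):
    """Rotate the x/y coordinates of a tile about the z axis (columns 0..1 are x, y)."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    out = points.copy()
    out[:, :2] = out[:, :2] @ rot.T
    return out

def divide_tile_into_columns(points, column_size):
    """Split a tile into vertical columns with an x/y footprint of `column_size`
    (assumed to be in metres) and return the list of per-column point sets."""
    ix = np.floor(points[:, 0] / column_size).astype(int)
    iy = np.floor(points[:, 1] / column_size).astype(int)
    keys = np.stack([ix, iy], axis=1)
    columns = []
    for key in np.unique(keys, axis=0):
        mask = np.all(keys == key, axis=1)
        columns.append(points[mask])
    return columns

def augment(tile, n_rotations, column_range, rng=None):
    """Create transformed tiles: several random rotations plus one random column division."""
    rng = np.random.default_rng(rng)
    transformed = [rotate_tile(tile, rng.uniform(0, 2 * np.pi)) for _ in range(n_rotations)]
    column_size = rng.uniform(*column_range)  # randomly chosen from the predefined range
    transformed.extend(divide_tile_into_columns(tile, column_size))
    return transformed
```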
- Claim:
2. The system of claim 1, further comprising one or more additional aerial collection vehicles, each equipped with an additional LiDAR transceiver, wherein the point data is a combination of the detections of the aerial collection vehicle and of the one or more additional aerial collection vehicles.
- Claim:
3. A system for performing a semantic segmentation of a LiDAR point cloud, the system comprising: at least one processor; at least one memory; an acquisition module, configured to receive tiles, each representing a sampled area of the LiDAR point cloud, and a predefined range of optimal column sizes for dividing the tiles; an augmentation module, configured to create a plurality of transformed tiles associated with each tile using at least one of: rotating the corresponding tile about an axis by a plurality of random angles, and dividing the corresponding tile using a specific column size randomly chosen from a predefined range of optimal column sizes; a classification module, configured to implement multiple semantic segmentation, the classification module comprising a neural network trained to input one of the plurality of transformed tiles and output a corresponding segmented transformed tile; and an aggregation module, configured to aggregate a plurality of segmented transformed tiles output by the neural network and create a final semantic segmentation result defining a segmented point cloud including a class for each point of the LiDAR point cloud.
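The aggregation module of claim 3 can be pictured as a per-point vote across all segmented transformed tiles. The index-tracking interface below, where each transformed tile carries the indices of its points in the original cloud, is an assumption used for illustration; it also covers tiles produced by column division, whose points map back to different positions.

```python
import numpy as np

def aggregate_segmentations(n_points, n_classes, segmented_tiles):
    """Each entry in `segmented_tiles` is a (point_indices, labels) pair mapping one
    segmented transformed tile back to positions in the original point cloud.
    Returns one class per original point by accumulating votes."""
    counts = np.zeros((n_points, n_classes), dtype=int)
    for indices, labels in segmented_tiles:
        counts[indices, labels] += 1       # one vote per point per segmentation
    return counts.argmax(axis=1)           # final class for every point of the cloud
```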
- Claim:
4. The system of claim 3, wherein the point data comprise, for each of a plurality of points, at least one of: coordinates, a return number, a number of returns, an intensity and a scan angle.
- Claim:
5. The system of claim 4, wherein the input of the neural network comprises, for each point, at least the scan angle of the point.
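Claim 5 adds the scan angle of each point to the input of the neural network. A minimal sketch of assembling such a per-point input vector is shown below; the column layout and the normalisation constants are assumptions, not taken from the claims.

```python
import numpy as np

def build_point_features(points):
    """Assemble the per-point network input: coordinates, intensity, return number,
    number of returns, and scan angle (assumed column layout x, y, z, i, rn, nr, sa)."""
    xyz         = points[:, 0:3]
    intensity   = points[:, 3:4] / 65535.0   # assumed 16-bit intensity, scaled to [0, 1]
    return_num  = points[:, 4:5]
    num_returns = points[:, 5:6]
    scan_angle  = points[:, 6:7] / 90.0      # assumed scan angle in degrees, roughly [-90, 90]
    return np.concatenate([xyz, intensity, return_num, num_returns, scan_angle], axis=1)
```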
- Claim:
6. The system of claim 3, wherein the tiles are provided with an overlap buffer and wherein points of the output segmented transformed tiles that correspond to the overlap buffer are discarded.
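Claim 6's overlap buffer can be illustrated as follows: tiles are segmented together with a margin of extra points, and labelled points that fall outside the core tile bounds are discarded afterwards. The `core_bounds` tuple is a hypothetical interface.

```python
import numpy as np

def discard_buffer(points, labels, core_bounds):
    """Keep only labelled points inside the core tile; `core_bounds` is
    (xmin, ymin, xmax, ymax) of the tile without its overlap buffer, so points
    segmented in the surrounding buffer are dropped from the result."""
    xmin, ymin, xmax, ymax = core_bounds
    inside = (
        (points[:, 0] >= xmin) & (points[:, 0] < xmax) &
        (points[:, 1] >= ymin) & (points[:, 1] < ymax)
    )
    return points[inside], labels[inside]
```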
- Claim:
7. The system of claim 3, wherein the augmentation module is configured to rotate each of the tiles about a z axis by an angle [mathematical expression included] rad, with i indicating an i'th rotation, r a random real number in the range [0; 1] generated for each rotation, and n a predefined number of inferences.
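The exact expression of claim 7 is not reproduced in this record ("[mathematical expression included]"). Purely as an assumed reading of the variables it names (the i'th rotation, a random r in [0, 1], and n inferences), a schedule such as theta_i = 2*pi*(i + r)/n could be generated as below; the formula itself is a guess at the omitted expression, not the claimed one.

```python
import numpy as np

def rotation_angles(n, rng=None):
    """Assumed rotation schedule: one angle per inference i = 0..n-1, with a fresh
    random r in [0, 1) drawn for each rotation (theta_i = 2*pi*(i + r)/n)."""
    rng = np.random.default_rng(rng)
    return np.array([2.0 * np.pi * (i + rng.random()) / n for i in range(n)])
```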
- Claim:
8. The system of claim 3, comprising a sampling module, configured to sample the plurality of transformed tiles before input in the neural network.
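Claim 8's sampling module can be sketched as resampling each transformed tile to a fixed point count before inference; padding sparse tiles by sampling with replacement is an assumed convention, not taken from the claim.

```python
import numpy as np

def sample_points(points, n_target, rng=None):
    """Sample a transformed tile to a fixed point count before it is fed to the network:
    subsample without replacement when the tile is too dense, resample with replacement
    when it is too sparse (assumed convention)."""
    rng = np.random.default_rng(rng)
    n = points.shape[0]
    idx = rng.choice(n, size=n_target, replace=(n < n_target))
    return points[idx], idx   # keep `idx` so predicted labels can be mapped back for aggregation
```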
- Claim:
9. The system of claim 3, wherein the neural network is trained to classify points corresponding to different layers of a same object in different classes.
- Claim:
10. The system of claim 3, further configured to create a digital terrain map, a building footprint and/or a vegetation map.
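Claim 10's derived products can be illustrated by rasterising one class of the final segmented point cloud onto a grid, for example an assumed building class for a footprint or a vegetation class for a vegetation map; the class identifiers, cell size, and bounds below are hypothetical parameters.

```python
import numpy as np

def class_mask_raster(points, labels, target_class, cell_size, bounds):
    """Rasterise the points of one class of the segmented cloud onto a boolean grid
    (e.g. a building footprint mask or a vegetation map)."""
    xmin, ymin, xmax, ymax = bounds
    nx = int(np.ceil((xmax - xmin) / cell_size))
    ny = int(np.ceil((ymax - ymin) / cell_size))
    grid = np.zeros((ny, nx), dtype=bool)
    pts = points[labels == target_class]
    ix = np.clip(((pts[:, 0] - xmin) / cell_size).astype(int), 0, nx - 1)
    iy = np.clip(((pts[:, 1] - ymin) / cell_size).astype(int), 0, ny - 1)
    grid[iy, ix] = True
    return grid
```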
- Claim:
11. A method for performing a semantic segmentation of a LiDAR point cloud, the method comprising: receiving tiles, each representing a sampled area of the LiDAR point cloud, and a predefined range of optimal column sizes for dividing the point cloud tiles; creating a plurality of transformed tiles associated with each tile using at least one of: rotating the corresponding tile about an axis by a plurality of random angles, and dividing the corresponding tile using a specific column size randomly chosen from a predefined range of optimal column sizes; performing multiple semantic segmentation by a neural network trained to input one of the plurality of transformed tiles and output a corresponding segmented transformed tile; and aggregating a plurality of segmented transformed tiles output by the neural network to create a final semantic segmentation result defining a segmented point cloud including a class for each point of the LiDAR point cloud.
- Claim:
12. The method of claim 11, comprising the step of acquiring the point cloud by an aerial collection vehicle equipped with a LiDAR transceiver, the LiDAR transceiver comprising at least a transmitter configured to send a laser beam towards an object and a receiver configured to detect a reflection of the laser beam on the object, the LiDAR transceiver configured to create point data based on the reflection of the beam, the point data corresponding to the LiDAR point cloud.
- Claim:
13. The method of claim 12, comprising the steps of: acquiring additional point data by one or more additional collection vehicles, each equipped with an additional optical transceiver; and combining the point data and the additional point data to form the point cloud.
- Claim:
14. The method of claim 11, wherein the point data comprise, for each of a plurality of points, at least one of: coordinates, a return number, a number of returns, an intensity and a scan angle.
- Claim:
15. The method of claim 14, wherein the input of the neural network comprises, for each point, at least the scan angle of the point.
- Claim:
16. The method of claim 11, wherein rotating the corresponding tile comprises rotating the corresponding tile about a z axis by an angle [mathematical expression included] rad, with i indicating an i'th rotation, r a random real number in the range [0; 1) generated for each rotation, and n a predefined number of inferences.
- Claim:
17. The method of claim 11, wherein the tiles are provided with an overlap buffer, comprising the step of discarding points of the output segmented transformed tiles that correspond to the overlap buffer.
- Claim:
18. The method of claim 11, comprising the step of sampling the plurality of transformed tiles before input in the neural network.
- Claim:
19. The method of claim 11, wherein the neural network is trained to classify points corresponding to different layers of a same object in different classes.
- Claim:
20. The method of claim 11, further comprising creating a digital terrain map, a building footprint and/or a vegetation map.
- Current International Class:
06; 01; 06
- Accession Number:
edspap.20240404064