CNN-based People Detection in Voxel Space using Intensity Measurements and Point Cluster Flattening
In this paper, real-time people detection is demonstrated in a relatively large indoor industrial robot cell as well as in an outdoor environment. Six depth sensors mounted at the ceiling are used to generate a merged point cloud of the cell. The merged point cloud is segmented into clusters and flattened into gray-scale 2D images in the xy and xz planes. These images are then used as input to a classifier based on convolutional neural networks (CNNs). The final output is the 3D position (x, y, z) and a bounding box representing the human. The system is able to detect and track multiple humans in real time, both indoors and outdoors. The positional accuracy of the proposed method has been verified against several ground-truth positions and was found to be within the point-cloud voxel size used, i.e. 0.04 m. Tests on outdoor datasets yielded a detection recall of 76.9% and an F1 score of 0.87.
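The flattening step described above can be sketched in a few lines: project a point cluster onto a plane and rasterize it into a gray-scale image, one pixel per voxel-sized cell. This is a minimal illustration, not the authors' implementation; the function name, the max-intensity accumulation rule, and the parameter defaults are assumptions.

```python
import numpy as np

def flatten_cluster(points, intensities, resolution=0.04, plane="xy"):
    """Project a 3D point cluster onto the xy or xz plane and rasterize
    it into a gray-scale 2D image (hypothetical sketch of the paper's
    flattening step; one pixel per voxel-sized cell)."""
    axes = {"xy": (0, 1), "xz": (0, 2)}[plane]
    coords = points[:, axes]
    mins = coords.min(axis=0)
    # Map each point to an integer pixel index at the voxel resolution.
    idx = ((coords - mins) / resolution).astype(int)
    h, w = idx.max(axis=0) + 1
    image = np.zeros((h, w))
    # Keep the strongest intensity per pixel (assumed accumulation rule).
    for (i, j), val in zip(idx, intensities):
        image[i, j] = max(image[i, j], val)
    return image
```

Two such images per cluster (xy and xz) would then feed the CNN classifier as described in the abstract.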
Recommended from our members
Generic implementation of CAD models for nuclear simulation
The goal of this project is to utilize the preexisting framework of GADRAS to simulate the radiation leakage from arbitrary CAD models without sacrificing speed or accuracy. The proposed solution is to use STL files to define models. A three-dimensional binning structure is then created to contain all the elements of the file. This preserves speed without adding higher performance hardware requirements. Finally, the discretization is performed using a three-dimensional framework to utilize GADRAS’ refinement algorithm. The combination of these two enhancements results in an absolute error within 10% for standard conditions and 20% for edge-case conditions. The addition of arbitrary models will simplify the modeling process for complex shapes, allow for more flexible models, and allow for the creation of models that are simply impossible in the current framework.
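The three-dimensional binning structure could work along the following lines: each STL triangle is registered in every grid cell its bounding box overlaps, so a later query only tests the elements in nearby cells instead of the whole mesh. This is a hedged sketch of the general technique, not the project's actual code; the function name and cell size are assumptions.

```python
import numpy as np
from collections import defaultdict

def bin_triangles(triangles, cell_size):
    """Register each STL triangle (an array of three 3D vertices) in every
    3D grid cell overlapped by its axis-aligned bounding box. A spatial
    query then only needs to test the triangles listed for nearby cells.
    (Illustrative sketch; not the project's implementation.)"""
    grid = defaultdict(list)
    for t_id, tri in enumerate(triangles):
        # Integer cell range covered by the triangle's bounding box.
        lo = np.floor(tri.min(axis=0) / cell_size).astype(int)
        hi = np.floor(tri.max(axis=0) / cell_size).astype(int)
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    grid[(i, j, k)].append(t_id)
    return grid
```

Bounding-box binning over-approximates slightly (a cell may list a triangle it does not truly intersect), but it keeps construction simple and fast.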
Embedded Processing and Compression of 3D Sensor Data for Large Scale Industrial Environments
This paper presents a scalable embedded solution for processing and transferring 3D point cloud data. Sensors based on the time-of-flight principle generate data that are processed on a local embedded computer and compressed using an octree-based scheme. The compressed data are transferred to a central node, where the individual point clouds from several nodes are decompressed and filtered based on a novel method for generating intensity values for sensors that do not natively produce such a value. The paper presents experimental results from a relatively large industrial robot cell with an approximate size of 10 m × 10 m × 4 m. The main advantage of processing point cloud data locally on the nodes is scalability. The proposed solution could, with a dedicated Gigabit Ethernet local network, be scaled up to approximately 440 sensor nodes, limited only by the processing power of the central node receiving the compressed data from the local nodes. A compression ratio of 40.5 was obtained when compressing a point cloud stream from a single Microsoft Kinect V2 sensor using an octree resolution of 4 cm.
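The octree-based scheme referred to above can be illustrated with the classic occupancy encoding: the bounding cube is recursively subdivided into eight octants, and each internal node is serialized as one byte whose bits mark the non-empty children, stopping once the cell size reaches the target resolution (4 cm in the paper). The sketch below shows that idea under assumed names and a simplified recursion; it is not the paper's implementation.

```python
import numpy as np

def encode_octree(points, center, half_size, resolution):
    """Serialize point occupancy as a depth-first octree byte stream:
    one byte per internal node, each bit flagging a non-empty child
    octant. Subdivision stops at the given resolution. (Illustrative
    sketch of octree-based point cloud compression, not the paper's code.)"""
    if half_size <= resolution or len(points) == 0:
        return b""
    # Assign each point to one of the eight child octants.
    octant = (points > center).astype(int)
    codes = octant[:, 0] * 4 + octant[:, 1] * 2 + octant[:, 2]
    byte = 0
    children = []
    for c in range(8):
        sub = points[codes == c]
        if len(sub):
            byte |= 1 << c
            offset = (np.array([c >> 2 & 1, c >> 1 & 1, c & 1]) - 0.5) * half_size
            children.append((sub, center + offset, half_size / 2))
    out = bytes([byte])
    for sub, c_center, c_half in children:
        out += encode_octree(sub, c_center, c_half, resolution)
    return out
```

The compression gain comes from replacing raw per-point coordinates (typically 12 bytes or more per point) with a shared occupancy tree, at the cost of quantizing positions to the octree resolution.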