    4D Scene Reconstruction in Multi-Target Scenarios

    In this report, we introduce a comprehensive approach to the 4D reconstruction of dynamic scenes containing multiple walking pedestrians. The input of the process is a point cloud sequence recorded by a rotating multi-beam Lidar sensor that monitors the scene from a fixed position. The output is a geometrically reconstructed and textured scene containing moving 4D people models, which follow in real time the trajectories of the walking pedestrians observed in the Lidar data stream. Our implemented system consists of four main steps. First, we separate foreground and background regions in each point cloud frame of the sequence with a robust probabilistic approach. Second, we perform moving pedestrian detection and tracking: among the point cloud regions classified as foreground, we separate the individual objects and associate the corresponding person positions over the consecutive frames of the Lidar measurement sequence. Third, we geometrically reconstruct the ground, walls, and other objects of the background scene, and texture the obtained models with photos taken of the scene. Fourth, we insert into the scene textured 4D models of moving pedestrians, created in advance in a special 4D reconstruction studio. Finally, we integrate the system elements into a joint dynamic scene model and visualize the 4D scenario.
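
    As a rough illustration of the first step, the sketch below maintains a per-cell Gaussian background model over a (ring x azimuth) range-image grid and labels returns that are significantly closer than the learned background surface as foreground. This is a minimal sketch, assuming a fixed sensor and a simple running-Gaussian model; the grid size, adaptation rate, threshold, and the ring index derived from elevation are all illustrative assumptions, not the exact method of the paper.

```python
# Minimal sketch of probabilistic foreground-background separation for a
# fixed rotating multi-beam Lidar; all parameters below are assumptions.
import numpy as np

RINGS, BINS = 64, 1024   # assumed sensor geometry (ring count, azimuth bins)
ALPHA = 0.02             # background adaptation rate (assumption)
K_SIGMA = 3.0            # foreground decision threshold (assumption)

mean = np.full((RINGS, BINS), np.nan)   # running mean range per grid cell
var = np.full((RINGS, BINS), 1.0)       # running range variance per grid cell

def rasterize(points):
    """Map (x, y, z) points to range-image cells: returns ring, column, range."""
    rng = np.linalg.norm(points[:, :2], axis=1)
    azimuth = np.arctan2(points[:, 1], points[:, 0])
    col = ((azimuth + np.pi) / (2 * np.pi) * BINS).astype(int) % BINS
    # A real driver reports the laser ring index directly; deriving it from
    # the elevation angle here is purely illustrative.
    elev = np.arctan2(points[:, 2], rng)
    row = np.clip(((elev + 0.4) / 0.8 * RINGS).astype(int), 0, RINGS - 1)
    return row, col, rng

def update_and_classify(points):
    """Label each point of one frame as foreground (True) or background."""
    row, col, rng = rasterize(points)
    mu, sigma = mean[row, col], np.sqrt(var[row, col])
    seen = ~np.isnan(mu)
    fg = np.zeros(len(points), dtype=bool)
    # Foreground = return significantly closer than the learned background.
    fg[seen] = (mu[seen] - rng[seen]) > K_SIGMA * sigma[seen]
    # Initialize cells on their first observation.
    new = ~seen
    mean[row[new], col[new]] = rng[new]
    # Adapt the model only from background-labelled returns.
    bg = seen & ~fg
    d = rng[bg] - mean[row[bg], col[bg]]
    mean[row[bg], col[bg]] += ALPHA * d
    var[row[bg], col[bg]] += ALPHA * (d * d - var[row[bg], col[bg]])
    return fg
```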

    3D People Surveillance on Range Data Sequences of a Rotating Lidar

    In this paper, we propose an approach to real-time 3D people surveillance, with probabilistic foreground modeling, multiple-person tracking, and online re-identification. Our principal aim is to demonstrate the capabilities of a special range sensor, the rotating multi-beam (RMB) Lidar, as a possible future surveillance camera. We present methodological contributions on two key issues. First, we introduce a hybrid 2D-3D method for robust foreground-background classification of the recorded RMB-Lidar point clouds, which eliminates spurious effects caused by the quantization error of the discretized view angle, by non-linear position corrections of the sensor calibration, and by background flickering, in particular due to the motion of vegetation. Second, we propose a real-time method for moving pedestrian detection and tracking in RMB-Lidar sequences of dense surveillance scenarios, with short- and long-term object assignment. We introduce a novel person re-identification algorithm based solely on the Lidar measurements, utilizing in parallel the range and intensity channels of the sensor, which provide biometric features. Quantitative evaluation is performed on seven outdoor Lidar sequences containing various multi-target scenarios under challenging outdoor conditions with low point density and multiple occlusions.
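
    The short-term object assignment mentioned above can be pictured as a frame-to-frame matching problem between predicted track positions and detected cluster centroids. The sketch below is a minimal, assumed formulation using a constant-velocity prediction, a gating distance, and the Hungarian algorithm; the paper's actual tracker and its long-term re-identification stage are not reproduced here.

```python
# Minimal sketch of short-term track-to-detection assignment on the ground
# plane; the gate radius and motion model are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

GATE = 1.0  # maximum association distance in metres (assumption)

class Track:
    def __init__(self, tid, pos):
        self.tid, self.pos, self.vel = tid, np.asarray(pos, float), np.zeros(2)

    def predict(self, dt):
        return self.pos + self.vel * dt   # constant-velocity prediction

    def update(self, pos, dt):
        pos = np.asarray(pos, float)
        self.vel = (pos - self.pos) / dt
        self.pos = pos

def associate(tracks, detections, dt=0.1):
    """Match predicted track positions to detected cluster centroids."""
    if not tracks or len(detections) == 0:
        return [], list(range(len(detections)))
    pred = np.stack([t.predict(dt) for t in tracks])          # (T, 2)
    det = np.asarray(detections, float)                       # (D, 2)
    cost = np.linalg.norm(pred[:, None] - det[None], axis=2)  # pairwise distances
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < GATE]
    unmatched = [d for d in range(len(det))
                 if d not in {c for _, c in matches}]
    for r, c in matches:
        tracks[r].update(det[c], dt)
    return matches, unmatched   # unmatched detections spawn new tracks
```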

    Viewpoint-free Video Synthesis with an Integrated 4D System


    Lidar-based Gait Analysis and Activity Recognition in a 4D Surveillance System

    This paper presents new approaches for gait and activity analysis based on the data streams of a Rotating Multi-Beam (RMB) Lidar sensor. The proposed algorithms are embedded into an integrated 4D vision and visualization system, which can analyze and interactively display real scenarios in natural outdoor environments with walking pedestrians. The main focus of the investigation is gait-based person re-identification during tracking, and the recognition of specific activity patterns such as bending, waving, making phone calls, and checking the time on a wristwatch. The descriptors for training and recognition are extracted from realistic outdoor surveillance scenarios, where multiple pedestrians walk in the field of interest along possibly intersecting trajectories, so the observations are often affected by occlusions or background noise. Since no public database is available for such scenarios, we created and published on our website a new Lidar-based outdoor gait and activity dataset, which contains point cloud sequences of 28 different persons extracted and aggregated from 35 minutes of measurements. The presented results confirm that both efficient gait-based identification and activity recognition are achievable in the sparse point clouds of a single RMB Lidar sensor. After extracting the people trajectories, we synthesized a free-viewpoint video, in which moving avatar models follow the trajectories of the observed pedestrians in real time, with the leg movements of the animated avatars synchronized to the real gait cycles observed in the Lidar stream.
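
    One simple way to picture a Lidar-based gait cue is to track how far apart a pedestrian's legs spread along the walking direction from frame to frame, then read the step period off the autocorrelation of that signal. The sketch below follows this idea under assumed parameters (leg height band, frame rate); it is an illustrative stand-in, not the descriptor set used in the paper.

```python
# Minimal sketch of gait-cycle estimation from a tracked pedestrian's
# point cloud; the height band and frame rate are assumptions.
import numpy as np

FPS = 15                 # assumed Lidar frame rate in Hz
LEG_BAND = (0.1, 0.7)    # assumed leg height band above ground, in metres

def leg_spread(cloud, heading):
    """Spread of leg-level points along a unit 2D heading vector, one frame."""
    legs = cloud[(cloud[:, 2] > LEG_BAND[0]) & (cloud[:, 2] < LEG_BAND[1])]
    if len(legs) < 5:
        return np.nan   # too few returns, e.g. under occlusion
    proj = legs[:, :2] @ heading
    return proj.max() - proj.min()   # front-to-back leg separation

def gait_period(spreads):
    """Estimate the full gait cycle length (seconds) from the spread signal."""
    s = np.asarray(spreads, float)
    s = s[~np.isnan(s)]
    s = s - s.mean()
    ac = np.correlate(s, s, mode="full")[len(s) - 1:]   # non-negative lags
    # The first local autocorrelation maximum approximates the step lag;
    # the spread peaks once per step, i.e. twice per full gait cycle.
    for lag in range(1, len(ac) - 1):
        if ac[lag] > ac[lag - 1] and ac[lag] > ac[lag + 1]:
            return 2 * lag / FPS
    return None
```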

    Structural Information in the Space of Sensor Networks

    During the project we built up several measurement environments and achieved results on the corresponding tasks: 1. Multi-camera systems: motion tracking, recognition of object behavior, and measurement of the structural geometry of the scene. 2. Devices for depth measurement: detection of motion features and 3D shapes from the images and point clouds of LIDAR and Time-of-Flight (TOF) cameras. 3. Aerial and medical images and image series: tracking of changes and detection of characteristic structures. The project also produced significant theoretical results: 1. Recognition of the characteristic structures of the examined scene and of their changes; 2. New image descriptors for recognizing low-resolution shapes and for producing fine-resolution active contours; 3. Recognition and tracking of unusual motion sequences and special behaviors in video sequences; 4. Filtering of depth information in 2D (graphs, deconvolution) and 3D (LIDAR, TOF) data. The results were published at the leading conferences and in the major journals of the field.

    Towards 4D Virtual City Reconstruction From Lidar Point Cloud Sequences

    In this paper, we propose a joint approach to virtual city reconstruction and dynamic scene analysis based on the point cloud sequences of a single car-mounted Rotating Multi-Beam (RMB) Lidar sensor. The aim of this work is to create 4D spatio-temporal models of large dynamic urban scenes containing various moving and static objects. Standalone RMB Lidar devices have frequently been applied in robot navigation tasks and have proved efficient in moving object detection and recognition. However, they have not yet been widely exploited for the geometric approximation of ground surfaces and building facades, due to the sparseness and inhomogeneous density of the individual point cloud scans. In our approach, we propose an automatic registration method for the consecutive scans without any additional sensor information such as an IMU, and introduce a process for simultaneously extracting reconstructed surfaces, motion information, and objects from the registered dense point cloud, augmented with per-point time stamp information.
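
    The scan-to-scan registration described above can be approximated, in its simplest form, by point-to-point ICP between consecutive frames, using no IMU or other auxiliary sensors. The sketch below is a generic Kabsch-based ICP with assumed iteration and convergence settings; the paper's actual registration method is not specified here and may differ substantially.

```python
# Minimal sketch of pairwise rigid registration of consecutive Lidar scans
# by point-to-point ICP; iteration count and tolerance are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30, tol=1e-6):
    """Rigidly align `source` (N,3) to `target` (M,3); returns R, t."""
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    prev_err = np.inf
    tree = cKDTree(target)
    for _ in range(iters):
        dist, idx = tree.query(src)             # nearest-neighbour matches
        matched = target[idx]
        # Closed-form rigid fit (Kabsch/SVD) between matched point sets.
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T                 # guard against reflections
        t_step = mu_m - R_step @ mu_s
        src = src @ R_step.T + t_step           # apply incremental transform
        R, t = R_step @ R, R_step @ t + t_step  # accumulate total transform
        err = dist.mean()
        if abs(prev_err - err) < tol:           # converged
            break
        prev_err = err
    return R, t
```

    Chaining the resulting transforms over the sequence would place all scans in a common frame, after which surfaces and moving objects can be separated using the per-point time stamps, as outlined in the abstract.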