Kalman filter based feature analysis for tracking people from airborne images
Recently, the analysis of mass events in real time using computer vision techniques has become a very important research field. In particular, understanding the motion of people can help to prevent unpleasant conditions. Understanding the behavioral dynamics of people can also help to estimate future states of underground passages, public entrances such as shopping centers, or streets. To bring an automated solution to this problem, we propose a novel approach using airborne image sequences. Although airborne image resolutions are not high enough to see each person in detail, we can still notice a change of color components in the place where a person exists. Therefore, we propose a probabilistic framework based on color feature detection in order to detect people automatically. The extracted local features behave as observations of the probability density function (pdf) of the people locations to be estimated. Using an adaptive kernel density estimation method, we estimate the corresponding pdf. First, we use the estimated pdf to detect the boundaries of dense crowds. After that, using the background information of the dense crowds and the previously extracted local features, we detect the remaining people in non-crowd regions automatically for each image in the sequence. We benefit from Kalman filtering to track the motion of the detected people. To test our algorithm, we use a stadium entrance image data set taken from an airborne camera system. Our experimental results indicate the possible usage of the algorithm in real-life mass events. We believe that the proposed approach can also provide crucial information to police departments and crisis management teams, enabling more detailed observation of people in large open-area events to prevent possible accidents or unpleasant conditions.
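The tracking step above relies on a standard Kalman filter. As an illustrative sketch only (not the paper's implementation, whose motion model and parameters are not given in the abstract), a constant-velocity Kalman filter for one coordinate of a detected person can be written with hand-rolled 2x2 matrices:

```python
# Minimal sketch: constant-velocity Kalman filter for one coordinate of a
# tracked person. State x = [position, velocity]; only position is observed.
# dt, q (process noise) and r (measurement noise) are illustrative values.

def kalman_track(measurements, dt=1.0, q=0.01, r=0.25):
    """Return filtered positions from noisy position measurements."""
    x = [measurements[0], 0.0]            # initial state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]          # initial state covariance
    out = []
    for z in measurements:
        # --- predict with the constant-velocity motion model ---
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # --- update with the position measurement z ---
        S = P[0][0] + r                   # innovation covariance
        K = [P[0][0] / S, P[1][0] / S]    # Kalman gain
        y = z - x[0]                      # innovation
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x[0])
    return out
```

Called as `kalman_track([x0, x1, ...])` on the per-frame detections, it smooths the track and converges to the true trajectory for roughly linear motion.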
A semi-automatic procedure for texturing of laser scanning point clouds with Google Streetview images
We introduce a method to texture 3D urban models with photographs that works even for Google Streetview images and can be carried out with currently available free software. This allows realistic texturing even when it is not possible or cost-effective to (re)visit a scanned site to take textured scans or photographs. Mapping a photograph onto a 3D model requires knowledge of the intrinsic and extrinsic camera parameters. The common way to obtain the intrinsic parameters of a camera is by taking several photographs of a calibration object with an a priori known structure. The extra challenge of using images from a database such as Google Streetview, rather than your own photographs, is that it does not allow for any controlled calibration. To overcome this limitation, we propose to calibrate the panoramic viewer of Google Streetview using Structure from Motion (SfM) on any structure of which Google Streetview offers views from multiple angles. After this, the extrinsic parameters for any other view can be calculated from 3 or more tie points between the image from Google Streetview and a 3D model of the scene. These point correspondences can either be obtained automatically or selected by manual annotation. We demonstrate how this procedure provides realistic 3D urban models in an easy and effective way, by using it to texture a publicly available point cloud from a terrestrial laser scan made in Bremen, Germany, with a screenshot from Google Streetview, after estimating the focal length from views from Paris, France.
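The abstract recovers the focal length via SfM; purely for intuition, if the panoramic viewer behaved as an ideal pinhole camera with a known horizontal field of view (a hypothetical assumption, not the paper's method), the intrinsic matrix could be approximated directly:

```python
import math

# Illustrative only: intrinsic matrix K for an ideal pinhole viewer, given
# image size and a hypothetical horizontal field of view. The paper instead
# estimates the focal length with Structure from Motion.

def pinhole_intrinsics(width_px, height_px, hfov_deg):
    """Approximate K = [[f, 0, cx], [0, f, cy], [0, 0, 1]]."""
    f = (width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    return [[f, 0.0, width_px / 2.0],
            [0.0, f, height_px / 2.0],
            [0.0, 0.0, 1.0]]
```

With K known, the extrinsics for a view follow from 3 or more 2D-3D tie points (a perspective-n-point problem).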
Active Shapes for Automatic 3D Modeling of Buildings
Recent technological developments help us to acquire high-quality 3D measurements of our urban environment. However, these measurements, which come as point clouds or Digital Surface Models (DSM), do not directly give 3D geometrical models of buildings. In addition, they are not suitable for fast 3D rendering. Therefore, detection and 3D reconstruction of buildings is an important research topic. We introduce a new active shape fitting algorithm for generating building models. Two significant improvements of the introduced method compared to our previous active shape algorithm are: (1) here, active shapes are initialized as cubes; and (2) the new energy function is computed by measuring the distances of the vertical cube faces to the building facade points and by measuring the mean distance between the rooftop points and the top face of the cube. The proposed method helps to obtain 3D building models automatically even when the facade borders are difficult to detect because of neighboring trees or other objects. For testing the proposed approach, we use Airborne Laser Scanning (ALS) data of an area in Delft, The Netherlands. We compare the proposed 3D active shape fitting method with a previously developed 2D method. The results show the possible usage of the algorithm when simple and easy-to-render 3D models of large cities are needed.
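The energy function described above can be sketched as follows. This is a hedged illustration of the idea only (the paper's exact weighting and distance terms may differ): mean distance of facade points to the nearest vertical cube face, plus mean distance of rooftop points to the top face.

```python
# Sketch of a cube-fitting energy in the spirit of the abstract: lower energy
# means the axis-aligned cube fits the facade and rooftop points better.
# cube = (xmin, xmax, ymin, ymax, ztop); points are (x, y, z) tuples.

def cube_energy(facade_pts, roof_pts, cube):
    xmin, xmax, ymin, ymax, ztop = cube

    def face_dist(p):
        # distance of a facade point to the nearest vertical cube face
        x, y, _ = p
        return min(abs(x - xmin), abs(x - xmax),
                   abs(y - ymin), abs(y - ymax))

    facade_term = sum(face_dist(p) for p in facade_pts) / len(facade_pts)
    roof_term = sum(abs(p[2] - ztop) for p in roof_pts) / len(roof_pts)
    return facade_term + roof_term
```

An optimizer would then adjust the cube parameters (position, footprint, height) to minimize this energy.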
Accuracy assessment of building point clouds automatically generated from iPhone images
Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud on a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of the iPhone generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. Mean (μ) and standard deviation (σ) of the roughness histograms are calculated as (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds respectively. Our experimental results indicate the possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, and quick and real-time change detection purposes. However, further insights should be obtained first on the circumstances that are needed to guarantee a successful point cloud generation from smartphone images.
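The evaluation metrics above (mean point-to-point distance, outlier fraction) can be sketched with a brute-force nearest-neighbour search. This is an illustrative sketch, not the paper's pipeline: the outlier threshold is a hypothetical value, and a practical implementation would use a spatial index such as a k-d tree for speed.

```python
import math

# Brute-force cloud-to-cloud evaluation sketch: for each point of the source
# cloud (e.g. iPhone-derived), find the distance to its nearest neighbour in
# the reference cloud (e.g. TLS), then report the mean distance and the
# fraction of points beyond an illustrative outlier threshold.

def cloud_to_cloud(src, ref, outlier_thresh=0.5):
    dists = [min(math.dist(p, q) for q in ref) for p in src]
    mean_dist = sum(dists) / len(dists)
    outlier_frac = sum(d > outlier_thresh for d in dists) / len(dists)
    return mean_dist, outlier_frac
```

For real clouds with millions of points, replacing the inner `min` with a k-d tree query keeps the evaluation tractable.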
Automatic classification of trees from laser scanning point clouds
Development of laser scanning technologies has promoted tree monitoring studies to a new level, as laser scanning point clouds enable accurate 3D measurements in a fast and environmentally friendly manner. In this paper, we introduce a probability matrix computation based algorithm for automatically classifying laser scanning point clouds into 'tree' and 'non-tree' classes. Our method uses the 3D coordinates of the laser scanning points as input and generates a new point cloud which holds a label for each point indicating whether it belongs to the 'tree' or 'non-tree' class. To do so, a grid surface is assigned to the lowest height level of the point cloud. The grid cells are filled with probability values which are calculated by checking the point density above each cell. Since the tree trunk locations appear with very high values in the probability matrix, selecting the local maxima of the grid surface helps to detect the tree trunks. Further points are assigned to tree trunks if they appear in the close proximity of trunks. Since heavy mathematical computations (such as point cloud organization, detailed 3D shape detection methods, graph network generation) are not required, the proposed algorithm works very fast compared to existing methods. The tree classification results are found reliable even on point clouds of cities containing many different objects. As the most significant weakness, false detection of light poles, traffic signs and other objects close to trees cannot be prevented. Nevertheless, the experimental results on mobile and airborne laser scanning point clouds indicate the possible usage of the algorithm as an important step for tree growth observation, tree counting and similar applications. While the laser scanning point cloud makes it possible to classify even very small trees, the accuracy of the results is reduced in low point density areas further away from the scanning location. These advantages and disadvantages of the two laser scanning point cloud sources are discussed in detail.
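The grid-probability idea above can be sketched as follows. This is a minimal illustration under assumed parameters (cell size and the density threshold are hypothetical, and the paper selects local maxima rather than applying a simple threshold):

```python
from collections import defaultdict

# Sketch of the trunk-seed step: bin all points into a 2D ground grid, turn
# the per-cell counts into a probability matrix (fraction of all points above
# each cell), and keep cells whose fraction exceeds an illustrative threshold.

def trunk_cells(points, cell=1.0, min_frac=0.2):
    """Return grid cells whose point-density fraction suggests a tree trunk."""
    counts = defaultdict(int)
    for x, y, z in points:
        counts[(int(x // cell), int(y // cell))] += 1
    total = len(points)
    return [c for c, n in counts.items() if n / total >= min_frac]
```

Points near a returned cell would then be labeled 'tree'; everything else stays 'non-tree'.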
Environmental thresholds for mass-extinction events
While global-average temperatures are rapidly rising, more researchers have been shifting their focus towards past mass-extinction events in order to show the relations between temperature increase and the temperature thresholds which might trigger the extinction of species. These temperature and mass-extinction relation graphs are found practical by conservationists and policy makers for determining temperature threshold values to set climate targets. Unfortunately, this approach might be dangerous, because mass-extinction events (MEEs) are related to many environmental parameters, and temperature is only one of them. Herein we provide a more comprehensive evaluation of the environmental thresholds required to sustain a habitable planet. In addition, we suggest actions within the sustainable-development goals (SDGs) to observe those critical environmental parameters, in order to keep the planet habitable for the currently living species. © 2022
Automatic registration of iPhone images to laser point clouds of urban structures using shape features
Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two data sets from different sensor sources. We use iPhone camera images, taken by the application user in front of the urban structure of interest, and high-resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capturing position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image whose grayscale intensity levels encode the distance from the image acquisition position. We benefit from local features for registering the iPhone image to the generated range image. In this article, we have applied a registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh which is generated from the LIDAR point cloud. Our experimental results indicate the possible usage of the proposed algorithm framework for 3D urban map updating and enhancing purposes.
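The range-image intensity step described above can be sketched in a few lines. This sketch covers only the distance-to-grayscale mapping under the assumption that the acquisition position is known; the projection of points into pixel coordinates is omitted for brevity.

```python
import math

# Sketch: map each point's distance from the image acquisition position to an
# 8-bit gray level, as used when converting a point cloud region into a range
# image. Nearest points map to 0, farthest to 255.

def range_intensities(points, camera_pos):
    d = [math.dist(p, camera_pos) for p in points]
    dmin, dmax = min(d), max(d)
    span = (dmax - dmin) or 1.0   # guard against a degenerate flat cloud
    return [round(255 * (x - dmin) / span) for x in d]
```

Local features (e.g. corner or blob detectors) extracted from such a range image can then be matched against features from the iPhone photograph.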
Reinforcement learning helps SLAM: Learning to build maps
In this research, we investigate the use of Reinforcement Learning (RL) as an effective and robust solution for exploring unknown indoor environments and reconstructing their maps. We benefit from a Simultaneous Localization and Mapping (SLAM) algorithm for real-time robot localization and mapping. Three different reward functions are compared and tested in environments of growing complexity. The performance of the three RL-based path planners is assessed not only on the training environments, but also on an environment unseen during training, to test the generalization properties of the policies. The results indicate that RL-based planners trained to maximize the coverage of the map are able to consistently explore and construct the maps of different indoor environments.
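The coverage-maximizing objective mentioned above can be illustrated with a toy reward. This is a generic sketch of the "newly mapped area" idea only; the paper compares three reward functions whose exact definitions are not given in the abstract.

```python
# Illustrative coverage-style reward for map-building RL: the agent is
# rewarded for each grid cell that becomes known (mapped) after its action.
# prev_known / new_known are equal-length 0/1 knowledge grids, where 1 means
# the cell has been observed by the SLAM map so far.

def coverage_reward(prev_known, new_known):
    """Number of cells newly observed by the last action."""
    return sum(1 for a, b in zip(prev_known, new_known) if a == 0 and b == 1)
```

Maximizing the discounted sum of such rewards pushes the policy toward trajectories that keep revealing unexplored space, which matches the result that coverage-trained planners explore consistently.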