7 research outputs found

    Bayesian network based computer vision algorithm for traffic monitoring using video

    This paper presents a novel approach to estimating the 3D velocity of vehicles from video. We propose using a Bayesian network to classify objects into pedestrians and different types of vehicles, using 2D features extracted from video taken by a stationary camera. The classification allows us to estimate an approximate 3D model for each class. The height information is then used, together with the image coordinates of the object and the camera's perspective projection matrix, to estimate the object's 3D world coordinates and hence its 3D velocity. Accurate velocity and acceleration estimates are both very useful parameters in traffic monitoring systems. We show results of highly accurate classification and measurement of vehicle motion from real-life traffic video streams.
    Kumar, P.; Ranganath, S.; Weimin, H
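
    As a rough illustration of the back-projection step the abstract describes (a sketch under assumptions, not the authors' implementation), the snippet below intersects a pixel's viewing ray with a horizontal plane at an assumed class-specific height, using a 3x4 perspective projection matrix, and differentiates the recovered world track to get a velocity; P, track, object_height, and fps are hypothetical inputs.

```python
import numpy as np

def backproject_to_plane(P, u, v, z):
    """Intersect the viewing ray of pixel (u, v) with the horizontal plane Z = z.

    P is the 3x4 camera perspective projection matrix; returns world (X, Y, z).
    """
    a = P[0] - u * P[2]          # (p1 - u*p3) . [X, Y, z, 1] = 0
    b = P[1] - v * P[2]          # (p2 - v*p3) . [X, Y, z, 1] = 0
    A = np.array([[a[0], a[1]], [b[0], b[1]]])
    rhs = -np.array([a[2] * z + a[3], b[2] * z + b[3]])
    X, Y = np.linalg.solve(A, rhs)
    return np.array([X, Y, z])

def velocity_3d(P, track, object_height, fps):
    """Finite-difference 3D velocity from an image track [(u, v), ...] of a point
    on the object's top, assuming a class-specific object height."""
    pts = np.array([backproject_to_plane(P, u, v, object_height) for u, v in track])
    return np.diff(pts, axis=0) * fps   # world units per second between consecutive frames
```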

    Modeling Boundaries of Influence among Positional Uncertainty Fields

    Within a GIS environment, the proper use of information requires the identification of the uncertainty associated with it. As such, there has been a substantial amount of research dedicated to describing and quantifying spatial data uncertainty. Recent advances in sensor technology and image analysis techniques are making image-derived geospatial data increasingly popular. Along with developments in sensor and image analysis technologies have come departures from conventional point-by-point measurements. Current advancements support the transition from traditional point measures to novel techniques that allow the extraction of complex objects as single entities (e.g., road outlines, buildings). As the methods of data extraction advance, so too must the methods of estimating the uncertainty associated with the data. Not only will object uncertainties be modeled, but the connections between these uncertainties will also be estimated. The current methods for determining spatial accuracy for lines and areas typically involve defining a zone of uncertainty around the measured line, within which the actual line exists with some probability. Yet within the research community, the proper shape of this 'uncertainty band' is a topic of much dissent. Less contemplated is the manner in which such areas of uncertainty interact and influence one another. The development of positional error models, from the epsilon band and error band to the rigorous G-band, has focused on statistical models for estimating independent line features. Yet these models are not suited to modeling the interactions between the uncertainty fields of adjacent features. At some point, these distributed areas of uncertainty around the features will intersect and overlap one another. In such instances, a feature's uncertainty zone is defined not only by its measurement, but also by the uncertainty associated with neighboring features. It is therefore useful to understand and model the interactions between adjacent uncertainty fields. This thesis presents an analysis of estimation and modeling techniques for spatial uncertainty, focusing on the interactions among fields of positional uncertainty for image-derived linear features. Such interactions are assumed to occur between linear features derived from varying methods and sources, allowing the application of an independent error model. A synthetic uncertainty map is derived for a set of linear and areal features, containing distributed fields of uncertainty for individual features. These uncertainty fields are shown to be advantageous for communication and user understanding, as well as being conducive to a variety of image processing techniques. Such image techniques can combine overlapping uncertainty fields to model the interaction between them. Deformable contour models are used to extract sets of continuous uncertainty boundaries for linear features, and are subsequently applied to extract a boundary of influence shared by two uncertainty fields. These methods are then applied to a complex scene of uncertainties, modeling the interactions of multiple objects within the scene. The resulting boundary-of-influence representations are distinct from previous independent error models, which do not take neighboring influences into account. By modeling the boundary of interaction among the uncertainties of neighboring features, a more integrated approach to error modeling and analysis can be developed for complex spatial scenes and datasets.
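
    The interaction between neighboring uncertainty fields can be prototyped without the deformable-contour machinery used in the thesis. The sketch below is an assumption-laden stand-in (Gaussian distance-decay fields, synthetic parallel lines, an arbitrary 0.05 overlap cut-off): it rasterizes two neighboring line features, builds a distributed uncertainty field around each, and marks the locus where neither field dominates as a crude boundary of influence.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def uncertainty_field(feature_mask, sigma):
    """Distance-decay positional uncertainty field around a rasterized feature.

    feature_mask : boolean grid, True on the measured feature's pixels.
    sigma        : assumed positional standard deviation, in pixels.
    """
    d = distance_transform_edt(~feature_mask)      # distance of each cell to the feature
    return np.exp(-0.5 * (d / sigma) ** 2)         # Gaussian fall-off with distance

# Two synthetic, roughly parallel line features on a 200 x 200 grid.
a = np.zeros((200, 200), dtype=bool)
b = np.zeros((200, 200), dtype=bool)
a[:, 90] = True       # vertical line feature
b[:, 110] = True      # neighboring vertical line feature

fa = uncertainty_field(a, sigma=6.0)
fb = uncertainty_field(b, sigma=9.0)

# Crude boundary of influence: cells where dominance flips between the two
# fields, restricted to the region where both fields are still appreciable.
diff = fa - fb
flips = np.signbit(diff[:, :-1]) != np.signbit(diff[:, 1:])
overlap = np.minimum(fa, fb)[:, :-1] > 0.05
boundary = flips & overlap
```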

    Evaluation of automatic building detection approaches combining high resolution images and LiDAR data

    In this paper, two main approaches for automatic building detection and localization using high spatial resolution imagery and LiDAR data are compared and evaluated: thresholding-based and object-based classification. The thresholding-based approach is founded on the establishment of two threshold values: one refers to the minimum height to be considered a building, defined using the LiDAR data, and the other refers to the presence of vegetation, which is defined according to the spectral response. The other approach follows the standard scheme of object-based image classification: segmentation, feature extraction and selection, and classification, here performed using decision trees. In addition, the effect of including contextual relations with shadows in the building detection process is evaluated. Quality assessment is performed at two different levels: area and object. The area level evaluates the building delineation performance, whereas the object level assesses the accuracy in the spatial location of individual buildings. The results obtained show the high efficiency of the evaluated building detection methods, in particular the thresholding-based approach, when the parameters are properly adjusted and adapted to the type of urban landscape considered. © 2011 by the authors.
    The authors appreciate the financial support provided by the Spanish Ministry of Science and Innovation and FEDER in the framework of the projects CGL2009-14220 and CGL2010-19591/BTE, and the support of the Spanish Instituto Geografico Nacional (IGN).
    Hermosilla, T.; Ruiz Fernández, LÁ.; Recio Recio, JA.; Estornell Cremades, J. (2011). Evaluation of automatic building detection approaches combining high resolution images and LiDAR data. Remote Sensing, 3, 1188-1210. https://doi.org/10.3390/rs3061188
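
    A minimal sketch of the thresholding-based approach evaluated in this paper can be written directly from the description above, assuming a LiDAR-derived normalized DSM for the height test and NIR/red bands for the vegetation test; the band names and the 2.5 m and 0.3 NDVI cut-offs are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def detect_buildings(ndsm, nir, red, min_height=2.5, ndvi_max=0.3):
    """Thresholding-based building mask (illustrative sketch).

    ndsm : normalized digital surface model from LiDAR (object height, meters).
    nir, red : image bands used to flag vegetation through NDVI.
    min_height : minimum height for a pixel to count as a building candidate.
    ndvi_max : pixels with NDVI above this value are treated as vegetation.
    """
    ndvi = (nir - red) / (nir + red + 1e-9)   # avoid division by zero
    return (ndsm >= min_height) & (ndvi <= ndvi_max)
```

    The object-based alternative described in the abstract replaces these fixed thresholds with per-segment features fed to a decision-tree classifier.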

    Automatic building detection and land-use classification in urban environments using high-resolution images and LiDAR data

    This thesis aims to establish a reliable methodology for automatic building detection, applied to the automatic classification of land use in urban environments, using high-resolution aerial imagery and LiDAR data. These data correspond to the information acquired within the framework of the Plan Nacional de Ortofotografía Aérea (PNOA) and are available to the Spanish public administrations. To locate buildings, two techniques using high-resolution images and LiDAR data are adapted and analyzed: the first is based on setting threshold values for height and vegetation, and the second uses an object-based classification approach. The classification of urban environments is carried out using an object-based approach, with objects defined from the cartographic boundaries of cadastral parcels. The qualitative description of the objects for their subsequent classification is performed using a set of descriptive features specifically designed for the characterization of urban environments. The information provided by these features refers to the spectral response of each object or parcel, its texture, its height, and its geometric and shape characteristics. In addition, the context of each object is described at two levels: internal and external. At the internal level, features describing the building and vegetation cover contained within a parcel are extracted. At the external level, global features of the urban block in which the parcel is located are computed. The specific contribution of the descriptive features to the description is analyzed, as well as their contribution to land-use classification.
    Hermosilla Gómez, T. (2011). Detección automática de edificios y clasificación de usos del suelo en entornos urbanos con imágenes de alta resolución y datos LiDAR [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/11232
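
    As a rough illustration of the internal-context descriptors mentioned in the abstract (not the thesis' actual feature set or software), the sketch below computes per-parcel building and vegetation cover fractions from boolean rasters; all input names are hypothetical.

```python
import numpy as np

def internal_context_features(parcel_mask, building_mask, vegetation_mask):
    """Internal-context descriptors for one cadastral parcel (sketch).

    All inputs are boolean rasters on the same grid: the parcel mask delimits
    the parcel, the other two come from a prior building/vegetation detection.
    """
    area = parcel_mask.sum()
    if area == 0:
        return {"building_cover": 0.0, "vegetation_cover": 0.0}
    return {
        "building_cover": (building_mask & parcel_mask).sum() / area,
        "vegetation_cover": (vegetation_mask & parcel_mask).sum() / area,
    }
```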

    Building Detection Using Bayesian Networks

    This paper further explores the use of Bayesian networks for detecting buildings from digital orthophotos. This work differs from current research in building detection in that it utilizes the ability of Bayesian networks to provide probabilistic methods for evidence combination and, via training, to determine how such evidence should be weighted to maximize classification performance. In this vein, we have also utilized expert performance not only to configure the network values but also to adapt the feature extraction pre-processing units to fit human behavior as closely as possible. Results from digital orthophotos of the Washington DC area show that such an approach is feasible, robust, and worth further analysis.
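
    The evidence-combination idea can be illustrated with a deliberately simplified, naive-Bayes-style fusion of independent cues for a single 'building' hypothesis; the prior and likelihood numbers below are invented for illustration, and the paper's trained network is richer than this.

```python
def combine_evidence(prior, likelihoods):
    """Naive-Bayes-style fusion of independent cues for the 'building' hypothesis.

    prior : P(building) before seeing any evidence.
    likelihoods : list of (P(cue | building), P(cue | not building)) pairs
                  for the cues actually observed.
    """
    p_b, p_not = prior, 1.0 - prior
    for l_b, l_not in likelihoods:
        p_b *= l_b
        p_not *= l_not
    return p_b / (p_b + p_not)

# Illustrative numbers only: a strong rooftop-edge cue and a weaker shadow cue.
posterior = combine_evidence(prior=0.2, likelihoods=[(0.8, 0.3), (0.6, 0.5)])
```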