    Toward Guaranteed Illumination Models for Non-Convex Objects

    Illumination variation remains a central challenge in object detection and recognition. Existing analyses of illumination variation typically pertain to convex, Lambertian objects, and guarantee quality of approximation only in an average-case sense. We show that it is possible to build V(vertex)-description convex cone models with worst-case performance guarantees for non-convex Lambertian objects. Namely, a natural verification test based on the angle to the constructed cone is guaranteed to accept any image which is sufficiently well approximated by an image of the object under some admissible lighting condition, and to reject any image that does not have a sufficiently good approximation. The cone models are generated by sampling point illuminations with sufficient density, which follows from a new perturbation bound for point images in the Lambertian model. As the number of point images required for guaranteed verification may be large, we introduce a new formulation for cone-preserving dimensionality reduction, which leverages tools from sparse and low-rank decomposition to reduce the complexity while controlling the approximation error with respect to the original cone.
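    The angle-to-cone verification test described above can be made concrete with a small sketch. The following is a minimal illustration, assuming images are flattened into vectors and the cone is spanned by the columns of a matrix B (one column per sampled point-illumination image); the threshold tau and all names are hypothetical, not taken from the paper. Because the cone is scale-invariant, Euclidean projection onto it, a nonnegative least-squares problem, also yields the angle-minimizing direction.

```python
import numpy as np
from scipy.optimize import nnls

def angle_to_cone(x, B):
    """Angle (radians) between image vector x and the convex cone
    { B @ a : a >= 0 }, via Euclidean projection with nonnegative
    least squares."""
    a, _ = nnls(B, x)                      # coefficients a >= 0
    p = B @ a                              # nearest cone point to x
    if np.linalg.norm(p) == 0.0:
        return np.pi / 2                   # no nonnegative component
    c = x @ p / (np.linalg.norm(x) * np.linalg.norm(p))
    return np.arccos(np.clip(c, -1.0, 1.0))

def verify(x, B, tau):
    """Accept x iff its angle to the cone is at most tau (radians)."""
    return angle_to_cone(x, B) <= tau

# Usage: one column of B per sampled point-light image of the object.
rng = np.random.default_rng(0)
B = np.abs(rng.standard_normal((4096, 32)))  # hypothetical 64x64 images
x = B @ np.abs(rng.standard_normal(32))      # an image inside the cone
assert verify(x, B, tau=0.05)
```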

    View generated database

    This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of the surfaces of the object or scene and any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.

    Computer Aided Multi-Data Fusion Dismount Modeling

    Recent research efforts strive to address the growing need for dismount surveillance, tracking, and characterization. Current work in this area utilizes hyperspectral and multispectral imaging systems to exploit spectral properties in order to detect areas of exposed skin and clothing characteristics. Because of their large bandwidth and high resolution, hyperspectral imaging systems are well suited to characterizing and detecting dismounts. This creates a need for a multi-data dismount modeling system in which dismount models can be developed and manipulated. This thesis demonstrates a computer-aided multi-data fused dismount model, which facilitates studies of dismount detection, characterization, and identification. The system is created by fusing pixel mapping, signature attachment, and pixel mixing algorithms. The developed multi-data dismount model produces simulated hyperspectral images that closely represent an image collected by a hyperspectral imager. The dismount model can be modified to fit the researcher's needs. The multi-data model structure allows the employment of a database of signatures acquired from several sources. The model is flexible enough to allow further exploitation, enhancement, and manipulation. The multi-data dismount model developed in this effort fulfills the need for a dismount modeling tool in a hyperspectral imaging environment.
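    The pixel mixing step can be illustrated with a standard linear mixture model, in which each simulated pixel is an abundance-weighted sum of material signatures. This is a hedged sketch: the signature names, band count, and noise model are hypothetical stand-ins, not taken from the thesis.

```python
import numpy as np

def mix_pixel(signatures, abundances, noise_std=0.0, rng=None):
    """Linear mixture model: signatures is (n_materials, n_bands),
    abundances is (n_materials,) nonnegative and summing to 1."""
    abundances = np.asarray(abundances, dtype=float)
    assert np.all(abundances >= 0) and np.isclose(abundances.sum(), 1.0)
    pixel = abundances @ np.asarray(signatures)
    if noise_std > 0:
        rng = rng or np.random.default_rng()
        pixel = pixel + rng.normal(0.0, noise_std, size=pixel.shape)
    return pixel

# A boundary pixel that is 70% skin and 30% fabric across 200 bands:
rng = np.random.default_rng(1)
skin, fabric = rng.random(200), rng.random(200)  # stand-in signatures
mixed = mix_pixel([skin, fabric], [0.7, 0.3], noise_std=0.01, rng=rng)
```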

    Continuous Modeling of 3D Building Rooftops From Airborne LIDAR and Imagery

    In recent years, a number of mega-cities have provided 3D photorealistic virtual models to support the decision-making process for maintaining the cities' infrastructure and environment more effectively. 3D virtual city models are static snapshots of the environment and represent the status quo at the time of their data acquisition. However, cities are dynamic systems that change continuously over time. Accordingly, their virtual representations need to be updated regularly and in a timely manner to allow for accurate analysis and the simulation results that decisions are based upon. The concept of "continuous city modeling" is to progressively reconstruct city models by accommodating changes recognized in the spatio-temporal domain, while preserving unchanged structures. However, developing a universal intelligent machine enabling continuous modeling remains a challenging task. This thesis therefore proposes a novel research framework for continuously reconstructing 3D building rooftops using multi-sensor data. To achieve this goal, we first propose a 3D building rooftop modeling method using airborne LiDAR data. The main focus is the implementation of an implicit regularization method that imposes data-driven building regularity on the noisy boundaries of roof planes when reconstructing 3D building rooftop models. The implicit regularization process is implemented in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). Secondly, we propose a context-based geometric hashing method to align newly acquired image data with existing building models. The novelty is the use of context features to achieve robust and accurate matching results. Thirdly, the existing building models are refined by a newly proposed sequential fusion method. The main advantage of the proposed method is its ability to progressively refine modeling errors frequently observed in LiDAR-driven building models. The refinement process is conducted in the framework of MDL combined with HAT, and Markov Chain Monte Carlo (MCMC) coupled with Simulated Annealing (SA) is employed to perform a global optimization. The results demonstrate that the proposed continuous rooftop modeling methods show promise for supporting various critical decisions, by not only reconstructing 3D rooftop models accurately but also updating the models using multi-sensor data.
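    As an illustration of how hypothesize-and-test under an MDL score can work, the following is a minimal sketch. The concrete coding costs are placeholder choices (Gaussian residual coding plus a BIC-style parameter penalty), one common instantiation rather than the thesis's exact formulation; all names are hypothetical.

```python
import numpy as np

def description_length(residuals, n_params, sigma=0.1):
    """Placeholder MDL score: bits to encode residuals under a Gaussian
    model plus a BIC-style penalty for the model parameters."""
    residuals = np.asarray(residuals, dtype=float)
    n = max(len(residuals), 2)
    data_bits = 0.5 * np.sum(residuals**2) / (sigma**2 * np.log(2))
    model_bits = 0.5 * n_params * np.log2(n)
    return data_bits + model_bits

def select_hypothesis(hypotheses, points):
    """Hypothesize-and-test: each hypothesis is (fit_fn, n_params), where
    fit_fn maps the point set to per-point residuals. The hypothesis with
    the smallest description length wins."""
    return min(hypotheses,
               key=lambda h: description_length(h[0](points), h[1]))

# Usage: choose between a horizontal roof edge and a sloped one.
pts = np.array([[0, 0.0], [1, 0.1], [2, 0.2], [3, 0.3]])
flat  = (lambda p: p[:, 1] - p[:, 1].mean(), 1)              # y = c
slope = (lambda p: p[:, 1] - np.polyval(
             np.polyfit(p[:, 0], p[:, 1], 1), p[:, 0]), 2)   # y = a*x + b
best = select_hypothesis([flat, slope], pts)                 # picks slope
```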

    Geometry with a STEM and Gamification Approach: A Didactic Experience in Secondary Education

    Recent societal changes have meant that education has had to adapt to the digital natives of the 21st century. These changes have required a transformation of the current educational paradigm, where active methodologies and ICT have become vehicles for achieving this goal, designing complete teaching sequences with STEM approaches that help students learn. Under a gamified approach, this document addresses a didactic proposal in geometry focused on STEM disciplines. This proposal combines tools such as AR, VR, manipulative materials, and social networks with techniques such as m-learning, cooperative learning, and flipped learning, which make the methodological transformation possible. The research was carried out over two academic years under an action research framework. It started from a traditional methodology and, over two cycles, the methodology was improved with the benefits that gamification brings to STEM proposals in Secondary Education. The data gathered in the experiment were analysed following a mixed method. The learning produced, the strategies employed, successes and errors, and the results of a questionnaire are presented. Evidence shows an improvement in academic performance from a 50% fail rate to a 100% pass rate; most of the students ended up motivated; the whole group participated; more than 80% showed positive emotions; and, thanks to cooperative learning, group cohesion improved. This study was partially funded by the ERDF (European Regional Development Fund) research project of the FEDER-Andalusian Regional Government, grant UAL2020-SEJ-B2086, and by the University of Málaga (Spain). Partial funding for open access charge: Universidad de Málaga.

    An Iconography-Based Modeling Approach for the Spatio-Temporal Analysis of Architectural Heritage

    The study of historic buildings is usually based on the collection and analysis of iconographic sources such as photographs, drawings, engravings, paintings, or sketches. This paper describes a methodological approach for making use of the existing iconographic corpus for the analysis and 3D management of building transformations. Iconography is used for different goals. Firstly, it is a source of geometric information (image-based modeling of anterior states); secondly, it is used for the re-creation of visual appearance (image-based texture extraction); thirdly, it is a proof of the temporal distribution of shape transformations (spatio-temporal modeling); finally, it becomes a visual support for the study of building transformations (visual comparison between different temporal states). The aim is to establish a relation between the iconography used for a hypothetical reconstruction and the 3D representation that depends on it. This approach relates to the idea of using 3D representations as visualization systems capable of reflecting the knowledge developed through the study of a historic building.
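    One way to picture the relation between sources and temporal states is a small data structure that lets each 3D state cite the iconographic evidence it was built from. This is purely illustrative; all type and field names are hypothetical, not from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class IconographicSource:
    identifier: str   # archive reference of the photo/drawing/engraving
    year: int         # date (or estimated date) of the depiction
    kind: str         # "photograph", "engraving", "sketch", ...

@dataclass
class TemporalState:
    period: tuple[int, int]   # validity interval of this 3D state
    mesh_uri: str             # reference to the 3D representation
    evidence: list[IconographicSource] = field(default_factory=list)

# A mid-19th-century state of a facade, documented by one engraving:
state_1850 = TemporalState(
    (1840, 1890), "states/facade_1850.obj",
    [IconographicSource("INV-0421", 1852, "engraving")])
```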

    Geometric Expression Invariant 3D Face Recognition using Statistical Discriminant Models

    Currently there is no complete face recognition system that is invariant to all facial expressions. Although humans find it easy to identify and recognise faces regardless of changes in illumination, pose, and expression, producing a computer system with a similar capability has proved to be particularly difficult. Three-dimensional face models are geometric in nature and therefore have the advantage of being invariant to head pose and lighting. However, they are still susceptible to facial expressions. This can be seen in the decrease in recognition results using principal component analysis when expressions are added to a data set. In order to achieve expression-invariant face recognition systems, we have employed a tensor algebra framework to represent 3D face data with facial expressions in a parsimonious space. Face variation factors are organised into particular subject and facial expression modes. We manipulate this representation using singular value decomposition on sub-tensors, each representing one variation mode. This framework is able to deal with the shortcomings of PCA in less constrained environments while still preserving the integrity of the 3D data. The results show improved recognition rates for faces and facial expressions, even recognising high-intensity expressions that are not in the training datasets. We have determined, experimentally, a set of anatomical landmarks that describe facial expression most effectively. We found that the best placement of landmarks to distinguish different facial expressions is in areas around the prominent features, such as the cheeks and eyebrows. Recognition results using landmark-based face recognition could be improved with better placement. We looked into the possibility of achieving expression-invariant face recognition by reconstructing and manipulating realistic facial expressions. We proposed a tensor-based statistical discriminant analysis method to reconstruct facial expressions and, in particular, to neutralise facial expressions. The synthesised facial expressions are visually more realistic than facial expressions generated using conventional active shape modelling (ASM). We then used the reconstructed neutral faces in the sub-tensor framework for recognition purposes. The recognition results showed slight improvement. Besides biometric recognition, this novel tensor-based synthesis approach could be used in computer games and real-time animation applications.
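    The mode-wise SVD at the heart of such tensor frameworks can be sketched as follows: unfold the data tensor along one variation mode and take the singular vectors of that unfolding (the HOSVD building block). Array shapes and names are illustrative assumptions, not the thesis's data.

```python
import numpy as np

def mode_unfold(T, mode):
    """Matricize tensor T along `mode`: rows index that mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_svd(T, mode):
    """Left singular vectors and values of the mode-`mode` unfolding."""
    U, s, _ = np.linalg.svd(mode_unfold(T, mode), full_matrices=False)
    return U, s

# Hypothetical tensor: 30 subjects x 7 expressions x 3000 shape coords.
T = np.random.default_rng(2).standard_normal((30, 7, 3000))
U_expr, s_expr = mode_svd(T, mode=1)   # basis for the expression mode
```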

    Grouping Uncertain Oriented Projective Geometric Entities with Application to Automatic Building Reconstruction

    Get PDF
    The fully automatic reconstruction of 3D scenes from a set of 2D images has always been a key issue in photogrammetry and computer vision and has not yet been solved satisfactorily. Most current approaches match features between the images based on radiometric cues, followed by a reconstruction using the image geometry. The motivation for this work is the conjecture that, in the presence of highly redundant data, it should be possible to recover the scene structure by grouping together geometric primitives in a bottom-up manner. Oriented projective geometry is used throughout this work, which makes it possible to represent geometric primitives such as points, lines, and planes in 2D and 3D space, as well as projective cameras, together with their uncertainty. The first major contribution of the work is the use of uncertain oriented projective geometry, rather than uncertain projective geometry, which enables the representation of more complex compound entities, such as line segments and polygons in 2D and 3D space as well as 2D edgels and 3D facets. Within the uncertain oriented projective framework, a procedure is developed for testing pairwise relations between the various uncertain oriented projective entities. Again, the novelty lies in the possibility of checking relations between the novel compound entities. The second major contribution of the work is the development of a data structure specifically designed to perform these tests between large numbers of entities efficiently. Building on the ability to efficiently test relations between the geometric entities, a framework for grouping those entities together is developed, and various grouping methods are discussed. The third major contribution of this work is a novel grouping method that, by analyzing the entropy change incurred by incrementally adding observations into an estimation, is able to balance efficiency against robustness and thereby achieve better grouping results. Finally, the applicability of the proposed representations, tests, and grouping methods to purely geometry-based building reconstruction from oriented aerial images is demonstrated. It is shown that, in the presence of highly redundant datasets, it is possible to achieve reasonable reconstruction results by grouping together geometric primitives.
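    A common way to instantiate such a pairwise relation test is a statistical incidence test on uncertain homogeneous entities: the algebraic residual of the relation is propagated to first order and checked against a chi-square threshold. The sketch below does this for 2D point-line incidence; the one-degree-of-freedom statistic, the significance level, and all names are standard assumptions, not necessarily the thesis's exact formulation.

```python
import numpy as np
from scipy.stats import chi2

def incident(x, Sxx, l, Sll, alpha=0.05):
    """Test incidence of homogeneous 2D point x and line l (3-vectors),
    given their 3x3 covariance matrices Sxx and Sll."""
    d = float(l @ x)                    # algebraic incidence residual
    var = l @ Sxx @ l + x @ Sll @ x     # first-order variance of d
    T = d * d / max(var, 1e-15)         # ~ chi2(1) under incidence
    return T <= chi2.ppf(1 - alpha, df=1)

# A point on the line y = 1 (l = [0, 1, -1]), with small uncertainties:
x = np.array([2.0, 1.0, 1.0]);  Sxx = 1e-4 * np.eye(3)
l = np.array([0.0, 1.0, -1.0]); Sll = 1e-4 * np.eye(3)
assert incident(x, Sxx, l, Sll)
```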