168 research outputs found

    3D object reconstruction using computer vision : reconstruction and characterization applications for external human anatomical structures

    Doctoral thesis. Informatics Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    Distributed scene reconstruction from multiple mobile platforms

    Recent research on mobile robotics has produced new designs that provide household robots with omnidirectional motion. The image sensor embedded in these devices motivates the application of 3D vision techniques on them for navigation and mapping purposes. In addition, distributed cheap-sensing systems acting as a unitary entity have recently emerged as an efficient alternative to expensive mobile equipment. In this work we present an implementation of a visual reconstruction method, structure from motion (SfM), on a low-budget, omnidirectional mobile platform, and extend this method to distributed 3D scene reconstruction with several instances of such a platform. Our approach overcomes the challenges posed by the platform. The unprecedented levels of noise produced by the image compression typical of the platform are handled by our feature filtering methods, which ensure suitable feature matching populations for epipolar geometry estimation by means of a strict quality-based feature selection. The robust pose estimation algorithms implemented, along with a novel feature tracking system, enable our incremental SfM approach to deal with the ill-conditioned inter-image configurations provoked by the omnidirectional motion. The feature tracking system developed efficiently manages the feature scarcity produced by noise and outputs quality feature tracks, which allow robust 3D mapping of a given scene even if, due to noise, their length is shorter than is usually assumed for performing stable 3D reconstructions. The distributed reconstruction from multiple instances of SfM is attained by applying loop-closing techniques. Our multiple-reconstruction system merges individual 3D structures and resolves the global scale problem with minimal overlaps, whereas in the literature 3D mapping is obtained by overlapping stretches of sequences. The performance of this system is demonstrated in the two-session case. The management of noise, the stability against ill-conditioned configurations and the robustness of our SfM system are validated in a number of experiments and compared with state-of-the-art approaches. Possible future research areas are also discussed
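
    As a rough illustration of the kind of quality-based feature selection and robust epipolar geometry estimation described above, the sketch below filters matches with a ratio test before a RANSAC essential-matrix fit. It is a generic OpenCV-based outline under the assumption of calibrated images, not the platform-specific pipeline of the thesis; the detector choice, ratio threshold and RANSAC settings are illustrative.

```python
import cv2
import numpy as np

def filter_and_estimate_pose(img1, img2, K, ratio=0.7):
    # Detect and describe features in both images.
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    # Strict quality-based selection: keep only matches passing Lowe's ratio test,
    # so that heavy compression noise does not flood the epipolar estimation.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    raw = matcher.knnMatch(d1, d2, k=2)
    good = [m[0] for m in raw if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])
    # Robust epipolar geometry estimation on the surviving matches.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t, inliers
```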

    Large-area visually augmented navigation for autonomous underwater vehicles

    Submitted to the Joint Program in Applied Ocean Science & Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2005. This thesis describes a vision-based, large-area, simultaneous localization and mapping (SLAM) algorithm that respects the low-overlap imagery constraints typical of autonomous underwater vehicles (AUVs) while exploiting the inertial sensor information that is routinely available on such platforms. We adopt a systems-level approach exploiting the complementary aspects of inertial sensing and visual perception from a calibrated pose-instrumented platform. This systems-level strategy yields a robust solution to underwater imaging that overcomes many of the unique challenges of a marine environment (e.g., unstructured terrain, low-overlap imagery, moving light source). Our large-area SLAM algorithm recursively incorporates relative-pose constraints using a view-based representation that exploits exact sparsity in the Gaussian canonical form. This sparsity allows for efficient O(n) update complexity in the number of images composing the view-based map by utilizing recent multilevel relaxation techniques. We show that our algorithmic formulation is inherently sparse, unlike other feature-based canonical SLAM algorithms, which impose sparseness via pruning approximations. In particular, we investigate the sparsification methodology employed by sparse extended information filters (SEIFs) and offer new insight as to why, and how, its approximation can lead to inconsistencies in the estimated state errors. Lastly, we present a novel algorithm for efficiently extracting consistent marginal covariances useful for data association from the information matrix. In summary, this thesis advances the current state of the art in underwater visual navigation by demonstrating end-to-end automatic processing of the largest visually navigated dataset to date, using data collected from a survey of the RMS Titanic (path length over 3 km and 3,100 m² of mapped area). This accomplishment embodies the summed contributions of this thesis to several current SLAM research issues, including scalability, 6-degree-of-freedom motion, unstructured environments, and visual perception. This work was funded in part by the CenSSIS ERC of the National Science Foundation under grant EEC-9986821, in part by the Woods Hole Oceanographic Institution through a grant from the Penzance Foundation, and in part by an NDSEG Fellowship awarded through the Department of Defense
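
    The exact-sparsity argument above can be made concrete with a small sketch: in the Gaussian canonical (information) form of a delayed-state, view-based map, fusing a relative-pose constraint between poses i and j only touches the four 6x6 information blocks linking those poses. The snippet below is a toy illustration of that structure, not the thesis implementation; the state layout and linearization details are assumed.

```python
import numpy as np

def fuse_relative_pose(Lmbd, eta, i, j, Hi, Hj, Rinv, r):
    """Lmbd, eta: information matrix/vector over stacked 6-DoF poses.
    Hi, Hj: Jacobians of the relative-pose measurement w.r.t. poses i and j.
    Rinv: inverse measurement covariance; r: linearized residual term."""
    si = slice(6 * i, 6 * i + 6)
    sj = slice(6 * j, 6 * j + 6)
    # Only the (i,i), (i,j), (j,i), (j,j) blocks gain information -> exact sparsity.
    Lmbd[si, si] += Hi.T @ Rinv @ Hi
    Lmbd[si, sj] += Hi.T @ Rinv @ Hj
    Lmbd[sj, si] += Hj.T @ Rinv @ Hi
    Lmbd[sj, sj] += Hj.T @ Rinv @ Hj
    eta[si] += Hi.T @ Rinv @ r
    eta[sj] += Hj.T @ Rinv @ r
    return Lmbd, eta
```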

    Automatic face recognition using stereo images

    Face recognition is an important pattern recognition problem in the study of both natural and artificial learning problems. Compared to other biometrics, it is non-intrusive, non-invasive and requires no participation from the subjects. As a result, it has many applications, varying from human-computer interaction to access control and from law enforcement to crowd surveillance. In typical optical image based face recognition systems, the systematic variability arising from representing the three-dimensional (3D) shape of a face by a two-dimensional (2D) illumination intensity matrix is treated as random variability. Multiple examples of the face displaying varying pose and expressions are captured in different imaging conditions. The imaging environment, pose and expressions are strictly controlled and the images undergo rigorous normalisation and pre-processing. This may be implemented in a partially or a fully automated system. Although these systems report high classification accuracies (>90%), they lack versatility and tend to fail when deployed outside laboratory conditions. Recently, more sophisticated 3D face recognition systems harnessing the depth information have emerged. These systems usually employ specialist equipment such as laser scanners and structured light projectors. Although more accurate than 2D optical image based recognition, these systems are equally difficult to implement in a non-co-operative environment. Existing face recognition systems, both 2D and 3D, detract from the main advantages of face recognition and fail to fully exploit its non-intrusive capacity. This is either because they rely too much on subject co-operation, which is not always available, or because they cannot cope with noisy data. The main objective of this work was to investigate the role of depth information in face recognition in a noisy environment. A stereo-based system, inspired by human binocular vision, was devised using a pair of manually calibrated digital off-the-shelf cameras in a stereo setup to compute depth information. Depth values extracted from 2D intensity images using stereoscopy are extremely noisy, and as a result this approach to face recognition is rare. This was confirmed by the results of our experimental work. Noise in the set of correspondences, camera calibration and triangulation led to inaccurate depth reconstruction, which in turn led to poor classifier accuracy for both 3D surface matching and 2.5D depth maps. Recognition experiments are performed on the Sheffield Dataset, consisting of 692 images of 22 individuals with varying pose, illumination and expressions
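
    For context, the stereoscopic depth recovery referred to above reduces to triangulating matched points against the calibrated projection matrices; any noise in the correspondences or in the calibration propagates directly into the 3D estimates, which is the effect reported in this work. A minimal, generic OpenCV sketch (not the thesis code; inputs are assumed already matched and calibrated) is:

```python
import cv2
import numpy as np

def triangulate_depth(P1, P2, pts1, pts2):
    """P1, P2: 3x4 projection matrices from the manual stereo calibration.
    pts1, pts2: 2xN float arrays of matched pixel coordinates in the two views."""
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous 3D points
    X = X_h[:3] / X_h[3]                             # Euclidean 3D coordinates
    # Noise in the correspondences and in the calibration propagates directly
    # into X, which is the source of the inaccurate depth discussed above.
    return X
```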

    A Unified Hybrid Formulation for Visual SLAM

    Visual Simultaneous Localization and Mapping (VSLAM) is the process of estimating the six-degrees-of-freedom ego-motion of a camera, from its video feed, while simultaneously constructing a 3D model of the observed environment. Extensive research in the field over the past two decades has yielded real-time and efficient algorithms for VSLAM, allowing various interesting applications in augmented reality, cultural heritage, robotics and the automotive industry, to name a few. The underlying formula behind VSLAM is a mixture of image processing, geometry, graph theory, optimization and machine learning; the theoretical and practical development of these building blocks has led to a wide variety of algorithms, each leveraging different assumptions to achieve superiority under the presumed conditions of operation. An exhaustive survey on the topic outlined seven main components in a generic VSLAM pipeline, namely: the matching paradigm, visual initialization, data association, pose estimation, topological/metric map generation, optimization, and global localization. Before VSLAM can be claimed a solved problem, numerous challenging subjects pertaining to robustness in each of the aforementioned components have to be addressed, namely: resilience to a wide variety of scenes (poorly textured or self-repeating scenarios), resilience to dynamic changes (moving objects), and scalability for long-term operation (computational resource awareness and management). Furthermore, current state-of-the-art VSLAM pipelines are tailored towards static, basic point cloud reconstructions, an impediment to perception applications such as path planning, obstacle avoidance and object tracking. To address these limitations, this work proposes a hybrid scene representation, where different sources of information extracted solely from the video feed are fused in a hybrid VSLAM system. The proposed pipeline allows for seamless integration of data from pixel-based intensity measurements and geometric entities to produce and make use of a coherent scene representation. The goal is threefold: 1) increase camera tracking accuracy under challenging motions, 2) improve robustness to challenging poorly textured environments and varying illumination conditions, and 3) ensure scalability and long-term operation by efficiently maintaining a global reusable map representation
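
    To make the seven-component decomposition concrete, the skeleton below arranges the components as the per-frame data flow of a hypothetical pipeline. All class and method names are illustrative assumptions, not the interface of the proposed hybrid system or of any existing library.

```python
from typing import Any

class GenericVSLAM:
    """Illustrative skeleton only; names do not correspond to an existing API."""

    def __init__(self) -> None:
        self.initialized = False
        self.map: list[Any] = []

    def match(self, frame): ...             # 1. matching paradigm (direct, indirect or hybrid)
    def initialize(self, meas): ...         # 2. visual initialization (bootstrap pose + structure)
    def associate(self, meas): ...          # 3. data association against the current map
    def estimate_pose(self, assoc): ...     # 4. pose estimation for the new frame
    def extend_map(self, pose, assoc): ...  # 5. topological/metric map generation
    def optimize(self): ...                 # 6. local/global optimization (e.g. bundle adjustment)
    def relocalize(self, frame): ...        # 7. global localization / loop closure

    def process(self, frame):
        meas = self.match(frame)
        if not self.initialized:
            self.initialized = bool(self.initialize(meas))
            return
        assoc = self.associate(meas)
        pose = self.estimate_pose(assoc)
        self.extend_map(pose, assoc)
        self.optimize()
        self.relocalize(frame)
```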

    Feature-based calibration of distributed smart stereo camera networks

    A distributed smart camera network is a collective of vision-capable devices with enough processing power to execute algorithms for collaborative vision tasks. A true 3D sensing network applies to a broad range of applications, and local stereo vision capabilities at each node offer the potential for a particularly robust implementation. A novel spatial calibration method for such a network is presented, which obtains pose estimates suitable for collaborative 3D vision in a distributed fashion using two stages of registration on robust 3D features. The method is first described in a general, modular sense, assuming some ideal vision and registration algorithms. Then, existing algorithms are selected for a practical implementation. The method is designed independently of networking details, making only a few basic assumptions about the underlying network's capabilities. Experiments using both software simulations and physical devices are designed and executed to demonstrate performance
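
    As an illustration of registration on robust 3D features, the sketch below computes the rigid transform between two nodes from already-associated 3D feature positions using the standard closed-form Kabsch/Umeyama alignment. It is a generic stand-in under assumed correspondences, not necessarily the registration algorithm selected in the work above.

```python
import numpy as np

def register_3d_features(P, Q):
    """P, Q: Nx3 arrays of corresponding 3D feature positions in two node frames.
    Returns R, t such that q_i ≈ R @ p_i + t for each correspondence."""
    mp, mq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - mp).T @ (Q - mq)                                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = mq - R @ mp
    return R, t
```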

    Remote vision based multi gesture interaction in natural indoor environments

    Computer vision as a sensor for interaction with technical systems has found increasing interest in the past few years. In many of the proposed case studies and applications, the user's current pose or motion is observed by cameras attached to a computer, and the computer's reaction is displayed to the user, who changes the pose accordingly in order to reach a desired goal of interaction. The focus of this thesis is on two major difficulties of computer vision-based, or perceptual, human-computer interaction: distinguishing gestures from arbitrary postures or motions, and coping with problems caused by natural environments. Furthermore, we address the question of decoupling the computer vision-based interface from the application in order to achieve independence between both, analogously to today's application-independent graphical user interfaces. The main contributions are: a so-called "interaction space architecture", which decouples the computer vision interface from the application by using a sequence of interaction spaces mapped onto each other; a concept of "multi-type gesture interaction", which combines several gestures with spatial and temporal constraints in order to increase the reliability of gesture recognition; two concepts for optical calibration of the interaction space, which reduce the effort of integrating cameras as sensors into the environment of interaction; a solution to the problem of combining pointing gestures with static hand gestures, using static cameras for global views and computer-controlled active cameras for locally adapted views; and a combination of several methods for coping with unreliable results of image segmentation caused by the varying illumination typical of natural environments: error detection and contour correction from image sequences and multiple views, situation-dependent signal processing, and automatic parameter control. The concepts are demonstrated with a system for computer vision-based interaction with a back-projection wall, which has been implemented and evaluated

    Purposive three-dimensional reconstruction by means of a controlled environment

    Retrieving 3D data using imaging devices is a relevant task for many applications in medical imaging, surveillance, industrial quality control, and others. As soon as we gain procedural control over the parameters of the imaging device, we encounter the necessity of well-defined reconstruction goals and we need methods to achieve them. Hence, we enter the field of next-best-view planning. In this work, we present a formalization of the abstract view planning problem and deal with different planning aspects, focusing on the use of an intensity camera without active illumination. As one aspect of view planning, employing a controlled environment also provides the planning and reconstruction methods with additional information. We incorporate the additional knowledge of camera parameters into the Kanade-Lucas-Tomasi method used for feature tracking. The resulting Guided KLT tracking method benefits from a constrained optimization space and yields improved accuracy while accounting for the uncertainty of the additional input. Serving other planning tasks dealing with known objects, we propose a method for coarse registration of 3D surface triangulations. By means of exact surface moments of surface triangulations we establish invariant surface descriptors based on moment invariants. These descriptors make it possible to tackle tasks of surface registration, classification, retrieval, and clustering, which are also relevant to view planning. In the main part of this work, we present a modular, online approach to view planning for 3D reconstruction. Based on the outcome of the Guided KLT tracking, we design a planning module for accuracy optimization with respect to an extended E-criterion. Further planning modules provide non-discrete surface estimation and visibility analysis. The modular nature of the proposed planning system makes it possible to address a wide range of specific instances of view planning. The theoretical findings in this work are supported by experiments evaluating the relevant terms
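
    As a rough sketch of accuracy-driven view planning of the kind described above, the snippet below scores candidate views by how much their predicted measurements raise the smallest eigenvalue of the accumulated information matrix (an E-criterion-style objective). The Jacobian prediction, noise model and names are assumptions made for illustration, not the extended E-criterion module of the thesis.

```python
import numpy as np

def next_best_view(info_acc, candidate_jacobians, meas_cov):
    """info_acc: current information matrix of the reconstruction estimate.
    candidate_jacobians: dict view_id -> predicted measurement Jacobian H.
    meas_cov: measurement noise covariance assumed for a single view."""
    Rinv = np.linalg.inv(meas_cov)
    best_view, best_score = None, -np.inf
    for view_id, H in candidate_jacobians.items():
        info_pred = info_acc + H.T @ Rinv @ H     # information predicted after taking the view
        score = np.linalg.eigvalsh(info_pred)[0]  # E-criterion: maximize the smallest eigenvalue
        if score > best_score:
            best_view, best_score = view_id, score
    return best_view, best_score
```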

    3D modeling in cities using visual-inertial sensor integration

    In this dissertation, a real-time, autonomous and geo-registered approach is presented to tackle the large-scale 3D urban modeling problem using a camera and inertial sensors. The proposed approach exploits the special structures of urban areas and visual-inertial sensor fusion. The buildings in urban areas are assumed to have planar facades that are perpendicular to the local level. A sparse 3D point cloud of the imaged scene is obtained from visual feature matches using camera pose estimates, and planar patches are obtained by an iterative Hough Transform on the 2D projection of the sparse 3D point cloud in the direction of gravity. The result is a compact and dense depth map of the building facades in terms of planar patches. The plane extraction is performed on sequential frames and a complete model is obtained by plane fusion. Inertial sensor integration helps to improve the camera pose estimation, 3D reconstruction and planar modeling stages. For camera pose estimation, the visual measurements are integrated with the inertial sensors by means of an indirect feedback Kalman filter. This integration helps to obtain reliable and geo-referenced camera pose estimates in the absence of GPS. The inertial sensors are also used to filter out spurious visual feature matches in the 3D reconstruction stage, to find the direction of gravity in the plane search stage, and to eliminate out-of-scope objects from the model using elevation data. The visual-inertial sensor fusion and the use of urban heuristics are shown to outperform classical approaches to large-scale urban modeling in terms of consistency and real-time applicability. Ph.D. - Doctoral Program
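
    The planar-facade extraction step described above can be illustrated with a small sketch: project the sparse 3D points along the gravity direction and run a line Hough transform on the resulting 2D points, so that dominant lines correspond to vertical facades. The parameters and the single-pass peak picking below are illustrative assumptions; the thesis uses an iterative Hough Transform with its own settings.

```python
import numpy as np

def detect_facade_planes(points3d, gravity_dir, rho_res=0.2, theta_bins=180, min_votes=30):
    """points3d: Nx3 points; gravity_dir: gravity direction in the same frame."""
    g = gravity_dir / np.linalg.norm(gravity_dir)
    # Project points onto the plane orthogonal to gravity (two in-plane coordinates).
    basis = np.linalg.svd(np.eye(3) - np.outer(g, g))[0][:, :2]
    xy = points3d @ basis                                      # N x 2 ground-plane coordinates
    thetas = np.linspace(0.0, np.pi, theta_bins, endpoint=False)
    rhos = xy @ np.vstack((np.cos(thetas), np.sin(thetas)))    # N x theta_bins line distances
    rho_idx = np.round(rhos / rho_res).astype(int)
    planes = []
    for t in range(theta_bins):
        offset = rho_idx[:, t].min()
        votes = np.bincount(rho_idx[:, t] - offset)
        peak = votes.argmax()
        if votes[peak] >= min_votes:                           # dominant line -> candidate facade
            planes.append((thetas[t], (peak + offset) * rho_res))
    return planes                                              # list of (theta, rho) facade lines
```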

    Image-Based Rendering Of Real Environments For Virtual Reality
