
    Automatic Reconstruction of Fault Networks from Seismicity Catalogs: 3D Optimal Anisotropic Dynamic Clustering

    We propose a new pattern recognition method that is able to reconstruct the 3D structure of the active part of a fault network using the spatial locations of earthquakes. The method is a generalization of the so-called dynamic clustering method, which originally partitions a set of data points into clusters using a global minimization criterion over the spatial inertia of those clusters. The new method improves on it by taking into account the full spatial inertia tensor of each cluster, in order to partition the dataset into fault-like, anisotropic clusters. Given a catalog of seismic events, the output is the optimal set of plane segments that fits the spatial structure of the data. Each plane segment is fully characterized by its location, size and orientation. The main tunable parameter is the accuracy of the earthquake localizations, which fixes the resolution, i.e. the residual variance of the fit. The resolution determines the number of fault segments needed to describe the earthquake catalog: the better the resolution, the finer the structure of the reconstructed fault segments. The algorithm successfully reconstructs the fault segments of synthetic earthquake catalogs. Applied to a real catalog consisting of a subset of the aftershock sequence of the 28 June 1992 Landers earthquake in Southern California, the reconstructed plane segments agree fully with faults already known from geological maps, or with blind faults that appear quite clearly in longer-term catalogs. Future improvements of the method are discussed, as well as its potential use in the multi-scale study of the inner structure of fault zones.
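    The abstract describes the method only at a high level. As a rough, hypothetical sketch of the core idea (a k-means-style alternation between point-to-plane assignment and plane refitting from each cluster's inertia tensor; the function names and the farthest-point seeding are illustrative choices, not the paper's), one might write:

```python
import numpy as np

def fit_plane(points):
    # Plane through the centroid; the normal is the eigenvector of the
    # cluster's covariance (inertia) tensor with the smallest eigenvalue,
    # i.e. the direction in which the cluster is flattest.
    c = points.mean(axis=0)
    w, v = np.linalg.eigh(np.cov((points - c).T))
    return c, v[:, 0]  # eigh returns eigenvalues in ascending order

def anisotropic_clustering(pts, k, n_iter=50, seed=0):
    # Partition pts (N x 3) into k plane-like clusters by alternating
    # point-to-plane assignment and plane refitting.
    rng = np.random.default_rng(seed)
    anchors = [pts[rng.integers(len(pts))]]  # greedy farthest-point seeding
    while len(anchors) < k:
        d = np.min([np.linalg.norm(pts - a, axis=1) for a in anchors], axis=0)
        anchors.append(pts[d.argmax()])
    labels = np.argmin([np.linalg.norm(pts - a, axis=1) for a in anchors], axis=0)
    for _ in range(n_iter):
        planes = []
        for j in range(k):
            members = pts[labels == j]
            if len(members) < 3:  # reseed a degenerate cluster
                members = pts[rng.choice(len(pts), 3, replace=False)]
            planes.append(fit_plane(members))
        # orthogonal distance of every point to every candidate plane
        d = np.stack([np.abs((pts - c) @ n) for c, n in planes], axis=1)
        new = d.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels, planes
```

    In the actual method the number of plane segments would be driven by the localization accuracy (the target residual variance) rather than fixed in advance, as the abstract describes.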

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) that it gives further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.

    Optimal Compression of Point Clouds

    Image-based localization is a crucial step in many 3D computer vision applications, e.g., self-driving cars, robotics, and augmented reality, among others. Unfortunately, many image-based-localization applications require the storage of large scenes, and many camera pose estimators struggle to scale when the scene representation is large. To alleviate these problems, many applications compress a scene representation by reducing the number of 3D points in a point cloud. The state of the art compresses a scene representation using a K-cover-based algorithm. While the state of the art selects a subset of 3D points that maximizes the probability of accurately estimating the camera pose of a new image, it does not guarantee an optimal compression and has parameters that are hard to tune. We propose to compress a scene representation by means of a constrained quadratic program that resembles a one-class support vector machine (SVM). Thanks to this resemblance, we derive a variant of sequential minimal optimization, a widely adopted algorithm for training SVMs. The proposed method uses the points corresponding to the support vectors as the subset that represents the scene. Our experiments on publicly available large-scale image-based-localization datasets show that our proposed approach delivers four times fewer failed localizations than the state of the art while scaling, on average, two orders of magnitude more favorably.
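    The abstract does not spell out the quadratic program, but the support-vector selection idea can be illustrated with an off-the-shelf one-class SVM (this uses scikit-learn as a stand-in for the paper's custom SMO variant; `compress_point_cloud` and its parameter choices are illustrative, not the paper's method):

```python
import numpy as np
from sklearn.svm import OneClassSVM

def compress_point_cloud(points, nu=0.1):
    # Fit a one-class SVM to the 3D points and keep only the points that
    # become support vectors: they sketch the structure of the cloud and
    # serve as the compressed scene representation. `nu` lower-bounds the
    # fraction of support vectors, so it controls the compression rate.
    svm = OneClassSVM(kernel="rbf", gamma="scale", nu=nu)
    svm.fit(points)
    return points[svm.support_]
```

    Smaller values of `nu` keep fewer points, i.e. compress more aggressively.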

    Augmented Reality for Subsurface Utility Engineering, Revisited


    Beyond Controlled Environments: 3D Camera Re-Localization in Changing Indoor Scenes

    Long-term camera re-localization is an important task with numerous computer vision and robotics applications. Whilst various outdoor benchmarks exist that target lighting, weather and seasonal changes, far less attention has been paid to appearance changes that occur indoors. This has led to a mismatch between popular indoor benchmarks, which focus on static scenes, and indoor environments that are of interest for many real-world applications. In this paper, we adapt 3RScan - a recently introduced indoor RGB-D dataset designed for object instance re-localization - to create RIO10, a new long-term camera re-localization benchmark focused on indoor scenes. We propose new metrics for evaluating camera re-localization and explore how state-of-the-art camera re-localizers perform according to these metrics. We also examine in detail how different types of scene change affect the performance of different methods, based on novel ways of detecting such changes in a given RGB-D frame. Our results clearly show that long-term indoor re-localization is an unsolved problem. Our benchmark and tools are publicly available at https://waldjohannau.github.io/RIO10 (ECCV 2020).

    Towards Visual Localization, Mapping and Moving Objects Tracking by a Mobile Robot: a Geometric and Probabilistic Approach

    In this thesis we give a machine new means to understand complex and dynamic visual scenes in real time. In particular, we solve the problem of simultaneously reconstructing a certain representation of the world's geometry, the observer's trajectory, and the moving objects' structures and trajectories, with the aid of vision exteroceptive sensors. We proceed by dividing the problem into three main steps. First, we give a solution to the Simultaneous Localization And Mapping (SLAM) problem for monocular vision that is able to perform adequately in the most ill-conditioned situations: those where the observer approaches the scene in a straight line. Second, we incorporate full 3D instantaneous observability by duplicating the vision hardware while retaining monocular processing. This permits us to avoid some of the inherent drawbacks of classic stereo systems, notably their limited range of 3D observability and the need for frequent mechanical calibration. Third, we add detection and tracking of nearby moving objects by making use of this full 3D observability, which we judge to be practically indispensable. We choose a sparse, punctual representation of both the world and the moving objects in order to alleviate the computational payload of the image processing algorithms, which are required to extract the necessary geometrical information from the images. This alleviation is additionally supported by active feature detection and search mechanisms that focus attention on the image regions of highest interest. This focusing is achieved by extensively exploiting the current knowledge available about the system (all the mapped information), which we finally highlight as the ultimate key to success.
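    The active-search idea, using the current map estimate to restrict feature matching to small image regions, can be sketched generically (a first-order gating computation in the spirit of EKF-based visual SLAM; the function and its parameters are illustrative, not the thesis's actual implementation):

```python
import numpy as np

def search_region(landmark, landmark_cov, K, R, t, meas_noise=1.0, n_sigma=3.0):
    # Project a mapped 3D landmark into the image and return the predicted
    # pixel together with a gating radius derived from the projected
    # uncertainty: the window where the matcher should look, instead of
    # scanning the whole image.
    pc = R @ landmark + t                 # point in the camera frame
    u = (K @ (pc / pc[2]))[:2]            # pinhole projection to pixels
    fx, fy = K[0, 0], K[1, 1]
    x, y, z = pc
    # first-order Jacobian of the projection wrt the 3D point
    J = np.array([[fx / z, 0.0, -fx * x / z**2],
                  [0.0, fy / z, -fy * y / z**2]]) @ R
    S = J @ landmark_cov @ J.T + meas_noise * np.eye(2)  # innovation covariance
    radius = n_sigma * np.sqrt(np.linalg.eigvalsh(S).max())
    return u, radius
```

    A confident landmark yields a tight search window, so the matcher touches only a few pixels; an uncertain one yields a wide window, which is exactly the focusing behaviour described above.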

    Factors Affecting Spatial Awareness in Non-Stereo Visual Representations of Virtual, Real and Digital Image Environments

    The increasing number of applications employing virtual environment (VE) technologies as a tool, particularly those that use VEs as surrogates, makes it important to examine the ability of VEs to provide realistic simulations to users. Accurate space and distance perception has been suggested as an essential precondition for the reliable use of VE technologies in various applications. However, some investigators have reported that space and distance in the VE are perceived differently from the real world. Thus, the overall aim of this thesis is to improve our understanding of the factors affecting spatial awareness in the VE. The general approach is based on a strategy of conducting empirical investigations comparing tasks performed in the VE to similar tasks performed in the real world. This research has examined the effect of display-related factors on users' spatial task performance in the context of static, dynamic and interactive presentations. Three sets of experiments in these respective contexts were conducted to explore the influence of image type, display size, viewing distance, physiological cues, interface device and travel mode on distance estimation and spatial memory tasks. For distance perception, results revealed that the effect of image type depends on the context of presentation, the type of asymmetrical distances and image resolution. The effect of display size in static and dynamic presentations is consistent with the results of previous investigations. However, results from evaluations conducted by the author indicated that other factors, such as viewing distance and physiological cues, were also responsible. In interactive presentations, results indicated that display size had different effects on different users, whereby familiarity with display size may influence users' performance. Similarly, it was shown that a commonly used interface device is more useful and beneficial for users' spatial memory performance in the VE than less familiar ones. In terms of travel mode, the natural method of movement available in the real world may not necessarily be better than the unnatural movement that is possible in the VE. The results of the investigations reported in this thesis contribute to knowledge and understanding of the factors affecting spatial awareness in real and virtual environments. In particular, they highlight the influence of these factors on space and distance perception in different contexts of VE presentation, which will serve as important scientifically based guidelines for designers and users of VE applications.