276 research outputs found

    SHREC’20 Track: Retrieval of digital surfaces with similar geometric reliefs

    This paper presents the methods that participated in the SHREC'20 contest on retrieval of surface patches with similar geometric reliefs, together with an analysis of their performance on the benchmark created for this challenge. The goal of the contest is to verify the possibility of retrieving 3D models based only on the reliefs present on their surface and to compare methods suitable for this task. This problem is related to many real-world applications, such as the classification of cultural heritage goods or the analysis of different materials. To address this challenge, it is necessary to characterize the local "geometric pattern" information while, as far as possible, discarding model size and bending. Seven groups participated in this contest and twenty runs were submitted for evaluation. The performance of the methods reveals that good results are achieved by a number of techniques that follow quite different approaches.
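    The requirement of discarding model size and bending can be made concrete with a toy relief descriptor: detrend a height patch by removing its best-fit plane, normalize the residual amplitude, and histogram the remaining relief. Everything below (the function names, the plane-fit detrending, the chi-square comparison) is an illustrative assumption, not one of the submitted methods.

```python
import numpy as np

def relief_descriptor(height, bins=16):
    """Toy relief descriptor (hypothetical): remove the best-fit plane
    (coarse shape), normalize the residual amplitude (model size), then
    histogram the remaining relief."""
    h, w = height.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeff, *_ = np.linalg.lstsq(A, height.ravel(), rcond=None)
    residual = height.ravel() - A @ coeff          # relief = height minus base plane
    residual /= (np.abs(residual).max() + 1e-12)   # amplitude normalization
    hist, _ = np.histogram(residual, bins=bins, range=(-1, 1))
    return hist / hist.sum()                        # L1-normalized descriptor

def chi2_distance(p, q):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + 1e-12))
```

    Two patches that differ only in offset and amplitude then receive near-identical descriptors, while patches with different relief patterns do not.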

    Methods for Real-time Visualization and Interaction with Landforms

    This thesis presents methods to enrich data modeling and analysis in the geoscience domain, with a particular focus on geomorphological applications. First, a short overview of the relevant characteristics of the remote sensing data used and the basics of its processing and visualization is provided. Then, two new methods for the visualization of vector-based maps on digital elevation models (DEMs) are presented. The first method uses a texture-based approach that generates a texture from the input maps at runtime, taking into account the current viewpoint. In contrast, the second method utilizes the stencil buffer to create a mask in image space that is then used to render the map on top of the DEM. A particular challenge in this context is posed by the view-dependent level-of-detail representation of the terrain geometry. After suitable visualization methods for vector-based maps have been investigated, two landform mapping tools for the interactive generation of such maps are presented. The user can carry out the mapping directly on the textured digital elevation model and thus benefit from the 3D visualization of the relief. Additionally, semi-automatic image segmentation techniques are applied in order to reduce the amount of user interaction required and thus make the mapping process more efficient and convenient. The challenge in the adaptation of the methods lies in the transfer of the algorithms to the quadtree representation of the data and in the application of out-of-core and hierarchical methods to ensure interactive performance. Although high-resolution remote sensing data are often available today, their effective resolution at steep slopes is rather low due to the oblique acquisition angle. For this reason, remote sensing data are suitable to only a limited extent for visualization as well as landform mapping purposes.
To provide an easy way to supply additional imagery, an algorithm for registering uncalibrated photos to a textured digital elevation model is presented. A particular challenge in registering the images is posed by large variations in the photos concerning resolution, lighting conditions, seasonal changes, etc. The registered photos can be used to increase the visual quality of the textured DEM, in particular at steep slopes. To this end, a method is presented that combines several georegistered photos into textures for the DEM. The difficulty in this compositing process is to create a consistent appearance and avoid visible seams between the photos. In addition, the photos provide valuable means to improve landform mapping. To this end, an extension of the landform mapping methods is presented that allows the utilization of the registered photos during mapping. This way, a detailed and exact mapping becomes feasible even at steep slopes.
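    The registration described above recovers full camera parameters for uncalibrated photos. As a minimal sketch of the least-squares machinery involved, and under the strong simplifying assumption of a locally planar scene, a 2D affine mapping from photo coordinates to DEM texture coordinates can be estimated from point correspondences; all names here are illustrative, not the thesis's actual pipeline.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points onto dst points.
    Hypothetical simplification: real photo-to-DEM registration estimates a
    full perspective camera, not a planar affine model."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src          # x-equation: a*sx + b*sy + c
    A[0::2, 2] = 1
    A[1::2, 3:5] = src          # y-equation: d*sx + e*sy + f
    A[1::2, 5] = 1
    params, *_ = np.linalg.lstsq(A, dst.ravel(), rcond=None)
    return params.reshape(2, 3)

def apply_affine(M, pts):
    """Apply a 2x3 affine transform to an (n, 2) array of points."""
    return pts @ M[:, :2].T + M[:, 2]
```

    With at least three non-collinear correspondences the system is determined; extra correspondences are averaged in the least-squares sense, which is what gives robustness against individual mismatches.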

    Local Deformation Modelling for Non-Rigid Structure from Motion

    Reconstructing the 3D geometry of scenes from monocular image sequences is a long-standing problem in computer vision. Structure from motion (SfM) aims at a data-driven approach that does not require a priori models of the scene. When the scene is rigid, SfM is a well-understood problem with solutions widely used in industry. However, if the scene is non-rigid, monocular reconstruction without additional information is an ill-posed problem and no satisfactory solution has yet been found. Current non-rigid SfM (NRSfM) methods typically aim at modelling deformable motion globally. Additionally, most of these methods focus on cases where deformable motion appears as small variations from a mean shape. As a result, these methods fail at reconstructing highly deformable objects such as a flag waving in the wind, and reconstructions typically consist of low-detail, sparse point-cloud representations of objects. In this thesis we aim at reconstructing highly deformable surfaces by modelling them locally. In line with a recent trend in NRSfM, we propose a piecewise approach which reconstructs local overlapping regions independently. These reconstructions are merged into a global object by imposing 3D consistency of the overlapping regions. We propose our own local model – the Quadratic Deformation model – and show how patch division and reconstruction can be formulated in a principled approach by alternately minimizing a single geometric cost – the image re-projection error of the reconstruction. Moreover, we extend our approach to dense NRSfM, where reconstructions are performed at the pixel level, improving the detail of state-of-the-art reconstructions.
Finally, we show how our principled approach can be used to perform simultaneous segmentation and reconstruction of articulated motion, recovering meaningful segments which provide a coarse 3D skeleton of the object. This work was supported by the Fundação para a Ciência e a Tecnologia (FCT) under Doctoral Grant SFRH/BD/70312/2010 and by the European Research Council under ERC Starting Grant agreement 204871-HUMANI.
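    The step of merging overlapping local reconstructions into a consistent global object can be sketched with a rigid Kabsch/Procrustes alignment over the shared points. This is a hypothetical simplification with names of my own choosing: the thesis enforces 3D consistency inside a single geometric cost rather than as a separate alignment pass.

```python
import numpy as np

def procrustes_align(A, B):
    """Find rotation R and translation t so that B @ R.T + t best matches A
    over corresponding points (Kabsch algorithm via SVD). Sketch of bringing
    two overlapping patch reconstructions into a common frame."""
    ca, cb = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = U @ Vt
    if np.linalg.det(R) < 0:     # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    t = ca - cb @ R.T
    return R, t
```

    Once the overlaps of all patches are aligned this way, the merged object can be obtained by averaging the aligned copies of each shared point.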

    Skeletonization methods for image and volume inpainting


    Deformable and articulated 3D reconstruction from monocular video sequences

    This thesis addresses the problem of deformable and articulated structure from motion from monocular uncalibrated video sequences. Structure from motion is defined as the problem of recovering information about the 3D structure of scenes imaged by a camera in a video sequence. Our study aims at the challenging problem of non-rigid shapes (e.g. a beating heart or a smiling face). Non-rigid structures appear constantly in our everyday life: think of a biceps curling, a torso twisting, or a smiling face. Our research seeks a general method to perform 3D shape recovery purely from data, without having to rely on a pre-computed model or training data. Open problems in the field are the difficulty of the non-linear estimation, the lack of a real-time system, large amounts of missing data in real-world video sequences, measurement noise, and strong deformations. Solving these problems would take us far beyond the current state of the art in non-rigid structure from motion. This dissertation presents our contributions in the field of non-rigid structure from motion, detailing a novel algorithm that enforces the exact metric structure of the problem at each step of the minimisation by projecting the motion matrices onto the correct deformable or articulated metric motion manifolds respectively. An important advantage of this new algorithm is its ability to handle missing data, which becomes crucial when dealing with real video sequences. We present a generic bilinear estimation framework, which improves convergence and makes use of the manifold constraints. Finally, we demonstrate a sequential, frame-by-frame estimation algorithm which provides a 3D model and camera parameters for each video frame while simultaneously building a model of object deformation.
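    The manifold-projection idea can be illustrated in the simplest rigid orthographic case: each frame contributes a 2x3 motion block whose rows should be orthogonal and of equal norm, and an arbitrary block is snapped back onto that scaled-rotation manifold by equalizing its singular values. This is a sketch of the principle only; the deformable and articulated metric motion manifolds used in the thesis require more involved projections.

```python
import numpy as np

def project_to_metric(M):
    """Project a 2x3 orthographic motion block onto the scaled-rotation
    manifold: replace both singular values by their mean, so that
    M @ M.T becomes a scaled identity (rigid-case sketch only)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (s.mean() * U) @ Vt
```

    In an alternating minimisation, this projection would be applied to every frame's block after each update of the motion matrix, keeping the iterate on the metric manifold throughout.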

    Learning and recovering 3D surface deformations

    Recovering the 3D deformations of a non-rigid surface from a single viewpoint has applications in many domains such as sports, entertainment, and medical imaging. Unfortunately, without any knowledge of the possible deformations that the object of interest can undergo, the problem is severely under-constrained, and very different shapes can have very similar appearances when reprojected onto an image plane. In this thesis, we first exhibit the ambiguities of the reconstruction problem when relying on correspondences between a reference image, for which we know the shape, and an input image. We then propose several approaches to overcoming these ambiguities. The core idea is that some a priori knowledge about how a surface can deform must be introduced to resolve them. We therefore present different ways to formulate that knowledge, ranging from very generic constraints to models specifically designed for a particular object or material. First, we propose generally applicable constraints formulated as motion models. Such models simply link the deformations of the surface from one image to the next in a video sequence. The obvious advantage is that they can be used independently of the physical properties of the object of interest. However, to be effective, they require the presence of texture over the whole surface, and they do not prevent error accumulation from frame to frame. To overcome these weaknesses, we propose to introduce statistical learning techniques that let us build a model from a large set of training examples, that is, in our case, known 3D deformations. The resulting model then essentially performs linear or non-linear interpolation between the training examples. Following this approach, we first propose a linear global representation that models the behavior of the whole surface.
As is the case with all statistical learning techniques, the applicability of this representation is limited by the fact that acquiring training data is far from trivial. A large surface can undergo many subtle deformations, and thus a large amount of training data must be available to build an accurate model. We therefore propose an automatic way of generating such training examples in the case of inextensible surfaces. Furthermore, we show that the resulting linear global models can be incorporated into a closed-form solution to the shape recovery problem. This lets us not only track deformations from frame to frame, but also reconstruct surfaces from individual images. The major drawback of global representations is that they can only model the behavior of a specific surface, which forces us to re-train a new model for every new shape, even when it is made of a material observed before. To overcome this issue, and simultaneously reduce the amount of required training data, we propose local deformation models. Such models describe the behavior of small portions of a surface and can be combined to form arbitrary global shapes. For this purpose, we study both linear and non-linear statistical learning methods and show that, whereas the latter are better suited for tracking deformations from frame to frame, the former can also be used for reconstruction from a single image.
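    The linear global representation discussed above can be sketched as PCA over flattened training shapes, with a new shape recovered as a least-squares combination of the learned deformation modes. This is a generic statistical-shape-model sketch with illustrative names, not the thesis's closed-form reconstruction method.

```python
import numpy as np

def learn_linear_model(shapes, n_modes):
    """PCA over training shapes (each row a flattened 3N shape vector):
    returns the mean shape and the first n_modes principal deformation
    modes. Generic linear shape model, names are illustrative."""
    mean = shapes.mean(0)
    _, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, Vt[:n_modes]

def reconstruct(mean, modes, observed):
    """Project an observed shape onto the learned linear model by solving
    for the least-squares mode coefficients."""
    coeffs, *_ = np.linalg.lstsq(modes.T, observed - mean, rcond=None)
    return mean + coeffs @ modes
```

    Restricting reconstructions to the span of the modes is exactly what injects the a priori knowledge: implausible shapes outside the training distribution cannot be expressed.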

    Learning from Complex Neuroimaging Datasets

    Advancements in Magnetic Resonance Imaging (MRI) have enabled the early diagnosis of neurodevelopmental disorders and neurodegenerative diseases. Neuroanatomical abnormalities in the cerebral cortex are often investigated by examining group-level differences in brain morphometric measures extracted from highly sampled cortical surfaces. However, group-level differences do not allow for the individual-level outcome prediction that is critical for application to clinical practice. Despite the success of MRI-based deep learning frameworks, critical issues have been identified: (1) extracting accurate and reliable local features from the cortical surface, (2) determining a parsimonious subset of cortical features for correct disease diagnosis, (3) learning directly from a non-Euclidean high-dimensional feature space, (4) improving the robustness of multi-task multi-modal models, and (5) identifying anomalies in imbalanced and heterogeneous settings. This dissertation describes novel methodological contributions to tackle the challenges above. First, I introduce a Laplacian-based method for quantifying local Extra-Axial Cerebrospinal Fluid (EA-CSF) from structural MRI. Next, I describe a deep learning approach for combining local EA-CSF with other morphometric cortical measures for early disease detection. Then, I propose a data-driven approach for extending convolutional learning to non-Euclidean manifolds such as cortical surfaces. I also present a unified framework for robust multi-task learning from imaging and non-imaging information. Finally, I propose a semi-supervised generative approach for the detection of samples from untrained classes in imbalanced and heterogeneous developmental datasets. The proposed methodological contributions are evaluated by applying them to the early detection of Autism Spectrum Disorder (ASD) in the first year of the infant’s life.
Also, the aging human brain is examined in the context of studying different stages of Alzheimer’s Disease (AD).
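    Several of the contributions above (the Laplacian-based EA-CSF measure, convolutional learning on cortical surfaces) build on Laplacian operators defined over a surface mesh graph. A generic combinatorial Laplacian, L = D - A, can be sketched as follows; this is shared background machinery, not the dissertation's exact construction.

```python
import numpy as np

def graph_laplacian(n, edges):
    """Combinatorial graph Laplacian L = D - A for an n-vertex mesh graph
    given as a list of undirected (i, j) edges. Rows of L sum to zero and
    L is symmetric positive semi-definite."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    return L
```

    Spectral approaches to convolution on non-Euclidean domains filter signals in the eigenbasis of such an operator, which is what makes it a natural bridge from grid-based CNNs to cortical surfaces.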

    Multi-Scale Integral Invariants for Robust Character Extraction from Irregular Polygon Mesh Data

    Hundreds of thousands of ancient cuneiform documents are held in museums, and more are found every day in archaeological excavations. The analysis of these documents is essential for understanding the origins of culture, legislation, and religion. Cuneiform is a handwritten script that was used throughout the ancient Near East during the millennia before Christ. Its name derives from the wedge-shaped impressions left by a writing stylus in the soft writing medium, clay. Producing hand copies and transcriptions of these clay tablets is a tedious task and calls for support by automated, computer-based methods. The goal of this work is the precise extraction of characters with variable shapes in 3D. The decisive steps for feature extraction from 2-manifolds embedded in 3D are edge detection and segmentation. Robust techniques in signal processing and shape matching use integral invariants in 2D for this purpose. In recent work, integral invariants are estimated coarsely in order to find a few salient features with which broken 3D objects can be reassembled. Aiming at an exact determination of the 3D shapes of characters, the processing pipeline known from image processing and pattern recognition was adapted to 3D models. These models consist of millions of measured points acquired with optical 3D scanners. The points approximate manifolds by an irregular triangular mesh. Different types of integral-invariant filters at multiple scales lead to different high-dimensional feature spaces. Convolutions and combined metrics are applied to the feature spaces in order to determine connected components. These components represent the characters more accurately than the measurement resolution. In parallel with the design of the algorithms, the properties of the different integral invariants are analyzed.
The interpretation of the filter results proves highly useful for deriving robust curvature measures and for segmentation. The extraction of cuneiform characters is completed by a Voronoi-based computation of minimal normalizable vector representations. This representation is an important foundation for palaeography. Further abstraction and normalization of the representation leads to character recognition. Embedding the algorithms in the newly designed multi-layered GigaMesh software framework enables a wide range of applications. The algorithms use memory effectively, and the processing pipeline is parallelized. The configurable pipeline has only one relevant parameter, namely the maximum size of the expected features. The presented methods were tested on hundreds of cuneiform tablets as well as on further real-world and synthetic objects. Representative results are shown, together with estimates of the computational cost and accuracy of the algorithms. An outlook on future extensions and on integral invariants in higher dimensions is given.
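    The behaviour of integral invariants as robust curvature measures can be illustrated in 2D: the fraction of a disk centered on a curve that lies on one side of it equals 1/2 for a straight line and deviates in proportion to curvature and disk radius, which is why multi-scale versions of it serve as noise-robust curvature estimates. A toy numerical sketch (my own simplification, not the thesis's mesh filters):

```python
import numpy as np

def integral_invariant_2d(f, x0, r, n=200):
    """Area fraction of a radius-r disk centered at (x0, f(x0)) that lies
    below the graph of f, estimated on an n-by-n sample grid. Exactly 0.5
    for a straight line; smaller at a convex bump, larger in a concavity."""
    xs = np.linspace(x0 - r, x0 + r, n)
    ys = np.linspace(f(x0) - r, f(x0) + r, n)
    X, Y = np.meshgrid(xs, ys)
    disk = (X - x0) ** 2 + (Y - f(x0)) ** 2 <= r ** 2
    below = Y <= f(X)
    return (disk & below).sum() / disk.sum()
```

    Because the measure is an area integral rather than a pointwise derivative, small perturbations of the curve change it only slightly, which is the robustness property exploited on scanned tablet surfaces.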