259 research outputs found
3D Colored Mesh Structure-Preserving Filtering with Adaptive p-Laplacian on Directed Graphs
Editing of 3D colored meshes is a fundamental component of today's computer vision and computer graphics applications. In this paper, we propose a framework based on the p-Laplacian on directed graphs for structure-preserving filtering. It relies on a novel objective function composed of a fitting term, a smoothness term with a spatially-variant pTV norm, and a structure-preserving term. The last two terms can be related to formulations of the p-Laplacian on directed graphs. This makes it possible to impose different forms of processing on different graph areas for better smoothing quality.
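The abstract does not state the objective explicitly; a plausible form, with the weights λ and μ, the input signal f, the directed edge weights w_{ij}, and the per-vertex exponents p_i all assumed for illustration, is:

```latex
\min_{u}\;
\underbrace{\|u - f\|_2^2}_{\text{fitting}}
\;+\; \lambda \underbrace{\sum_{i}\Big(\sum_{j \to i} w_{ij}\,(u_j - u_i)^2\Big)^{p_i/2}}_{\text{spatially-variant } p\text{TV smoothness}}
\;+\; \mu\, R_{\mathrm{struct}}(u)
```

Letting the exponent p_i vary per vertex is what allows different graph areas to receive different forms of processing, as the abstract describes.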
Stochastic spectral-spatial permutation ordering combination for nonlocal morphological processing
The extension of mathematical morphology to multivariate data has been an active research topic in recent years. In this paper we propose an approach that relies on the consensus combination of several stochastic permutation orderings. The latter are obtained by searching for a smooth shortest path on a graph representing an image. The construction of the graph can be based on both spatial and spectral information, and it naturally enables patch-based nonlocal processing.
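As a rough illustration of combining stochastic orderings (the function names and the greedy path heuristic are assumptions; the paper builds its path on a spatial-spectral image graph, not plain value space), one permutation can be obtained by greedily walking through value space from a random start, and several such permutations combined by averaging ranks:

```python
import numpy as np

def greedy_ordering(values, seed=0):
    """Build one stochastic permutation ordering of multivariate samples
    by greedily following a short path in value space from a random
    start point. `values` is an (N, D) array."""
    v = np.asarray(values, dtype=float)
    rng = np.random.default_rng(seed)
    n = v.shape[0]
    unvisited = set(range(n))
    current = int(rng.integers(n))
    order = [current]
    unvisited.remove(current)
    while unvisited:
        rest = np.fromiter(unvisited, dtype=int)
        # step to the nearest unvisited sample in value space
        d = np.linalg.norm(v[rest] - v[current], axis=1)
        current = int(rest[np.argmin(d)])
        order.append(current)
        unvisited.remove(current)
    return np.array(order)

def consensus_rank(values, n_orders=5):
    """Average the ranks induced by several stochastic orderings,
    a stand-in for the paper's consensus combination."""
    ranks = np.zeros(len(values))
    for s in range(n_orders):
        order = greedy_ordering(values, seed=s)
        r = np.empty(len(values))
        r[order] = np.arange(len(values))  # rank of each sample in this ordering
        ranks += r
    return ranks / n_orders
```

With a complete ordering in hand, morphological erosion and dilation reduce to taking the minimum or maximum rank within a structuring element, which is how ordering-based approaches extend morphology to multivariate data.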
Multi-Material Mesh Representation of Anatomical Structures for Deep Brain Stimulation Planning
The Dual Contouring algorithm (DC) is a grid-based process used to generate surface meshes from volumetric data. However, DC is unable to guarantee 2-manifold and watertight meshes because it produces only one vertex for each grid cube. We present a modified Dual Contouring algorithm that is capable of overcoming this limitation. The proposed method decomposes an ambiguous grid cube into a set of tetrahedral cells and uses novel polygon generation rules that produce 2-manifold and watertight surface meshes with good-quality triangles. These meshes, being watertight and 2-manifold, are geometrically correct, and can therefore be used to initialize tetrahedral meshes.
The 2-manifold DC method has been extended into the multi-material domain. By nature, multi-material surface meshes contain non-manifold elements along material interfaces, or shared boundaries. The proposed multi-material DC algorithm can (1) generate multi-material surface meshes in which each material sub-mesh is a 2-manifold, watertight mesh, (2) preserve the non-manifold elements along the material interfaces, and (3) ensure that the material interface, or shared boundary, between materials is consistent. The proposed method is used to generate multi-material surface meshes of deep brain anatomical structures from a digital atlas of the basal ganglia and thalamus. Although deep brain anatomical structures can be labeled as functionally separate, they are in fact continuous tracts of soft tissue in close proximity to one another. The multi-material meshes generated by the proposed DC algorithm can accurately represent the closely-packed deep brain structures as a single mesh consisting of multiple material sub-meshes, each representing a distinct functional structure of the brain.
Printed and/or digital atlases are important tools for medical research and surgical intervention. While these atlases can provide guidance in identifying anatomical structures, they do not take into account the wide variations in the shape and size of anatomical structures that occur from patient to patient. Accurate, patient-specific representations are especially important for surgical interventions like deep brain stimulation, where even small inaccuracies can result in dangerous complications. The last part of this research effort extends the discrete deformable 2-simplex mesh into the multi-material domain, where geometry-based internal forces and image-based external forces are used in the deformation process. This multi-material deformable framework is used to segment anatomical structures of the deep brain region from Magnetic Resonance (MR) data.
Discovering Regularity in Point Clouds of Urban Scenes
Despite the apparent chaos of the urban environment, cities are actually replete with regularity. From the grid of streets laid out over the earth, to the lattice of windows thrown up into the sky, periodic regularity abounds in the urban scene. Just as salient, though less uniform, are the self-similar branching patterns of trees and vegetation that line streets and fill parks. We propose novel methods for discovering these regularities in 3D range scans acquired by a time-of-flight laser sensor. The applications of this regularity information are broad, and we present two original algorithms. The first exploits the efficiency of the Fourier transform for the real-time detection of periodicity in building facades. Periodic regularity is discovered online by doing a plane sweep across the scene and analyzing the frequency space of each column in the sweep. The simplicity and online nature of this algorithm allow it to be embedded in scanner hardware, making periodicity detection a built-in feature of future 3D cameras. We demonstrate the usefulness of periodicity in view registration, compression, segmentation, and facade reconstruction. The second algorithm leverages the hierarchical decomposition and locality in space of the wavelet transform to find stochastic parameters for procedural models that succinctly describe vegetation. These procedural models facilitate the generation of virtual worlds for architecture, gaming, and augmented reality. The self-similarity of vegetation can be inferred using multi-resolution analysis to discover the underlying branching patterns. We present a unified framework of these tools, enabling the modeling, transmission, and compression of high-resolution, accurate, and immersive 3D images
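The first algorithm's core idea can be sketched in a few lines (the function name, the occupancy encoding, and the peak-picking rule are assumptions, not the paper's exact procedure): take one column of the plane sweep, and read the dominant period off the magnitude spectrum.

```python
import numpy as np

def dominant_period(column, min_period=2):
    """Estimate the dominant spatial period of a 1-D occupancy signal
    (e.g., one column of a plane sweep across a building facade) via
    the FFT. Returns the period in samples, or None if no peak exists."""
    x = np.asarray(column, dtype=float)
    x = x - x.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size)       # cycles per sample
    # ignore the zero frequency and anything faster than 1/min_period
    valid = (freqs > 0) & (freqs <= 1.0 / min_period)
    if not valid.any() or spectrum[valid].max() == 0:
        return None
    f = freqs[valid][np.argmax(spectrum[valid])]
    return 1.0 / f

# a synthetic facade column: a window every 8 samples
col = np.tile([1, 1, 0, 0, 0, 0, 0, 0], 16)
print(round(dominant_period(col)))  # → 8
```

Because each column is processed independently with a single FFT, the cost per column is O(n log n), which is what makes an online, in-hardware implementation plausible.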
Correspondence of three-dimensional objects
First, many thanks go to Prof. Hans du Buf, for his supervision based on his experience, for providing a stimulating and cheerful research environment in his laboratory, for letting me participate in the projects that produced results for papers, thus making me more aware of the state of the art in Computer Vision, especially in the area of 3D recognition; also for his encouraging support and his way of always finding time for discussions, and last but not least for the cooking recipes...
Many thanks go also to my laboratory fellows: to João Rodrigues, who invited me to participate in FCT and QREN projects, and to Jaime Carvalho Martins and Miguel Farrajota, for discussing scientific and technical problems, but also almost all problems in the world.
To all persons who worked in, or visited, the Vision Laboratory, especially those with whom I have worked almost on a daily basis.
A special thanks to the Instituto Superior de Engenharia at UAlg and my colleagues at the Department of Electrical Engineering, for allowing me to suspend lectures in order to be present at conferences.
To my family, my wife and my kids.
Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates
The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data.
To facilitate the study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets.
To facilitate the study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain, because the contrast between grey and white matter is reversed. This causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated through a detailed landmark study.
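The EM machinery at the heart of such an intensity-based tissue classification can be sketched in one dimension. This minimal two-class Gaussian mixture (all names and the synthetic intensities are illustrative) omits the thesis's key contribution, the explicit correction of mislabelled partial-volume voxels:

```python
import numpy as np

def em_two_gaussians(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture with EM: a minimal
    stand-in for grey/white-matter intensity classification."""
    x = np.asarray(x, dtype=float)
    # crude initialisation from the data quartiles
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    sigma = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])              # mixture weights
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per voxel
        d = (x[:, None] - mu) / sigma
        pdf = np.exp(-0.5 * d ** 2) / (sigma * np.sqrt(2 * np.pi))
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, variances, and weights
        n_k = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
        w = n_k / x.size
    return mu, sigma, w

# two synthetic "tissue" intensity populations
rng = np.random.default_rng(0)
voxels = np.concatenate([rng.normal(30, 3, 500), rng.normal(70, 5, 500)])
mu, sigma, w = em_two_gaussians(voxels)
```

Voxels whose posterior responsibilities are close to 0.5 under such a model are exactly the partial-volume candidates that the thesis's method treats specially rather than hard-labelling.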
To facilitate the study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
Shape analysis of the human brain
Autism is a complex developmental disability that has dramatically increased in prevalence, having a decisive impact on the health and behavior of children. Methods used to detect and recommend therapies have been much debated in the medical community because of the subjective nature of diagnosing autism. In order to provide an alternative method for understanding autism, the current work has developed a state-of-the-art 3-dimensional shape-based analysis of the human brain to aid in creating more accurate diagnostic assessments and guided risk analyses for individuals with neurological conditions, such as autism. Methods: The aim of this work was to assess whether the shape of the human brain can be used as a reliable source of information for determining whether an individual will be diagnosed with autism. The study was conducted using multi-center databases of magnetic resonance images of the human brain. The subjects in the databases were analyzed using a series of algorithms consisting of bias correction, skull stripping, multi-label brain segmentation, 3-dimensional mesh construction, spherical harmonic decomposition, registration, and classification. The software algorithms were developed as an original contribution of this dissertation in collaboration with the BioImaging Laboratory at the University of Louisville Speed School of Engineering. The classification of each subject was used to construct diagnoses and therapeutic risk assessments for each patient. Results: A reliable metric for making neurological diagnoses and constructing therapeutic risk assessments for individuals has been identified. The metric was explored in populations of individuals having autism spectrum disorders, dyslexia, Alzheimer's disease, and lung cancer.
Conclusion: Currently, the clinical applicability and benefits of the proposed software approach are being discussed by the broader community of doctors, therapists, and parents, with a view to improving current methods by which autism spectrum disorders are diagnosed and understood.
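One step of the pipeline, spherical harmonic decomposition, can be illustrated in isolation. The sketch below (which assumes a star-shaped surface sampled as radii, and a least-squares fit that is not necessarily the dissertation's exact SPHARM formulation) computes a per-degree coefficient-magnitude spectrum usable as a shape descriptor:

```python
import numpy as np
from scipy.special import sph_harm

def spharm_descriptor(theta, phi, r, lmax=4):
    """Fit spherical-harmonic coefficients to a star-shaped surface
    sampled as radii r(theta, phi), and return the per-degree
    coefficient-magnitude spectrum as a simple shape descriptor."""
    cols, degrees = [], []
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            # scipy convention: sph_harm(m, l, azimuth, polar)
            cols.append(sph_harm(m, l, theta, phi))
            degrees.append(l)
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, r.astype(complex), rcond=None)
    degrees = np.array(degrees)
    return np.array([np.abs(coef[degrees == l]).sum()
                     for l in range(lmax + 1)])

# a unit sphere should put essentially all energy in degree 0
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 400)    # azimuth
phi = np.arccos(rng.uniform(-1, 1, 400))  # polar angle, uniform on the sphere
spec = spharm_descriptor(theta, phi, np.ones(400))
```

Per-degree spectra of this kind are a common choice because they are compact and, unlike raw coefficients, invariant to rotations about the polar axis, which simplifies the subsequent registration and classification steps the abstract lists.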
Part decomposition of 3D surfaces
This dissertation describes a general algorithm that automatically decomposes real-world scenes and objects into visual parts. The input to the algorithm is a 3D triangle mesh that approximates the surfaces of a scene or object. This geometric mesh completely specifies the shape of interest. The output of the algorithm is a set of boundary contours that dissect the mesh into parts, where these parts agree with human perception. In this algorithm, shape alone defines the location of a boundary contour for a part. The algorithm leverages a human vision theory known as the minima rule, which states that human visual perception tends to decompose shapes into parts along lines of negative curvature minima. Specifically, the minima rule governs the location of part boundaries, and as a result the algorithm is known as the Minima Rule Algorithm. Previous computer vision methods have attempted to implement this rule but have used pseudo measures of surface curvature. Thus, these prior methods are not true implementations of the rule. The Minima Rule Algorithm is a three-step process that consists of curvature estimation, mesh segmentation, and quality evaluation. These steps have led to three novel algorithms known as Normal Vector Voting, Fast Marching Watersheds, and Part Saliency Metric, respectively. For each algorithm, this dissertation presents both the supporting theory and experimental results. The results demonstrate the effectiveness of the algorithm using both synthetic and real data and include comparisons with previous methods from the research literature. Finally, the dissertation concludes with a summary of the contributions to the state of the art.
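The minima rule itself is easy to illustrate on a planar contour (the 3-D Minima Rule Algorithm works on triangle meshes with estimated principal curvatures; this 2-D analogue and its function names are illustrative only): part boundaries are placed at local minima of negative signed curvature.

```python
import numpy as np

def negative_minima(points):
    """Locate candidate part boundaries on a closed 2-D contour as the
    local minima of negative signed curvature, a planar analogue of
    the minima rule. `points` is an (N, 2) array of ordered vertices."""
    p = np.asarray(points, dtype=float)
    d1 = (np.roll(p, -1, axis=0) - np.roll(p, 1, axis=0)) / 2.0  # central diff
    d2 = np.roll(p, -1, axis=0) - 2 * p + np.roll(p, 1, axis=0)
    # signed curvature: (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)
    k = (d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) / \
        (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5
    neg = k < 0
    local_min = (k < np.roll(k, 1)) & (k < np.roll(k, -1))
    return np.nonzero(neg & local_min)[0]

# a peanut-shaped contour with a concave waist
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
r = 1 + 0.5 * np.cos(2 * t)
contour = np.column_stack([r * np.cos(t), r * np.sin(t)])
idx = negative_minima(contour)
print(t[idx])  # near pi/2 and 3*pi/2, the two waist points
```

On a mesh, the same idea generalizes to minima of the smaller principal curvature along concave creases, which is why robust curvature estimation (Normal Vector Voting, in the dissertation) must precede the segmentation step.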