
    Part decomposition of 3D surfaces

    This dissertation describes a general algorithm that automatically decomposes real-world scenes and objects into visual parts. The input to the algorithm is a 3D triangle mesh that approximates the surfaces of a scene or object. This geometric mesh completely specifies the shape of interest. The output of the algorithm is a set of boundary contours that dissect the mesh into parts that agree with human perception. In this algorithm, shape alone defines the location of a boundary contour for a part. The algorithm leverages a human vision theory known as the minima rule, which states that human visual perception tends to decompose shapes into parts along lines of negative curvature minima. Specifically, the minima rule governs the location of part boundaries, and as a result the algorithm is known as the Minima Rule Algorithm. Previous computer vision methods have attempted to implement this rule but have used pseudo measures of surface curvature; thus, these prior methods are not true implementations of the rule. The Minima Rule Algorithm is a three-step process consisting of curvature estimation, mesh segmentation, and quality evaluation. These steps have led to three novel algorithms known as Normal Vector Voting, Fast Marching Watersheds, and the Part Saliency Metric, respectively. For each algorithm, this dissertation presents both the supporting theory and experimental results. The results demonstrate the effectiveness of the algorithm using both synthetic and real data and include comparisons with previous methods from the research literature. Finally, the dissertation concludes with a summary of the contributions to the state of the art.
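    As an illustration of the boundary criterion the minima rule implies, the sketch below flags mesh vertices whose smaller principal curvature is negative and locally minimal over their one-ring neighborhood. It is a minimal Python/NumPy example assuming per-vertex principal curvatures have already been estimated (for instance by a voting-based method); it is not the dissertation's implementation, and the function name boundary_candidates is a hypothetical helper.

        import numpy as np

        def boundary_candidates(vertices, faces, kappa_min, threshold=0.0):
            """Flag vertices whose smaller principal curvature kappa_min is negative
            and locally minimal over the one-ring neighborhood -- the places where
            the minima rule suggests part boundaries may pass."""
            # Build one-ring adjacency from the triangle list.
            neighbors = [set() for _ in range(len(vertices))]
            for a, b, c in faces:
                neighbors[a].update((b, c))
                neighbors[b].update((a, c))
                neighbors[c].update((a, b))

            candidates = []
            for v, ring in enumerate(neighbors):
                if kappa_min[v] >= threshold:
                    continue  # only negative-curvature vertices qualify
                if all(kappa_min[v] <= kappa_min[n] for n in ring):
                    candidates.append(v)  # local curvature minimum over the ring
            return np.array(candidates, dtype=int)

    The flagged vertices would then be linked into closed contours by the segmentation stage; this sketch only shows the per-vertex test.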

    Surface-guided computing to analyze subcellular morphology and membrane-associated signals in 3D

    Signal transduction and cell function are governed by the spatiotemporal organization of membrane-associated molecules. Despite significant advances in visualizing molecular distributions by 3D light microscopy, cell biologists still have a limited quantitative understanding of the processes implicated in the regulation of molecular signals at the whole-cell scale. In particular, complex and transient cell surface morphologies challenge the complete sampling of cell geometry, membrane-associated molecular concentration and activity, and the computation of meaningful parameters such as the cofluctuation between morphology and signals. Here, we introduce u-Unwrap3D, a framework to remap arbitrarily complex 3D cell surfaces and membrane-associated signals into equivalent lower-dimensional representations. The mappings are bidirectional, allowing the application of image processing operations in the data representation best suited for the task and the subsequent presentation of the results in any of the other representations, including the original 3D cell surface. Leveraging this surface-guided computing paradigm, we track segmented surface motifs in 2D to quantify the recruitment of Septin polymers by blebbing events; we quantify actin enrichment in peripheral ruffles; and we measure the speed of ruffle movement along topographically complex cell surfaces. Thus, u-Unwrap3D provides access to spatiotemporal analyses of cell biological parameters on unconstrained 3D surface geometries and signals.
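    The following is a minimal sketch of the bidirectional-mapping idea described above: a per-vertex surface signal is pushed onto a 2D grid via a (u, v) parameterization, processed there, and pulled back onto the 3D vertices. The functions push_to_2d and pull_to_3d are hypothetical illustrations using nearest-cell lookup, not the u-Unwrap3D API, and they assume a precomputed uv array with coordinates in [0, 1].

        import numpy as np

        def push_to_2d(signal, uv, grid_shape=(256, 256)):
            """Splat a per-vertex signal onto a 2D grid using the surface's
            (u, v) parameterization; averages where several vertices share a cell."""
            h, w = grid_shape
            cols = np.clip((uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
            rows = np.clip((uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
            acc = np.zeros(grid_shape)
            cnt = np.zeros(grid_shape)
            np.add.at(acc, (rows, cols), signal)
            np.add.at(cnt, (rows, cols), 1.0)
            return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

        def pull_to_3d(image, uv):
            """Sample a processed 2D image back onto the 3D surface vertices."""
            h, w = image.shape
            cols = np.clip((uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
            rows = np.clip((uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
            return image[rows, cols]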

    Developing DNS Tools to Study Channel Flow Over Realistic Plaque Morphology

    In a normal coronary artery, the flow is laminar and the velocity profile is parabolic. Over time, plaques deposit along the artery wall, narrowing the artery and creating an obstruction, a stenosis. As the stenosis grows, the characteristics of the flow change and transition occurs, resulting in turbulent flow distal to the stenosis. To date, direct numerical simulation (DNS) of turbulent flow has been performed in a number of studies to understand how a stenosis modifies flow dynamics. However, these DNS studies have disregarded the effect of the actual shape and size of the obstruction. An ideal approach is to obtain geometrical information of the stenotic channel using medical imaging methods such as IVUS (Intravascular Ultrasound) and couple it with numerical solvers that simulate the flow in the stenotic channel. The purpose of the present thesis is to demonstrate the feasibility of coupling IVUS geometry with a DNS solver. This preliminary research provides the necessary tools to achieve the long-term goal of a framework for studying the effect of the morphological features of the stenosis on flow modifications in a diseased coronary artery. In the present study, the geometrical information of the stenotic plaque was provided by the medical imaging team at the Cleveland Clinic Foundation for 42 patients who underwent IVUS. The integration of the geometrical information of the stenotic plaque with the DNS was performed in three stages: 1) a fuzzy logic scheme was used to group the 42 patients into categories, 2) a meshing algorithm was generated to interface with the DNS solver, and 3) the existing DNS code for channel flow was modified to account for inhomogeneity in the streamwise direction. A plaque classification system was developed using statistical k-means clustering with fuzzy logic. Four distinct morphological categories were found in plaque measurements obtained from the 42 patients. Patients were then assigned a degree of membership to each category based on a fuzzy evaluation system. Flow simulations showed distinct turbulent flow characteristics when comparing the four categories, and similar characteristics within each category. An existing DNS solver that used the fourth-order velocity, second-order vorticity formulation of the Navier-Stokes equations was modified to account for inhomogeneity in the streamwise direction. A multigrid method was implemented, using Green's method with an influence matrix approach to compute unknown boundary conditions at the walls. The inflow is the free-stream laminar flow condition; the outflow is computed explicitly with a buffer domain and by parabolizing the Navier-Stokes equations. The transitional flow solver was tested using blowing and suction disturbances at the wall to generate the Tollmien-Schlichting waves predicted by linear stability theory. The toolset developed as a part of this thesis demonstrates the feasibility of integrating realistic geometry with DNS. This tool can be used for patient-specific simulation of stenotic flow in coronary and carotid arteries. Additionally, within the field of fluid dynamics, this framework will contribute to the understanding of transition and turbulence in stenotic flows.
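    To make the fuzzy grouping step concrete, the sketch below computes fuzzy c-means-style membership degrees of each patient's plaque profile with respect to a set of morphological category centers. It is a generic NumPy illustration under the assumption that category centers have already been obtained (e.g., from k-means); it is not the thesis's evaluation system, and the function name fuzzy_memberships and the fuzzifier m are illustrative.

        import numpy as np

        def fuzzy_memberships(profiles, centers, m=2.0):
            """Return a (patients x categories) matrix of fuzzy membership degrees.
            Each row sums to 1; m is the fuzzifier (m -> 1 recovers hard k-means)."""
            # Distances from every patient profile to every category center.
            d = np.linalg.norm(profiles[:, None, :] - centers[None, :, :], axis=2)
            d = np.maximum(d, 1e-12)                      # avoid division by zero
            # Standard fuzzy c-means membership: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
            ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
            return 1.0 / ratio.sum(axis=2)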

    3D object retrieval and segmentation: various approaches including 2D poisson histograms and 3D electrical charge distributions.

    Nowadays 3D models play an important role in many applications, e.g. games, cultural heritage, and medical imaging. Due to the fast growth in the number of available 3D models, understanding, searching and retrieving such models have become interesting fields within computer vision. In order to search and retrieve 3D models, we present two different approaches. The first is based on solving the Poisson equation over 2D silhouettes of the models. This method uses 60 different silhouettes, which are automatically extracted from different view angles. Solving the Poisson equation for each silhouette assigns a number to each pixel as its signature. Accumulating these signatures generates a final histogram-based descriptor for each silhouette, which we call a SilPH (Silhouette Poisson Histogram). For the second approach, we propose two new robust shape descriptors based on the distribution of charge density on the surface of a 3D model. The Finite Element Method is used to calculate the charge density on each triangular face of each model as a local feature. We then utilize the Bag-of-Features and concentric-sphere frameworks to perform global matching using these local features. In addition to examining the retrieval accuracy of the descriptors in comparison to state-of-the-art approaches, the retrieval speeds as well as robustness to noise and deformation on different datasets are investigated. Furthermore, to understand new complex models, we have utilized the distribution of electrical charge in a system that decomposes models into meaningful parts. Our robust, efficient and fully automatic segmentation approach is able to identify the segments attached to the main part of a model as well as locate the boundaries of the segments. The segmentation ability of the proposed system is examined on standard datasets and its timing and accuracy are compared with existing state-of-the-art approaches.
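    The sketch below illustrates the SilPH idea on a single silhouette: the Poisson equation with a constant source term is solved inside the binary mask (zero outside) by Jacobi iteration, and the per-pixel solution values are accumulated into a normalized histogram. It is a minimal NumPy example, not the thesis code; the iteration count, bin count, and the function name silhouette_poisson_histogram are assumptions for illustration.

        import numpy as np

        def silhouette_poisson_histogram(mask, bins=32, iters=2000):
            """Solve  laplace(u) = -1  inside a binary silhouette (u = 0 outside)
            with Jacobi iterations, then histogram the per-pixel solution values
            to form a SilPH-style descriptor. Assumes the silhouette does not
            touch the image border."""
            u = np.zeros(mask.shape, dtype=float)
            inside = mask.astype(bool)
            for _ in range(iters):
                # Five-point stencil: u_ij = (sum of 4 neighbors + source) / 4.
                avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                              np.roll(u, 1, 1) + np.roll(u, -1, 1) + 1.0)
                u = np.where(inside, avg, 0.0)
            vals = u[inside]
            hist, _ = np.histogram(vals, bins=bins, range=(0.0, vals.max() + 1e-12))
            return hist / max(hist.sum(), 1)          # normalized histogram descriptor

    A full descriptor would concatenate or accumulate such histograms over the 60 silhouettes; this sketch covers only one view.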

    Robust feature-based 3D mesh segmentation and visual mask with application to QIM 3D watermarking

    The last decade has seen the emergence of 3D meshes in industrial, medical and entertainment applications. Many researchers, from both the academic and the industrial sectors, have become aware of the intellectual property protection issues arising with their increasing use. The context of this master thesis is digital rights management (DRM) and, more particularly, 3D digital watermarking, a technique that, by hiding secret information, can offer copyright protection, content authentication, content tracking (fingerprinting), steganography (secret communication inside another medium), content enrichment, etc. Up to now, non-blind 3D watermarking schemes have reached good levels of robustness against a large set of attacks which 3D models can undergo (such as noise addition, decimation, reordering, remeshing, etc.). Unfortunately, blind 3D watermarking schemes so far do not resist de-synchronization attacks (such as cropping or resampling) well. This work focuses on improving the application of Spread Transform Dither Modulation (STDM), an extension of Quantization Index Modulation (QIM), to 3D watermarking, through both the use of the presented perceptual model, which offers good robustness against noising and smoothing attacks, and the application of an algorithm based on robust feature detection, which provides robustness against reordering and cropping attacks. As with other watermarking techniques, the imperceptibility constraint is very important for watermarking 3D objects. For this reason, this thesis also explores the perception of the distortions introduced by the watermark embedding process as well as of the alterations produced by the attacks that a mesh can undergo.
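    As a concrete illustration of the embedding principle, the sketch below shows plain STDM on a generic host feature vector: the features are projected onto a spread direction and the projection is quantized with a bit-dependent dither. This is a minimal NumPy example of the general STDM/QIM mechanism, not the thesis's 3D scheme; the perceptual model and the feature-based resynchronization are omitted, and all names and parameters are illustrative.

        import numpy as np

        def stdm_embed(features, bit, step, spread, dither=0.0):
            """Embed one bit via Spread Transform Dither Modulation: project the
            host vector onto the spread direction, quantize the projection with a
            bit-dependent dither, and add the correction back along that direction."""
            spread = spread / np.linalg.norm(spread)
            proj = features @ spread
            d = dither + (0.0 if bit == 0 else step / 2.0)   # bit selects the lattice
            quantized = np.round((proj - d) / step) * step + d
            return features + (quantized - proj) * spread

        def stdm_detect(features, step, spread, dither=0.0):
            """Recover the embedded bit by finding the nearer dither lattice."""
            spread = spread / np.linalg.norm(spread)
            proj = features @ spread
            err0 = abs(proj - (np.round((proj - dither) / step) * step + dither))
            d1 = dither + step / 2.0
            err1 = abs(proj - (np.round((proj - d1) / step) * step + d1))
            return 0 if err0 <= err1 else 1

    In a 3D scheme the host vector would be built from mesh quantities (e.g., vertex norms or spectral coefficients), and the quantization step would be modulated by a perceptual mask to keep the distortion imperceptible.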

    Automatic Reconstruction of Parametric, Volumetric Building Models from 3D Point Clouds

    Planning, construction, modification, and analysis of buildings require means of representing a building's physical structure and related semantics in a meaningful way. With the rise of novel technologies and increasing requirements in the architecture, engineering and construction (AEC) domain, two general concepts for representing buildings have gained particular attention in recent years. First, the concept of Building Information Modeling (BIM) is increasingly used as a modern means for digitally representing and managing a building's as-planned state, including not only a geometric model but also various additional semantic properties. Second, point cloud measurements are now widely used for capturing a building's as-built condition by means of laser scanning techniques. A particular challenge and topic of current research are methods for combining the strengths of both point cloud measurements and Building Information Modeling concepts to quickly obtain accurate building models from measured data. In this thesis, we present our recent approaches to tackle the intermeshed challenges of automated indoor point cloud interpretation using targeted segmentation methods, and the automatic reconstruction of high-level, parametric and volumetric building models as the basis for further usage in BIM scenarios. In contrast to most reconstruction methods available at the time, we fundamentally base our approaches on BIM principles and standards, and overcome critical limitations of previous approaches in order to reconstruct globally plausible, volumetric, and parametric models.
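    As a small illustration of the kind of low-level segmentation such reconstruction pipelines build on, the sketch below fits a dominant plane to an indoor point cloud with RANSAC, a common precursor to grouping planar patches into walls, floors and ceilings. It is a generic NumPy example, not the thesis's targeted segmentation method; the function name ransac_plane and its parameters are assumptions.

        import numpy as np

        def ransac_plane(points, iters=500, tol=0.02, seed=None):
            """Fit a single dominant plane to an (N, 3) point cloud with RANSAC.
            Returns (unit normal, offset, inlier index array) for the plane
            n . x + offset = 0 with the most inliers within distance tol."""
            rng = np.random.default_rng(seed)
            best_inliers = np.array([], dtype=int)
            best_model = None
            for _ in range(iters):
                p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
                normal = np.cross(p1 - p0, p2 - p0)
                norm = np.linalg.norm(normal)
                if norm < 1e-9:                      # degenerate (collinear) sample
                    continue
                normal /= norm
                offset = -normal @ p0
                dist = np.abs(points @ normal + offset)
                inliers = np.flatnonzero(dist < tol)
                if len(inliers) > len(best_inliers):
                    best_inliers, best_model = inliers, (normal, offset)
            return best_model[0], best_model[1], best_inliers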