636 research outputs found

    Ellipse-based Principal Component Analysis for Self-intersecting Curve Reconstruction from Noisy Point Sets

    Get PDF
    Surface reconstruction from cross cuts usually requires curve reconstruction from planar noisy point samples. The output curves must form a possibly disconnected 1-manifold for the surface reconstruction to proceed. This article describes an implemented algorithm for the reconstruction of planar curves (1-manifolds) out of noisy point samples of a self-intersecting or nearly self-intersecting planar curve C. A curve C:[a,b]⊂R→R² is self-intersecting if C(u)=C(v) for u≠v, u,v∈(a,b), where C(u) is the self-intersection point. We consider only transversal self-intersections, i.e. those for which the tangents of the intersecting branches at the intersection point do not coincide (C′(u)≠C′(v)). In the presence of noise, curves which self-intersect cannot be distinguished from curves which nearly self-intersect. Existing algorithms for curve reconstruction out of either noisy point samples or pixel data do not produce a (possibly disconnected) Piecewise Linear 1-manifold approaching the whole point sample. The algorithm implemented in this work uses Principal Component Analysis (PCA) with elliptic support regions near the self-intersections. The algorithm was successful in recovering contours out of noisy slice samples of a surface for the Hand, Pelvis and Skull data sets. As a test for the correctness of the obtained curves at the slice levels, they were input into a surface reconstruction algorithm, leading to a reconstructed surface which reproduces the topological and geometrical properties of the original object. The algorithm robustly reacts not only to statistical non-correlation at the self-intersections (non-manifold neighborhoods) but also to occasional high noise at the non-self-intersecting (1-manifold) neighborhood
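    A minimal sketch of the local-PCA idea this abstract builds on, assuming a NumPy 2D point set: each sample's tangent is estimated from the principal eigenvector of its neighbourhood covariance, with a crude elliptic re-selection of neighbours standing in for the paper's elliptic support regions. Function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def local_pca_tangents(points, radius=0.1, elongation=2.0):
    """Estimate a unit tangent at every sample of a noisy planar point set.

    A first PCA over a circular neighbourhood gives an initial direction; a
    second pass keeps only the neighbours inside an ellipse elongated along
    that direction (a crude stand-in for the paper's elliptic support
    regions, which are chosen more carefully near self-intersections)."""
    tangents = np.zeros_like(points)
    for i, p in enumerate(points):
        d = points - p
        near = d[np.linalg.norm(d, axis=1) < radius]
        if len(near) < 3:
            continue
        # Initial direction: eigenvector of the largest local covariance eigenvalue.
        _, vecs = np.linalg.eigh(np.cov(near.T))
        t = vecs[:, -1]
        # Elliptic refinement: ellipse with major axis along t,
        # semi-axes radius*elongation (along) and radius (across).
        along, across = d @ t, d @ np.array([-t[1], t[0]])
        mask = (along / (radius * elongation)) ** 2 + (across / radius) ** 2 < 1.0
        sel = d[mask]
        if len(sel) >= 3:
            _, vecs = np.linalg.eigh(np.cov(sel.T))
            t = vecs[:, -1]
        tangents[i] = t
    return tangents

# Toy usage: a noisy figure-eight (self-intersecting) curve.
u = np.linspace(0, 2 * np.pi, 400)
pts = np.c_[np.sin(u), np.sin(u) * np.cos(u)] + 0.01 * np.random.randn(400, 2)
T = local_pca_tangents(pts, radius=0.15)
```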

    Object-Based 3-D Reconstruction of Arterial Trees from Magnetic Resonance Angiograms

    Full text link
    By exploiting a priori knowledge of arterial shape and smoothness, subpixel accuracy reconstructions are achieved from only four noisy projection images. The method incorporates a priori knowledge of the structure of branching arteries into a natural optimality criterion that encompasses the entire arterial tree. An efficient optimization algorithm for object estimation is presented, and its performance on simulated, phantom, and in vivo magnetic resonance angiograms is demonstrated. It is shown that accurate reconstruction of bifurcations is achievable with parametric models. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85841/1/Fessler111.pd
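    The abstract does not spell out the estimator; as a hedged toy illustration of how fitting a parametric model yields subpixel accuracy from noisy projection data, the sketch below fits an idealized circular-vessel projection profile to a simulated 1D profile with SciPy. All function names, parameters and numbers are invented for the example and are not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def vessel_profile(x, center, radius, amplitude, background):
    """Idealized projection of a circular vessel cross-section: the chord
    length through a disc of the given radius, plus a flat background."""
    d = np.abs(x - center)
    chord = np.where(d < radius, 2.0 * np.sqrt(np.maximum(radius**2 - d**2, 0.0)), 0.0)
    return amplitude * chord + background

# Simulate a noisy 1D projection sampled on an integer pixel grid.
rng = np.random.default_rng(0)
x = np.arange(64, dtype=float)
truth = dict(center=31.37, radius=4.2, amplitude=1.0, background=5.0)
y = vessel_profile(x, **truth) + rng.normal(0.0, 0.3, x.size)

# Least-squares fit of the parametric model: the recovered centre is
# typically accurate to a small fraction of a pixel despite the noise.
p0 = (x[np.argmax(y)], 3.0, 1.0, np.median(y))
popt, _ = curve_fit(vessel_profile, x, y, p0=p0)
print("estimated centre: %.2f px (true %.2f px)" % (popt[0], truth["center"]))
```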

    3D reconstruction of ribcage geometry from biplanar radiographs using a statistical parametric model approach

    Get PDF
    Rib cage 3D reconstruction is an important prerequisite for thoracic spine modelling, particularly for studies of the deformed thorax in adolescent idiopathic scoliosis. This study proposes a new method for rib cage 3D reconstruction from biplanar radiographs, using a statistical parametric model approach. Simplified parametric models were defined at the hierarchical levels of rib cage surface, rib midline and rib surface, and applied to a database of 86 trunks. The resulting parameter database served to train statistical models, which were used to quickly provide a first estimate of the reconstruction from identifications on both radiographs. This solution was then refined by manual adjustments in order to improve the matching between model and image. Accuracy was assessed by comparison with 29 rib cages from CT scans, in terms of geometrical parameter differences and of line-to-line error distance between the rib midlines. Intra- and inter-observer reproducibility were determined for 20 scoliotic patients. The first estimate (mean reconstruction time of 2 min 30 s) was sufficient to extract the main rib cage global parameters with a 95% confidence interval lower than 7%, 8%, 2% and 4° for rib cage volume, antero-posterior and lateral maximal diameters and maximal rib hump, respectively. The mean error distance was 5.4 mm (max 35 mm), down to 3.6 mm (max 24 mm) after the manual adjustment step (an additional 3 min 30 s). The proposed method will improve developments of rib cage finite element modeling and the evaluation of clinical outcomes. This work was funded by the ParisTech BiomecAM chair on subject-specific musculoskeletal modeling, and we express our acknowledgments to the chair founders: Cotrel foundation, Société Générale, Protéor Company and COVEA consortium. We extend our acknowledgements to Alina Badina for medical imaging data, Alexandre Journé for his advice, and Thomas Joubert for his technical support
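    A hedged sketch of the general "statistical parametric model" workflow described above, on synthetic data: a regression model trained on a parameter database maps a few descriptors identified on the radiographs to a full parameter vector, giving the quick first estimate that is later refined manually. The data layout and the model choice (PCA plus ridge regression) are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the database of 86 trunks: each row is a full vector
# of rib-cage shape parameters; the first few columns play the role of the
# descriptors that can be identified on the biplanar radiographs.
rng = np.random.default_rng(1)
n_subjects, n_params, n_identified = 86, 120, 8
latent = rng.normal(size=(n_subjects, 5))                    # hidden anatomy factors
full_params = latent @ rng.normal(size=(5, n_params)) \
    + 0.05 * rng.normal(size=(n_subjects, n_params))
identified = full_params[:, :n_identified]                   # what the operator digitizes

# Statistical model: regress the full parameter vector on the identified
# descriptors; PCA on the inputs keeps the regression well conditioned.
model = make_pipeline(PCA(n_components=5), Ridge(alpha=1.0))
model.fit(identified, full_params)

# First estimate for a new patient from their radiograph identifications,
# to be refined afterwards by manual adjustments as in the paper.
new_identified = identified[:1] + 0.02 * rng.normal(size=(1, n_identified))
first_estimate = model.predict(new_identified)               # shape (1, n_params)
```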

    Coronal loop detection from solar images and extraction of salient contour groups from cluttered images.

    Get PDF
    This dissertation addresses two different problems: 1) coronal loop detection from solar images; and 2) salient contour group extraction from cluttered images. In the first part, we propose two different solutions to the coronal loop detection problem. The first solution is a block-based coronal loop mining method that detects coronal loops from solar images by dividing the solar image into fixed-size blocks, labeling the blocks as Loop or Non-Loop, extracting features from the labeled blocks, and finally training classifiers to generate learning models that can classify new image blocks. The block-based approach achieves 64% accuracy in 10-fold cross-validation experiments. To improve the accuracy and scalability, we propose a contour-based coronal loop detection method that extracts contours from cluttered regions, labels the contours as Loop and Non-Loop, and extracts geometric features from the labeled contours. The contour-based approach achieves 85% accuracy in 10-fold cross-validation experiments, a 20% increase compared to the block-based approach. In the second part, we propose a method to extract semi-elliptical open curves from cluttered regions. Our method consists of the following steps: obtaining individual smooth contours along with their saliency measures; then, starting from the most salient contour, searching for possible grouping options for each contour; and continuing the grouping until an optimum solution is reached. Our work involved the design and development of a complete system for coronal loop mining in solar images, which required the formulation of new Gestalt perceptual rules and a systematic methodology to select and combine them in a fully automated, judicious manner using machine learning techniques that eliminate the need to manually set various weight and threshold values to define an effective cost function. After finding salient contour groups, we close the gaps within the contours in each group and perform B-spline fitting to obtain smooth curves. Our methods were successfully applied to cluttered solar images from TRACE and STEREO/SECCHI to discern coronal loops. Aerial road images were also used to demonstrate the applicability of our grouping techniques to other contour types in other real applications
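    A small, hedged sketch of the contour-based classification step: hand-crafted geometric features are computed per contour and a classifier is evaluated with 10-fold cross-validation, mirroring the experimental protocol. The features, the random-forest model and the toy contours are illustrative stand-ins for those used in the dissertation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def contour_features(contour):
    """Simple geometric descriptors of a polyline contour (N x 2 array):
    length, straightness, turning per unit length, and bounding-box aspect
    ratio (illustrative stand-ins for the thesis's features)."""
    seg = np.diff(contour, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    length = seg_len.sum()
    chord = np.linalg.norm(contour[-1] - contour[0])
    turning = np.abs(np.diff(np.arctan2(seg[:, 1], seg[:, 0]))).sum()
    w, h = contour.max(axis=0) - contour.min(axis=0)
    aspect = max(w, h) / (min(w, h) + 1e-9)
    return [length, chord / (length + 1e-9), turning / (length + 1e-9), aspect]

# Toy labeled contours: semi-elliptical arcs ("Loop") versus nearly straight
# segments ("Non-Loop"), mimicking the labeled contour database.
rng = np.random.default_rng(2)
X, y = [], []
for _ in range(200):
    t = np.linspace(0, np.pi, 50)
    arc = np.c_[np.cos(t), 0.6 * np.sin(t)] + 0.02 * rng.normal(size=(50, 2))
    line = np.c_[t, 0.1 * t] + 0.02 * rng.normal(size=(50, 2))
    X += [contour_features(arc), contour_features(line)]
    y += [1, 0]

# 10-fold cross-validation of the contour classifier.
scores = cross_val_score(RandomForestClassifier(n_estimators=100),
                         np.array(X), np.array(y), cv=10)
print("10-fold accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```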

    Tracking Extended Objects in Noisy Point Clouds with Application in Telepresence Systems

    Get PDF
    We discuss the theory and application of extended object tracking. This task is challenging as sensor noise prevents a correct association of the measurements to their sources on the object, the shape itself might be unknown a priori, and, due to occlusion effects, only parts of the object are visible at a given time. We propose an approach to track the parameters of arbitrary objects, which provides new solutions to the above challenges and marks a significant advance over the state of the art
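    Since the abstract does not detail the estimator, the following is a generic baseline for extended object tracking rather than the authors' method: a constant-velocity Kalman filter tracks the centroid of each noisy, partial point-cloud scan, while the spatial extent is maintained as a forgetting-factor sample covariance. All class names and parameters are illustrative.

```python
import numpy as np

class CentroidExtentTracker:
    """Generic extended-object-tracking baseline (not the paper's method)."""

    def __init__(self, dt=0.1, q=1.0, r=0.05, forget=0.9):
        self.x = np.zeros(4)                          # [px, py, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.Q = q * np.eye(4) * dt
        self.H = np.eye(2, 4)
        self.R = r * np.eye(2)
        self.extent = 0.1 * np.eye(2)                 # 2x2 shape covariance
        self.forget = forget

    def step(self, points):
        # Predict with a constant-velocity motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the centroid of the (noisy, partial) point cloud.
        z = points.mean(axis=0)
        S = self.H @ self.P @ self.H.T + self.R / len(points)
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        # Extent: blend the previous estimate with the scan's scatter matrix.
        scatter = np.cov(points.T) if len(points) > 1 else self.extent
        self.extent = self.forget * self.extent + (1 - self.forget) * scatter
        return self.x[:2], self.extent

# Usage: feed one noisy partial scan per time step.
tracker = CentroidExtentTracker()
scan = np.random.default_rng(5).normal([2.0, 1.0], [0.5, 0.2], size=(40, 2))
center, extent = tracker.step(scan)
```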

    Time series analysis of the Bacillus subtilis sporulation network reveals low dimensional chaotic dynamics

    Get PDF
    Chaotic behavior refers to a behavior which, albeit irregular, is generated by an underlying deterministic process. Therefore, a chaotic behavior is potentially controllable. This possibility becomes practically amenable especially when chaos is shown to be low-dimensional, i.e., attributable to a small fraction of the total system's components. In this case, indeed, including the major drivers of chaos in a system into the modeling approach allows us to improve the predictability of the system's dynamics. Here, we analyzed the numerical simulations of an accurate ordinary differential equation model of the gene network regulating sporulation initiation in Bacillus subtilis to explore whether the non-linearity underlying the time series data is due to low-dimensional chaos. Low-dimensional chaos is expectedly common in systems with few degrees of freedom, but rare in systems with many degrees of freedom such as the B. subtilis sporulation network. The estimation of a number of indices, which reflect the chaotic nature of a system, indicates that the dynamics of this network is affected by deterministic chaos. The neat separation between the indices obtained from the time series simulated from the model and those obtained from time series generated by Gaussian white and colored noise confirmed that the B. subtilis sporulation network dynamics is affected by low-dimensional chaos rather than by noise. Furthermore, our analysis identifies the principal driver of the network's chaotic dynamics to be the sporulation initiation phosphotransferase B (Spo0B). We then analyzed the parameters and the phase space of the system to characterize the instability points of the network dynamics and, in turn, to identify the ranges of values of Spo0B and of the other drivers of the chaotic dynamics for which the whole system is highly sensitive to minimal perturbations. In summary, we described an unappreciated source of complexity in the B. subtilis sporulation network by gathering evidence for the chaotic behavior of the system and by suggesting candidate molecules driving chaos in the system. The results of our chaos analysis can increase our understanding of the intricacies of the regulatory network under analysis and suggest experimental work to refine our understanding of the mechanisms underlying B. subtilis sporulation initiation control
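    A compact, hedged illustration of the kind of index used in such analyses (not the paper's exact pipeline): a Grassberger-Procaccia style correlation-dimension estimate computed from a delay embedding, applied to a chaotic logistic-map series and to Gaussian white noise to show the sort of separation the abstract refers to.

```python
import numpy as np

def correlation_dimension(series, dim=3, tau=1, radii=None):
    """Grassberger-Procaccia style estimate: delay-embed the series, compute
    the correlation sum C(r) over a range of radii, and return the slope of
    log C(r) versus log r (a proxy for the attractor's correlation dimension)."""
    n = len(series) - (dim - 1) * tau
    emb = np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    dists = dists[np.triu_indices(n, k=1)]
    if radii is None:
        radii = np.logspace(np.log10(np.percentile(dists, 1)),
                            np.log10(np.percentile(dists, 50)), 10)
    C = np.array([(dists < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(C + 1e-12), 1)
    return slope

# Deterministic chaos (logistic map) versus Gaussian white noise: the chaotic
# series yields a low dimension estimate, while the noise fills the embedding.
rng = np.random.default_rng(3)
x = np.empty(1500); x[0] = 0.4
for i in range(1, 1500):
    x[i] = 3.99 * x[i - 1] * (1.0 - x[i - 1])
print("logistic map :", correlation_dimension(x))
print("white noise  :", correlation_dimension(rng.normal(size=1500)))
```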

    Data-Driven Geometric Design Space Exploration and Design Synthesis

    Get PDF
    A design space is the space of all potential design candidates. While the design space can be of any kind, this work focuses on exploring geometric design spaces, where geometric parameters are used to represent designs and will largely affect a given design's functionality or performance (e.g., airfoil, hull, and car body designs). By exploring the design space, we evaluate different design choices and look for desired solutions. However, a design space may have unnecessarily high dimensionality and implicit boundaries, which makes it difficult to explore. Also, if we synthesize new designs by randomly sampling design variables in the high-dimensional design space, there is a high chance that the designs are not feasible, since feasible design variables are correlated. This dissertation introduces ways of capturing a compact representation (which we call a latent space) that describes the variability of designs, so that we can synthesize designs and explore design options using this compact representation instead of the original high-dimensional design variables. The main research question answered by this dissertation is: how does one effectively learn this compact representation from data and efficiently explore this latent space so that we can quickly find desired design solutions? The word "quickly" here means to eliminate or reduce the iterative ideation, prototyping, and evaluation steps in a conventional design process. This also reduces human intervention, and hence facilitates design automation. This work bridges the gap between machine learning and geometric design in engineering. It contributes new pieces of knowledge within two topics: design space exploration and design synthesis. Specifically, the main contributions are: 1. A method for measuring the intrinsic complexity of a design space based on design data manifolds. 2. Machine learning models that incorporate prior knowledge from the domain of design to improve latent space exploration and design synthesis quality. 3. New design space exploration tools that expand the design space and search for desired designs in an unbounded space. 4. Geometrical design space benchmarks with controllable complexity for testing data-driven design space exploration and design synthesis
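    A minimal sketch of the latent-space idea on toy data: feasible high-dimensional designs generated from a few hidden factors are compressed with PCA, and new designs are synthesized by sampling and decoding latent coordinates instead of sampling the raw design variables. The shape parameterization and the use of plain PCA are assumptions for illustration; the dissertation's models additionally incorporate design-domain priors.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy geometric design space: closed 2D profiles described by 64 radii
# (a high-dimensional representation), all generated from 3 hidden shape
# factors, so feasible designs live on a low-dimensional manifold.
rng = np.random.default_rng(4)
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
factors = rng.normal(size=(500, 3))
designs = (1.0
           + 0.15 * factors[:, [0]] * np.cos(theta)
           + 0.10 * factors[:, [1]] * np.cos(2 * theta)
           + 0.05 * factors[:, [2]] * np.sin(3 * theta))   # shape (500, 64)

# Learn a compact latent space that captures the design variability.
latent_model = PCA(n_components=3).fit(designs)
print("explained variance:", latent_model.explained_variance_ratio_.sum())

# Design synthesis: sample latent coordinates and decode them back to the
# 64-dimensional representation; unlike sampling the 64 radii directly,
# decoded samples respect the correlations of feasible designs.
z = rng.normal(size=(5, 3)) * np.sqrt(latent_model.explained_variance_)
new_designs = latent_model.inverse_transform(z)             # shape (5, 64)
```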

    Monte Carlo simulation and data analysis of sky observation mode with the Cherenkov Telescope Array

    Get PDF
    The success of the current generation of Imaging Atmospheric Cherenkov Telescopes (IACTs) has affirmed the importance of gamma-ray astronomy for modern astrophysics. This field focuses on the upper end of the electromagnetic spectrum, with energies in the range of ~50 GeV to hundreds of TeV. Gamma rays, being electrically neutral, are not deflected by magnetic fields on their journey across the Universe and can be traced back along their incoming direction. Thus, they can be used to identify the acceleration processes of high-energy particles close to their acceleration sites, and also to address some of the most fundamental physics issues, such as Lorentz invariance or the mystery of Dark Matter. Currently in the construction stage, the Cherenkov Telescope Array (CTA) is the next-generation ground-based observatory for gamma-ray astronomy at very high energies, and it will exceed its predecessors in every aspect, e.g. energy range, sensitivity, field of view or angular resolution. The galactic and extragalactic surveys are two of the main proposed legacy projects of CTA, providing an unbiased view of the Universe at energies above tens of GeV. Surveys provide an immense service to the research community in the context of an open observatory, since they constitute versatile datasets that enable the detection of unexpected sources and provide a testing ground for new theoretical ideas. Considering Cherenkov telescopes' limited field of view (FoV), the time needed for those science projects is large. The limited duty cycle of about 1000 hours per year of imaging Cherenkov facilities is a strong motivation to try to reduce the observation time needed for the surveys. The huge number of telescopes of CTA with respect to existing instruments will allow taking full advantage of new pointing modes in which telescopes point slightly offset from one another, like the divergent mode. This pointing mode leads to an increase in the field of view of the sub-array of telescopes with competitive performance compared to normal pointing of the same sub-array. This type of observation should also improve CTA's capacity to detect transient events, such as Gamma Ray Bursts (GRBs), thanks to the Large Size Telescope (LST) fast repointing capability and low energy threshold, and the ability to cover more complex patterns of the sky, like mapping a Gravitational Wave (GW) probability sky map or GRB large error boxes. A divergent pointing mode was included in the CTA science requirements document but, apart from a few publications, up to now it has never been the center of an in-depth study. This thesis is devoted to the improvement of the analysis in divergent mode and to the study of the performance of this modality for different array configurations and numbers of telescopes, in order to investigate whether the performance is competitive compared to normal pointing
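    A hedged back-of-the-envelope sketch of how divergent pointing enlarges the effective field of view: a Monte Carlo estimate of the sky area seen by at least a given number of telescopes, comparing parallel and slightly offset pointings. The telescope count, FoV radius, offsets and multiplicity threshold are illustrative numbers, not CTA specifications.

```python
import numpy as np

def effective_fov(pointings, fov_radius, min_tel=4, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the sky area (deg^2, small-angle approximation)
    covered by at least `min_tel` telescopes, given each telescope's pointing
    direction as an (x, y) offset in degrees."""
    rng = np.random.default_rng(seed)
    half = np.ptp(pointings, axis=0).max() / 2 + fov_radius   # sampling box half-width
    box = 2 * np.array([half, half])
    samples = (rng.random((n_samples, 2)) - 0.5) * box + pointings.mean(axis=0)
    dist = np.linalg.norm(samples[:, None, :] - pointings[None, :, :], axis=-1)
    multiplicity = (dist < fov_radius).sum(axis=1)
    covered_fraction = (multiplicity >= min_tel).mean()
    return covered_fraction * box.prod()

# Parallel pointing versus a simple divergent pattern for a 3x3 sub-array of
# telescopes with a 4.5 deg camera FoV radius (illustrative numbers only).
grid = np.array([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)], dtype=float)
parallel = np.zeros_like(grid)
divergent = grid * 2.0                                        # 2 deg offset per step
print("parallel  FoV (deg^2):", effective_fov(parallel, 4.5))
print("divergent FoV (deg^2):", effective_fov(divergent, 4.5))
```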

    Curve Skeleton and Moments of Area Supported Beam Parametrization in Multi-Objective Compliance Structural Optimization

    Get PDF
    This work addresses the end-to-end virtual automation of structural optimization up to the derivation of a parametric geometry model that can be used for application areas such as additive manufacturing or the verification of the structural optimization result with the finite element method. Whether a holistic design in structural optimization can be achieved with the weighted sum method, and automatically parameterized with curve skeletonization and cross-section regression to virtually verify the result and control the local size for additive manufacturing, is investigated in general. In this work, a holistic design is understood as a design that considers various compliances as an objective function. This parameterization uses the automated determination of beam parameters by so-called curve skeletonization with subsequent cross-section shape parameter estimation based on moments of area, especially for multi-objective optimized shapes. An essential contribution is the linking of the parameterization with the results of the structural optimization, e.g., to include properties such as boundary conditions, load conditions, sensitivities or even density variables in the curve skeleton parameterization. The parameterization focuses on guiding the skeletonization based on the information provided by the optimization and the finite element model. In addition, the cross-section detection considers circular, elliptical, and tensor product spline cross-sections that can be applied to various shape descriptors such as convolutional surfaces, subdivision surfaces, or constructive solid geometry. The shape parameters of these cross-sections are estimated using stiffness distributions, moments of area of 2D images, and convolutional neural networks with a loss function tailored to moments of area. Each final geometry is designed by extruding the cross-section along the appropriate curve segment of the beam and joining it to other beams by using only unification operations. The focus of multi-objective structural optimization considering 1D, 2D and 3D elements is on cases that can be modeled by the Poisson equation and linear elasticity. This enables the development of designs in application areas such as thermal conduction, electrostatics, magnetostatics, potential flow, linear elasticity and diffusion, which can be optimized in combination or individually. Due to the simplicity of the cases defined by the Poisson equation, no experts are required, so that many conceptual designs can be generated and reconstructed by ordinary users with little effort. Specifically for 1D elements, element stiffness matrices for tensor product spline cross-sections are derived, which can be used to optimize a variety of lattice structures and automatically convert them into free-form surfaces. For 2D elements, non-local trigonometric interpolation functions are used, which should significantly increase the interpretability of the density distribution. To further improve the optimization, a parameter-free mesh deformation is embedded so that the compliances can be further reduced by locally shifting the node positions. Finally, the proposed end-to-end optimization and parameterization is applied to verify a linear elasto-static optimization result and to satisfy the local size constraint for the manufacturing, with selective laser melting, of a heat transfer optimization result for the heat sink of a CPU.
For the elasto-static case, the parameterization is adjusted until a certain criterion (displacement) is satisfied, while for the heat transfer case, the manufacturing constraints are satisfied by automatically changing the local size with the proposed parameterization. This heat sink is then manufactured without manual adjustment and experimentally validated to limit the temperature of a CPU to a certain level.
Table of contents:
List of Abbreviations; List of Symbols; List of Figures; List of Tables
1. Introduction: 1.1 Research Design and Motivation; 1.2 Research Theses and Chapter Overview
2. Preliminaries of Topology Optimization: 2.1 Material Interpolation; 2.2 Topology Optimization with Parameter-Free Shape Optimization; 2.3 Multi-Objective Topology Optimization with the Weighted Sum Method
3. Simultaneous Size, Topology and Parameter-Free Shape Optimization of Wireframes with B-Spline Cross-Sections: 3.1 Fundamentals in Wireframe Optimization; 3.2 Size and Topology Optimization with Periodic B-Spline Cross-Sections; 3.3 Parameter-Free Shape Optimization Embedded in Size Optimization; 3.4 Weighted Sum Size and Topology Optimization; 3.5 Cross-Section Comparison
4. Non-Local Trigonometric Interpolation in Topology Optimization: 4.1 Fundamentals in Material Interpolations; 4.2 Non-Local Trigonometric Shape Functions; 4.3 Non-Local Parameter-Free Shape Optimization with Trigonometric Shape Functions; 4.4 Non-Local and Parameter-Free Multi-Objective Topology Optimization
5. Fundamentals in Skeleton Guided Shape Parametrization in Topology Optimization: 5.1 Skeletonization in Topology Optimization; 5.2 Cross-Section Recognition for Images; 5.3 Subdivision Surfaces; 5.4 Convolutional Surfaces with Meta Ball Kernel; 5.5 Constructive Solid Geometry
6. Curve Skeleton Guided Beam Parametrization of Topology Optimization Results: 6.1 Fundamentals in Skeleton Supported Reconstruction; 6.2 Subdivision Surface Parametrization with Periodic B-Spline Cross-Sections; 6.3 Curve Skeletonization Tailored to Topology Optimization with Pre-Processing; 6.4 Surface Reconstruction Using Local Stiffness Distribution
7. Cross-Section Shape Parametrization for Periodic B-Splines: 7.1 Preliminaries in B-Spline Control Grid Estimation; 7.2 Cross-Section Extraction of 2D Images; 7.3 Tensor Spline Parametrization with Moments of Area; 7.4 B-Spline Parametrization with Moments of Area Guided Convolutional Neural Network
8. Fully Automated Compliance Optimization and Curve-Skeleton Parametrization for a CPU Heat Sink with Size Control for SLM: 8.1 Automated 1D Thermal Compliance Minimization, Constrained Surface Reconstruction and Additive Manufacturing; 8.2 Automated 2D Thermal Compliance Minimization, Constrained Surface Reconstruction and Additive Manufacturing; 8.3 Using the Heat Sink Prototypes Cooling a CPU
9. Conclusion
10. Outlook
Literature; Appendix: A Previous Studies; B Cross-Section Properties; C Case Studies for the Cross-Section Parametrization; D Experimental Setup
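    A small, self-contained sketch of the moments-of-area idea used above for cross-section estimation: the centroid, orientation and semi-axes of the equivalent ellipse of a binary cross-section image follow from its first and second moments of area. This is only the standard identity for an ellipse; the thesis additionally handles tensor product spline cross-sections, stiffness distributions and CNN-based estimation.

```python
import numpy as np

def ellipse_from_moments(mask):
    """Equivalent ellipse of a binary cross-section image from its moments of
    area: centroid from first moments, orientation and semi-axes from the
    central second moments."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    cx, cy = xs.mean(), ys.mean()
    # Central second moments of area, normalized by the area.
    mxx = ((xs - cx) ** 2).mean()
    myy = ((ys - cy) ** 2).mean()
    mxy = ((xs - cx) * (ys - cy)).mean()
    # Principal axes: eigen-decomposition of the 2x2 moment tensor.
    vals, vecs = np.linalg.eigh(np.array([[mxx, mxy], [mxy, myy]]))
    # For an ideal ellipse the second moment along a principal axis is a^2/4,
    # so each semi-axis is twice the square root of the corresponding eigenvalue.
    minor, major = 2.0 * np.sqrt(vals)
    angle = np.arctan2(vecs[1, -1], vecs[0, -1])      # orientation of major axis
    return (cx, cy), (major, minor), angle, area

# Usage on a synthetic elliptical cross-section rotated by 30 degrees.
yy, xx = np.mgrid[0:200, 0:200]
phi = np.deg2rad(30)
u = (xx - 100) * np.cos(phi) + (yy - 100) * np.sin(phi)
v = -(xx - 100) * np.sin(phi) + (yy - 100) * np.cos(phi)
mask = (u / 60) ** 2 + (v / 25) ** 2 <= 1.0
print(ellipse_from_moments(mask))   # recovers ~60 and ~25 px semi-axes, ~30 deg
```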