108 research outputs found

    A novel procedure for medial axis reconstruction of vessels from Medical Imaging segmentation

    A procedure for reconstructing the central (medial) axis of vessels from diagnostic image processing is presented. It addresses the widespread stair-step artifact that characterizes the most common algorithmic tools for extracting the central axis in diagnostic imaging applications, through an algorithm that corrects the spatial coordinates of each axis point produced by a standard discrete image skeletonization algorithm. The procedure is applied to the central axis traversing the vascular branches of the cerebral system, reconstructed from diagnostic images, using the local intensity values of adjacent voxels. The percentage intensity expressing the degree of adherence to a specific anatomical tissue acts as an attraction pole when identifying the spatial center on which to place each skeleton point crossing the investigated anatomical structure. Results are reported as the number of vessels identified overall compared with the original reference model. The procedure corrects the local coordinates of the central points with high accuracy, enabling precise dimensional measurement of the anatomy under examination. A central axis that is properly centered in the region under examination is a fundamental starting point for deducing, with a high margin of accuracy, key geometric and dimensional information that supports the recognition of shape alterations attributable to clinical pathologies
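
    The correction step described above lends itself to a simple illustration. The sketch below is a hedged approximation, not the authors' exact procedure; the function name recenter_skeleton_points, the radius parameter and the use of plain NumPy are assumptions. It recenters each skeleton point on the intensity-weighted centroid of its local voxel neighbourhood, so that voxels with a high degree of adherence to the target tissue act as attraction poles and the stair-step effect of the discrete skeleton is smoothed out.

        # Illustrative sketch (not the authors' exact procedure): recenter each
        # skeleton point on the intensity-weighted centroid of its local voxel
        # neighbourhood, so high tissue-adherence intensities act as attraction poles.
        import numpy as np

        def recenter_skeleton_points(volume, points, radius=2):
            """volume: 3D ndarray of intensities; points: (N, 3) integer voxel coords."""
            points = np.asarray(points)
            corrected = []
            for p in points:
                lo = np.maximum(p - radius, 0)
                hi = np.minimum(p + radius + 1, volume.shape)
                patch = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]].astype(float)
                if patch.sum() <= 0:                    # nothing to attract the point: keep it
                    corrected.append(p.astype(float))
                    continue
                zz, yy, xx = np.meshgrid(np.arange(lo[0], hi[0]),
                                         np.arange(lo[1], hi[1]),
                                         np.arange(lo[2], hi[2]), indexing="ij")
                w = patch / patch.sum()                 # intensity weights
                corrected.append(np.array([(zz * w).sum(), (yy * w).sum(), (xx * w).sum()]))
            return np.array(corrected)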

    Image Processing Techniques for Detecting Chromosome Abnormalities

    With the increasing use of Fluorescence In Situ Hybridization (FISH) probes as markers for certain genetic sequences, a proper image processing framework is becoming a necessity to accurately detect these probe signal locations in relation to the centerline of the chromosome. Automated detection and length measurements based on the centerline relative to the centromere and telomere coordinates would greatly assist the clinical diagnosis of genetic disorders and thus improve its efficiency significantly. Although many image processing techniques have been developed for chromosomal analysis, such as “karyotype analysis”, to assist in laboratory diagnosis, they fail to provide reliable results in segmenting and extracting the centerline of chromosomes due to the high variability in the shape of chromosomes on microscope slides. In this thesis we propose a hybrid algorithm that utilizes Gradient Vector Flow active contours, Discrete Curve Evolution based skeleton pruning and morphological thinning to provide a robust and accurate centerline of the chromosome, which is then used for the measurement of the FISH probe signals. This centerline information is used to detect the centromere location of the chromosome, and the probe signal distances are measured with respect to these landmarks. The ability to accurately detect FISH probe locations with respect to the centerline and other landmarks can provide cytogeneticists with detailed information that could lead to a faster diagnosis
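
    As a small illustration of the measurement step, the sketch below uses an assumed geometry and is not the thesis pipeline; the function names and the polyline representation of the centerline are illustrative. It computes the signed arc-length position of a FISH probe signal along a centerline relative to a landmark such as the centromere.

        # Assumed geometry, not the thesis pipeline: project a probe onto its nearest
        # centerline point and report its signed arc-length offset from a landmark.
        import numpy as np

        def arc_length_positions(centerline):
            """Cumulative arc length at each centerline point; centerline is (N, 2)."""
            seg = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
            return np.concatenate([[0.0], np.cumsum(seg)])

        def probe_offset_from_landmark(centerline, probe_xy, landmark_index):
            s = arc_length_positions(centerline)
            nearest = np.argmin(np.linalg.norm(centerline - probe_xy, axis=1))
            return s[nearest] - s[landmark_index]   # signed distance along the centerline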

    Automatic skeletonization and skin attachment for realistic character animation.

    The realism of character animation is associated with a number of tasks ranging from modelling, skin deformation and motion generation to rendering. In this research we are concerned with two of them: skeletonization and weight assignment for skin deformation. The former is to generate a skeleton, which is placed within the character model and links the motion data to the skin shape of the character. The latter assists the modelling of a realistic skin shape when a character is in motion. In current animation production practice, the task of skeletonization is primarily undertaken by hand, i.e. the animator produces an appropriate skeleton and binds it with the skin model of a character. This is inevitably very time-consuming and labour-intensive. To address this issue, in this thesis we present an automatic skeletonization framework. It aims at producing high-quality animatable skeletons without heavy human involvement while allowing the animator to maintain overall control of the process. In the literature, the term skeletonization can have different meanings. Most existing research on skeletonization is in the remit of CAD (Computer Aided Design). Although this research is of significant reference value to animation, its downside is that the generated skeleton is either not appropriate for the particular needs of animation, or the methods are computationally expensive. Although some purpose-built animation skeleton generation techniques exist, they unfortunately rely on complicated post-processing procedures, such as thinning and pruning, which again can be undesirable. The proposed skeletonization framework makes use of a new geometric entity known as the 3D silhouette, which is an ordinary silhouette with its depth information recorded. We extract a curve skeleton from two 3D silhouettes of a character detected from its two perpendicular projections. The skeletal joints are identified by downsampling the curve skeleton, leading to the generation of the final animation skeleton. Efficiency and quality are the major performance indicators in animation skeleton generation. Our framework achieves the former by providing a 2D solution to the 3D skeletonization problem; the reduction in dimensionality brings much faster performance. Experiments and comparisons are carried out to demonstrate the computational simplicity, and its accuracy is also verified in these experiments and comparisons. To link a skeleton to the skin, we accordingly present a skin attachment framework aiming at automatic and reasonable weight distribution. It differs from conventional algorithms in taking topological information into account during weight computation. An effective range is defined for each joint: skin vertices located outside the effective range will not be affected by that joint. By this means, we provide a solution to remove the influence of a topologically distant, and hence most likely irrelevant, joint on a vertex. A user-defined parameter is also provided in this algorithm, which allows different deformation effects to be obtained according to the user's needs. Experiments and comparisons show that the presented framework produces weight distributions of good quality, thus freeing animators from tedious manual weight editing. Furthermore, it is flexible enough to be used with various deformation algorithms
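
    The effective-range idea for weight assignment can be illustrated with a deliberately simplified sketch. Here a plain Euclidean per-joint cutoff radius stands in for the topology-aware effective range described in the thesis, and the inverse-distance fall-off exponent plays the role of the user-defined parameter; all names and choices below are assumptions for illustration only.

        # Simplified sketch (assumed, not the thesis algorithm): joints beyond a
        # per-joint "effective range" contribute zero weight to a vertex; remaining
        # weights fall off with distance and are renormalised per vertex.
        import numpy as np

        def skin_weights(vertices, joints, effective_range, power=2.0):
            """vertices: (V, 3), joints: (J, 3), effective_range: (J,) cutoff radii."""
            d = np.linalg.norm(vertices[:, None, :] - joints[None, :, :], axis=2)  # (V, J)
            w = 1.0 / np.maximum(d, 1e-8) ** power
            w[d > effective_range[None, :]] = 0.0      # distant joints have no influence
            row_sum = w.sum(axis=1, keepdims=True)
            row_sum[row_sum == 0.0] = 1.0              # leave unreachable vertices at zero
            return w / row_sum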

    Curve Skeleton and Moments of Area Supported Beam Parametrization in Multi-Objective Compliance Structural Optimization

    This work addresses the end-to-end virtual automation of structural optimization, up to the derivation of a parametric geometry model that can be used for application areas such as additive manufacturing or the verification of the structural optimization result with the finite element method. It is investigated whether a holistic design in structural optimization can be achieved with the weighted sum method, which can be automatically parameterized with curve skeletonization and cross-section regression to virtually verify the result and to control the local size for additive manufacturing. In this work, a holistic design is understood as a design that considers various compliances as an objective function. The parameterization uses the automated determination of beam parameters by so-called curve skeletonization with subsequent cross-section shape parameter estimation based on moments of area, especially for multi-objective optimized shapes. An essential contribution is the linking of the parameterization with the results of the structural optimization, e.g., to include properties such as boundary conditions, load conditions, sensitivities or even density variables in the curve skeleton parameterization. The parameterization focuses on guiding the skeletonization based on the information provided by the optimization and the finite element model. In addition, the cross-section detection considers circular, elliptical, and tensor product spline cross-sections that can be applied to various shape descriptors such as convolutional surfaces, subdivision surfaces, or constructive solid geometry. The shape parameters of these cross-sections are estimated using stiffness distributions, moments of area of 2D images, and convolutional neural networks with a loss function tailored to moments of area. Each final geometry is designed by extruding the cross-section along the appropriate curve segment of the beam and joining it to other beams using only unification operations. The multi-objective structural optimization considering 1D, 2D and 3D elements focuses on cases that can be modeled by the Poisson equation and linear elasticity. This enables the development of designs in application areas such as thermal conduction, electrostatics, magnetostatics, potential flow, linear elasticity and diffusion, which can be optimized in combination or individually. Due to the simplicity of the cases defined by the Poisson equation, no experts are required, so that many conceptual designs can be generated and reconstructed by ordinary users with little effort. Specifically for 1D elements, element stiffness matrices for tensor product spline cross-sections are derived, which can be used to optimize a variety of lattice structures and automatically convert them into free-form surfaces. For 2D elements, non-local trigonometric interpolation functions are used, which should significantly increase the interpretability of the density distribution. To further improve the optimization, a parameter-free mesh deformation is embedded so that the compliances can be further reduced by locally shifting the node positions. Finally, the proposed end-to-end optimization and parameterization is applied to verify a linear elasto-static optimization result and to satisfy the local size constraint, for manufacturing with selective laser melting, of a heat transfer optimization result for a CPU heat sink.
For the elasto-static case, the parameterization is adjusted until a certain criterion (displacement) is satisfied, while for the heat transfer case, the manufacturing constraints are satisfied by automatically changing the local size with the proposed parameterization. This heat sink is then manufactured without manual adjustment and experimentally validated to limit the temperature of a CPU to a certain level.
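
    As an illustration of the moments-of-area ingredient of the cross-section estimation, the sketch below is a hedged example, not the dissertation's pipeline; equivalent_ellipse and pixel_size are assumed names. It fits an equivalent ellipse to a 2D binary cross-section slice from its zeroth, first and second moments of area.

        # Hedged sketch: estimate an equivalent elliptical cross-section from a 2D
        # binary slice via its moments of area (area, centroid, second moments).
        import numpy as np

        def equivalent_ellipse(mask, pixel_size=1.0):
            """mask: non-empty 2D boolean array; returns semi-axes (a, b) and angle theta."""
            ys, xs = np.nonzero(mask)
            area = len(xs) * pixel_size**2
            cx, cy = xs.mean() * pixel_size, ys.mean() * pixel_size
            x, y = xs * pixel_size - cx, ys * pixel_size - cy
            Ixx = (y**2).sum() * pixel_size**2          # second moments of area about the centroid
            Iyy = (x**2).sum() * pixel_size**2
            Ixy = (x * y).sum() * pixel_size**2
            cov = np.array([[Iyy, Ixy], [Ixy, Ixx]]) / area
            evals, evecs = np.linalg.eigh(cov)
            a, b = 2.0 * np.sqrt(np.maximum(evals[::-1], 0.0))   # ellipse with the same moments
            theta = np.arctan2(evecs[1, 1], evecs[0, 1])         # major-axis orientation
            return a, b, theta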

    Human Metaphase Chromosome Analysis using Image Processing

    Development of an effective human metaphase chromosome analysis algorithm can optimize expert time usage by increasing the efficiency of many clinical diagnosis processes. Although many methods exist in the literature, they are only applicable to limited morphological variations and are specific to the staining method used during cell preparation. They are also highly influenced by irregular chromosome boundaries as well as the presence of artifacts such as premature sister chromatid separation. Therefore an algorithm is proposed in this research which can operate with any morphological variation of the chromosome across images from multiple staining methods. The proposed algorithm is capable of calculating the segmentation outline, the centerline (which gives the chromosome length), the partitioning of the telomere regions and the centromere location of a given chromosome. The algorithm also detects and corrects for the sister chromatid separation artifact in metaphase cell images. A metric termed the Candidate Based Centromere Confidence (CBCC) is proposed to accompany each centromere detection result of the proposed method, giving an indication of the confidence the algorithm has in a given localization. The proposed method was first tested for its ability to calculate an accurate width profile, against a centerline-based method [1], using 226 chromosomes. A statistical analysis of the centromere detection error values showed that the proposed method locates centromeres accurately and with statistical significance. Furthermore, the proposed method performed more consistently across different staining methods in comparison to the centerline-based approach. When tested with a larger data set of 1400 chromosomes collected from a set of DAPI (4′,6-diamidino-2-phenylindole) and Giemsa stained cell images, the proposed candidate based centromere detection algorithm was able to accurately localize 1220 centromere locations, yielding a detection accuracy of 87%
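
    A simple stand-in for the width-profile computation mentioned above is sketched below; it is illustrative only, not the proposed algorithm, and width_profile together with the use of SciPy's Euclidean distance transform are assumptions. Twice the distance from a centerline point to the nearest background pixel approximates the local chromosome width, and a pronounced minimum of this profile is a rough indicator of the centromeric constriction.

        # Illustrative stand-in (not the thesis algorithm): sample the distance
        # transform of the chromosome mask along the centerline as a width proxy.
        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def width_profile(mask, centerline):
            """mask: 2D bool array; centerline: (N, 2) array of (row, col) points."""
            dist = distance_transform_edt(mask)                  # distance to background
            rows, cols = centerline[:, 0].astype(int), centerline[:, 1].astype(int)
            widths = 2.0 * dist[rows, cols]
            constriction = int(np.argmin(widths))                # candidate centromere index
            return widths, constriction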

    Efficient Image Segmentation and Segment-Based Analysis in Computer Vision Applications

    This dissertation focuses on efficient image segmentation and segment-based object recognition in computer vision applications. Special attention is devoted to analyzing shape, which is of particular importance for our two applications: plant species identification from leaf photos, and object classification in remote sensing images. Additionally, both problems are bound by efficiency, constraining the choice of applicable methods: leaf recognition results are to be used within an interactive system, while remote sensing image analysis must scale well over very large image sets. Leafsnap was the first mobile app to provide automatic recognition of tree species, currently counting over 1.7 million downloads. We present an overview of the mobile app and the corresponding back-end recognition system, as well as a preliminary analysis of user-submitted data. More than 1.7 million valid leaf photos have been uploaded by users, 1.3 million of which are GPS-tagged. We then focus on the problem of segmenting photos of leaves taken against plain, light-colored backgrounds. These types of photos are used in practice within Leafsnap for tree species recognition. A good segmentation is essential in order to make use of the distinctive shape of leaves for recognition. We present a comparative experimental evaluation of several segmentation methods, including quantitative and qualitative results. We then introduce a custom-tailored leaf segmentation method that shows superior performance while maintaining computational efficiency. The other contribution of this work is a set of attributes for the analysis of image segments. The set of attributes is designed for use in knowledge-based systems, so the attributes are selected to be intuitive and easily describable. They can also be computed efficiently, allowing applicability across different problems. We experiment with several descriptive measures from the literature and encounter certain limitations, leading us to introduce new attribute formulations and more efficient computational methods. Finally, we experiment with the attribute set on our two applications: plant species identification from leaf photos and object recognition in remote sensing images
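
    For context, the sketch below shows a generic baseline of the kind such an evaluation might include; it is not Leafsnap's custom-tailored method, and segment_leaf as well as the scikit-image calls are assumptions. A leaf photographed against a plain light-colored background is usually far more saturated than the background, so Otsu thresholding of the saturation channel followed by hole filling and largest-component selection already yields a usable mask.

        # Generic baseline sketch (not Leafsnap's method): threshold the saturation
        # channel, fill holes, and keep the largest connected component as the leaf.
        import numpy as np
        from skimage import color, filters, measure
        from scipy.ndimage import binary_fill_holes

        def segment_leaf(rgb_image):
            """rgb_image: (H, W, 3) uint8 or float array; returns a boolean leaf mask."""
            sat = color.rgb2hsv(rgb_image)[..., 1]                # saturation channel
            mask = sat > filters.threshold_otsu(sat)              # leaf vs. pale background
            mask = binary_fill_holes(mask)
            labels = measure.label(mask)
            if labels.max() == 0:
                return mask                                       # nothing detected
            sizes = np.bincount(labels.ravel())[1:]               # component sizes (skip background)
            return labels == (1 + int(np.argmax(sizes)))          # largest component is the leaf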

    Skeletonization methods for image and volume inpainting

    Image and shape restoration techniques are increasingly important in computer graphics. Many types of restoration techniques have been proposed in the 2D image-processing literature and, to our knowledge, only one for volumetric data. Well-known examples of such techniques include digital inpainting, denoising, and morphological gap filling. However efficient and effective, such methods have several limitations with respect to the shape, size, distribution, and nature of the defects they can find and eliminate. We start by studying the use of 2D skeletons for the restoration of two-dimensional images. We then show that skeletons are also useful and efficient for volumetric data reconstruction. To explore our hypothesis in the 3D case, we first review the existing state of the art in 3D skeletonization methods, and conclude that no such method provides the features required for efficient and effective practical usage. We next propose a novel method for 3D skeletonization, and show how it complies with our desired quality requirements, thereby making it suitable for the volumetric data reconstruction context. The joint results of our study show that skeletons are indeed effective tools for designing a variety of shape restoration methods. Separately, our results show that suitable algorithms and implementations can be conceived to yield high end-to-end performance and quality in skeleton-based restoration methods. Finally, our practical applications generate competitive results when compared to existing approaches in application areas such as digital hair removal and wire artifact removal
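
    The property that skeleton-based restoration builds on can be shown in a few lines. The sketch below is an illustration of the principle, not the methods proposed in the thesis; mat_roundtrip is an assumed name. It computes the 2D medial axis transform of a binary shape with scikit-image and rebuilds the shape as the union of its maximal inscribed disks, demonstrating that the skeleton plus its radii is a complete shape descriptor.

        # Principle illustration (not the thesis methods): the medial axis transform
        # (skeleton points plus inscribed-disk radii) fully reconstructs a binary shape.
        import numpy as np
        from skimage.morphology import medial_axis

        def mat_roundtrip(shape_mask):
            """shape_mask: 2D bool array. Returns the shape rebuilt from its skeleton."""
            skeleton, dist = medial_axis(shape_mask, return_distance=True)
            ys, xs = np.nonzero(skeleton)
            radii = dist[ys, xs]
            rebuilt = np.zeros_like(shape_mask, dtype=bool)
            yy, xx = np.mgrid[:shape_mask.shape[0], :shape_mask.shape[1]]
            for y, x, r in zip(ys, xs, radii):                 # union of maximal disks
                rebuilt |= (yy - y) ** 2 + (xx - x) ** 2 <= r ** 2
            return rebuilt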

    Skeletonization and segmentation of binary voxel shapes

    Preface. This dissertation is the result of research that I conducted between January 2005 and December 2008 in the Visualization research group of the Technische Universiteit Eindhoven. I am pleased to have the opportunity to thank a number of people who made this work possible. I owe my sincere gratitude to Alexandru Telea, my supervisor and first promotor. I did not consider pursuing a PhD until my Master's project, which he also supervised. Due to our pleasant collaboration, from which I learned quite a lot, I became convinced that becoming a doctoral student would be the right thing to do for me. Indeed, I can say it has greatly increased my knowledge and professional skills. Alex, thank you for our interesting discussions and the freedom you gave me in conducting my research. You made these four years a pleasant experience. I am further grateful to Jack van Wijk, my second promotor. Our monthly discussions were insightful, and he continuously encouraged me to take a more formal and scientific stance. I would also like to thank Prof. Jan de Graaf from the department of mathematics for our discussions on some of my conjectures. His mathematical rigor was inspiring. I am greatly indebted to the Netherlands Organisation for Scientific Research (NWO) for funding my PhD project (grant number 612.065.414). I thank Prof. Kaleem Siddiqi, Prof. Mark de Berg, and Dr. Remco Veltkamp for taking part in the core doctoral committee and Prof. Deborah Silver and Prof. Jos Roerdink for participating in the extended committee. Our Visualization group provides a great atmosphere to do research in. In particular, I would like to thank my fellow doctoral students Frank van Ham, Hannes Pretorius, Lucian Voinea, Danny Holten, Koray Duhbaci, Yedendra Shrinivasan, Jing Li, Niels Willems, and Romain Bourqui. They enabled me to take my mind off research from time to time, by discussing political and economic affairs, and more trivial topics. Furthermore, I would like to thank the senior researchers of our group, Huub van de Wetering, Kees Huizing, and Michel Westenberg. In particular, I thank Andrei Jalba for our fruitful collaboration in the last part of my work. On a personal level, I would like to thank my parents and sister for their love and support over the years, my friends for providing distractions outside of the office, and Michelle for her unconditional love and ability to light up my mood when needed