
    Colon centreline calculation for CT colonography using optimised 3D topological thinning

    CT colonography is an emerging technique for colorectal cancer screening. It facilitates noninvasive imaging of the colon interior by generating virtual-reality models of the colon lumen. Manual navigation through these models is a slow and tedious process. Navigation can be automated by calculating the centreline of the colon lumen, and there are numerous well-documented approaches for centreline calculation. Many of these techniques have been developed as alternatives to 3D topological thinning, which has been discounted by others due to its computationally intensive nature. This paper describes a fully automated, optimised version of 3D topological thinning that has been specifically developed for calculating the centreline of the human colon.
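The idea behind topological thinning can be illustrated in 2D: iteratively delete border pixels whose removal provably preserves the topology of the object ("simple" points), keeping endpoints so that the centreline retains its length. The sketch below is a minimal, unoptimised Python illustration using the Yokoi connectivity number for an 8-connected foreground; it is not the paper's optimised 3D algorithm.

```python
import numpy as np

def _neighbors8(img, y, x):
    # x1..x8, counterclockwise starting from the east neighbour
    return [img[y, x+1], img[y-1, x+1], img[y-1, x], img[y-1, x-1],
            img[y, x-1], img[y+1, x-1], img[y+1, x], img[y+1, x+1]]

def _is_simple(n):
    # Yokoi connectivity number for 8-connected foreground:
    # the pixel may be deleted without changing topology iff C == 1.
    nb = n + n[:1]  # close the cycle (x9 = x1)
    c = 0
    for k in (0, 2, 4, 6):  # the four 4-neighbours x1, x3, x5, x7
        c += (1 - nb[k]) - (1 - nb[k]) * (1 - nb[k+1]) * (1 - nb[k+2])
    return c == 1

def thin(img):
    """Sequential topological thinning of a zero-padded 0/1 image."""
    img = img.copy()
    changed = True
    while changed:
        changed = False
        for y in range(1, img.shape[0] - 1):
            for x in range(1, img.shape[1] - 1):
                if img[y, x] != 1:
                    continue
                n = _neighbors8(img, y, x)
                if sum(n) in (0, 1):   # isolated pixel or endpoint: keep
                    continue
                if _is_simple(n):
                    img[y, x] = 0
                    changed = True
    return img
```

Replacing the repeated full-image scans with a queue of current border points is one of the standard optimisations that make this kind of thinning practical on large volume data.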

    Morphological operations in image processing and analysis

    Morphological operations applied in image processing and analysis are becoming increasingly important in today's technology. Morphological operations, which are based on set theory, can extract object features by means of suitably shaped structuring elements. Morphological filters are combinations of morphological operations that transform an image into a quantitative description of its geometrical structure based on structuring elements. Important applications of morphological operations are shape description, shape recognition, nonlinear filtering, industrial parts inspection, and medical image processing. In this dissertation, basic morphological operations are reviewed, and algorithms and theorems are presented for solving problems in distance transformation, skeletonization, recognition, and nonlinear filtering. A skeletonization algorithm using the maxima-tracking method is introduced to generate a connected skeleton. A modified algorithm is proposed to eliminate non-significant short branches. Back-propagation morphology is introduced to reach the roots of morphological filters in only two scans. The definitions and properties of back-propagation morphology are discussed. The two-scan distance transformation is proposed to illustrate the advantage of this new definition. The G-spectrum (geometric spectrum), which is based upon the cardinality of a set of non-overlapping segments in an image obtained using morphological operations, is presented as a useful tool not only for shape description but also for shape recognition. The G-spectrum is proven to be translation-, rotation-, and scaling-invariant. A shape-likeliness measure based on the G-spectrum is defined for shape recognition. Experimental results are also illustrated. Soft morphological operations, which are found to be less sensitive to additive noise and to small variations, are combinations of order statistics and morphological operations.
Soft morphological operations commute with thresholding and obey threshold superposition. This threshold decomposition property allows gray-scale signals to be decomposed into binary signals which can be processed in parallel using only logic gates, after which the binary results can be combined to produce the equivalent output. Thus the implementation and analysis of function-processing soft morphological operations can be carried out by focusing only on the case of sets, which not only are much easier to deal with, because their definitions involve only counting points instead of sorting numbers, but also allow logic-gate implementation and a parallel pipelined architecture leading to real-time implementation. In general, soft opening and closing are not idempotent operations, but under some constraints they can be idempotent, and the proof is given. The idempotence property indicates how to choose the structuring element sets and the value of the index such that the soft morphological filters reach the root signals without iteration. Finally, a summary and directions for future research are provided.
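The order-statistic core of soft morphology is easy to sketch in 1D. In soft erosion by [B, A, k], values sampled over the hard core A are repeated k times, and the k-th smallest element of the resulting multiset is taken; with k = 1 this reduces to standard erosion (the minimum over B). The structuring sets and signal below are illustrative choices, not taken from the dissertation.

```python
def soft_erode(f, B, A, k):
    """Soft erosion of a 1D signal f by the structuring sets B (full set)
    and A (hard core, A subset of B), given as offset lists, with order index k."""
    n = len(f)
    out = []
    for x in range(n):
        bag = []
        for dx in B:
            if 0 <= x + dx < n:
                reps = k if dx in A else 1  # hard-core samples are repeated k times
                bag.extend([f[x + dx]] * reps)
        bag.sort()
        out.append(bag[k - 1])              # k-th smallest of the multiset
    return out
```

A single low-valued noise sample corrupts three output positions under standard erosion (k = 1) with a 3-point window, but only its own position under k = 2, illustrating the reduced noise sensitivity mentioned above.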

    A Parallel Thinning Algorithm for Grayscale Images

    Grayscale skeletonization offers an interesting alternative to traditional skeletonization following a binarization. It is well known that parallel algorithms for skeletonization outperform sequential ones in terms of quality of results, yet no general and well-defined framework had been proposed until now for parallel grayscale thinning. We introduce in this paper a parallel thinning algorithm for grayscale images and prove its topological soundness using properties of the critical kernels framework. The algorithm and its proof, given here in the 2D case, are also valid in 3D. Some applications are sketched in the conclusion.
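To illustrate the parallel paradigm without reproducing the critical-kernels machinery, the sketch below implements the classic Zhang–Suen algorithm for binary images: within each sub-iteration, every deletable pixel is detected against the current image, and all of them are then removed simultaneously. This is a well-known binary baseline, not the grayscale algorithm of the paper.

```python
import numpy as np

def zhang_suen(img):
    """Parallel (two-sub-iteration) binary thinning; img is a zero-padded 0/1 array."""
    img = img.copy()
    def neighbours(y, x):
        # P2..P9, clockwise starting from the north neighbour
        return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    P = neighbours(y, x)
                    B = sum(P)                                    # foreground neighbours
                    A = sum((P[i] == 0) and (P[(i+1) % 8] == 1)   # 0->1 transitions
                            for i in range(8))
                    if 2 <= B <= 6 and A == 1:
                        if step == 0:
                            cond = P[0]*P[2]*P[4] == 0 and P[2]*P[4]*P[6] == 0
                        else:
                            cond = P[0]*P[2]*P[6] == 0 and P[0]*P[4]*P[6] == 0
                        if cond:
                            to_delete.append((y, x))
            for y, x in to_delete:   # simultaneous deletion: all decisions
                img[y, x] = 0        # were made against the unmodified image
            changed = changed or bool(to_delete)
    return img
```

The key contrast with sequential thinning is that the deletion tests never see partially updated data, so the result is independent of scanning order; topological soundness of such schemes is exactly what the critical kernels framework is used to prove.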

    A new approach for centerline extraction in handwritten strokes: an application to the constitution of a code book

    We present in this paper a new method for the analysis and decomposition of handwritten documents into glyphs (graphemes) and their associated code book. The techniques involved are inspired by image processing methods in a broad sense and by mathematical models involving graph coloring. Our approach provides, first, a rapid and detailed characterization of handwritten shapes based on dynamic tracking of the handwriting (curvature, thickness, direction, etc.), and, second, a very efficient analysis method for the categorization of basic shapes (graphemes). The tools that we have produced enable paleographers to study a large volume of manuscripts quickly and more accurately, and to extract a large number of characteristics that are specific to an individual or an era.
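The dynamic tracking mentioned above can be sketched for two of the listed features. Given a stroke sampled as a polyline of pen positions, the per-segment direction and the signed turning angle between consecutive segments are cheap local descriptors; the function below is a hypothetical minimal stand-in (thickness estimation is omitted).

```python
import math

def stroke_features(points):
    """Per-segment direction (radians) and signed turning angle along a stroke.
    `points` is a list of (x, y) pen positions sampled along the trajectory."""
    dirs, turns = [], []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dirs.append(math.atan2(y1 - y0, x1 - x0))
    for a, b in zip(dirs, dirs[1:]):
        d = b - a
        # wrap to (-pi, pi] so the turning angle is the smallest signed rotation
        while d <= -math.pi:
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        turns.append(d)
    return dirs, turns
```

Histograms of such direction and curvature values along each extracted centerline are one plausible basis for clustering strokes into the graphemes of a code book.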

    A Unified Multiscale Framework for Planar, Surface, and Curve Skeletonization

    Computing skeletons of 2D shapes, and medial surface and curve skeletons of 3D shapes, is a challenging task. In particular, there is no unified framework that detects all types of skeletons using a single model and also produces a multiscale representation which allows all skeleton types to be progressively simplified, or regularized. In this paper, we present such a framework. We model skeleton detection and regularization by a conservative mass-transport process from a shape's boundary to its surface skeleton, next to its curve skeleton, and finally to the shape center. The resulting density field can be thresholded to obtain a multiscale representation of progressively simplified surface, or curve, skeletons. We detail a numerical implementation of our framework which is demonstrably stable and has high computational efficiency. We demonstrate our framework on several complex 2D and 3D shapes.
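The threshold-based simplification can be sketched with a toy stand-in for the transport process: let every boundary sample deposit unit mass on its nearest skeleton point, then threshold the accumulated density. This nearest-point assignment is a crude proxy for the paper's conservative advection scheme, used here only to show that thresholding a density field yields nested, progressively simplified skeletons.

```python
import numpy as np

def skeleton_importance(boundary_pts, skeleton_pts):
    """Toy density field: each boundary sample carries unit mass and deposits
    it on its nearest skeleton point; the accumulated mass is an importance."""
    imp = np.zeros(len(skeleton_pts))
    S = np.asarray(skeleton_pts, float)
    for b in np.asarray(boundary_pts, float):
        imp[np.argmin(np.linalg.norm(S - b, axis=1))] += 1.0
    return imp

def simplify(skeleton_pts, imp, tau):
    """Thresholding the density field yields a simplified skeleton."""
    return [p for p, m in zip(skeleton_pts, imp) if m >= tau]
```

Because raising the threshold can only remove points, the simplified skeletons are nested by construction, which is what makes a single density field a genuine multiscale representation.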

    Skeletonization and segmentation of binary voxel shapes

    Preface. This dissertation is the result of research that I conducted between January 2005 and December 2008 in the Visualization research group of the Technische Universiteit Eindhoven. I am pleased to have the opportunity to thank a number of people who made this work possible. I owe my sincere gratitude to Alexandru Telea, my supervisor and first promotor. I did not consider pursuing a PhD until my Master's project, which he also supervised. Thanks to our pleasant collaboration, from which I learned quite a lot, I became convinced that becoming a doctoral student would be the right thing for me. Indeed, I can say it has greatly increased my knowledge and professional skills. Alex, thank you for our interesting discussions and the freedom you gave me in conducting my research. You made these four years a pleasant experience. I am further grateful to Jack van Wijk, my second promotor. Our monthly discussions were insightful, and he continuously encouraged me to take a more formal and scientific stance. I would also like to thank Prof. Jan de Graaf from the department of mathematics for our discussions on some of my conjectures. His mathematical rigor was inspiring. I am greatly indebted to the Netherlands Organisation for Scientific Research (NWO) for funding my PhD project (grant number 612.065.414). I thank Prof. Kaleem Siddiqi, Prof. Mark de Berg, and Dr. Remco Veltkamp for taking part in the core doctoral committee, and Prof. Deborah Silver and Prof. Jos Roerdink for participating in the extended committee. Our Visualization group provides a great atmosphere to do research in. In particular, I would like to thank my fellow doctoral students Frank van Ham, Hannes Pretorius, Lucian Voinea, Danny Holten, Koray Duhbaci, Yedendra Shrinivasan, Jing Li, Niels Willems, and Romain Bourqui. They enabled me to take my mind off research from time to time by discussing political and economic affairs, and more trivial topics.
Furthermore, I would like to thank the senior researchers of our group, Huub van de Wetering, Kees Huizing, and Michel Westenberg. In particular, I thank Andrei Jalba for our fruitful collaboration in the last part of my work. On a personal level, I would like to thank my parents and sister for their love and support over the years, my friends for providing distractions outside of the office, and Michelle for her unconditional love and ability to lighten my mood when needed.

    PORTABLE MULTI-CAMERA SYSTEM: FROM FAST TUNNEL MAPPING TO SEMI-AUTOMATIC SPACE DECOMPOSITION AND CROSS-SECTION EXTRACTION

    The paper outlines the first steps of a research project focused on the digitalization of underground tunnels for the mining industry. The aim is to solve the problem of rapidly, semi-automatically, efficiently, and reliably digitizing complex and meandering tunnels. A handheld multi-camera photogrammetric tool is used for the survey phase, which allows for the rapid acquisition of the image dataset needed to produce the 3D data. Moreover, since fast, automatic acquisitions are often not supported by easy-to-use tools for accessing and using the data at an operational level, a second aim of the research is to define a method for arranging and organising the gathered data so that they are easily accessible. The proposed approach is to compute the 3D skeleton of the surveyed environment by employing tools developed for the analysis of vascular networks in medical imagery. From the computed skeletonization of the underground tunnels, a method is proposed to automatically extract valuable information such as cross-sections, decomposed portions of the tunnel, and the referenced images from the photogrammetric survey. The long-term research goal is to create an effective workflow, at both the hardware and software level, that can reduce computation times, process large amounts of data, and reduce dependency on high levels of experience.
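Extracting a cross-section at a skeleton sample reduces to selecting the scanned points inside a thin slab orthogonal to the local centreline tangent. The sketch below makes that concrete for a point cloud; the slab `thickness` parameter is an illustrative choice, not a value from the paper.

```python
import numpy as np

def cross_section(cloud, center, tangent, thickness=0.1):
    """Return the points of `cloud` lying in a slab of the given thickness,
    orthogonal to the skeleton tangent at `center` -- a minimal sketch of
    slicing a tunnel scan at one centreline sample."""
    cloud = np.asarray(cloud, float)
    t = np.asarray(tangent, float)
    t = t / np.linalg.norm(t)                     # unit tangent direction
    d = (cloud - np.asarray(center, float)) @ t   # signed distance along the tangent
    return cloud[np.abs(d) <= thickness / 2.0]
```

Repeating this at regularly spaced skeleton samples, with the tangent estimated from neighbouring samples, yields the sequence of referenced cross-sections and a natural decomposition of the tunnel into portions.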

    Mathematical Methods for the Quantification of Actin-Filaments in Microscopic Images

    In cell biology, confocal laser scanning microscopy images of the actin filaments of human osteoblasts are produced to assess cell development. This thesis aims at an advanced approach for accurate quantitative measurements of the morphology of the bright-ridge set of these microscopic images, and thus of the actin filaments. To this end, automatic preprocessing, tagging, and quantification interact to approximate the human observer's ability to intuitively recognize the filaments correctly. Numerical experiments with random models confirm the accuracy of this approach.