33 research outputs found

    Fast colon centreline calculation using optimised 3D topological thinning

    Topological thinning can be used to accurately identify the central path through a computer model of the colon generated using computed tomography colonography. The central path can subsequently be used to simplify the task of navigation within the colon model. Unfortunately, standard topological thinning is an extremely inefficient process. We present an optimised version of topological thinning that significantly improves the performance of centreline calculation without compromising the accuracy of the result. This is achieved by using lookup tables to reduce the computational burden associated with the thinning process.
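    The lookup-table optimisation can be illustrated in 2D (the paper works in 3D, where a voxel's 26-bit neighbourhood gives a much larger table): precompute, for every possible neighbourhood configuration, whether the centre pixel may be removed without changing local topology, so each thinning step becomes a single table lookup. A minimal sketch, not the authors' code:

```python
import numpy as np

# Neighbour offsets in circular order: N, NE, E, SE, S, SW, W, NW
OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def build_lut():
    """For each of the 256 possible 8-neighbourhood bit patterns, record
    whether the centre pixel is removable: exactly one 0->1 transition
    around the circle (keeps connectivity) and 2..6 neighbours set
    (keeps endpoints and interior structure)."""
    lut = np.zeros(256, dtype=bool)
    for code in range(256):
        n = [(code >> k) & 1 for k in range(8)]
        transitions = sum(n[k] == 0 and n[(k + 1) % 8] == 1 for k in range(8))
        lut[code] = transitions == 1 and 2 <= sum(n) <= 6
    return lut

def thin(img):
    """Sequentially delete removable foreground pixels until stable.
    The topology test per pixel is just an index into the lookup table."""
    img = img.astype(bool).copy()
    lut = build_lut()
    changed = True
    while changed:
        changed = False
        for y in range(1, img.shape[0] - 1):
            for x in range(1, img.shape[1] - 1):
                if img[y, x]:
                    code = 0
                    for k, (dy, dx) in enumerate(OFFSETS):
                        if img[y + dy, x + dx]:
                            code |= 1 << k
                    if lut[code]:
                        img[y, x] = False
                        changed = True
    return img
```

A solid region erodes to a thin connected remnant, while a one-pixel-wide curve is left untouched, which is the behaviour a centreline extractor needs.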

    Aquatics reconstruction software: the design of a diagnostic tool based on computer vision algorithms

    Computer vision methods can be applied to a variety of medical and surgical applications, and many techniques and algorithms are available that can be used to recover 3D shapes and information from images, range data, and volume data. Complex practical applications, however, are rarely approachable with a single technique, and require detailed analysis of how they can be subdivided into subtasks that are computationally tractable and that, at the same time, allow for an appropriate level of user interaction. In this paper we show an example of a complex application where, following criteria of efficiency, reliability and user friendliness, several computer vision techniques have been selected and customized to build a system able to support the diagnosis and endovascular treatment of Abdominal Aortic Aneurysms. The system reconstructs the geometrical representation of four different structures related to the aorta (vessel lumen, thrombus, calcifications and skeleton) from CT angiography data. In this way it supports the three-dimensional measurements required for a careful geometrical evaluation of the vessel, which is fundamental for deciding whether treatment is necessary and, in that case, for planning it. The system has been realized within the European trial AQUATICS (IST-1999-20226 EUTIST-M WP 12), and it has been widely tested on clinical data.

    Skeletonization of Noisy Images via the Method of Moments


    THREE-DIMENSIONAL MODELING OF SHRUBS BASED ON LIDAR POINT CLOUDS

    This paper proposes a method for constructing 3D models of shrubs with high accuracy from 3D point clouds acquired by terrestrial laser scanning (TLS). Since the shrub point cloud obtained by LiDAR scanning contains a large amount of redundant data, the method first segments the branches and leaves of the shrub point cloud, which removes noise and makes the branch skeleton of the shrub stand out. Secondly, a triangulation network is constructed for the segmented branch points, and a minimum spanning tree (MST) is established from the triangulation network as the initial shrub skeleton of the input point cloud. Then, the redundant branches are removed by merging adjacent points and edges to simplify the initial skeleton. Finally, the 3D model of a shrub is constructed by a cylindrical fitting algorithm based on robust principal component analysis (RPCA). Experiments on different types of shrubs from different data sources demonstrate the effectiveness and robustness of the proposed method. The 3D models of shrubs can be further applied to the accurate estimation of shrub attributes and urban landscape visualization.
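    The skeleton-initialisation step can be sketched as follows. This is a minimal illustration that runs Prim's algorithm directly on pairwise point distances; the paper builds the MST from a triangulation network and then simplifies it, and the segmentation and RPCA cylinder fitting are out of scope here.

```python
import numpy as np

def mst_skeleton(points):
    """Return the edge list of a minimum spanning tree over the points
    (Prim's algorithm on pairwise Euclidean distances), usable as an
    initial skeleton graph for a branch point cloud."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = d[0].copy()              # cheapest known edge from tree to each point
    parent = np.zeros(n, dtype=int)  # tree endpoint of that cheapest edge
    edges = []
    for _ in range(n - 1):
        j = int(np.argmin(np.where(in_tree, np.inf, best)))
        edges.append((int(parent[j]), j))
        in_tree[j] = True
        closer = d[j] < best         # newly added point shortens some edges
        best[closer] = d[j][closer]
        parent[closer] = j
    return edges
```

For n points the tree always has n-1 edges; on real scans one would then merge nearby points and edges, as the paper describes, to prune redundant twigs.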

    Vascular Modeling from Volumetric Diagnostic Data: A Review

    Reconstruction of vascular trees from digital diagnostic images is a challenging task in the development of tools for simulation and procedural planning for clinical use. Improvements in the quality and resolution of acquisition modalities are constantly widening the fields of application of computer-assisted techniques for vascular modeling, and many Computer Vision and Computer Graphics research groups are currently active in the field, developing methodologies, algorithms and software prototypes able to recover models of branches of the human vascular system from different kinds of input images. Reconstruction methods can differ greatly according to image type, accuracy requirements and level of automation. Some technologies have been validated and are available on medical workstations; others have still to be validated in clinical environments. It is difficult, therefore, to give a complete overview of the different approaches used and the results obtained. This paper presents a short review of the principal reconstruction approaches proposed for vascular reconstruction, also showing the contribution made to the field by the Medical Application Area of CRS4, where methods to recover vascular models have been implemented and used for blood flow analysis, quantitative diagnosis and surgical planning tools based on Virtual Reality.

    A review of the quantification and classification of pigmented skin lesions: from dedicated to hand-held devices

    In recent years, the incidence of skin cancer cases has risen worldwide, mainly due to prolonged exposure to harmful ultraviolet radiation. Concurrently, the computer-assisted medical diagnosis of skin cancer has undergone major advances, through improvements in instrument and detection technology and the development of algorithms to process the information. Moreover, because there has been an increased need to store medical data for monitoring, comparative and assisted-learning purposes, algorithms for data processing and storage have also become more efficient in handling the increase of data. In addition, the potential use of common mobile devices to register high-resolution images of skin lesions has fueled the need for real-time processing algorithms that may provide a likelihood for the development of malignancy. This last possibility allows even non-specialists to monitor and follow up suspected skin cancer cases. In this review, we present the major steps in the pre-processing, processing and post-processing of skin lesion images, with a particular emphasis on the quantification and classification of pigmented skin lesions. We further review and outline the future challenges for the creation of minimum-feature, automated and real-time algorithms for the detection of skin cancer from images acquired via common mobile devices.

    A four-dimensional probabilistic atlas of the human brain

    The authors describe the development of a four-dimensional atlas and reference system that includes both macroscopic and microscopic information on the structure and function of the human brain in persons between the ages of 18 and 90 years. Given the presumed large but previously unquantified degree of structural and functional variance among normal persons in the human population, the basis for this atlas and reference system is probabilistic. Through the efforts of the International Consortium for Brain Mapping (ICBM), 7,000 subjects will be included in the initial phase of database and atlas development. For each subject, detailed demographic, clinical, behavioral, and imaging information is being collected. In addition, 5,800 subjects will contribute DNA for the purpose of determining genotype-phenotype-behavioral correlations. The process of developing the strategies, algorithms, data collection methods, validation approaches, database structures, and distribution of results is described in this report. Examples of applications of the approach are described for the normal brain in both adults and children as well as in patients with schizophrenia. This project should provide new insights into the relationship between microscopic and macroscopic structure and function in the human brain and should have important implications in basic neuroscience, clinical diagnostics, and cerebral disorders.

    A Combined Skeleton Model

    Skeleton representations are a fundamental way of representing a variety of solid models. They are particularly important for representing certain biological models and are often key to visualizing such data. Several methods exist for extracting skeletal models from 3D data sets. Unfortunately, there is usually not a single correct definition of what makes a good skeleton, and different methods will produce different skeletal models from a given input. Furthermore, for many scanned data sets, there is also inherent noise and loss of data in the scanning process that can reduce the ability to identify a skeleton. In this document, I propose a method for combining multiple algorithms' skeleton results into a single composite skeletal model. This model leverages various aspects of the geometric and topological information contained in the different input skeletal models to form a single result that may limit the error introduced by particular inputs by means of a confidence function. Using such an uncertainty-based model, one can better understand, refine, and de-noise/simplify the skeletal structure. The following pages describe methods for forming this composite model and also examples of applying it to some real-world data sets.
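    The core idea of fusing several skeleton estimates under a confidence function can be sketched in a simplified voxel setting: each method votes for the voxels on its skeleton, votes are weighted by a per-method confidence, and the composite keeps voxels whose weighted support clears a threshold. The thesis's confidence function and geometric/topological reconciliation are richer than this; the names and scalar weights below are illustrative assumptions.

```python
import numpy as np

def combine_skeletons(skeletons, confidences, threshold=0.5):
    """Fuse voxelised skeleton estimates into one composite.
    `skeletons`   : list of equally shaped boolean arrays, one per method.
    `confidences` : one scalar weight per method (normalised internally).
    A voxel survives if the confidence-weighted fraction of methods that
    marked it reaches `threshold`. Simplified illustration, not the
    document's actual confidence function."""
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()                            # normalise confidences
    stack = np.stack([s.astype(float) for s in skeletons])
    support = np.tensordot(w, stack, axes=1)   # per-voxel weighted vote
    return support >= threshold
```

With weights 0.7 and 0.3 and a 0.5 threshold, a voxel present only in the trusted skeleton survives, one present only in the weaker skeleton is dropped, and agreement always survives, which is the error-limiting behaviour the abstract describes.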