1,014 research outputs found

    Coronary Artery Segmentation and Motion Modelling

    No full text
    Conventional coronary artery bypass surgery requires an invasive sternotomy and the use of a cardiopulmonary bypass, which leads to a long recovery period and carries a high risk of infection. Totally endoscopic coronary artery bypass (TECAB) surgery based on image-guided robotic surgical approaches has been developed to allow clinicians to conduct the bypass surgery off-pump with only three pin-hole incisions in the chest cavity, through which two robotic arms and one stereo endoscopic camera are inserted. However, the restricted field of view of the stereo endoscopic images leads to possible vessel misidentification and coronary artery mis-localization, resulting in 20-30% conversion rates from TECAB surgery to the conventional approach. We have constructed patient-specific 3D + time coronary artery and left-ventricle motion models from preoperative 4D Computed Tomography Angiography (CTA) scans. By temporally and spatially aligning this model with the intraoperative endoscopic views of the patient's beating heart, this work assists the surgeon in identifying and locating the correct coronaries during TECAB procedures, and thus has the prospect of reducing the conversion rate from TECAB to conventional coronary bypass procedures. This thesis mainly focuses on designing segmentation and motion-tracking methods for the coronary arteries in order to build pre-operative patient-specific motion models. Various vessel centreline extraction and lumen segmentation algorithms are presented, including intensity-based approaches, a geometric model-matching method and a morphology-based method. A probabilistic atlas of the coronary arteries is formed from a group of subjects to facilitate the vascular segmentation and registration procedures. A non-rigid registration framework based on a free-form deformation model and multi-level multi-channel large deformation diffeomorphic metric mapping are proposed to track the coronary motion. The methods are applied to 4D CTA images acquired from various groups of patients and quantitatively evaluated.
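The probabilistic-atlas step this abstract mentions can be sketched very simply: given binary coronary segmentations that have already been spatially aligned to a common reference, the per-voxel occurrence frequency across subjects yields a probability map. This is a minimal illustrative sketch, not the thesis's actual pipeline; the function name and toy data are hypothetical.

```python
import numpy as np

def build_probabilistic_atlas(masks):
    """Average spatially aligned binary vessel masks into a per-voxel
    probability map in [0, 1] (a simple frequency atlas)."""
    stack = np.stack([m.astype(np.float64) for m in masks], axis=0)
    return stack.mean(axis=0)

# Toy example: three aligned 2D "segmentations" of a vertical vessel,
# one of which is shifted by a voxel.
m1 = np.zeros((5, 5)); m1[:, 2] = 1
m2 = np.zeros((5, 5)); m2[:, 2] = 1
m3 = np.zeros((5, 5)); m3[:, 1] = 1
atlas = build_probabilistic_atlas([m1, m2, m3])
```

High-probability voxels (here 2/3 along column 2) can then serve as a spatial prior during segmentation or as a soft target during registration.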

    Quantitative image analysis in cardiac CT angiography

    Get PDF

    Automatic Ultrasound Scanning

    Get PDF

    Coronary Artery Calcium Quantification in Contrast-enhanced Computed Tomography Angiography

    Get PDF
    Coronary arteries are the blood vessels supplying oxygen-rich blood to the heart muscle. Coronary artery calcium (CAC), the total amount of calcium deposited in these arteries, indicates the presence or the future risk of coronary artery disease. CAC is quantified using a computed tomography (CT) scan, which uses the attenuation of x-rays by different tissues in the body to generate three-dimensional images. Calcium can be spotted easily in CT images because of its higher opacity to x-rays compared with that of the surrounding tissue. However, the arteries themselves cannot be identified easily in the CT images. Therefore, a second scan is done after injecting the patient with an x-ray-opaque dye, known as contrast material, which makes the chambers of the heart and the coronary arteries visible in the CT scan. This procedure, known as computed tomography angiography (CTA), is performed to assess the morphology of the arteries in order to rule out any blockage. The CT scan done without contrast material (non-contrast-enhanced CT) could be eliminated if calcium could be quantified accurately from the CTA images alone. However, identifying calcium in CTA images is difficult because of the proximity of the calcium to the contrast material and their overlapping intensity ranges. In this dissertation, we first compare calcium quantification using a state-of-the-art non-contrast-enhanced CT method against conventional methods, suggesting optimal quantification parameters. We then develop methods to accurately quantify calcium from the CTA images. The methods include novel algorithms for extracting the centerline of an artery, calculating the calcium threshold adaptively based on the intensity of contrast along the artery, calculating the amount of calcium in the mixed intensity range, and segmenting the artery and the outer wall. The accuracy of calcium quantification from CTA using our methods is higher than that of non-contrast-enhanced CT, potentially eliminating the need for the non-contrast-enhanced scan. The implication is that both the total time required for the CT procedure and the patient's exposure to x-ray radiation are reduced.
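The adaptive-threshold idea described above can be illustrated with a toy sketch: for each artery cross-section, derive a calcium threshold from the local contrast-enhanced lumen intensity (here mean + k standard deviations, with k a tunable assumption), and flag voxels above it. This is not the dissertation's actual algorithm; function names, the choice of statistic, and the toy HU values are assumptions for illustration.

```python
import numpy as np

def adaptive_calcium_mask(slices, lumen_masks, k=3.0):
    """Flag calcium voxels per artery cross-section using a threshold
    derived from the local contrast intensity: mean + k * std of the
    lumen voxels in that slice (k is a tunable assumption)."""
    calcium = []
    for img, lumen in zip(slices, lumen_masks):
        vals = img[lumen > 0]
        thr = vals.mean() + k * vals.std()
        calcium.append((img > thr) & (lumen > 0))
    return calcium

# Toy cross-section: lumen contrast around 300 HU with one bright
# 900 HU voxel standing in for a calcified lesion.
img = np.full((4, 4), 300.0)
img[1, 1] = 900.0
lumen = np.ones((4, 4))
mask = adaptive_calcium_mask([img], [lumen])[0]
```

Deriving the threshold per slice, rather than globally, is what lets the method follow the gradual decrease of contrast intensity along the artery.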

    Development of an Atlas-Based Segmentation of Cranial Nerves Using Shape-Aware Discrete Deformable Models for Neurosurgical Planning and Simulation

    Get PDF
    Twelve pairs of cranial nerves arise from the brain or brainstem and control sensory functions such as vision, hearing, smell and taste, as well as several motor functions of the head and neck, including facial expressions and eye movement. These cranial nerves are often difficult to detect in MRI data because of their thin anatomical structure, low imaging resolution and image artifacts, which poses problems for neurosurgery planning and simulation. As a result, they may be at risk in neurosurgical procedures around the skull base, with potentially dire consequences such as loss of eyesight or hearing and facial paralysis. Consequently, it is of great importance to clearly delineate cranial nerves in medical images, both for avoidance in the planning of neurosurgical procedures and for targeting in the treatment of cranial nerve disorders. In this research, we propose to develop a digital atlas methodology that will be used to segment the cranial nerves from patient image data. The atlas will be created from high-resolution MRI data based on a discrete deformable contour model called the 1-Simplex mesh. Each cranial nerve will be modeled using its centerline and radius information, where the centerline is estimated semi-automatically by finding a shortest path between two user-defined end points. The cranial nerve atlas is then made more robust by integrating a Statistical Shape Model, so that the atlas can identify and segment nerves in images characterized by artifacts or low resolution. To the best of our knowledge, no such digital atlas methodology exists for segmenting cranial nerves from MRI data; our proposed system therefore has important benefits for the neurosurgical community.
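The semi-automatic centerline step described above (a shortest path between two user-defined end points) can be sketched with plain Dijkstra search on a cost image, where low cost marks voxels likely to lie on the nerve. A 2D grid with 4-connectivity keeps the sketch short; the actual work operates on 3D MRI, and the cost design here is an assumption.

```python
import heapq
import numpy as np

def shortest_path_centerline(cost, start, end):
    """Dijkstra shortest path between two seed points on a 2D cost
    image; low-cost pixels (e.g. high tubularity) attract the path."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == end:
            break
        if d > dist[y, x]:
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(heap, (nd, (ny, nx)))
    # Walk predecessors back from the end point.
    path = [end]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# A cheap "nerve" corridor along row 1 of an otherwise expensive image.
cost = np.full((3, 5), 10.0)
cost[1, :] = 1.0
path = shortest_path_centerline(cost, (1, 0), (1, 4))
```

Because any detour off the corridor costs an order of magnitude more, the recovered path hugs the low-cost row, which is exactly the behavior wanted for a thin tubular structure.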

    Extraction of protein profiles from primary neurons using active contour models and wavelets

    Get PDF
    The function of complex networks in the nervous system relies on the proper formation of neuronal contacts and their remodeling. To decipher the molecular mechanisms underlying these processes, it is essential to establish unbiased automated tools allowing the correlation of neurite morphology and the subcellular distribution of molecules by quantitative means. We developed NeuronAnalyzer2D, a plugin for ImageJ, which allows the extraction of neuronal cell morphologies from two-dimensional high-resolution images, and in particular their correlation with protein profiles determined by indirect immunostaining of primary neurons. The prominent feature of our approach is the ability to extract subcellular distributions of distinct biomolecules along neurites. To extract the complete areas of neurons required for this analysis, we employ active contours with a new distance-based energy. For locating the structural parts of neurons and various morphological parameters, we adopt a wavelet-based approach. The presented approach is able to extract distinctive profiles of several proteins and reports detailed morphology measurements on neurites. We compare the neurons detected by NeuronAnalyzer2D with those obtained by NeuriteTracer and Vaa3D-Neuron, two popular tools for automatic neurite tracing. The distinctive profiles extracted for several proteins, for example the mRNA-binding protein ZBP1, together with a comparative evaluation of the neuron segmentation results, demonstrate the high quality of the quantitative data and their practical utility for biomedical analyses.
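The core output described above, a protein profile along a neurite, amounts to sampling image intensity along a traced centerline at regular arc-length intervals. The sketch below illustrates that sampling step only (nearest-neighbour lookup on a 2D image); it is an assumption-laden stand-in, not NeuronAnalyzer2D's implementation.

```python
import numpy as np

def profile_along_neurite(image, points, n_samples=50):
    """Sample an intensity profile along a neurite centerline given as
    a sequence of (y, x) points, using nearest-neighbour lookup."""
    pts = np.asarray(points, dtype=float)
    # Arc-length parameterisation of the polyline.
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n_samples)
    ys = np.interp(t, s, pts[:, 0])
    xs = np.interp(t, s, pts[:, 1])
    return image[np.round(ys).astype(int), np.round(xs).astype(int)]

# Toy immunostaining image with a gradient along x: a horizontal
# neurite should therefore see a linear ramp.
img = np.tile(np.arange(10.0), (10, 1))
prof = profile_along_neurite(img, [(5, 0), (5, 9)], n_samples=10)
```

Plotting such profiles against arc length is what allows the subcellular distribution of a stained protein (e.g. ZBP1) to be compared across neurites of different shapes.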

    An overview of touchless 2D fingerprint recognition

    Get PDF
    Touchless fingerprint recognition represents a rapidly growing field of research which has been studied for more than a decade. A touchless acquisition process circumvents many issues of touch-based systems, e.g., the presence of latent fingerprints or distortions caused by pressing fingers onto a sensor surface. However, touchless fingerprint recognition systems introduce new challenges. In particular, reliable detection and focusing of a presented finger, as well as appropriate preprocessing of the acquired finger image, represent the most crucial tasks. Further issues, e.g., interoperability between touchless and touch-based fingerprints or presentation attack detection, are also currently being investigated by different research groups. Many works have been proposed so far to put touchless fingerprint recognition into practice. Published approaches range from self-identification scenarios with commodity devices, e.g., smartphones, to high-performance on-the-move deployments, paving the way for new fingerprint recognition application scenarios. This work summarizes the state of the art in the field of touchless 2D fingerprint recognition at each stage of the recognition process. Additionally, technical considerations and trade-offs of the presented methods are discussed, along with open issues and challenges. An overview of available research resources completes the work.

    RibSeg v2: A Large-scale Benchmark for Rib Labeling and Anatomical Centerline Extraction

    Full text link
    Automatic rib labeling and anatomical centerline extraction are common prerequisites for various clinical applications. Prior studies either use in-house datasets that are inaccessible to the community, or focus on rib segmentation while neglecting the clinical significance of rib labeling. To address these issues, we extend our prior dataset (RibSeg) for the binary rib segmentation task to a comprehensive benchmark, named RibSeg v2, with 660 CT scans (15,466 individual ribs in total) and annotations, manually inspected by experts, for rib labeling and anatomical centerline extraction. Based on RibSeg v2, we develop a pipeline including deep-learning-based methods for rib labeling and a skeletonization-based method for centerline extraction. To improve computational efficiency, we propose a sparse point cloud representation of CT scans and compare it with standard dense voxel grids. Moreover, we design and analyze evaluation metrics to address the key challenges of each task. Our dataset, code, and model are available online to facilitate open research at https://github.com/M3DV/RibSeg
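The sparse point cloud representation mentioned above exploits the fact that rib voxels occupy a tiny fraction of a chest CT: storing only foreground coordinates shrinks the input dramatically compared with a dense voxel grid. A minimal conversion sketch (toy volume, names hypothetical):

```python
import numpy as np

def voxels_to_point_cloud(volume, threshold=0):
    """Convert a dense binary/CT volume to a sparse N x 3 array of
    foreground voxel coordinates (z, y, x)."""
    return np.argwhere(volume > threshold)

# A 32^3 grid containing a short 5-voxel "rib": the point cloud keeps
# only those 5 coordinates instead of all 32768 voxels.
vol = np.zeros((32, 32, 32), dtype=np.uint8)
vol[10, 5, 3:8] = 1
cloud = voxels_to_point_cloud(vol)
```

Such coordinate lists are the natural input format for point-based networks, which is presumably what makes the representation attractive for the labeling pipeline.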

    Robust signatures for 3D face registration and recognition

    Get PDF
    Biometric authentication through face recognition has been an active area of research for the last few decades, motivated by its application-driven demand. The popularity of face recognition, compared with other biometric methods, is largely due to its minimal requirement of subject co-operation, the relative ease of data capture and its similarity to the natural way humans distinguish each other. 3D face recognition has recently received particular interest, since three-dimensional face scans eliminate or reduce important limitations of 2D face images, such as illumination changes and pose variations. In fact, three-dimensional face scans are usually captured by scanners through the use of a constant structured-light source, making them invariant to environmental changes in illumination. Moreover, a single 3D scan also captures the entire face structure and allows for accurate pose normalisation. However, one of the biggest challenges that still remains with three-dimensional face scans is their sensitivity to large local deformations due to, for example, facial expressions. Owing to the nature of the data, such deformations bring about large changes in the 3D geometry of the scan. In addition, 3D scans are also characterised by noise and artefacts such as spikes and holes, which are uncommon in 2D images and require a pre-processing stage specific to the scanner used to capture the data. The aim of this thesis is to devise a face signature that is compact in size and overcomes the above-mentioned limitations. We investigate the use of facial regions and landmarks towards a robust and compact face signature, and we study, implement and validate a region-based and a landmark-based face signature. Combinations of regions and landmarks are evaluated for their robustness to pose and expressions, while the matching scheme is evaluated for its robustness to noise and data artefacts.
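One classic way to realise a compact, pose-robust landmark-based signature, sketched here as an illustration rather than the thesis's actual method, is the vector of pairwise Euclidean distances between 3D landmarks: rigid rotations and translations of the scan leave it unchanged. All names and toy landmark coordinates below are hypothetical.

```python
from itertools import combinations

import numpy as np

def landmark_signature(landmarks):
    """Compact face signature: the vector of pairwise Euclidean
    distances between 3D landmarks, invariant to rigid pose changes."""
    pts = np.asarray(landmarks, dtype=float)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i, j in combinations(range(len(pts)), 2)])

def signature_distance(a, b):
    """Match score between two scans: distance of their signatures."""
    return np.linalg.norm(landmark_signature(a) - landmark_signature(b))

# The same four toy landmarks, rigidly translated: the signatures
# coincide, so the match distance is (near) zero.
face = [(0, 0, 0), (3, 0, 0), (0, 4, 0), (0, 0, 5)]
moved = [(x + 1, y + 2, z + 3) for x, y, z in face]
```

With n landmarks the signature has n(n-1)/2 entries, so it stays compact even for dense landmark sets, though unlike region-based signatures it remains sensitive to landmarks displaced by expressions.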