282 research outputs found

    Multi-modality cardiac image computing: a survey

    Multi-modality cardiac imaging plays a key role in the management of patients with cardiovascular diseases. It combines complementary anatomical, morphological and functional information, increases diagnostic accuracy, and improves the efficacy of cardiovascular interventions and clinical outcomes. Fully automated processing and quantitative analysis of multi-modality cardiac images could have a direct impact on clinical research and evidence-based patient management. However, these require overcoming significant challenges, including inter-modality misalignment and finding optimal methods to integrate information from different modalities. This paper provides a comprehensive review of multi-modality imaging in cardiology, the associated computing methods, validation strategies, related clinical workflows and future perspectives. For the computing methodologies, we focus on three tasks, namely registration, fusion and segmentation, which generally involve multi-modality imaging data, either combining information from different modalities or transferring information across modalities. The review highlights that multi-modality cardiac imaging data has wide potential applicability in the clinic, for example in trans-aortic valve implantation guidance, myocardial viability assessment, and catheter ablation therapy and its patient selection. Nevertheless, many challenges remain unsolved, such as missing modalities, modality selection, the combination of imaging and non-imaging data, and uniform analysis and representation of different modalities. Work also remains in defining how well-developed techniques fit into clinical workflows and how much additional, relevant information they introduce. These problems are likely to remain an active field of research, and the questions they raise will need to be answered in the future.
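As background to the registration task this survey covers: intensity-based multi-modal registration commonly maximizes a statistical similarity measure such as mutual information, because corresponding structures in, say, CT and MR need not share raw intensities, only a statistical relationship. A minimal illustrative sketch (toy 1-D intensity lists, not any method from the survey):

```python
import math
from collections import Counter

def mutual_information(img_a, img_b, bins=4):
    """Estimate mutual information between two aligned images.

    Multi-modal registration can maximize this score over candidate
    transformations: it is high when one modality's intensities predict
    the other's, regardless of whether the raw values agree.
    """
    assert len(img_a) == len(img_b)
    n = len(img_a)
    # Quantize intensities (assumed to lie in [0, 1]) into histogram bins.
    qa = [min(int(v * bins), bins - 1) for v in img_a]
    qb = [min(int(v * bins), bins - 1) for v in img_b]
    joint = Counter(zip(qa, qb))   # joint histogram counts
    pa = Counter(qa)               # marginal counts, image A
    pb = Counter(qb)               # marginal counts, image B
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        # p_ab * log(p_ab / (p_a * p_b)), with marginals as counts / n
        mi += p_ab * math.log(p_ab * n * n / (pa[a] * pb[b]))
    return mi
```

A well-aligned image pair scores higher than a scrambled pairing of the same intensities, which is what a registration optimizer exploits.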

    Context-aware learning for robot-assisted endovascular catheterization

    Endovascular intervention has become a mainstream treatment of cardiovascular diseases. However, multiple challenges remain, such as unwanted radiation exposure, limited two-dimensional image guidance, and insufficient force perception and haptic cues. Fast-evolving robot-assisted platforms improve the stability and accuracy of instrument manipulation, and the master-slave design also removes radiation exposure to the operator. However, the integration of robotic systems into the current surgical workflow is still debated, since repetitive, easy tasks have little value when executed by robotic teleoperation. Current systems offer very low autonomy; potential autonomous features could bring benefits such as reduced cognitive workload and human error, safer and more consistent instrument manipulation, and the ability to incorporate various medical imaging and sensing modalities. This research proposes frameworks for automated catheterisation based on different machine learning algorithms, including Learning-from-Demonstration, Reinforcement Learning, and Imitation Learning. These frameworks focus on integrating task context into the skill-learning process, thereby achieving better adaptation to different situations and safer tool-tissue interactions. Furthermore, the autonomous features were applied to a next-generation, MR-safe robotic catheterisation platform. The results provide important insights into improving catheter navigation through autonomous task planning and self-optimization with clinically relevant factors, and motivate the design of intelligent, intuitive, and collaborative robots under non-ionizing imaging modalities.
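To give a flavour of the reinforcement-learning ingredient mentioned above, here is a deliberately tiny tabular Q-learning sketch. The environment is a hypothetical stand-in (a 1-D discretized vessel with a penalized contact zone), not the thesis's actual Learning-from-Demonstration or imitation-learning pipelines:

```python
import random

# Toy stand-in for autonomous catheter advancement: the tip moves along a
# discretized 1-D vessel (states 0..6) toward a target branch (state 6),
# with a penalty at state 3 mimicking a risky contact zone. All states,
# rewards, and hyperparameters here are illustrative assumptions.

N_STATES, TARGET, RISKY = 7, 6, 3
ACTIONS = (-1, +1)  # retract / advance

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    if nxt == TARGET:
        return nxt, 10.0, True                   # reached the target branch
    reward = -5.0 if nxt == RISKY else -1.0      # discourage the contact zone
    return nxt, reward, False

def train(episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]    # Q-table: state x action
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max(0, 1, key=lambda i: q[s][i])
            nxt, r, done = step(s, ACTIONS[a])
            # standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
    return q
```

After training, the greedy policy advances at every state, which is the optimal behaviour in this toy vessel since the target lies past the penalty zone.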

    Enhanced feature selection algorithm for pneumonia detection

    Pneumonia is a lung disease that can be detected using X-ray images. The analysis of chest X-ray images is an active research area in medical image analysis and computer-aided radiology. This research aims to improve the accuracy and efficiency of radiologists' work by providing a technique for identifying and categorizing the disease. More attention should be given to applying machine learning approaches to develop a robust chest X-ray image classification method. The typical method for detecting pneumonia is through chest X-ray images, but analyzing these images is complex and requires the expertise of a radiographer. This paper demonstrates the feasibility of detecting the disease from chest X-ray datasets using a Support Vector Machine combined with a Naive Bayesian classifier, with Principal Component Analysis (PCA) and a Genetic Algorithm (GA) as feature selection methods. The selected features are essential for training the classifiers. The proposed system achieved an accuracy of 92.26% using 91% of the principal components, and the study's results suggest that using PCA and GA for feature selection in chest X-ray image classification can achieve an accuracy of up to 97.44%. Further research is needed to explore other data mining models and care components to improve the accuracy and effectiveness of the system.
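The GA-based feature selection mentioned above can be sketched in a few lines. In the paper, the fitness would be classifier accuracy (SVM / Naive Bayes on the X-ray features); here a synthetic fitness that rewards two hypothetical "informative" feature indices stands in, purely for illustration:

```python
import random

# Minimal genetic-algorithm feature-selection sketch. Individuals are
# bit masks over the feature set; fitness is a synthetic stand-in for
# classifier accuracy, rewarding two assumed-informative features and
# penalizing subset size.

N_FEATURES = 10
INFORMATIVE = {2, 7}  # hypothetical ground-truth useful features

def fitness(mask):
    hits = sum(1 for i in INFORMATIVE if mask[i])
    return hits - 0.1 * sum(mask)  # useful features good, small subsets preferred

def select_features(pop_size=20, generations=40, seed=1):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_FEATURES)       # point mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = select_features()
```

The elitist half-and-half replacement guarantees the best mask found so far is never lost between generations, which is why the fitness of the returned mask is non-decreasing over the run.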

    Automatic image analysis of C-arm Computed Tomography images for ankle joint surgeries

    Open reduction and internal fixation is a standard procedure in ankle surgery for treating a fractured fibula. Since fibula fractures are often accompanied by an injury of the syndesmosis complex, it is essential to restore the correct pose of the fibula relative to the adjoining tibia for the ligaments to heal. Otherwise, the patient might experience instability of the ankle, leading to arthritis, ankle pain and ultimately revision surgery. Incorrect positioning, referred to as malreduction, of the fibula is assumed to be one of the major causes of unsuccessful ankle surgery. 3D C-arm imaging is the current standard procedure for revealing malreduction of fractures in the operating room. However, intra-operative visual inspection of the reduction result is complicated by the high inter-individual variation of ankle anatomy and therefore relies on the subjective experience of the surgeon. A contralateral side comparison with the patient's uninjured ankle is recommended but has not been integrated into clinical routine due to the high level of radiation exposure it incurs. This thesis presents the first approach towards a computer-assisted intra-operative contralateral side comparison of the ankle joint. The focus of this thesis was the design, development and validation of a software-based prototype of a fully automatic intra-operative assistance system for orthopedic surgeons. The implementation does not require an additional 3D C-arm scan of the uninjured ankle, thus reducing time consumption and cumulative radiation dose. A 3D statistical shape model (SSM) is used to reconstruct a 3D surface model of the uninjured ankle from three 2D fluoroscopic projections. To this end, a 3D SSM segmentation is performed on the 3D image of the injured ankle to gain prior knowledge of the ankle. A 3D convolutional neural network (CNN) based initialization method was developed and its outcome incorporated into the SSM adaptation step.
Segmentation quality was shown to be improved in terms of accuracy and robustness compared to the purely intensity-based SSM. This allows us to overcome the limitations of previously proposed methods, namely inaccuracy due to metal artifacts and the lack of device-to-patient orientation of the C-arm. A 2D-CNN is employed to extract semantic knowledge from all fluoroscopic projection images. This step of the pipeline both creates features for the subsequent reconstruction and helps to pre-initialize the 3D SSM without user interaction. A 2D-3D multi-bone reconstruction method was developed which uses distance maps of the 2D features for fast and accurate correspondence optimization and SSM adaptation. This is the central and most crucial component of the workflow; it is the first time a bone reconstruction method has been applied to the complex ankle joint, and the first reconstruction method to use CNN-based segmentations as features. The reconstructed 3D SSM of the uninjured ankle can be back-projected and visualized in a workflow-oriented manner to provide a clear visualization of the region of interest, which is essential for the evaluation of the reduction result. The surgeon can thus directly compare an overlay of the contralateral ankle with the injured ankle. The developed methods were evaluated individually using data sets acquired during a cadaver study as well as representative clinical data acquired during fibular reduction. A hierarchical evaluation was designed to assess the inaccuracies of the system at different levels and to identify major sources of error. The overall evaluation, performed on eleven challenging clinical datasets acquired for manual contralateral side comparison, showed that the system is capable of accurately reconstructing 3D surface models of the uninjured ankle using only three projection images.
A mean Hausdorff distance of 1.72 mm was measured when comparing the reconstruction result to the ground-truth segmentation, almost achieving the required clinical accuracy of 1-2 mm. The overall error of the pipeline was mainly attributed to inaccuracies in the 2D-CNN segmentation. The consistency of these results requires further validation on a larger dataset. The workflow proposed in this thesis establishes the first approach to enable automatic computer-assisted contralateral side comparison in ankle surgery. The feasibility of the proposed approach was proven on a limited number of clinical cases and has already yielded good results. The next important step is to alleviate the identified bottlenecks by providing more training data in order to further improve the accuracy. In conclusion, the new approach presented offers the chance to guide the surgeon during the reduction process, improve the surgical outcome while avoiding additional radiation exposure, and reduce the number of revision surgeries in the long term.
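For readers unfamiliar with the Hausdorff distance used in the evaluation above: it measures the worst-case disagreement between two surfaces. A minimal sketch on 3-D point sets (the classic symmetric max form; the thesis reports a mean variant, which averages the per-point minimum distances instead of taking their maximum):

```python
import math

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two 3-D point sets,
    e.g. a reconstructed bone surface vs. a ground-truth segmentation.

    directed(A, B) finds, for each point of A, its nearest neighbour in
    B, and returns the largest such distance; the symmetric distance is
    the worse of the two directions.
    """
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)

    return max(directed(points_a, points_b), directed(points_b, points_a))
```

This brute-force version is O(|A|·|B|); real surface meshes would use a spatial index, but the definition is the same.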

    U-Net and its variants for medical image segmentation: theory and applications

    U-net is an image segmentation technique developed primarily for medical image analysis that can precisely segment images using a scarce amount of training data. These traits provide U-net with a very high utility within the medical imaging community and have resulted in its extensive adoption as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use across all major image modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, there have been instances of its use in other applications. As the potential of U-net is still increasing, in this review we look at the various developments that have been made in the U-net architecture and provide observations on recent trends. We examine the various innovations that have been made in deep learning and discuss how these tools facilitate U-net. Furthermore, we look at the image modalities and application areas where U-net has been applied. Comment: 42 pages, in IEEE Access.
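The core U-net idea can be illustrated at the level of feature-map shapes: the encoder path downsamples, the decoder path upsamples, and the matching encoder map is concatenated back in (the skip connection) so fine spatial detail survives the bottleneck. A shape-only sketch with plain nested lists, omitting the learned convolutions a real U-net interleaves at every stage:

```python
# Feature maps are nested lists indexed [channel][row][col].
# Spatial dimensions are assumed even, as in U-net's 2x2 pooling.

def max_pool2x2(fmap):
    """Encoder-path downsampling: 2x2 max pooling per channel."""
    return [[[max(ch[r][c], ch[r][c + 1], ch[r + 1][c], ch[r + 1][c + 1])
              for c in range(0, len(ch[0]), 2)]
             for r in range(0, len(ch), 2)]
            for ch in fmap]

def upsample2x2(fmap):
    """Decoder-path upsampling: nearest-neighbour 2x duplication."""
    out = []
    for ch in fmap:
        rows = []
        for r in ch:
            wide = [v for v in r for _ in (0, 1)]  # duplicate columns
            rows.append(wide)
            rows.append(list(wide))                # duplicate rows
        out.append(rows)
    return out

def skip_concat(decoder_fmap, encoder_fmap):
    """U-net skip connection: channel-wise concatenation."""
    return decoder_fmap + encoder_fmap
```

Running a 1-channel 4x4 map through pool, upsample, and concat yields a 2-channel 4x4 map: the upsampled (blurry) channel plus the original (detailed) one, which the next convolution would then fuse.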

    Medical Image Segmentation by Deep Convolutional Neural Networks

    Medical image segmentation is a fundamental and critical step in medical image analysis. Due to the complexity and diversity of medical images, their segmentation continues to be a challenging problem. Recently, deep learning techniques, especially Convolutional Neural Networks (CNNs), have received extensive research attention and achieved great success in many vision tasks. In particular, with the advent of Fully Convolutional Networks (FCNs), automatic medical image segmentation based on FCNs has become a promising research field. This thesis focuses on two medical image segmentation tasks: lung segmentation in chest X-ray images and nuclei segmentation in histopathological images. For the lung segmentation task, we investigate several FCNs that have been successful in semantic and medical image segmentation and evaluate their performance on three publicly available chest X-ray image datasets. For the nuclei segmentation task, whose challenges are the difficulty of segmenting small, overlapping and touching nuclei and the limited ability to generalize to nuclei in different organs and tissue types, we propose a novel nuclei segmentation approach based on a two-stage learning framework and Deep Layer Aggregation (DLA). We convert the original binary segmentation task into a two-step task by adding nuclei-boundary prediction (three classes) as an intermediate step. To solve this two-step task, we design a two-stage learning framework by stacking two U-Nets: the first stage estimates nuclei and their coarse boundaries, while the second stage outputs the final fine-grained segmentation map. Furthermore, we extend the U-Nets with DLA by iteratively merging features across different levels. We evaluate our proposed method on two diverse public nuclei datasets. The experimental results show that our proposed approach outperforms many standard segmentation architectures and recently proposed nuclei segmentation methods, and can easily be generalized across different cell types in various organs.
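The three-class reformulation described above (background, nucleus interior, nucleus boundary) can be derived from a binary mask. One simple rule, not necessarily the thesis's exact one: a foreground pixel becomes "boundary" if any of its 4-neighbours is background, so touching nuclei are separated by explicit boundary pixels:

```python
# Convert a binary nuclei mask into three classes:
# 0 = background, 1 = nucleus interior, 2 = nucleus boundary.

def to_three_class(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue  # background stays class 0
            # Boundary if any 4-neighbour is outside the image or background.
            on_edge = any(
                nr < 0 or nr >= h or nc < 0 or nc >= w or not mask[nr][nc]
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
            )
            out[r][c] = 2 if on_edge else 1
    return out
```

A network trained on such targets learns to predict boundaries explicitly, which is what makes merging touching instances back apart tractable in the second stage.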

    Analysis and processing of X-ray images (Analýza a zpracování rentgenových snímků)

    The aim of this master's thesis was to design and test new machine learning methods for removing unwanted structures, mainly ribs, from X-ray images. This modification should aid subsequent analysis. The work focuses not only on the design of the new methods but also on their comparison, over both an annotated dataset and a dataset provided by Olomouc University Hospital. Among the results for the individual approaches, the proposed generative adversarial network stands out, achieving excellent results compared to the other approaches.

    Deep learning in medical image registration: introduction and survey

    Image registration (IR) is the process of deforming images to align them with respect to a reference space, making it easier for medical practitioners to examine different medical images in a standardized reference frame, for instance with the same rotation and scale. This document introduces image registration using a simple numeric example and provides a definition of image registration along with a space-oriented symbolic representation. The review covers various aspects of image transformations, including affine, deformable, invertible, and bidirectional transformations, as well as medical image registration algorithms such as VoxelMorph, Demons, SyN, Iterative Closest Point, and SynthMorph. It also explores atlas-based registration and multistage image registration techniques, including coarse-to-fine and pyramid approaches. Furthermore, this survey discusses medical image registration taxonomies, datasets, and evaluation measures such as correlation-based metrics, segmentation-based metrics, processing time, and model size. It also explores applications in image-guided surgery, motion tracking, and tumor diagnosis. Finally, the document addresses future research directions, including the further development of transformers.
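Among the segmentation-based evaluation measures this survey lists, the Dice coefficient is the most common: after warping a labelled image into the reference space, its overlap with the reference labels is scored. A minimal sketch on flat binary masks:

```python
def dice(seg_a, seg_b):
    """Dice overlap of two binary masks given as flat 0/1 sequences.

    Returns 1.0 for identical masks, 0.0 for disjoint ones; registration
    quality is judged by how well warped labels overlap reference labels.
    """
    assert len(seg_a) == len(seg_b)
    inter = sum(a and b for a, b in zip(seg_a, seg_b))  # |A ∩ B|
    total = sum(seg_a) + sum(seg_b)                     # |A| + |B|
    return 1.0 if total == 0 else 2.0 * inter / total
```

Note that Dice only evaluates alignment where labels exist; correlation-based metrics and landmark errors complement it in regions without segmentations.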

    U-net and its variants for medical image segmentation: A review of theory and applications

    U-net is an image segmentation technique developed primarily for medical image analysis that can precisely segment images using a scarce amount of training data. These traits provide U-net with a high utility within the medical imaging community and have resulted in its extensive adoption as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use in nearly all major image modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, there have been instances of its use in other applications. Given that U-net's potential is still increasing, this narrative literature review examines the numerous developments and breakthroughs in the U-net architecture and provides observations on recent trends. We also discuss the many innovations that have advanced deep learning and how these tools facilitate U-net. In addition, we review the different image modalities and application areas that have been enhanced by U-net.