300 research outputs found
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
3D Multimodal Image Registration: Application to equine PET and CT images
Positron Emission Tomography (PET) has seen wide use in veterinary medicine in recent years. Although it was long limited to small animals because of its classical scanner design and the large radionuclide doses required, PET imaging in horses became possible with the introduction of a portable PET scanner developed by Brain Biosciences Inc. It was observed that this new modality could capture abnormalities, such as lesions, that Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and other modalities could not. Since 2016, PET imaging in horses has been studied and analysed.
While PET provides functional information characterizing the activity of lesions, it is useful to combine it with information from other modalities, such as CT, and match the structural information to build an accurate spatial representation of the data. Since biochemical changes occur much earlier than structural changes, this helps detect lesions and tumours at an early stage. Multimodal image registration is used to achieve this goal. A series of steps is proposed to automate the registration of equine PET and CT images. Multimodal image registration using landmark-based and intensity-based techniques is studied. It is observed that some tissues are not imaged in the PET, which makes image segmentation an important preprocessing step in the registration process. A study of segmentation algorithms relevant to the field of medical imaging is presented. The performance of the segmentation algorithms improved with the extent of manual interaction, and intensity-based registration gave the lowest runtime with reasonable accuracy.
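Intensity-based multimodal registration of this kind typically maximises mutual information (MI) between the fixed CT and the moving PET image. A minimal sketch of the measure itself, estimated from a joint intensity histogram (the function name and bin count are illustrative, not taken from the thesis):

```python
import numpy as np

def mutual_information(fixed, moving, bins=32):
    """Estimate mutual information between two images from their joint
    intensity histogram. Higher MI indicates better spatial alignment,
    which is why it serves as a multimodal similarity measure."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()      # joint probability of intensity pairs
    px = pxy.sum(axis=1)           # marginal of the fixed image
    py = pxy.sum(axis=0)           # marginal of the moving image
    nz = pxy > 0                   # sum only non-zero entries to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# An image carries maximal information about itself, and almost none
# about a randomly shuffled copy:
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
mi_self = mutual_information(img, img)
mi_rand = mutual_information(img, rng.permutation(img.ravel()).reshape(64, 64))
```

A registration loop would repeatedly transform the moving image and keep the transform that maximises this score.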
Advancing fluorescent contrast agent recovery methods for surgical guidance applications
Fluorescence-guided surgery (FGS) utilizes fluorescent contrast agents and specialized optical instruments to assist surgeons in intraoperatively identifying tissue-specific characteristics, such as perfusion, malignancy, and molecular function. In doing so, FGS represents a powerful surgical navigation tool for solving clinical challenges not easily addressed by other conventional imaging methods. With growing translational efforts, major hurdles within the FGS field include: insufficient tools for understanding contrast agent uptake behaviors, the inability to image tissue beyond a couple of millimeters, and lastly, performance limitations of currently approved contrast agents in accurately and rapidly labeling disease. The developments presented within this thesis aim to address such shortcomings.
Current preclinical fluorescence imaging tools often sacrifice either 3D scale or spatial resolution. To address this gap in available high-resolution, whole-body preclinical imaging tools, the crux of this work lies in the development of a hyperspectral cryo-imaging system and image-processing techniques that accurately recapitulate high-resolution, 3D biodistributions in whole-animal experiments. Specifically, the goal is to correct each cryo-imaging dataset such that it becomes a useful reporter for whole-body biodistributions in relevant disease models.
To explore the potential benefits of seeing deeper during FGS, we investigated short-wave infrared (SWIR) imaging for recovering fluorescence beyond the conventional top few millimeters. Through phantom, preclinical, and clinical SWIR imaging, we were able to 1) validate the capability of SWIR imaging with conventional NIR-I fluorophores, 2) demonstrate the translational benefits of SWIR-ICG angiography in a large animal model, and 3) detect micro-dose levels of an EGFR-targeted NIR-I probe during a Phase 0 clinical trial.
Lastly, we evaluated contrast agent performance for FGS glioma resection and breast cancer margin assessment. To evaluate the glioma-labeling performance of untargeted contrast agents, 3D agent biodistributions were compared voxel-by-voxel to gold-standard Gd-MRI and pathology slides. Finally, building on expertise in dual-probe ratiometric imaging at Dartmouth, a 10-pt clinical pilot study was carried out to assess the technique’s efficacy for rapid margin assessment.
In summary, this thesis serves to advance FGS by introducing novel fluorescence imaging devices, techniques, and agents that overcome challenges in understanding whole-body agent biodistributions, recovering agent distributions at greater depths, and verifying agents’ performance for specific FGS applications.
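Dual-probe ratiometric imaging divides the signal of a targeted agent by that of a co-administered untargeted agent so that non-specific delivery effects cancel; subtracting 1 then gives an estimate of specific binding. A generic sketch of this paired-agent arithmetic (not the Dartmouth pipeline; all names and the `eps` guard are illustrative):

```python
import numpy as np

def ratiometric_binding(targeted, untargeted, eps=1e-6):
    """Pixel-wise paired-agent estimate: (targeted / untargeted) - 1.

    Dividing by the untargeted channel cancels shared delivery effects
    (perfusion, permeability); the remainder approximates specific binding.
    The eps floor only guards against division by zero.
    """
    ratio = targeted / np.maximum(untargeted, eps)
    return ratio - 1.0

# Where both agents accumulate equally the estimate is 0 (no specific
# binding); a 2x targeted excess yields 1:
binding = ratiometric_binding(np.array([[2.0, 1.0]]), np.array([[1.0, 1.0]]))
```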
Discrete Visual Perception
Computational vision and biomedical image analysis have made tremendous progress over the past decade, mostly owing to the development of efficient learning and inference algorithms that allow better, faster and richer modeling of visual perception tasks. Graph-based representations are among the most prominent tools to address such perception, by casting it as a graph optimization problem. In this paper, we briefly introduce the interest of such representations, discuss their strengths and limitations, and present their application to a variety of problems in computer vision and biomedical image analysis.
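Casting perception as a graph optimization problem usually means minimising an energy over a labelling of the graph's nodes: per-node data costs plus pairwise costs on edges. A minimal sketch of one such energy, a Potts model on a 4-connected pixel grid (the function and its arguments are illustrative, not from the paper):

```python
import numpy as np

def potts_energy(labels, unary, smoothness=1.0):
    """Energy of a labelling on a 4-connected grid graph: the sum of
    per-pixel data costs unary[i, j, label] plus a constant Potts
    penalty for every neighbouring pair with different labels."""
    h, w = labels.shape
    # Data term: pick each pixel's cost for its assigned label
    data = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    # Smoothness term: count disagreeing horizontal and vertical neighbours
    pair = (labels[:, 1:] != labels[:, :-1]).sum() \
         + (labels[1:, :] != labels[:-1, :]).sum()
    return float(data + smoothness * pair)

unary = np.zeros((2, 2, 2))             # flat data term, 2 labels
labels = np.array([[0, 1], [0, 0]])     # one pixel disagrees with 2 neighbours
energy = potts_energy(labels, unary)    # 2 label-discontinuity edges -> 2.0
```

Inference algorithms such as graph cuts or belief propagation search for the labelling that minimises this kind of energy.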
A Hybrid Similarity Measure Framework for Multimodal Medical Image Registration
Medical imaging is widely used today to facilitate both disease diagnosis and treatment planning practice, with a key prerequisite being the systematic process of medical image registration (MIR) to align either mono- or multimodal images of different anatomical parts of the human body. MIR utilises a similarity measure (SM) to quantify the level of spatial alignment and is particularly demanding due to the presence of inherent modality characteristics like intensity non-uniformities (INU) in magnetic resonance images and large homogeneous non-vascular regions in retinal images. While various intensity- and feature-based SMs exist for MIR, mutual information (MI) has become established because of its computational efficiency and ability to register multimodal images. It is, however, very sensitive to interpolation artefacts in the presence of INU with noise and can be compromised when overlapping areas are small. Recently, MI-based hybrid variants which combine regional features with intensity have emerged, though these incur high dimensionality and large computational overheads.
To address these challenges and secure accurate, efficient and robust registration of images containing high INU, noise and large homogeneous regions, this thesis presents a new hybrid SM framework for 2D multimodal rigid MIR. The framework consistently provides superior quantitative and qualitative performance, while offering a uniquely flexible design trade-off between registration accuracy and computational time. It makes three significant technical contributions to the field: i) An expectation maximisation-based principal component analysis with mutual information (EMPCA-MI) framework incorporating neighbourhood feature information; ii) Two innovative enhancements to reduce information redundancy and improve MI computational efficiency; and iii) An adaptive algorithm to select the most significant principal components for feature selection.
The thesis findings conclusively confirm that the hybrid SM framework offers an accurate and robust 2D registration solution for challenging multimodal medical imaging datasets, while its inherent flexibility means it can also be extended to the 3D registration domain.
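The hybrid idea of augmenting raw intensity with neighbourhood structure before computing MI can be sketched as follows. Note this toy uses plain SVD-based PCA rather than the thesis's expectation-maximisation PCA, and every name and parameter is illustrative:

```python
import numpy as np

def pca_neighbourhood_feature(image, patch=3, n_components=1):
    """Project each pixel's local patch onto its leading principal
    component(s), yielding a regional feature map that can feed an
    MI-style similarity measure alongside raw intensity.

    Plain PCA via SVD stands in here for the EMPCA of the thesis."""
    r = patch // 2
    padded = np.pad(image, r, mode='edge')          # replicate borders
    h, w = image.shape
    # One row per pixel: the patch*patch neighbourhood, flattened
    patches = np.stack([
        padded[i:i + h, j:j + w].ravel()
        for i in range(patch) for j in range(patch)
    ], axis=1).astype(float)
    patches -= patches.mean(axis=0)                 # centre before PCA
    # Principal axes from the SVD of the centred patch matrix
    _, _, vt = np.linalg.svd(patches, full_matrices=False)
    feats = patches @ vt[:n_components].T
    return feats.reshape(h, w, n_components)

feat = pca_neighbourhood_feature(np.arange(100, dtype=float).reshape(10, 10))
```

The reduced feature map (one channel per retained component) keeps dimensionality, and hence the MI computation cost, under control.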
Proceedings of the Second International Workshop on Mathematical Foundations of Computational Anatomy (MFCA'08) - Geometrical and Statistical Methods for Modelling Biological Shape Variability
The goal of computational anatomy is to analyze and statistically model the anatomy of organs across subjects. Computational anatomic methods are generally based on the extraction of anatomical features or manifolds, which are then statistically analyzed, often through a non-linear registration. There is nowadays a growing number of methods that can faithfully deal with the underlying biomechanical behavior of intra-subject deformations. However, it is more difficult to relate the anatomies of different subjects. In the absence of any justified physical model, diffeomorphisms provide a general mathematical framework that enforces topological consistency. Working with such an infinite-dimensional space raises deep computational and mathematical problems, in particular for doing statistics. Likewise, modeling the variability of surfaces requires shape spaces that are much more complex than those for curves. To cope with these issues, different methodological and computational frameworks have been proposed (e.g. smooth left-invariant metrics, a focus on well-behaved subspaces of diffeomorphisms, modeling surfaces using currents, etc.). The goal of the Mathematical Foundations of Computational Anatomy (MFCA) workshop is to foster interactions between the mathematical community around shapes and the MICCAI community around computational anatomy applications. It targets in particular researchers investigating the combination of statistical and geometrical aspects in modeling the variability of biological shapes. The workshop aims to be a forum for the exchange of theoretical ideas and a source of inspiration for new methodological developments in computational anatomy. A special emphasis is put on theoretical developments, with applications and results welcomed as illustrations.
Following the very successful first edition of this workshop in 2006 (see http://www.inria.fr/sophia/asclepios/events/MFCA06/), the second edition was held in New York on September 6, in conjunction with MICCAI 2008. Contributions were solicited in Riemannian and group-theoretical methods, geometric measurements of the anatomy, advanced statistics on deformations and shapes, metrics for computational anatomy, and statistics of surfaces. 34 submissions were received, among which 9 were accepted to MICCAI and had to be withdrawn from the workshop. Each of the remaining 25 papers was reviewed by three members of the program committee. To guarantee a high-level program, only 16 papers were selected.
Convolutional neural networks for the segmentation of small rodent brain MRI
Image segmentation is a common step in the analysis of preclinical brain MRI, often performed manually. This is a time-consuming procedure subject to inter- and intra-rater variability. A possible alternative is automated, registration-based segmentation, which suffers from a bias owing to the limited capacity of registration to adapt to pathological conditions such as Traumatic Brain Injury (TBI). In this work a novel method is developed for the segmentation of small rodent brain MRI based on Convolutional Neural Networks (CNNs). The experiments presented here show how CNNs provide a fast, robust and accurate alternative to both manual and registration-based methods. This is demonstrated by accurately segmenting three large datasets of MRI scans of healthy and Huntington disease model mice, as well as TBI rats. MU-Net and MU-Net-R, the CNNs presented here, achieve human-level accuracy while eliminating intra-rater variability, alleviating the biases of registration-based segmentation, and with an inference time of less than one second per scan. Using these segmentation masks, I designed a geometric construction to extract 39 parameters describing the position and orientation of the hippocampus, and later used them to classify epileptic vs. non-epileptic rats with a balanced accuracy of 0.80, five months after TBI. This clinically transferable geometric approach detects subjects at high risk of post-traumatic epilepsy, paving the way towards subject stratification for antiepileptogenesis studies.
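The abstract does not name the overlap metric behind its accuracy claims; the conventional choice for comparing a predicted segmentation mask against a manual reference is the Dice coefficient, sketched here (function and example values are illustrative):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks.

    Ranges from 0 (disjoint) to 1 (identical); the standard way to score
    an automatic mask against a rater's manual delineation."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 1.0 if total == 0 else 2.0 * inter / total

# Two 4x4 squares offset by one pixel: 16 voxels each, 9 shared,
# so Dice = 2*9 / (16+16) = 0.5625
pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1
ref = np.zeros((8, 8), dtype=int); ref[3:7, 3:7] = 1
score = dice(pred, ref)
```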