
    HeMIS: Hetero-Modal Image Segmentation

    We introduce a deep learning image segmentation framework that is extremely robust to missing imaging modalities. Instead of attempting to impute or synthesize missing data, the proposed approach learns, for each modality, an embedding of the input image into a single latent vector space for which arithmetic operations (such as taking the mean) are well defined. Points in that space, which are averaged over modalities available at inference time, can then be further processed to yield the desired segmentation. As such, any combinatorial subset of available modalities can be provided as input, without having to learn a combinatorial number of imputation models. Evaluated on two neurological MRI datasets (brain tumors and MS lesions), the approach yields state-of-the-art segmentation results when provided with all modalities; moreover, its performance degrades remarkably gracefully when modalities are removed, significantly more so than alternative mean-filling or other synthesis approaches. Comment: Accepted as an oral presentation at MICCAI 201
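The core idea — embed each available modality independently into a shared latent space and fuse via moments that remain well defined for any subset — can be sketched as follows. This is a minimal NumPy illustration; the random linear "embedders", modality names, and dimensions are hypothetical stand-ins, not the paper's actual convolutional network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality "embedding networks": fixed random linear maps
# from a flattened image patch to a shared latent space (illustrative only).
LATENT_DIM = 8
PATCH_DIM = 16
embedders = {m: rng.standard_normal((LATENT_DIM, PATCH_DIM))
             for m in ["T1", "T2", "FLAIR", "T1c"]}

def hetero_modal_fuse(patches):
    """Embed each available modality, then fuse via first and second moments.

    `patches` maps modality name -> flattened patch vector. Any subset of
    modalities can be supplied; the fused statistics stay well defined.
    """
    latents = np.stack([embedders[m] @ x for m, x in patches.items()])
    mean = latents.mean(axis=0)         # first moment over modalities
    var = latents.var(axis=0)           # second moment (zero for one modality)
    return np.concatenate([mean, var])  # would feed the segmentation head

x = rng.standard_normal(PATCH_DIM)
full = hetero_modal_fuse({m: x for m in embedders})   # all four modalities
partial = hetero_modal_fuse({"T1": x, "FLAIR": x})    # any subset works
```

Because the fusion is a per-dimension mean and variance rather than a concatenation, the downstream head sees a fixed-size input regardless of which modalities were acquired.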

    Identifying chromophore fingerprints of brain tumor tissue on hyperspectral imaging using principal component analysis

    Hyperspectral imaging (HSI) is an optical technique that processes the electromagnetic spectrum at a multitude of monochromatic, adjacent frequency bands. The wide-bandwidth spectral signature of a target object's reflectance allows fingerprinting its physical, biochemical, and physiological properties. HSI has been applied in various domains, such as remote sensing and biological tissue analysis. Recently, HSI was also used to differentiate between healthy and pathological tissue under operative conditions in a surgery room on patients diagnosed with brain tumors. In this article, we perform a statistical analysis of the brain tumor patients' HSI scans from the HELICoiD dataset with the aim of identifying the correlation between reflectance spectra and absorption spectra of tissue chromophores. By using principal component analysis (PCA), we determine the most relevant spectral features for intra- and inter-tissue class differentiation. Furthermore, we demonstrate that such spectral features are correlated with the spectra of cytochrome, i.e., the chromophore highly involved in (hyper)metabolic processes. Identifying such fingerprints of chromophores in reflectance spectra is a key step for automated molecular profiling and, eventually, expert-free biomarker discovery.
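The PCA step described above — extracting the dominant spectral components from a pixels-by-bands reflectance matrix — can be sketched with NumPy. The synthetic spectra below are an illustrative stand-in, not the HELICoiD data; the two Gaussian "chromophore" components are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for reflectance spectra: n_pixels x n_bands, built
# from two latent "chromophore" components plus measurement noise.
n_pixels, n_bands = 200, 64
bands = np.linspace(0, 1, n_bands)
comp1 = np.exp(-((bands - 0.3) ** 2) / 0.01)   # absorption-like peak 1
comp2 = np.exp(-((bands - 0.7) ** 2) / 0.02)   # absorption-like peak 2
weights = rng.random((n_pixels, 2))
spectra = (weights @ np.stack([comp1, comp2])
           + 0.01 * rng.standard_normal((n_pixels, n_bands)))

# PCA via SVD of the mean-centered data matrix.
centered = spectra - spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)

# The leading principal components capture nearly all spectral variance;
# their loadings (rows of Vt) are the candidate spectral fingerprints
# to correlate with known chromophore absorption spectra.
```

Correlating the top loading vectors against reference absorption spectra (e.g. of cytochrome) is then a simple inner-product or correlation computation per component.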

    Measurement of the reaction gamma p --> K+ Lambda(1520) at photon energies up to 2.65 GeV

    The reaction gamma p --> K+ Lambda(1520) was measured in the energy range from threshold to 2.65 GeV with the SAPHIR detector at the electron stretcher facility ELSA in Bonn. The Lambda(1520) production cross section was analyzed in the decay modes p K-, n Kbar0, Sigma± pi∓, and Lambda pi+ pi- as a function of the photon energy and the squared four-momentum transfer t. While the cross sections for the inclusive reactions rise steadily with energy, the cross section of the process gamma p --> K+ Lambda(1520) peaks at a photon energy of about 2.0 GeV, falls off exponentially with t, and shows a slope flattening with increasing photon energy. The angular distributions in the t-channel helicity system indicate neither a K nor a K* exchange dominance. The interpretation of the Lambda(1520) as a Sigma(1385) pi molecule is not supported. Comment: 11 pages, 16 figures, 4 tables

    Deep learning-based parameter mapping for joint relaxation and diffusion tensor MR Fingerprinting

    Magnetic Resonance Fingerprinting (MRF) enables the simultaneous quantification of multiple properties of biological tissues. It relies on a pseudo-random acquisition and the matching of acquired signal evolutions to a precomputed dictionary. However, the dictionary is not scalable to higher-parametric spaces, limiting MRF to the simultaneous mapping of only a small number of parameters (generally proton density, T1, and T2). Inspired by diffusion-weighted SSFP imaging, we present a proof of concept of a novel MRF sequence with embedded diffusion-encoding gradients along all three axes to efficiently encode orientational diffusion and T1 and T2 relaxation. We take advantage of a convolutional neural network (CNN) to reconstruct multiple quantitative maps from this single, highly undersampled acquisition. We bypass expensive dictionary matching by learning the implicit physical relationships between the spatiotemporal MRF data and the T1, T2, and diffusion tensor parameters. The predicted parameter maps and the derived scalar diffusion metrics agree well with state-of-the-art reference protocols. Orientational diffusion information is captured, as seen from the estimated primary diffusion directions. In addition, the joint acquisition and reconstruction framework proves capable of preserving tissue abnormalities in multiple sclerosis lesions.
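The dictionary-free idea — learning a direct regression from an acquired signal evolution to the underlying tissue parameters, instead of nearest-neighbor matching against an exponentially large dictionary — can be sketched with a toy model. The paper uses a CNN on spatiotemporal data; the two-exponential signal model and least-squares fit below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_fingerprint(t1, t2, n_timepoints=50):
    """Toy signal evolution: NOT the actual MRF sequence, just two decays."""
    t = np.linspace(0.01, 1.0, n_timepoints)
    return np.exp(-t / t1) + 0.5 * np.exp(-t / t2)

# "Training set": simulated fingerprints with known (T1, T2) pairs.
params = rng.uniform(0.1, 2.0, size=(500, 2))
signals = np.stack([simulate_fingerprint(t1, t2) for t1, t2 in params])

# Learn the inverse mapping signal -> parameters by least squares
# (a linear stand-in for the CNN); inference is then a single matrix
# product per voxel, with no dictionary search.
W, *_ = np.linalg.lstsq(signals, params, rcond=None)

test_signal = simulate_fingerprint(1.0, 0.5)
t1_hat, t2_hat = test_signal @ W   # approximate parameter estimates
```

The same pattern scales to higher-dimensional parameter spaces (e.g. diffusion tensor components), which is precisely where an explicit dictionary becomes intractable.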

    Brain Tumor Segmentation from Multi-Spectral MR Image Data Using Random Forest Classifier

    The development of brain tumor segmentation techniques based on multi-spectral MR image data has a relevant impact on clinical practice via better diagnosis, radiotherapy planning, and follow-up studies. The task is also very challenging due to the great variety of tumor appearances, the presence of several noise effects, and differences in scanner sensitivity. This paper proposes an automatic procedure trained to distinguish gliomas from normal brain tissues in multi-spectral MRI data. The procedure is based on a random forest (RF) classifier, which uses 80 computed features besides the four observed ones, including morphological features, gradients, and Gabor wavelet features. The intermediary segmentation outcome provided by the RF is fed to a twofold post-processing, which regularizes the shape of detected tumors and enhances the segmentation accuracy. The performance of the procedure was evaluated using the 274 records of the BraTS 2015 training data set. The achieved overall Dice scores of 85-86% represent highly accurate segmentation.
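The pipeline shape — augment the four observed channels with computed features, then classify voxels with a random forest — can be sketched with scikit-learn. The synthetic voxel data and the three computed features below are illustrative assumptions; the paper uses 80 computed features, including morphological, gradient, and Gabor wavelet features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Synthetic stand-in for multi-spectral MRI voxels: 4 observed channels
# (e.g. T1, T2, T1c, FLAIR); "tumor" voxels are brighter in two channels.
n = 400
labels = rng.integers(0, 2, size=n)      # 0 = normal tissue, 1 = glioma
observed = (rng.standard_normal((n, 4))
            + labels[:, None] * np.array([0.0, 1.5, 1.5, 0.0]))

# A few cheap "computed" features alongside the observed intensities.
computed = np.column_stack([
    observed.mean(axis=1),                      # local intensity average
    observed.std(axis=1),                       # local contrast
    observed.max(axis=1) - observed.min(axis=1) # intensity range
])
features = np.hstack([observed, computed])

# Random forest voxel classifier, as in the proposed procedure.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features, labels)
accuracy = clf.score(features, labels)   # training accuracy on the toy data
```

In the actual procedure the per-voxel RF output is then post-processed to regularize tumor shape before Dice evaluation.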

    Whole Brain Vessel Graphs: A Dataset and Benchmark for Graph Learning and Neuroscience (VesselGraph)

    Biological neural networks define the brain function and intelligence of humans and other mammals, and form ultra-large, spatial, structured graphs. Their neuronal organization is closely interconnected with the spatial organization of the brain's microvasculature, which supplies oxygen to the neurons and builds a complementary spatial graph. This vasculature (or the vessel structure) plays an important role in neuroscience; for example, the organization of (and changes to) vessel structure can represent early signs of various pathologies, e.g. Alzheimer's disease or stroke. Recently, advances in tissue clearing have enabled whole brain imaging and segmentation of the entirety of the mouse brain's vasculature. Building on these advances in imaging, we present an extendable dataset of whole-brain vessel graphs based on specific imaging protocols. Specifically, we extract vascular graphs using a refined graph extraction scheme leveraging the volume rendering engine Voreen and provide them in an accessible and adaptable form through the OGB and PyTorch Geometric dataloaders. Moreover, we benchmark numerous state-of-the-art graph learning algorithms on the biologically relevant tasks of vessel prediction and vessel classification using the introduced vessel graph dataset. Our work paves a path towards advancing graph learning research into the field of neuroscience. Complementarily, the presented dataset raises challenging graph learning research questions for the machine learning community, in terms of incorporating biological priors into learning algorithms, or in scaling these algorithms to handle sparse, spatial graphs with millions of nodes and edges. All datasets and code are available for download at this https URL
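The vessel (link) prediction task can be sketched without downloading the dataset: score candidate edges between spatial nodes and check that true vessel edges rank above random non-edges. The toy graph, the distance-based scorer, and the negative-sampling scheme below are illustrative assumptions; the actual benchmark uses the OGB / PyTorch Geometric loaders and learned GNN embeddings.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy spatial vessel graph: nodes are 3D bifurcation points; true edges
# connect spatially close nodes (a crude prior for vasculature).
n_nodes = 100
coords = rng.random((n_nodes, 3))

def edge_score(i, j):
    """Score a candidate vessel between nodes i and j: closer = likelier."""
    return -np.linalg.norm(coords[i] - coords[j])

# Positives: each node paired with its nearest neighbour;
# negatives: arbitrary distant pairings as non-edges.
dists = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
np.fill_diagonal(dists, np.inf)
nearest = dists.argmin(axis=1)
positives = [(i, nearest[i]) for i in range(n_nodes)]
negatives = [(i, (i + 50) % n_nodes) for i in range(n_nodes)]

pos_scores = np.array([edge_score(i, j) for i, j in positives])
neg_scores = np.array([edge_score(i, j) for i, j in negatives])

# Pairwise ranking quality (an AUC-style metric): a useful scorer
# ranks true vessel edges above random non-edges.
auc = np.mean(pos_scores[:, None] > neg_scores[None, :])
```

On the real graphs, the spatial prior alone is a weak baseline; the benchmarked GNNs learn richer structural features, but the evaluation protocol has this same shape.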

    Measurement of gamma p --> K+ Lambda and gamma p --> K+ Sigma0 at photon energies up to 2.6 GeV

    The reactions gamma p --> K+ Lambda and gamma p --> K+ Sigma0 were measured in the energy range from threshold up to a photon energy of 2.6 GeV. The data were taken with the SAPHIR detector at the electron stretcher facility, ELSA. Results on cross sections and hyperon polarizations are presented as a function of kaon production angle and photon energy. The total cross section for Lambda production rises steeply with energy close to threshold, whereas the Sigma0 cross section rises slowly to a maximum at about E_gamma = 1.45 GeV. Cross sections together with their angular decompositions into Legendre polynomials suggest contributions from resonance production for both reactions. In general, the induced polarization of Lambda has negative values in the kaon forward direction and positive values in the backward direction. The magnitude varies with energy. The polarization of Sigma0 shows an angular and energy dependence similar to that of Lambda, but with opposite sign. Comment: 21 pages, 25 figures, submitted to Eur. Phys. J.
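The angular decomposition mentioned above — fitting a measured angular distribution with a sum of Legendre polynomials in cos(theta) — can be sketched with NumPy. The synthetic distribution and the coefficient values below are illustrative, not SAPHIR results.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(5)

# Synthetic differential cross section over cos(theta) of the kaon,
# built from a known Legendre expansion plus small measurement noise.
cos_theta = np.linspace(-1, 1, 40)
true_coeffs = [1.0, 0.4, -0.2]   # a0, a1, a2 (illustrative values)
dsigma = (legendre.legval(cos_theta, true_coeffs)
          + 0.01 * rng.standard_normal(40))

# Fit the angular distribution with Legendre polynomials up to degree 2.
# Significant higher-order coefficients in such fits are what signal
# contributions from resonance production.
fit_coeffs = legendre.legfit(cos_theta, dsigma, deg=2)
```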

    Joint 3D estimation of vehicles and scene flow

    driving. While much progress has been made in recent years, imaging conditions in natural outdoor environments are still very challenging for current reconstruction and recognition methods. In this paper, we propose a novel unified approach which reasons jointly about 3D scene flow as well as the pose, shape and motion of vehicles in the scene. Towards this goal, we incorporate a deformable CAD model into a slanted-plane conditional random field for scene flow estimation and enforce shape consistency between the rendered 3D models and the parameters of all superpixels in the image. The association of superpixels to objects is established by an index variable which implicitly enables model selection. We evaluate our approach on the challenging KITTI scene flow dataset in terms of object and scene flow estimation. Our results provide a proof of concept and demonstrate the usefulness of our method. © 2015 Copernicus GmbH. All Rights Reserved.

    A for-loop is all you need. For solving the inverse problem in the case of personalized tumor growth modeling

    Solving the inverse problem is the key step in evaluating the capacity of a physical model to describe real phenomena. In medical image computing, it aligns with the classical theme of image-based model personalization. Traditionally, a solution to the problem is obtained by performing either sampling or variational inference based methods. Both approaches aim to identify a set of free physical model parameters that results in a simulation best matching an empirical observation. When applied to brain tumor modeling, one of the instances of image-based model personalization in medical image computing, the overarching drawback of these methods is the time complexity of finding such a set. In a clinical setting with limited time between imaging and diagnosis or even intervention, this time complexity may prove critical. As the history of quantitative science is the history of compression (Schmidhuber and Fridman, 2018), we align in this paper with the historical tendency and propose a method compressing complex traditional strategies for solving an inverse problem into a simple database query task. We evaluated different ways of performing the database query task, assessing the trade-off between accuracy and execution time. On the exemplary task of brain tumor growth modeling, we show that the proposed method achieves a one-order-of-magnitude speed-up compared to existing approaches for solving the inverse problem. The resulting compute time offers critical means for relying on more complex and, hence, realistic models, for integrating image preprocessing and inverse modeling even more tightly, or for implementing the current model in a clinical workflow. The code is available at https://github.com/IvanEz/for-loop-tumor
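The "for-loop" idea — precompute forward simulations offline over a parameter grid, then reduce inverse-problem solving at inference time to a nearest-neighbour database query — can be sketched as follows. The one-parameter radial growth model below is a toy stand-in, not the paper's tumor model.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_tumor(growth_rate, n_voxels=32):
    """Toy forward model: NOT the paper's PDE model, just a radial profile."""
    r = np.linspace(0, 1, n_voxels)
    return np.exp(-r / growth_rate)

# Offline: precompute a database of simulations over a parameter grid.
grid = np.linspace(0.05, 1.0, 200)
database = np.stack([simulate_tumor(g) for g in grid])

# Online: solving the inverse problem is a single loop / argmin over the
# database, instead of iterative sampling or variational inference.
observation = simulate_tumor(0.37) + 0.001 * rng.standard_normal(32)
errors = np.linalg.norm(database - observation, axis=1)
best_growth_rate = grid[errors.argmin()]
```

The accuracy/time trade-off the abstract mentions corresponds here to the grid resolution and the distance metric used for the query; a denser grid is more accurate but larger to search.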