684 research outputs found
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
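The survey above centers on convolutional networks. As an illustration only (not taken from the paper), the core operation of such networks is a sliding window of learned weights; a minimal valid-mode 2-D cross-correlation can be sketched in plain NumPy:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output pixel is the weighted sum of one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 5x5 "image" with a constant horizontal gradient
image = np.arange(25, dtype=float).reshape(5, 5)
# Sobel-style kernel responding to vertical edges
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
print(conv2d(image, kernel).shape)  # (3, 3)
```

Real networks stack many such layers (with learned kernels and nonlinearities); deep learning frameworks implement this far more efficiently, but the arithmetic is the same.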
U-Net and its variants for medical image segmentation: theory and applications
U-net is an image segmentation technique developed primarily for medical
image analysis that can precisely segment images using a scarce amount of
training data. These traits provide U-net with a very high utility within the
medical imaging community and have resulted in extensive adoption of U-net as
the primary tool for segmentation tasks in medical imaging. The success of
U-net is evident in its widespread use in all major image modalities from CT
scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a
segmentation tool, there have been instances of the use of U-net in other
applications. As the potential of U-net is still increasing, in this review we
look at the various developments that have been made in the U-net architecture
and provide observations on recent trends. We examine the various innovations
that have been made in deep learning and discuss how these tools facilitate
U-net. Furthermore, we look at image modalities and application areas where
U-net has been applied.
Comment: 42 pages, in IEEE Acces
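U-net's defining trait, as reviewed above, is an encoder that reduces resolution, a decoder that restores it, and skip connections that carry encoder features directly to the decoder. A toy single-level sketch in NumPy (illustrative only; the names and shapes are my own, not from the review) shows the idea:

```python
import numpy as np

def downsample(x):
    """2x2 max pooling, the encoder's resolution-halving step."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour upsampling, the decoder's resolution-doubling step."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_like(x):
    """One encoder/decoder level with a skip connection (channel concat)."""
    skip = x                    # encoder feature map saved for the skip path
    bottleneck = downsample(x)  # contracting (encoder) path
    up = upsample(bottleneck)   # expanding (decoder) path
    # Concatenate decoder output with the skipped encoder features,
    # analogous to U-net's channel-wise concatenation
    return np.stack([skip, up], axis=0)

x = np.random.rand(8, 8)
print(unet_like(x).shape)  # (2, 8, 8)
```

In an actual U-net each step also applies learned convolutions, and several such levels are nested; the skip connections are what let the network recover precise boundaries from little training data.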
Focal Spot, Winter 2006/2007
A Review on Data Fusion of Multidimensional Medical and Biomedical Data
Data fusion aims to provide a more accurate description of a sample than any one source of data alone. At the same time, data fusion minimizes the uncertainty of the results by combining data from multiple sources. Both aims improve the characterization of samples and might improve clinical diagnosis and prognosis. In this paper, we present an overview of the advances achieved over the last decades in data fusion approaches in the context of the medical and biomedical fields. We collected approaches for interpreting multiple sources of data in different combinations: image to image, image to biomarker, spectra to image, spectra to spectra, spectra to biomarker, and others. We found that the most prevalent combination is image-to-image fusion and that most data fusion approaches were applied together with deep learning or machine learning methods.
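As a minimal sketch of the image-to-image case the review identifies as most prevalent (this example and its weighting scheme are my own assumption, not a method from the paper), two co-registered modality images can be fused pixel-wise after per-modality normalization:

```python
import numpy as np

def _norm(x):
    """Rescale one modality to [0, 1] so neither dominates by dynamic range."""
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng > 0 else np.zeros_like(x, dtype=float)

def fuse_images(a, b, w=0.5):
    """Pixel-wise weighted fusion of two co-registered modality images.

    w controls the contribution of modality `a`; 0.5 weights both equally.
    """
    return w * _norm(a) + (1 - w) * _norm(b)
```

Weighted averaging is only the simplest fusion rule; the surveyed approaches typically learn the combination with machine learning rather than fixing a weight by hand.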
Segmentation of biological images containing multitarget labeling using the jelly filling framework
Biomedical imaging, when combined with digital image analysis, is capable of quantitative morphological and physiological characterizations of biological structures. Recent fluorescence microscopy techniques can collect hundreds of focal plane images from deeper tissue volumes, thus enabling characterization of three-dimensional (3-D) biological structures at subcellular resolution. Automatic analysis methods are required to obtain quantitative, objective, and reproducible measurements of biological quantities. However, these images typically contain many artifacts such as poor edge details, nonuniform brightness, and distortions that vary along different axes, all of which complicate the automatic image analysis. Another challenge is due to "multitarget labeling," in which a single probe labels multiple biological entities in acquired images. We present a "jelly filling" method for segmentation of 3-D biological images containing multitarget labeling. Intuitively, our iterative segmentation method is based on filling disjoint tubule regions of an image with a jelly-like fluid. This helps in the detection of components that are "floating" within a labeled jelly. Experimental results show that our proposed method is effective in segmenting important biological quantities.
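The intuition of filling a disjoint region with a fluid resembles a connected-component flood fill. The sketch below is a generic 2-D flood fill over a binary mask, shown only to give the flavor of iterative region filling; it is a simplified stand-in and not the authors' jelly-filling algorithm, which operates iteratively on 3-D multitarget-labeled data:

```python
from collections import deque

import numpy as np

def flood_fill(mask, seed):
    """Mark the 4-connected region of `mask` reachable from `seed`.

    mask: 2-D boolean array, True where filling is allowed
          (e.g. the interior of a tubule region).
    seed: (row, col) starting point for the fill.
    """
    filled = np.zeros_like(mask, dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]
                and mask[r, c] and not filled[r, c]):
            filled[r, c] = True
            # Spread the "fluid" to the four axis-aligned neighbours
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return filled
```

Anything inside the mask but not reached by the fill corresponds to the "floating" components the abstract describes detecting within the labeled jelly.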