5,073 research outputs found
Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database
Radiologists routinely find and annotate significant abnormalities on large
numbers of radiology images in their daily work. Such abnormalities, or
lesions, have been collected over the years and stored in hospitals' picture
archiving and communication systems. However, they are largely unsorted and
lack semantic annotations such as type and location. In this paper, we aim to organize
and explore them by learning a deep feature representation for each lesion. A
large-scale and comprehensive dataset, DeepLesion, is introduced for this task.
DeepLesion contains bounding boxes and size measurements of over 32K lesions.
To model their similarity relationships, we leverage multiple sources of
supervision, including lesion types and self-supervised location coordinates
and sizes. These require little manual annotation effort but describe useful
attributes of the lesions. A triplet network is then used to learn lesion embeddings
with a sequential sampling strategy to depict their hierarchical similarity
structure. Experiments show promising qualitative and quantitative results on
lesion retrieval, clustering, and classification. The learned embeddings can be
further employed to build a lesion graph for various clinically useful
applications. We propose algorithms for intra-patient lesion matching and
missing annotation mining. Experimental results validate their effectiveness.

Comment: Accepted by CVPR 2018. DeepLesion URL added.
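The triplet objective described above can be illustrated with a minimal sketch (not the authors' code; the margin value and the toy 2-D embeddings are assumptions chosen for illustration):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: pull same-type lesions together and
    push different-type lesions at least `margin` further away."""
    d_pos = np.sum((anchor - positive) ** 2)   # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2)   # squared distance to negative
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings for three lesions.
a = np.array([0.1, 0.9])
p = np.array([0.2, 0.8])   # similar lesion   -> small distance
n = np.array([0.3, 0.6])   # dissimilar lesion -> should end up farther away
loss = triplet_loss(a, p, n)
```

In training, such a loss would be minimized over many sampled triplets; the sequential sampling strategy of the paper orders triplets to reflect the hierarchical similarity structure (type, then location, then size).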
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.

Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201
Regularized Newton Methods for X-ray Phase Contrast and General Imaging Problems
Like many other advanced imaging methods, x-ray phase contrast imaging and
tomography require mathematical inversion of the observed data to obtain
real-space information. While an accurate forward model describing the
generally nonlinear image formation from a given object to the observations is
often available, explicit inversion formulas are typically not known. Moreover,
the measured data might be insufficient for stable image reconstruction, in
which case it has to be complemented by suitable a priori information. In this
work, regularized Newton methods are presented as a general framework for the
solution of such ill-posed nonlinear imaging problems. For a proof of
principle, the approach is applied to x-ray phase contrast imaging in the
near-field propagation regime. Simultaneous recovery of the phase and
amplitude from a single near-field diffraction pattern without homogeneity
constraints is demonstrated for the first time. The presented methods further
permit all-at-once phase contrast tomography, i.e. simultaneous phase retrieval
and tomographic inversion. We demonstrate the potential of this approach by
three-dimensional imaging of a colloidal crystal at 95 nm isotropic resolution.

Comment: (C) 2016 Optical Society of America. One print or electronic copy may
be made for personal use only. Systematic reproduction and distribution,
duplication of any material in this paper for a fee or for commercial
purposes, or modifications of the content of this paper are prohibited.
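A toy illustration of the regularized Newton idea above (a sketch under simplifying assumptions: a made-up component-wise forward model stands in for the nonlinear x-ray propagation operator, and plain Tikhonov regularization is used):

```python
import numpy as np

def regularized_gauss_newton(F, J, y, x0, alpha=1e-2, iters=20):
    """Iteratively regularized Gauss-Newton: linearize the forward model F
    around the current iterate and solve a Tikhonov-regularized normal
    equation for the update."""
    x = x0.astype(float)
    for _ in range(iters):
        r = y - F(x)                             # data residual
        Jx = J(x)                                # Jacobian at x
        A = Jx.T @ Jx + alpha * np.eye(x.size)   # regularized normal matrix
        x = x + np.linalg.solve(A, Jx.T @ r)
    return x

# Toy nonlinear forward model: component-wise squaring.
F = lambda x: x ** 2
J = lambda x: np.diag(2.0 * x)
y = np.array([4.0, 9.0])                         # noiseless data for x = [2, 3]
x_rec = regularized_gauss_newton(F, J, y, np.array([1.0, 1.0]))
```

In an actual phase contrast setting, F would be the near-field propagation operator and the regularization term would encode the a priori information that stabilizes the ill-posed inversion.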
Deep learning in computational microscopy
We propose to use deep convolutional neural networks (DCNNs) to perform 2D and 3D computational imaging. Specifically, we investigate three different applications. We first solve the 3D inverse scattering problem by learning from a large number of training target and speckle pairs. We also demonstrate a new DCNN architecture to perform Fourier ptychographic microscopy (FPM) reconstruction, which achieves high-resolution phase recovery with considerably less data than standard FPM. Finally, we employ DCNN models that can predict focused 2D fluorescent microscopic images from blurred images captured at overfocused or underfocused planes.

Comment: Published version.
Estimation of the Local Envelope and Frequency Using Teager-Kaiser Operators in White Light Interferometry
In this work, a new method for surface extraction in white light scanning interferometry (WLSI) is introduced. The proposed extraction scheme is based on the Teager-Kaiser (TK) energy operator and its extended versions. This non-linear class of operators is helpful for extracting the local instantaneous envelope and frequency of any narrow-band AM-FM signal. The combination of the envelope and frequency information allows effective surface extraction by an iterative re-estimation of the phase, in association with a new correlation technique based on a recent TK cross-energy operator. Experiments show that the proposed method produces substantially better results in terms of surface extraction than the peak fringe scanning technique, the five-step phase shifting algorithm, and the continuous wavelet transform based method. In addition, the results obtained show the robustness of the proposed method to noise and to fluctuations of the carrier frequency.
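The discrete Teager-Kaiser energy operator at the heart of this method is simple to state; a minimal sketch (the tone amplitude and frequency below are arbitrary illustration values):

```python
import numpy as np

def tkeo(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]^2 - x[n-1] * x[n+1], valid at interior samples."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure tone A*cos(w*n), the TK energy is constant: A^2 * sin(w)^2,
# which is why envelope and frequency can be disentangled (DESA-style).
n = np.arange(200)
A, w = 2.0, 0.3
psi = tkeo(A * np.cos(w * n))
```

For narrow-band AM-FM fringe signals the operator tracks the instantaneous envelope-frequency product, which is the property the surface-extraction scheme exploits.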
Automatic 3D bi-ventricular segmentation of cardiac images by a shape-refined multi-task deep learning approach
Deep learning approaches have achieved state-of-the-art performance in
cardiac magnetic resonance (CMR) image segmentation. However, most approaches
have focused on learning image intensity features for segmentation, whereas the
incorporation of anatomical shape priors has received less attention. In this
paper, we combine a multi-task deep learning approach with atlas propagation to
develop a shape-constrained bi-ventricular segmentation pipeline for short-axis
CMR volumetric images. The pipeline first employs a fully convolutional network
(FCN) that learns segmentation and landmark localisation tasks simultaneously.
The architecture of the proposed FCN uses a 2.5D representation, thus combining
the computational advantage of 2D FCNs with the capability of addressing 3D
spatial consistency without compromising segmentation accuracy.
Moreover, the refinement step is designed to explicitly enforce a shape
constraint and improve segmentation quality. This step is effective for
overcoming image artefacts (e.g. due to different breath-hold positions and
large slice thickness), which preclude the creation of anatomically meaningful
3D cardiac shapes. The proposed pipeline is fully automated, owing to the network's
ability to infer landmarks, which are then used downstream in the pipeline to
initialise atlas propagation. We validate the pipeline on 1831 healthy subjects
and 649 subjects with pulmonary hypertension. Extensive numerical experiments
on the two datasets demonstrate that our proposed method is robust and capable
of producing accurate, high-resolution and anatomically smooth bi-ventricular
3D models, despite artefacts in the input CMR volumes.
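A 2.5D representation of the kind mentioned above is commonly built by stacking a slice with its through-plane neighbours as input channels; a minimal sketch of that construction (a generic illustration, not necessarily the authors' exact scheme; the neighbourhood size `k` and edge clamping are assumptions):

```python
import numpy as np

def to_2p5d(volume, idx, k=1):
    """Build a 2.5D input for slice `idx`: stack the slice with its k
    neighbours above and below as channels (edge slices are clamped)."""
    z = volume.shape[0]
    picks = [min(max(idx + d, 0), z - 1) for d in range(-k, k + 1)]
    return np.stack([volume[p] for p in picks], axis=0)

vol = np.random.rand(10, 64, 64)   # toy short-axis CMR volume (slices, H, W)
x = to_2p5d(vol, idx=0)            # shape (3, 64, 64)
```

A 2D FCN consuming such stacks sees some through-plane context, which is how 3D spatial consistency can be encouraged without the memory cost of a full 3D network.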
SPNet: Deep 3D Object Classification and Retrieval using Stereographic Projection
Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, 2019. 8.

We propose an efficient Stereographic Projection Neural Network (SPNet) for learning representations of 3D objects. We first transform a 3D input volume into a 2D planar image using stereographic projection. We then present a shallow 2D convolutional neural network (CNN) to estimate the object category, followed by a view ensemble, which combines the responses from multiple views of the object to further enhance the predictions. Specifically, the proposed approach consists of four stages: (1) stereographic projection of a 3D object, (2) view-specific feature learning, (3) view selection, and (4) view ensemble. The proposed approach performs comparably to the state-of-the-art methods while requiring substantially less GPU memory and fewer network parameters. Despite its lightness, experiments on 3D object classification and shape retrieval demonstrate the high performance of the proposed method.

1 INTRODUCTION
2 Related Work
2.1 Point cloud-based methods
2.2 3D model-based methods
2.3 2D/2.5D image-based methods
3 Proposed Stereographic Projection Network
3.1 Stereographic Representation
3.2 Network Architecture
3.3 View Selection
3.4 View Ensemble
4 Experimental Evaluation
4.1 Datasets
4.2 Training
4.3 Choice of Stereographic Projection
4.4 Test on View Selection Schemes
4.5 3D Object Classification
4.6 Shape Retrieval
4.7 Implementation
5 Conclusions
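The stereographic projection at the core of SPNet maps points on the unit sphere to a plane; a minimal sketch of the classical map (projection from the north pole; how SPNet samples object features on the sphere is described in the thesis, not reproduced here):

```python
import numpy as np

def stereographic_project(points):
    """Project unit-sphere points onto the z = 0 plane from the north
    pole (0, 0, 1): (x, y, z) -> (x / (1 - z), y / (1 - z))."""
    p = np.asarray(points, dtype=float)
    d = 1.0 - p[:, 2]
    return np.stack([p[:, 0] / d, p[:, 1] / d], axis=1)

# The equator maps to the unit circle in the plane.
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
equator = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
uv = stereographic_project(equator)
```

The map is conformal and covers the sphere minus the projection pole, which is what lets a 3D shape be flattened into a single 2D image a shallow CNN can consume.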