Medical Image Segmentation for Mobile Electronic Patient Charts Using Numerical Modeling of IoT
The Internet of Things (IoT) opens new opportunities for telemedicine, enabling a specialist to assess a patient's condition even when the two are in different places. Medical image segmentation is needed for the analysis, storage, and protection of medical images in telemedicine, and a variety of methods have therefore been researched for fast and accurate segmentation. When segmenting various organs, the region must be judged accurately in the medical image; in small regions, however, parts of the region are removed because there is too little information to determine them. In this paper, we investigate how to reconstruct the segmented region in small regions in order to improve the segmentation results. We generate a predicted segmentation of each slice from the volume data using a linear equation and propose an improvement method for small regions based on this predicted segmentation. To verify the performance of the proposed method, the lung region was segmented in chest CT images. In our experiments, the volume-data segmentation accuracy rose from 0.978 to 0.981, and the standard deviation improved from 0.281 to 0.187.
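The slice-prediction step can be sketched minimally as linear interpolation between neighbouring slice masks followed by thresholding (a hypothetical helper for illustration, not the authors' implementation; `t` and `thresh` are assumed parameters):

```python
import numpy as np

def predict_slice(mask_a, mask_b, t=0.5, thresh=0.5):
    """Predict a segmentation mask between two slices by linear
    interpolation of the binary masks followed by thresholding."""
    blend = (1.0 - t) * mask_a.astype(float) + t * mask_b.astype(float)
    return (blend >= thresh).astype(np.uint8)

# Two neighbouring lung-slice masks (toy 4x4 examples)
a = np.array([[0, 1, 1, 0], [1, 1, 1, 1], [1, 1, 1, 1], [0, 1, 1, 0]])
b = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
mid = predict_slice(a, b)  # with t=0.5, thresh=0.5 this keeps the union of the masks
```

Raising `thresh` above 0.5 keeps only pixels present in both neighbouring masks, which is one way to suppress spurious small regions.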
Extraction of Airways with Probabilistic State-space Models and Bayesian Smoothing
Segmenting tree structures is common in several image processing
applications. In medical image analysis, reliable segmentations of airways,
vessels, neurons and other tree structures can enable important clinical
applications. We present a framework for tracking tree structures comprising
elongated branches using probabilistic state-space models and Bayesian
smoothing. Unlike most existing methods that proceed with sequential tracking
of branches, we present an exploratory method that is less sensitive to local
anomalies in the data due to acquisition noise and/or interfering structures.
The evolution of individual branches is modelled using a process model and the
observed data is incorporated into the update step of the Bayesian smoother
using a measurement model that is based on a multi-scale blob detector.
Bayesian smoothing is performed using the RTS (Rauch-Tung-Striebel) smoother,
which provides Gaussian density estimates of branch states at each tracking
step. We select likely branch seed points automatically based on the response
of the blob detection and track from all such seed points using the RTS
smoother. We use covariance of the marginal posterior density estimated for
each branch to discriminate false positive and true positive branches. The
method is evaluated on 3D chest CT scans to track airways. We show that the
presented method results in additional branches compared to a baseline method
based on region growing on probability images.
Comment: 10 pages. Pre-print of the paper accepted at the Workshop on Graphs in Biomedical Image Analysis, MICCAI 2017, Quebec City.
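A Rauch-Tung-Striebel smoother of the kind described here can be sketched as a forward Kalman filter plus a backward correction pass. Below is a toy scalar numpy version with assumed process/measurement noise values, not the paper's multi-scale branch tracker:

```python
import numpy as np

def rts_smooth(y, F=1.0, Q=1e-3, R=1e-1, x0=0.0, P0=1.0):
    """Scalar Kalman filter followed by a Rauch-Tung-Striebel backward
    pass; returns smoothed means and variances for each step."""
    n = len(y)
    xf = np.zeros(n); Pf = np.zeros(n)   # filtered mean/variance
    xp = np.zeros(n); Pp = np.zeros(n)   # predicted mean/variance
    x, P = x0, P0
    for k in range(n):
        x, P = F * x, F * P * F + Q      # predict with the process model
        xp[k], Pp[k] = x, P
        K = P / (P + R)                  # Kalman gain for measurement y[k]
        x = x + K * (y[k] - x)
        P = (1 - K) * P
        xf[k], Pf[k] = x, P
    xs = xf.copy(); Ps = Pf.copy()
    for k in range(n - 2, -1, -1):       # backward RTS pass
        G = Pf[k] * F / Pp[k + 1]
        xs[k] = xf[k] + G * (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + G * (Ps[k + 1] - Pp[k + 1]) * G
    return xs, Ps

y = np.ones(20)            # noiseless constant "branch state" observations
xs, Ps = rts_smooth(y)     # Ps gives the posterior variance per step
```

The returned variances `Ps` play the role of the marginal posterior covariance that the paper uses to discriminate true from false branches.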
Segmentation of Three-dimensional Images with Parametric Active Surfaces and Topology Changes
In this paper, we introduce a novel parametric method for segmentation of
three-dimensional images. We consider a piecewise constant version of the
Mumford-Shah and the Chan-Vese functionals and perform a region-based
segmentation of 3D image data. An evolution law that pushes the surfaces to
the boundaries of 3D objects in the image is derived from an energy
minimization problem. We propose a parametric scheme which describes the evolution of
parametric surfaces. An efficient finite element scheme is proposed for a
numerical approximation of the evolution equations. Since standard parametric
methods cannot handle topology changes automatically, an efficient method is
presented to detect, identify and perform changes in the topology of the
surfaces. One main focus of this paper are the algorithmic details to handle
topology changes like splitting and merging of surfaces and change of the genus
of a surface. Different artificial images are studied to demonstrate the
ability to detect the different types of topology changes. Finally, the
parametric method is applied to the segmentation of medical 3D images.
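For context, the piecewise-constant energy underlying such region-based evolutions is commonly written in the standard Chan-Vese form (textbook notation, not necessarily the paper's exact formulation; for 3D surfaces the length term becomes a surface-area term):

```latex
E(c_1, c_2, \Gamma) = \mu\,\mathrm{Area}(\Gamma)
  + \lambda_1 \int_{\mathrm{int}(\Gamma)} \bigl(u_0(x) - c_1\bigr)^2 \,\mathrm{d}x
  + \lambda_2 \int_{\mathrm{ext}(\Gamma)} \bigl(u_0(x) - c_2\bigr)^2 \,\mathrm{d}x
```

Here $u_0$ is the image, $\Gamma$ the evolving surface, and $c_1, c_2$ the mean intensities inside and outside $\Gamma$; gradient descent on $E$ yields the evolution law that moves the surface toward object boundaries.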
Graph Refinement based Airway Extraction using Mean-Field Networks and Graph Neural Networks
Graph refinement, or the task of obtaining subgraphs of interest from
over-complete graphs, can have many varied applications. In this work, we
extract trees or collections of sub-trees from image data by first deriving a
graph-based representation of the volumetric data and then posing the tree
extraction as a graph refinement task. We present two methods to perform graph
refinement. First, we use mean-field approximation (MFA) to approximate the
posterior density over the subgraphs from which the optimal subgraph of
interest can be estimated. Mean field networks (MFNs) are used for inference
based on the interpretation that iterations of MFA can be seen as feed-forward
operations in a neural network. This allows us to learn the model parameters
using gradient descent. Second, we present a supervised learning approach using
graph neural networks (GNNs) which can be seen as generalisations of MFNs.
Subgraphs are obtained by training a GNN-based graph refinement model to
directly predict edge probabilities. We discuss connections between the two
classes of methods and compare them for the task of extracting airways from 3D,
low-dose, chest CT data. We show that both the MFN and GNN models yield
significant improvements, detecting more branches with fewer false positives,
when compared with a baseline method similar to a top-performing method in the
EXACT'09 Challenge and with a 3D U-Net based airway segmentation model.
Comment: Accepted for publication at Medical Image Analysis. 14 pages.
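The edge-probability idea behind the GNN refinement can be illustrated with a minimal numpy sketch: one neighbour-aggregation round followed by a sigmoid score for every candidate edge of the over-complete graph. Weights are random toy values, not the trained model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_edge_probs(X, A, W_msg, W_edge):
    """One message-passing round followed by per-edge scoring.
    X: (n, d) node features, A: (n, n) adjacency of the over-complete
    graph. Returns an (n, n) matrix of edge probabilities."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    H = np.tanh((A @ X / deg) @ W_msg + X @ W_msg)  # aggregate neighbours
    n = X.shape[0]
    logits = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pair = np.concatenate([H[i], H[j]])      # edge representation
            logits[i, j] = pair @ W_edge
    return 1 / (1 + np.exp(-logits))                 # sigmoid -> probability

# Toy example: 4 candidate branch nodes with random features and weights
X = rng.normal(size=(4, 3))
A = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]], float)
W_msg = rng.normal(size=(3, 3))
W_edge = rng.normal(size=(6,))
P = gnn_edge_probs(X, A, W_msg, W_edge)  # keep edges with P[i, j] > 0.5
```

In a trained model, the subgraph of interest is read off by thresholding these edge probabilities.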
Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis: the steps of cancer diagnosis followed by the typical classification methods used by doctors, giving readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered sufficient for obtaining the best performance. Moreover, with all types of audience in mind, the basic evaluation criteria are also discussed: the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Because previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are called for. Artificial intelligence applied to cancer diagnosis is gaining attention as a way to build better diagnostic tools; in particular, deep neural networks can be used successfully for intelligent image analysis. This study presents the basic framework of how such machine learning works on medical imaging: pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles the successfully applied deep learning models for different types of cancers. Given the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch overview of the state-of-the-art achievements.
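Two of the evaluation criteria listed above, the Dice coefficient and the Jaccard index, can be computed directly from binary masks; a minimal numpy sketch (toy masks, purely illustrative):

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    """Jaccard index (intersection over union): |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

pred = np.array([1, 1, 1, 0, 0, 0])
gt   = np.array([1, 1, 0, 0, 0, 1])
# dice = 2*2/(3+3) = 0.666..., jaccard = 2/4 = 0.5
```

The two metrics are monotonically related (Dice = 2J/(1+J)), so model rankings under either metric agree.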
Automatic Pulmonary Nodule Detection in CT Scans Using Convolutional Neural Networks Based on Maximum Intensity Projection
Accurate pulmonary nodule detection is a crucial step in lung cancer
screening. Computer-aided detection (CAD) systems are not routinely used by
radiologists for pulmonary nodule detection in clinical practice despite their
potential benefits. Maximum intensity projection (MIP) images improve the
detection of pulmonary nodules in radiological evaluation with computed
tomography (CT) scans. Inspired by the clinical methodology of radiologists, we
aim to explore the feasibility of applying MIP images to improve the
effectiveness of automatic lung nodule detection using convolutional neural
networks (CNNs). We propose a CNN-based approach that takes MIP images of
different slab thicknesses (5 mm, 10 mm, 15 mm) and 1 mm axial section slices
as input. Such an approach augments the two-dimensional (2-D) CT slice images
with more representative spatial information that helps discriminate nodules
from vessels through their morphologies. Our proposed method achieves
sensitivity of 92.67% with 1 false positive per scan and sensitivity of 94.19%
with 2 false positives per scan for lung nodule detection on 888 scans in the
LIDC-IDRI dataset. The use of thick MIP images helps the detection of small
pulmonary nodules (3 mm-10 mm) and results in fewer false positives.
Experimental results show that utilizing MIP images can increase the
sensitivity and lower the number of false positives, which demonstrates the
effectiveness and significance of the proposed MIP-based CNNs framework for
automatic pulmonary nodule detection in CT scans. The proposed method also
shows that CNNs can benefit from incorporating the clinical procedure into
nodule detection.
Comment: Submitted to IEEE TM
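The slab MIP preprocessing can be sketched as a sliding maximum along the axial direction. Below is a toy numpy version assuming 1 mm slice spacing, not the authors' pipeline:

```python
import numpy as np

def mip_slabs(volume, slab_mm, spacing_mm=1.0):
    """Maximum intensity projection over axial slabs.
    volume: (z, y, x) CT array; slab_mm: slab thickness in mm."""
    half = max(int(round(slab_mm / spacing_mm)) // 2, 0)
    z = volume.shape[0]
    out = np.empty_like(volume)
    for k in range(z):
        lo, hi = max(k - half, 0), min(k + half + 1, z)
        out[k] = volume[lo:hi].max(axis=0)  # brightest voxel along z in the slab
    return out

# A bright "nodule" at slice 5 shows up in the 10 mm MIP of nearby slices
vol = np.zeros((11, 4, 4))
vol[5, 2, 2] = 100.0
mip10 = mip_slabs(vol, slab_mm=10)
```

Stacking the 5 mm, 10 mm, and 15 mm MIPs with the raw 1 mm slice gives the multi-channel 2-D input described in the abstract.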