Cell Segmentation and Tracking using CNN-Based Distance Predictions and a Graph-Based Matching Strategy
The accurate segmentation and tracking of cells in microscopy image sequences
is an important task in biomedical research, e.g., for studying the development
of tissues, organs or entire organisms. However, the segmentation of touching
cells in images with a low signal-to-noise ratio is still a challenging
problem. In this paper, we present a method for the segmentation of touching
cells in microscopy images. By using a novel representation of cell borders,
inspired by distance maps, our method can exploit not only touching cells but
also close cells during training. Furthermore, this representation is notably
robust to annotation errors and shows promising results for the segmentation
of microscopy images containing cell types that are underrepresented in, or
absent from, the training data. For the prediction of the
proposed neighbor distances, an adapted U-Net convolutional neural network
(CNN) with two decoder paths is used. In addition, we adapt a graph-based cell
tracking algorithm to evaluate our proposed method on the task of cell
tracking. The adapted tracking algorithm includes a movement estimation in the
cost function to re-link tracks with missing segmentation masks over a short
sequence of frames. Our combined tracking-by-detection method has proven its
potential in the IEEE ISBI 2020 Cell Tracking Challenge
(http://celltrackingchallenge.net/) where we achieved as team KIT-Sch-GE
multiple top three rankings including two top performances using a single
segmentation model for the diverse data sets.
Comment: 25 pages, 14 figures, methods of the team KIT-Sch-GE for the IEEE
ISBI 2020 Cell Tracking Challenge
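The movement estimation in the tracking cost can be illustrated with a small
sketch. All function names, the linear-extrapolation motion model, and the
brute-force assignment are illustrative assumptions, not the paper's
graph-based implementation:

```python
import math
from itertools import permutations

def predict_position(track):
    """Estimate the next centroid of a track by linear extrapolation
    of its last two positions (a simple movement estimate)."""
    (x1, y1), (x2, y2) = track[-2], track[-1]
    return (2 * x2 - x1, 2 * y2 - y1)

def match_cost(track, detection):
    """Cost: distance between the movement-estimated position and a
    detection centroid (hypothetical stand-in for the paper's cost)."""
    px, py = predict_position(track)
    dx, dy = detection
    return math.hypot(px - dx, py - dy)

def assign(tracks, detections):
    """Brute-force minimum-cost one-to-one assignment; fine for tiny N.
    The paper formulates this as a graph-based matching instead."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(detections))):
        cost = sum(match_cost(t, detections[j]) for t, j in zip(tracks, perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best

tracks = [[(0.0, 0.0), (1.0, 0.0)],   # moving right -> predicted (2, 0)
          [(0.0, 5.0), (0.0, 6.0)]]   # moving up    -> predicted (0, 7)
detections = [(0.1, 7.1), (2.1, 0.1)]
print(assign(tracks, detections))  # -> (1, 0): track 0 matches detection 1
```

Because the cost compares detections against extrapolated positions rather
than last known positions, a track whose mask is missing for a frame can
still be re-linked to a plausibly displaced detection.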
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 2017
Position resolution and particle identification with the ATLAS EM calorimeter
In the years between 2000 and 2002 several pre-series and series modules of
the ATLAS EM barrel and end-cap calorimeter were exposed to electron, photon
and pion beams. The performance of the calorimeter with respect to its finely
segmented first sampling has been studied. The polar angle resolution has been
found to be in the range 50-60 mrad/sqrt(E (GeV)). The neutral pion rejection
has been measured to be about 3.5 for 90% photon selection efficiency at pT=50
GeV/c. Electron-pion separation studies have indicated that a pion fake rate of
(0.07-0.5)% can be achieved while maintaining 90% electron identification
efficiency for energies up to 40 GeV.
Comment: 32 pages, 22 figures, to be published in NIM
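As a quick sanity check on the quoted stochastic term, the resolution can be
evaluated at a few energies. The value a = 55 mrad*sqrt(GeV) below is simply
the midpoint of the quoted 50-60 range, not a number from the paper:

```python
import math

def polar_angle_resolution(energy_gev, a_mrad=55.0):
    """Polar angle resolution in mrad, assuming the purely stochastic
    form sigma_theta = a / sqrt(E) quoted in the text."""
    return a_mrad / math.sqrt(energy_gev)

for e in (10, 50, 100):
    print(f"E = {e:>3} GeV: sigma_theta = {polar_angle_resolution(e):.2f} mrad")
```

At 100 GeV this gives 5.5 mrad, i.e. the angular resolution improves with
the inverse square root of the shower energy.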
Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers
Scene parsing, or semantic segmentation, consists in labeling each pixel in
an image with the category of the object it belongs to. It is a challenging
task that involves the simultaneous detection, segmentation and recognition of
all the objects in the image.
The scene parsing method proposed here starts by computing a tree of segments
from a graph of pixel dissimilarities. Simultaneously, a set of dense feature
vectors is computed which encodes regions of multiple sizes centered on each
pixel. The feature extractor is a multiscale convolutional network trained from
raw pixels. The feature vectors associated with the segments covered by each
node in the tree are aggregated and fed to a classifier which produces an
estimate of the distribution of object categories contained in the segment. A
subset of tree nodes that cover the image are then selected so as to maximize
the average "purity" of the class distributions, hence maximizing the overall
likelihood that each segment will contain a single object. The convolutional
network feature extractor is trained end-to-end from raw pixels, alleviating
the need for engineered features. After training, the system is parameter free.
The system yields record accuracies on the Stanford Background Dataset (8
classes), the Sift Flow Dataset (33 classes) and the Barcelona Dataset (170
classes) while being an order of magnitude faster than competing approaches,
producing a 320 x 240 image labeling in less than 1 second.
Comment: 9 pages, 4 figures - Published in 29th International Conference on
Machine Learning (ICML 2012), Jun 2012, Edinburgh, United Kingdom
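The purity-maximizing cover selection can be sketched on a toy segment tree.
This uses a pixel-weighted purity as a hypothetical stand-in for the paper's
average-purity objective, and all names are illustrative:

```python
class Node:
    """A segment-tree node: a class histogram plus child segments."""
    def __init__(self, hist, children=()):
        self.hist = hist          # class label -> pixel count in the segment
        self.children = list(children)

def purity(hist):
    """Fraction of the segment's pixels in its dominant class."""
    return max(hist.values()) / sum(hist.values())

def best_cover(node):
    """Return (cover, score) maximizing pixel-weighted purity:
    either keep this node, or recurse into its children."""
    size = sum(node.hist.values())
    keep = ([node], purity(node.hist) * size)
    if not node.children:
        return keep
    cover, score = [], 0.0
    for child in node.children:
        c, s = best_cover(child)
        cover += c
        score += s
    return (cover, score) if score > keep[1] else keep

leaf_a = Node({"road": 90, "car": 10})    # fairly pure segment
leaf_b = Node({"car": 95, "road": 5})     # fairly pure segment
root = Node({"road": 95, "car": 105}, [leaf_a, leaf_b])  # mixed segment

cover, _ = best_cover(root)
print([max(n.hist, key=n.hist.get) for n in cover])  # -> ['road', 'car']
```

The mixed root is rejected in favor of its two purer children, which mirrors
the intuition that each selected segment should contain a single object.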