Three-Dimensional GPU-Accelerated Active Contours for Automated Localization of Cells in Large Images
Cell segmentation in microscopy is a challenging problem, since cells are
often asymmetric and densely packed. This becomes particularly challenging for
extremely large images, since manual intervention and processing time can make
segmentation intractable. In this paper, we present an efficient and highly
parallel formulation for symmetric three-dimensional (3D) contour evolution
that extends previous work on fast two-dimensional active contours. We provide
a formulation for optimization on 3D images, as well as a strategy for
accelerating computation on consumer graphics hardware. The proposed software
takes advantage of Monte-Carlo sampling schemes to speed up
convergence and reduce thread divergence. Experimental results show that this
method provides superior performance on large 2D and 3D cell segmentation
tasks compared to existing methods, including large 3D brain images.
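The Monte-Carlo sampling idea above — updating a random subset of voxels per iteration instead of sweeping the whole volume — can be sketched in NumPy for a Chan-Vese-style region update. This is a CPU illustration of the sampling trick only, not the authors' GPU implementation; all function and parameter names are invented.

```python
import numpy as np

def mc_region_update(volume, mask, n_samples=4096, rng=None):
    """One Monte-Carlo step of a region-based (Chan-Vese style) update.

    Only a random subset of voxels is visited, which is the kind of
    sampling the paper uses to cut work and thread divergence.
    Illustrative sketch, not the authors' API.
    """
    rng = np.random.default_rng(rng)
    # Draw a random subset of voxel indices from the flattened volume.
    idx = rng.choice(volume.size, size=min(n_samples, volume.size),
                     replace=False)
    vals = volume.ravel()[idx]
    inside = mask.ravel()[idx]
    # Region means estimated from the sampled voxels only.
    c_in = vals[inside].mean() if inside.any() else 0.0
    c_out = vals[~inside].mean() if (~inside).any() else 0.0
    # Reassign each sampled voxel to the closer region mean.
    new_inside = (vals - c_in) ** 2 < (vals - c_out) ** 2
    new_mask = mask.ravel().copy()
    new_mask[idx] = new_inside
    return new_mask.reshape(mask.shape)

# Toy 3D volume: a bright cube on a dark background, seeded from inside.
vol = np.zeros((16, 16, 16))
vol[4:12, 4:12, 4:12] = 1.0
seed = np.zeros_like(vol, dtype=bool)
seed[6:10, 6:10, 6:10] = True
out = mc_region_update(vol, seed, n_samples=vol.size)
```

With full sampling the toy contour snaps to the bright cube in one step; with a small `n_samples`, only the sampled voxels move each iteration, trading per-step accuracy for throughput.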
Full-resolution Lung Nodule Segmentation from Chest X-ray Images using Residual Encoder-Decoder Networks
Lung cancer is the leading cause of cancer death and early diagnosis is
associated with a positive prognosis. Chest X-ray (CXR) provides an inexpensive
imaging modality for lung cancer diagnosis. Suspicious nodules are difficult to
distinguish from vascular and bone structures using CXR. Computer vision has
previously been proposed to assist human radiologists in this task; however,
leading studies use down-sampled images and computationally expensive methods
with unproven generalization. Instead, this study localizes lung nodules using
efficient encoder-decoder neural networks that process full resolution images
to avoid any signal loss resulting from down-sampling. Encoder-decoder networks
are trained and tested using the JSRT lung nodule dataset. The networks are
used to localize lung nodules from an independent external CXR dataset.
Sensitivity and false positive rates are measured using an automated framework
to eliminate any observer subjectivity. These experiments allow for the
determination of the optimal network depth, image resolution and pre-processing
pipeline for generalized lung nodule localization. We find that nodule
localization is influenced by subtlety, with more subtle nodules being detected
in earlier training epochs. Therefore, we propose a novel self-ensemble model
from three consecutive epochs centered on the validation optimum. This ensemble
achieved a sensitivity of 85% in 10-fold internal testing with false positives
of 8 per image. A sensitivity of 81% is achieved at a false positive rate of 6
per image following morphological false positive reduction. This result is comparable to
more computationally complex systems based on linear and spatial filtering, but
with a sub-second inference time that is faster than other methods. The
proposed algorithm achieved excellent generalization on an external dataset,
with a sensitivity of 77% at a false positive rate of 7.6 per image.
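The self-ensemble described above — combining the three consecutive checkpoints centred on the validation optimum — can be sketched by averaging per-pixel probability maps and thresholding. The maps, names, and threshold here are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def self_ensemble(prob_maps, threshold=0.5):
    """Average nodule probability maps from consecutive training epochs
    (e.g. N-1, N, N+1 around the validation optimum) and threshold the
    mean to obtain detections. Hypothetical stand-in for the paper's step."""
    mean_prob = np.stack(prob_maps, axis=0).mean(axis=0)
    return mean_prob, mean_prob >= threshold

# Toy 2x2 probability maps from three consecutive epochs.
e1 = np.array([[0.2, 0.7], [0.9, 0.1]])
e2 = np.array([[0.3, 0.8], [0.8, 0.2]])
e3 = np.array([[0.1, 0.9], [0.7, 0.0]])
mean_prob, detection = self_ensemble([e1, e2, e3])
```

Averaging over adjacent epochs smooths out epoch-to-epoch variance, which is one plausible reason the ensemble recovers the more subtle nodules detected only in earlier epochs.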
LEyes: A Lightweight Framework for Deep Learning-Based Eye Tracking using Synthetic Eye Images
Deep learning has bolstered gaze estimation techniques, but real-world
deployment has been impeded by inadequate training datasets. This problem is
exacerbated by both hardware-induced variations in eye images and inherent
biological differences across the recorded participants, leading to both
feature and pixel-level variance that hinders the generalizability of models
trained on specific datasets. While synthetic datasets can be a solution, their
creation is both time and resource-intensive. To address this problem, we
present a framework called Light Eyes or "LEyes" which, unlike conventional
photorealistic methods, only models key image features required for video-based
eye tracking using simple light distributions. LEyes facilitates easy
configuration for training neural networks across diverse gaze-estimation
tasks. We demonstrate that models trained using LEyes consistently match or
outperform other state-of-the-art algorithms in pupil and corneal reflection
(CR) localization across well-known datasets. In addition, a LEyes-trained
model outperforms an industry-standard eye tracker while using significantly
more cost-effective hardware. Going forward, we are confident that LEyes will
revolutionize synthetic data generation for gaze estimation models, and lead to
significant improvements in the next generation of video-based eye trackers.
Comment: 32 pages, 8 figures
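The core LEyes idea as described — modelling only the key image features with simple light distributions rather than photorealistic rendering — can be sketched as a dark Gaussian blob for the pupil plus a small bright Gaussian for the CR glint. All sizes, intensities, and names below are invented for illustration.

```python
import numpy as np

def synthetic_eye(h=64, w=64, pupil=(32, 32, 10.0), cr=(26, 28, 1.5)):
    """Render a crude eye image from simple light distributions:
    a dark Gaussian for the pupil, a bright narrow Gaussian for the
    corneal reflection (CR). Parameters are (row, col, sigma);
    purely illustrative, not the LEyes generator."""
    yy, xx = np.mgrid[0:h, 0:w]

    def gauss(cy, cx, sigma):
        return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

    img = 0.6 * np.ones((h, w))   # uniform iris/sclera background
    img -= 0.5 * gauss(*pupil)    # dark pupil region
    img += 0.8 * gauss(*cr)       # bright CR glint
    return np.clip(img, 0.0, 1.0)

img = synthetic_eye()
```

Because only these low-level intensity structures matter for pupil/CR localization, images like this can be generated in bulk with randomized positions and sigmas, sidestepping the cost of photorealistic synthesis.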
Principal component-based image segmentation: a new approach to outline in vitro cell colonies