Automatic segmentation of skin cells in multiphoton data using multi-stage merging
We propose a novel automatic segmentation algorithm that separates the components of human skin cells from the rest of the tissue in fluorescence data of three-dimensional scans acquired by non-invasive multiphoton tomography. The algorithm performs multi-stage merging on preprocessed superpixel images to avoid dependence on a single empirical global threshold. This makes the segmentation highly robust to the depth-dependent data characteristics, which include variable contrasts and cell sizes. The subsequent classification of cell cytoplasm and nuclei is based on a cell model described by a set of four features. Two novel features were derived: a relationship between outer cell and inner nucleus (OCIN) and a stability index. The OCIN feature describes the topology of the model, while the stability index indicates segment quality in the multi-stage merging process. These two new features, combined with the local gradient magnitude and compactness, are used for the model-based fuzzy evaluation of the cell segments. We exemplify our approach on an image stack of 200 × 200 × 100 μm³ covering the stratum spinosum and stratum basale skin layers of a healthy volunteer. Our image processing pipeline contributes to the fully automated classification of human skin cells in multiphoton data and provides a basis for the detection of skin cancer via non-invasive optical biopsy.
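The multi-stage merging idea — repeatedly fusing adjacent superpixels under a per-stage similarity threshold rather than one global cutoff — can be sketched as follows. This is a minimal illustration only, not the paper's algorithm; the function name, the use of mean intensity as the similarity criterion, and the threshold schedule are all assumptions for the sketch.

```python
def multistage_merge(means, edges, thresholds):
    """Illustrative multi-stage merging: at each stage, merge adjacent
    segments whose mean-intensity difference falls below that stage's
    threshold. Growing thresholds make later stages progressively
    coarser, avoiding a single empirical global cutoff.
    means: dict segment_id -> mean intensity; edges: adjacency pairs.
    (Hypothetical simplification, not the paper's method.)"""
    parent = {s: s for s in means}

    def find(x):  # union-find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    size = {s: 1 for s in means}
    mean = dict(means)
    for t in thresholds:                # one merging pass per stage
        for a, b in edges:
            ra, rb = find(a), find(b)
            if ra != rb and abs(mean[ra] - mean[rb]) < t:
                # merge rb into ra, updating the running mean
                total = mean[ra] * size[ra] + mean[rb] * size[rb]
                size[ra] += size[rb]
                mean[ra] = total / size[ra]
                parent[rb] = ra
    return {s: find(s) for s in means}

# four superpixels in a row; 0-1 are similar, 2-3 are similar
labels = multistage_merge(
    {0: 10.0, 1: 11.0, 2: 40.0, 3: 41.5},
    edges=[(0, 1), (1, 2), (2, 3)],
    thresholds=[2.0, 5.0],
)
```

After both stages, segments 0/1 and 2/3 end up merged while the dissimilar groups stay apart, illustrating how segment stability across stages (the basis of the paper's stability index) can be observed.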
Salient Objects in Clutter: Bringing Salient Object Detection to the Foreground
We provide a comprehensive evaluation of salient object detection (SOD)
models. Our analysis identifies a serious design bias of existing SOD datasets
which assumes that each image contains at least one clearly outstanding salient
object in low clutter. The design bias has led to a saturated high performance
for state-of-the-art SOD models when evaluated on existing datasets. The
models, however, still perform far from being satisfactory when applied to
real-world daily scenes. Based on our analyses, we first identify 7 crucial
aspects that a comprehensive and balanced dataset should fulfill. Then, we
propose a new high-quality dataset and update the previous saliency benchmark.
Specifically, our SOC (Salient Objects in Clutter) dataset includes images
with salient and non-salient objects from daily object categories. Beyond
object category annotations, each salient image is accompanied by attributes
that reflect common challenges in real-world scenes. Finally, we report an
attribute-based performance assessment on our dataset. Comment: ECCV 201
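An attribute-based assessment of the kind described — scoring each saliency model separately on image subsets that share a challenge attribute — can be sketched as follows. The function names and attribute tags here are hypothetical placeholders, not the SOC annotation vocabulary, and MAE is used only as one common saliency metric.

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and the
    ground-truth mask, both as arrays in [0, 1]."""
    return float(np.mean(np.abs(pred - gt)))

def attribute_mae(preds, gts, attrs):
    """Group images by attribute tag (e.g. 'clutter', 'occlusion') and
    average the per-image MAE within each group, so a model's weakness
    on a specific challenge becomes visible. (Illustrative sketch.)"""
    scores = {}
    for p, g, tags in zip(preds, gts, attrs):
        e = mae(p, g)
        for t in tags:
            scores.setdefault(t, []).append(e)
    return {t: sum(v) / len(v) for t, v in scores.items()}

# tiny synthetic example: two images, two hypothetical attribute tags
preds = [np.zeros((2, 2)), np.full((2, 2), 0.5)]
gts = [np.zeros((2, 2)), np.ones((2, 2))]
attrs = [["clutter"], ["clutter", "occlusion"]]
per_attr = attribute_mae(preds, gts, attrs)
```

Reporting `per_attr` side by side for several models is what makes a benchmark like this diagnostic rather than a single aggregate number.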
Automated Visual Fin Identification of Individual Great White Sharks
This paper discusses the automated visual identification of individual great
white sharks from dorsal fin imagery. We propose a computer vision photo ID
system and report recognition results over a database of thousands of
unconstrained fin images. To the best of our knowledge this line of work
establishes the first fully automated contour-based visual ID system in the
field of animal biometrics. The approach put forward appreciates shark fins as
textureless, flexible and partially occluded objects with an individually
characteristic shape. In order to recover animal identities from an image we
first introduce an open contour stroke model, which extends multi-scale region
segmentation to achieve robust fin detection. Secondly, we show that
combinatorial, scale-space selective fingerprinting can successfully encode fin
individuality. We then measure the species-specific distribution of visual
individuality along the fin contour via an embedding into a global `fin space'.
Exploiting this domain, we finally propose a non-linear model for individual
animal recognition and combine all approaches into a fine-grained
multi-instance framework. We provide a system evaluation, compare results to
prior work, and report performance and properties in detail. Comment: 17 pages, 16 figures. To be published in IJCV. Article replaced to
update first author contact details and to correct a Figure reference on page
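Matching individuals by the shape of a textureless contour, as described above, can be sketched with a much simpler stand-in for the paper's scale-space fingerprint: resample the contour uniformly by arc length, describe each point by its normalized distance to the contour centroid (giving scale invariance), and compare signatures with dynamic time warping. Every function here is a hypothetical simplification, not the published method.

```python
import numpy as np

def contour_signature(points, n=32):
    """Resample an open contour to n points by arc length and describe
    each by its distance to the contour centroid, normalized by the
    maximum so the signature is scale-invariant. (Crude stand-in for a
    combinatorial scale-space fingerprint.)"""
    points = np.asarray(points, dtype=float)
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = np.linspace(0, d[-1], n)
    resampled = np.column_stack(
        [np.interp(t, d, points[:, i]) for i in range(2)])
    sig = np.linalg.norm(resampled - resampled.mean(axis=0), axis=1)
    return sig / sig.max()

def dtw(a, b):
    """Plain dynamic-time-warping distance between two 1-D signatures,
    tolerant to local stretching along the contour."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[-1, -1])

# semicircular "fin" contour, a scaled copy, and a straight line
theta = np.linspace(0, np.pi, 50)
arc = np.column_stack([np.cos(theta), np.sin(theta)])
line = np.column_stack([np.linspace(0.0, 1.0, 50), np.zeros(50)])
s1 = contour_signature(arc)
s2 = contour_signature(2 * arc)   # same shape at twice the size
s3 = contour_signature(line)
```

The scaled copy produces an essentially identical signature while the straight line does not, which is the basic property any contour-based ID descriptor needs before individuality can be encoded on top of it.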
Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders
Convolutional autoencoders have emerged as popular methods for unsupervised
defect segmentation on image data. Most commonly, this task is performed by
thresholding a pixel-wise reconstruction error based on an ℓp distance.
This procedure, however, leads to large residuals whenever the reconstruction
encompasses slight localization inaccuracies around edges. It also fails to
reveal defective regions that have been visually altered when intensity values
stay roughly consistent. We show that these problems prevent these approaches
from being applied to complex real-world scenarios and that they cannot be
avoided by employing more elaborate architectures such as variational or
feature matching autoencoders. We propose to use a perceptual loss function
based on structural similarity which examines inter-dependencies between local
image regions, taking into account luminance, contrast and structural
information, instead of simply comparing single pixel values. It achieves
significant performance gains on a challenging real-world dataset of
nanofibrous materials and a novel dataset of two woven fabrics over
state-of-the-art approaches for unsupervised defect segmentation that use
pixel-wise reconstruction error metrics.
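The contrast between per-pixel thresholding and a structural-similarity residual can be sketched directly. The naive sliding-window SSIM below is a minimal, unoptimized illustration (window size, constants, and the threshold value are assumptions, not the paper's configuration); low local SSIM flags regions whose structure differs from the reconstruction even when raw intensities stay close.

```python
import numpy as np

def ssim_map(x, y, win=7, c1=0.01 ** 2, c2=0.03 ** 2):
    """Local SSIM map via a sliding window (naive loops for clarity).
    Compares luminance (means), contrast (variances), and structure
    (covariance) of corresponding patches. (Illustrative sketch.)"""
    h = win // 2
    xp = np.pad(x, h, mode="reflect")
    yp = np.pad(y, h, mode="reflect")
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            a = xp[i:i + win, j:j + win]
            b = yp[i:i + win, j:j + win]
            ma, mb = a.mean(), b.mean()
            va, vb = a.var(), b.var()
            cov = ((a - ma) * (b - mb)).mean()
            out[i, j] = ((2 * ma * mb + c1) * (2 * cov + c2)) / \
                        ((ma ** 2 + mb ** 2 + c1) * (va + vb + c2))
    return out

def defect_mask(img, recon, thresh=0.5):
    """Threshold the SSIM residual (1 - SSIM) instead of |img - recon|,
    so structural deviations are flagged rather than raw intensity gaps."""
    return (1.0 - ssim_map(img, recon)) > thresh

# a horizontal intensity gradient and a structure-free reconstruction
x = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
flat = np.full_like(x, x.mean())
mask = defect_mask(x, flat)   # flags the missing gradient structure
```

A perfect reconstruction yields an SSIM map of 1 everywhere (empty mask), while the flat reconstruction — whose mean intensity matches the input — is still flagged, which is exactly the failure mode of pixel-wise error that the abstract describes.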