Automated detection of extended sources in radio maps: progress from the SCORPIO survey
Automated source extraction and parameterization represents a crucial
challenge for the next-generation radio interferometer surveys, such as those
performed with the Square Kilometre Array (SKA) and its precursors. In this
paper we present a new algorithm, dubbed CAESAR (Compact And Extended Source
Automated Recognition), to detect and parameterize extended sources in radio
interferometric maps. It is based on a pre-filtering stage, allowing image
denoising, compact source suppression and enhancement of diffuse emission,
followed by an adaptive superpixel clustering stage for final source
segmentation. A parameterization stage provides source flux information and a
wide range of morphology estimators for post-processing analysis. We developed
CAESAR as a modular software library, which also includes different methods for
local background estimation and image filtering, along with alternative
algorithms for both compact and diffuse source extraction. The method was
applied to real radio continuum data collected with the Australia Telescope
Compact Array (ATCA) within the SCORPIO project, a pathfinder of the ASKAP-EMU
survey. The source reconstruction capabilities were studied over different test
fields in the presence of compact sources, imaging artefacts and diffuse
emission from the Galactic plane, and compared with existing algorithms. When
compared to a human-driven analysis, the designed algorithm was found capable
of detecting known target sources and regions of diffuse emission,
outperforming alternative approaches over the considered fields.
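CAESAR itself is only described at a high level in this abstract. As a rough illustration of a superpixel-based extraction stage for a radio map, the sketch below uses scikit-image's SLIC in place of the paper's adaptive superpixel clustering; the median pre-filter, the MAD-based noise estimate, and the 3-sigma significance cut are assumptions of this sketch, not parameters of CAESAR.

```python
# Minimal sketch of a superpixel-based extraction stage for a radio map.
# NOT the CAESAR implementation: SLIC stands in for the paper's adaptive
# superpixel clustering, and all thresholds here are illustrative.
import numpy as np
from scipy import ndimage
from skimage.segmentation import slic

def extract_diffuse_regions(image, n_segments=500, sigma_thresh=3.0):
    """Denoise, cluster into superpixels, and keep clusters that are
    significantly brighter than the estimated background."""
    # Pre-filtering: a simple median filter stands in for the denoising /
    # compact-source suppression stage described in the abstract.
    smoothed = ndimage.median_filter(image, size=5)

    # Global background and noise estimates (CAESAR uses local estimators).
    bkg = np.median(smoothed)
    rms = 1.4826 * np.median(np.abs(smoothed - bkg))  # MAD-based noise

    # Superpixel clustering on the single-channel map.
    labels = slic(smoothed, n_segments=n_segments, compactness=0.1,
                  channel_axis=None, start_label=1)

    # Keep superpixels whose mean brightness exceeds bkg + sigma_thresh * rms.
    mask = np.zeros_like(image, dtype=bool)
    for lab in np.unique(labels):
        region = labels == lab
        if smoothed[region].mean() > bkg + sigma_thresh * rms:
            mask |= region
    return mask
```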
Multi-utility Learning: Structured-output Learning with Multiple Annotation-specific Loss Functions
Structured-output learning is a challenging problem, particularly because of
the difficulty of obtaining large datasets of fully labelled instances for
training. In this paper we try to overcome this difficulty by presenting a
multi-utility learning framework for structured prediction that can learn from
training instances with different forms of supervision. We propose a unified
technique for inferring the loss functions most suitable for quantifying the
consistency of solutions with the given weak annotation. We demonstrate the
effectiveness of our framework on the challenging semantic image segmentation
problem for which a wide variety of annotations can be used. For instance, the
popular training datasets for semantic segmentation are composed of images with
hard-to-generate full pixel labellings, as well as images with easy-to-obtain
weak annotations, such as bounding boxes around objects, or image-level labels
that specify which object categories are present in an image. Experimental
evaluation shows that the use of annotation-specific loss functions
dramatically improves segmentation accuracy compared to the baseline system
where only one type of weak annotation is used.
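The unified loss-inference technique is not spelled out in the abstract. As a rough illustration of what annotation-specific losses can look like for semantic segmentation, the sketch below simply dispatches a different loss per annotation type (full masks, image-level labels, bounding boxes); all function names, dictionary keys, and loss forms are assumptions of this sketch, not the paper's formulation.

```python
# Illustrative sketch of annotation-specific losses for semantic segmentation.
# The paper infers the appropriate loss from the weak annotation; here the
# annotation type is dispatched by hand, and the loss forms are assumptions.
import torch
import torch.nn.functional as F

def annotation_loss(logits, annotation):
    """logits: (C, H, W) class scores; annotation: dict describing supervision."""
    c, h, w = logits.shape
    if annotation["type"] == "full_mask":
        # Dense pixel labels: standard per-pixel cross-entropy.
        target = annotation["mask"]                      # (H, W) long tensor
        return F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))
    if annotation["type"] == "image_labels":
        # Image-level tags: encourage a high max-score for present classes
        # and a low max-score for absent ones (a simple multi-instance surrogate).
        present = annotation["labels"]                   # list of class ids
        probs = torch.softmax(logits, dim=0).flatten(1).max(dim=1).values  # (C,)
        target = torch.zeros(c)
        target[present] = 1.0
        return F.binary_cross_entropy(probs, target)
    if annotation["type"] == "bbox":
        # Bounding box: penalize the object class being predicted outside its box.
        cls, (y0, x0, y1, x1) = annotation["cls"], annotation["box"]
        probs = torch.softmax(logits, dim=0)[cls]
        outside = probs.clone()
        outside[y0:y1, x0:x1] = 0.0
        return outside.mean()
    raise ValueError("unknown annotation type")
```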
Content-sensitive superpixel generation with boundary adjustment.
Superpixel segmentation has become a crucial tool in many image processing and computer vision applications. In this paper, a novel content-sensitive superpixel generation algorithm with boundary adjustment is proposed. First, local image entropy is used to measure the amount of information in the image, and this information is distributed evenly across the seeds: more seeds are placed in content-dense regions to reduce under-segmentation, and fewer seeds are placed in content-sparse regions to increase computational efficiency. Second, Prim's algorithm is adopted to generate uniform superpixels efficiently. Third, a boundary adjustment strategy with an adaptive distance further optimizes the superpixels. Experimental results on the Berkeley Segmentation Database show that the method outperforms competing methods under standard evaluation metrics.
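The full pipeline (entropy-weighted seeding, Prim-based growth, boundary adjustment) cannot be reconstructed from the abstract alone. The sketch below only illustrates the first step, allocating seeds in proportion to local entropy; the grid resolution, entropy window, and seed count are arbitrary choices of this sketch.

```python
# Sketch of entropy-driven seed placement for content-sensitive superpixels.
# Only the seeding idea is shown; the Prim-based growth and boundary
# adjustment stages of the paper are not reproduced here.
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def place_seeds(gray_image, n_seeds=400, grid=20):
    """Distribute n_seeds over a coarse grid in proportion to local entropy."""
    ent = entropy(img_as_ubyte(gray_image), disk(5))   # local information content
    h, w = ent.shape
    # Sum entropy per grid cell, then allocate seeds proportionally.
    cell_ent = np.zeros((grid, grid))
    bounds = {}
    for i in range(grid):
        for j in range(grid):
            y0, y1 = i * h // grid, max(i * h // grid + 1, (i + 1) * h // grid)
            x0, x1 = j * w // grid, max(j * w // grid + 1, (j + 1) * w // grid)
            bounds[i, j] = (y0, y1, x0, x1)
            cell_ent[i, j] = ent[y0:y1, x0:x1].sum()
    alloc = np.round(n_seeds * cell_ent / cell_ent.sum()).astype(int)
    rng = np.random.default_rng(0)
    seeds = []
    for (i, j), (y0, y1, x0, x1) in bounds.items():
        for _ in range(alloc[i, j]):
            seeds.append((rng.integers(y0, y1), rng.integers(x0, x1)))
    return np.array(seeds)
```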
Superpixels: An Evaluation of the State-of-the-Art
Superpixels group perceptually similar pixels to create visually meaningful
entities while heavily reducing the number of primitives for subsequent
processing steps. Owing to these properties, superpixel algorithms have received
much attention since their naming in 2003. By today, publicly available
superpixel algorithms have turned into standard tools in low-level vision. As
such, and due to their quick adoption in a wide range of applications,
appropriate benchmarks are crucial for algorithm selection and comparison.
Until now, the rapidly growing number of algorithms as well as varying
experimental setups hindered the development of a unifying benchmark. We
present a comprehensive evaluation of 28 state-of-the-art superpixel algorithms
utilizing a benchmark focussing on fair comparison and designed to provide new
insights relevant for applications. To this end, we explicitly discuss
parameter optimization and the importance of strictly enforcing connectivity.
Furthermore, by extending well-known metrics, we are able to summarize
algorithm performance independent of the number of generated superpixels,
thereby overcoming a major limitation of available benchmarks. In addition, we
discuss runtime, robustness against noise, blur and affine transformations,
implementation details as well as aspects of visual quality. Finally, we
present an overall ranking of superpixel algorithms which redefines the
state-of-the-art and enables researchers to easily select appropriate
algorithms and the corresponding implementations which themselves are made
publicly available as part of our benchmark at
davidstutz.de/projects/superpixel-benchmark/
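The benchmark's extended metrics are not reproduced here. As a rough companion sketch, the snippet below computes one common variant of the Undersegmentation Error and a simple connectivity check (the kind of property the benchmark enforces strictly); the particular UE definition used is an assumption, not necessarily the paper's exact formulation.

```python
# Sketch of one standard superpixel metric (Undersegmentation Error) and a
# connectivity check; not the benchmark's own implementation, and the UE
# variant below ("leakage" style) is one of several definitions in use.
import numpy as np
from scipy import ndimage

def undersegmentation_error(sp_labels, gt_labels):
    """Fraction of pixels that 'leak' across ground-truth segment boundaries."""
    n_pixels = sp_labels.size
    leakage = 0
    for gt in np.unique(gt_labels):
        gt_mask = gt_labels == gt
        for sp in np.unique(sp_labels[gt_mask]):
            sp_mask = sp_labels == sp
            inside = np.logical_and(sp_mask, gt_mask).sum()
            outside = sp_mask.sum() - inside
            leakage += min(inside, outside)   # leak of this superpixel w.r.t. gt
    return leakage / n_pixels

def is_connected(sp_labels):
    """True if every superpixel label forms a single connected component."""
    for sp in np.unique(sp_labels):
        _, n_comp = ndimage.label(sp_labels == sp)
        if n_comp > 1:
            return False
    return True
```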
Left/Right Hand Segmentation in Egocentric Videos
Wearable cameras allow people to record their daily activities from a
user-centered (First Person Vision) perspective. Due to their favorable
location, wearable cameras frequently capture the hands of the user, and may
thus represent a promising user-machine interaction tool for different
applications. Existing First Person Vision methods handle hand segmentation as
a background-foreground problem, ignoring two important facts: i) hands are not
a single "skin-like" moving element, but a pair of interacting cooperative
entities, ii) close hand interactions may lead to hand-to-hand occlusions and,
as a consequence, create a single hand-like segment. These facts complicate a
proper understanding of hand movements and interactions. Our approach extends
traditional background-foreground strategies, by including a
hand-identification step (left-right) based on a Maxwell distribution of angle
and position. Hand-to-hand occlusions are addressed by exploiting temporal
superpixels. The experimental results show that, in addition to a reliable
left/right hand-segmentation, our approach considerably improves the
traditional background-foreground hand segmentation.
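The hand-identification model is only sketched in the abstract. The snippet below illustrates one plausible reading of it: fit Maxwell distributions to angle and position features for each side and classify a new hand segment by likelihood. The feature choice, the independence assumption between angle and position, and all parameter values are assumptions of this sketch.

```python
# Sketch of left/right hand identification via per-side Maxwell models.
# Not the paper's exact formulation: features, independence assumption, and
# the synthetic training data below are illustrative only.
import numpy as np
from scipy.stats import maxwell

def fit_side_model(angles, positions):
    """Fit one Maxwell distribution per feature from training segments."""
    return {"angle": maxwell.fit(angles), "pos": maxwell.fit(positions)}

def classify_hand(angle, position, left_model, right_model):
    """Return 'left' or 'right' by comparing (naively independent) likelihoods."""
    def loglik(model):
        return (maxwell.logpdf(angle, *model["angle"])
                + maxwell.logpdf(position, *model["pos"]))
    return "left" if loglik(left_model) > loglik(right_model) else "right"

# Example usage with synthetic training features (angle in radians, position
# as the segment's normalized x-coordinate at the lower frame border).
rng = np.random.default_rng(0)
left = fit_side_model(maxwell.rvs(scale=0.4, size=200, random_state=rng),
                      maxwell.rvs(scale=0.2, size=200, random_state=rng))
right = fit_side_model(maxwell.rvs(scale=0.8, size=200, random_state=rng),
                       maxwell.rvs(scale=0.6, size=200, random_state=rng))
print(classify_hand(0.5, 0.3, left, right))
```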
Stochastic Methods for Fine-Grained Image Segmentation and Uncertainty Estimation in Computer Vision
In this dissertation, we exploit concepts of probability theory, stochastic methods and machine learning to address three existing limitations of deep learning-based models for image understanding. First, although convolutional neural networks (CNNs) have substantially improved the state of the art in image understanding, conventional CNNs provide segmentation masks that poorly adhere to object boundaries, a critical limitation for many potential applications. Second, training deep learning models requires large amounts of carefully selected and annotated data, but large-scale annotation of image segmentation datasets is often prohibitively expensive. Third, conventional deep learning models also lack the capability of uncertainty estimation, which compromises both decision making and model interpretability. To address these limitations, we introduce the Region Growing Refinement (RGR) algorithm, an unsupervised post-processing algorithm that exploits Monte Carlo sampling and pixel similarities to propagate high-confidence labels into regions of low-confidence classification. The probabilistic Region Growing Refinement (pRGR) provides RGR with a rigorous mathematical foundation that exploits concepts of Bayesian estimation and variance reduction techniques. Experiments demonstrate both the effectiveness of (p)RGR for the refinement of segmentation predictions and its suitability for uncertainty estimation, since the variance estimates obtained in its Monte Carlo iterations are highly correlated with segmentation accuracy. We also introduce FreeLabel, an intuitive open-source web interface that exploits RGR to allow users to obtain high-quality segmentation masks with just a few freehand scribbles, in a matter of seconds. Designed to benefit the computer vision community, FreeLabel can be used for both crowdsourced and private annotation and has a modular structure that can be easily adapted for any image dataset. The practical relevance of the methods developed in this dissertation is illustrated through applications in agricultural and healthcare-related domains. We have combined RGR and modern CNNs for fine segmentation of fruit flowers, motivated by the importance of automated bloom intensity estimation for optimizing fruit orchard management and, possibly, automating procedures such as flower thinning and pollination. We also exploited an early version of FreeLabel to annotate novel datasets for segmentation of fruit flowers, which are currently publicly available. Finally, this dissertation also describes work on fine segmentation and gaze estimation for images collected from assisted living environments, with the ultimate goal of assisting geriatricians in evaluating the health status of patients in such facilities.
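The published RGR/pRGR implementation is not reproduced here. The following is a minimal sketch of the Monte Carlo region-growing idea only, using scikit-image's flood fill as a stand-in for the actual growth rule; the confidence thresholds, the similarity criterion, and the variance estimate are assumptions of this sketch.

```python
# Rough sketch of the Monte Carlo region-growing idea behind RGR/pRGR:
# high-confidence pixels seed regions that grow into uncertain areas by
# gray-level similarity, and repeated randomized runs yield both a refined
# mask and a per-pixel variance usable as an uncertainty proxy.
# NOT the published implementation; all thresholds are illustrative.
import numpy as np
from skimage.segmentation import flood

def refine_mask(image_gray, score_map, hi=0.9, lo=0.1, n_runs=20, tol=0.05, seed=0):
    """score_map in [0,1]: CNN foreground confidence. Returns (mask, variance)."""
    rng = np.random.default_rng(seed)
    hi_conf = np.argwhere(score_map > hi)            # confident foreground pixels
    if len(hi_conf) == 0:
        return score_map > 0.5, np.zeros_like(score_map)
    votes = np.zeros_like(score_map)
    for _ in range(n_runs):
        run_mask = score_map > hi                    # start from the confident core
        # Sample a few confident seeds and grow by gray-level similarity.
        idx = rng.choice(len(hi_conf), size=min(50, len(hi_conf)), replace=False)
        for y, x in hi_conf[idx]:
            grown = flood(image_gray, (int(y), int(x)), tolerance=tol)
            # Only annex pixels that are not confidently background.
            run_mask |= grown & (score_map > lo)
        votes += run_mask
    prob = votes / n_runs
    return prob > 0.5, prob * (1 - prob)             # refined mask, per-pixel variance
```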