Unconstrained salient object detection via proposal subset optimization
We aim at detecting salient objects in unconstrained images. In unconstrained images, the number of salient objects (if any) varies from image to image, and is not given. We present a salient object detection system that directly outputs a compact set of detection windows, if any, for an input image. Our system leverages a Convolutional-Neural-Network model to generate location proposals of salient objects. Location proposals tend to be highly overlapping and noisy. Based on the Maximum a Posteriori principle, we propose a novel subset optimization framework to generate a compact set of detection windows out of noisy proposals. In experiments, we show that our subset optimization formulation greatly enhances the performance of our system, and our system attains a 16-34% relative improvement in Average Precision compared with the state-of-the-art on three challenging salient object datasets.
http://openaccess.thecvf.com/content_cvpr_2016/html/Zhang_Unconstrained_Salient_Object_CVPR_2016_paper.html
Published version
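The core idea of collapsing many overlapping, noisy proposals into a compact set of windows (possibly empty) can be illustrated with a simplified greedy selection. This is an illustrative stand-in, not the paper's MAP subset-optimization formulation; the scores, IoU test, and thresholds are invented for the sketch.

```python
def iou(a, b):
    # intersection-over-union of two boxes given as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def compact_subset(proposals, scores, iou_thresh=0.5, min_score=0.6):
    """Greedily keep high-scoring windows that do not overlap already-kept
    ones; an image with no confident proposals yields an empty set."""
    order = sorted(range(len(proposals)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        if scores[i] < min_score:
            break                      # allows an empty detection set
        if all(iou(proposals[i], proposals[j]) < iou_thresh for j in kept):
            kept.append(i)
    return [proposals[i] for i in kept]
```

Unlike plain non-maximum suppression with a fixed output count, the score cutoff lets the selected set shrink to zero windows, mirroring the "if any" behaviour described in the abstract.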
Towards the Success Rate of One: Real-time Unconstrained Salient Object Detection
In this work, we propose an efficient and effective approach for
unconstrained salient object detection in images using deep convolutional
neural networks. Instead of generating thousands of candidate bounding boxes
and refining them, our network directly learns to generate the saliency map
containing the exact number of salient objects. During training, we convert the
ground-truth rectangular boxes to Gaussian distributions that better capture
the ROI regarding individual salient objects. During inference, the network
predicts Gaussian distributions centered at salient objects with an appropriate
covariance, from which bounding boxes are easily inferred. Notably, our network
performs saliency map prediction without pixel-level annotations, salient
object detection without object proposals, and salient object subitizing
simultaneously, all in a single pass within a unified framework. Extensive
experiments show that our approach outperforms existing methods on various
datasets by a large margin, and achieves more than 100 fps with a VGG16 network
on a single GPU during inference.
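The conversion between boxes and Gaussians described above can be sketched as a round trip. Treating the box half-width and half-height as two standard deviations is an assumption made for this illustration, not necessarily the covariance convention the authors use.

```python
import numpy as np

def box_to_gaussian(box):
    """Map a ground-truth box (x1, y1, x2, y2) to a 2-D Gaussian (mu, cov)."""
    x1, y1, x2, y2 = box
    mu = np.array([(x1 + x2) / 2, (y1 + y2) / 2])
    # assumption: half the box extent corresponds to 2 standard deviations
    cov = np.diag([((x2 - x1) / 4) ** 2, ((y2 - y1) / 4) ** 2])
    return mu, cov

def gaussian_to_box(mu, cov):
    """Recover a bounding box from a predicted Gaussian (inverse mapping)."""
    sx, sy = 2 * np.sqrt(np.diag(cov))
    return (mu[0] - sx, mu[1] - sy, mu[0] + sx, mu[1] + sy)
```

Because the inverse mapping is cheap, the network only needs to emit Gaussian parameters; boxes fall out deterministically at inference time.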
Automated detection of extended sources in radio maps: progress from the SCORPIO survey
Automated source extraction and parameterization represents a crucial
challenge for the next-generation radio interferometer surveys, such as those
performed with the Square Kilometre Array (SKA) and its precursors. In this
paper we present a new algorithm, dubbed CAESAR (Compact And Extended Source
Automated Recognition), to detect and parametrize extended sources in radio
interferometric maps. It is based on a pre-filtering stage, allowing image
denoising, compact source suppression and enhancement of diffuse emission,
followed by an adaptive superpixel clustering stage for final source
segmentation. A parameterization stage provides source flux information and a
wide range of morphology estimators for post-processing analysis. We developed
CAESAR as a modular software library, also including different methods for
local background estimation and image filtering, along with alternative
algorithms for both compact and diffuse source extraction. The method was
applied to real radio continuum data collected at the Australian Telescope
Compact Array (ATCA) within the SCORPIO project, a pathfinder of the ASKAP-EMU
survey. The source reconstruction capabilities were studied over different test
fields in the presence of compact sources, imaging artefacts and diffuse
emission from the Galactic plane and compared with existing algorithms. When
compared to a human-driven analysis, the designed algorithm was found capable
of detecting known target sources and regions of diffuse emission,
outperforming alternative approaches over the considered fields.
Comment: 15 pages, 9 figures
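The staged structure described above (pre-filtering, significance masking, segmentation, parameterization) can be echoed in a toy pipeline. This is not CAESAR's actual algorithm — in particular, its adaptive superpixel clustering is replaced here by simple connected-component labelling, and the global background estimate and thresholds are invented for illustration.

```python
import numpy as np
from scipy import ndimage

def extract_sources(image, sigma_thresh=3.0, min_pixels=5):
    """Toy pipeline in the spirit of pre-filter + segment + parameterize:
    median-filter denoising, sigma clipping against a background estimate,
    then connected-component labelling of the significance mask."""
    smoothed = ndimage.median_filter(image, size=3)   # denoising stage
    bkg, rms = np.median(smoothed), np.std(smoothed)  # crude global background
    mask = smoothed > bkg + sigma_thresh * rms        # significance mask
    labels, n = ndimage.label(mask)                   # segmentation stage
    sources = []
    for i in range(1, n + 1):
        pix = labels == i
        if pix.sum() >= min_pixels:                   # reject tiny islands
            sources.append({"flux": float(image[pix].sum()),
                            "npix": int(pix.sum())})
    return sources
```

A real extended-source finder would estimate the background locally and refine the segmentation, but the stage ordering matches the description in the abstract.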
2D Reconstruction of Small Intestine's Interior Wall
Examining and interpreting a large number of wireless endoscopic images
from the gastrointestinal tract is a tiresome task for physicians. A practical
solution is to automatically construct a two dimensional representation of the
gastrointestinal tract for easy inspection. However, little has been done on
wireless endoscopic image stitching, let alone systematic investigation. The
proposed new wireless endoscopic image stitching method consists of two main
steps to improve the accuracy and efficiency of image registration. First, the
keypoints are extracted by the Principal Component Analysis and Scale Invariant
Feature Transform (PCA-SIFT) algorithm and refined with Maximum Likelihood
Estimation SAmple Consensus (MLESAC) outlier removal to find the most reliable
keypoints. Second, the optimal transformation parameters obtained from the first
step are fed to the Normalised Mutual Information (NMI) algorithm as an initial
solution. With a modified Marquardt-Levenberg search strategy in a multiscale
framework, the NMI can find the optimal transformation parameters in the
shortest time. The proposed methodology has been tested on two different
datasets - one with real wireless endoscopic images and another with images
obtained from Micro-Ball (a new wireless cubic endoscopy system with six image
sensors). The results have demonstrated the accuracy and robustness of the
proposed methodology both visually and quantitatively.
Comment: Journal draft
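The similarity measure driving the second, refinement step can be sketched as a standard histogram-based Normalised Mutual Information between two overlapping image patches. The bin count is an arbitrary choice for illustration; the paper embeds this measure inside a modified Marquardt-Levenberg multiscale search, which is not reproduced here.

```python
import numpy as np

def normalised_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B) from a joint intensity histogram;
    higher values indicate better alignment of the two patches."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)    # marginals
    def entropy(p):
        p = p[p > 0]                             # ignore empty bins
        return -np.sum(p * np.log(p))
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

For a patch compared with itself the joint entropy equals the marginal entropy, so the NMI reaches its maximum value of 2; misaligned patches score lower, which is what the optimiser exploits.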
Visual Saliency Estimation and Its Applications
The human visual system can automatically emphasize some parts of the image and ignore the other parts when seeing an image or a scene. Visual Saliency Estimation (VSE) aims to imitate this functionality of the human visual system to estimate the degree of human attention attracted by different image regions and locate the salient object. The study of VSE will help us explore the way human visual systems extract objects from an image. It has wide applications, such as robot navigation, video surveillance, object tracking, self-driving, etc.
The current VSE approaches on natural images model generic visual stimuli based on lower-level image features, e.g., locations, local/global contrast, and feature correlation. However, existing models still suffer from some drawbacks. First, these methods fail when objects lie near the image borders. Second, due to imperfect model assumptions, many methods cannot achieve good results when images have complicated backgrounds. In this work, I focus on solving these challenges on natural images by proposing a new framework with more robust task-related priors, and I apply the framework to low-quality biomedical images.
The new framework formulates VSE on natural images as a quadratic program (QP). First, it proposes an adaptive center-based bias hypothesis to replace the common image-center bias, which is much more robust even when objects are far from the image center. Second, it models a new smoothness term that forces similar colors to have similar saliency statistics, which is more robust than terms based on region dissimilarity when the image has a complicated background or low contrast. The new approach achieves the best performance among 11 recent methods on three public datasets. Three approaches built on this framework, integrating both high-level domain knowledge and robust low-level saliency assumptions, are used to imitate radiologists' attention when detecting breast tumors in breast ultrasound images.
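A minimal instance of such a saliency QP can be written down for intuition. Here the prior vector and the region-similarity weight matrix are invented inputs, the problem is left unconstrained so it has a closed-form solution, and the smoothness term uses a graph Laplacian; the actual framework's priors and constraints differ.

```python
import numpy as np

def solve_saliency(prior, weights, lam=1.0):
    """Solve arg min_s ||s - prior||^2 + lam * s^T L s, where L is the
    graph Laplacian of the region-similarity weights. The unconstrained
    minimizer is s = (I + lam * L)^{-1} prior."""
    L = np.diag(weights.sum(axis=1)) - weights   # graph Laplacian
    return np.linalg.solve(np.eye(len(prior)) + lam * L, prior)
```

The smoothness term pulls similar (strongly connected) regions toward similar saliency values: with two fully connected regions and prior [1, 0], the solution moves toward [2/3, 1/3] rather than keeping the hard split.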