Ranking News-Quality Multimedia
News editors need to find the photos that best illustrate a news piece and
fulfill news-media quality standards, while being pressed to also find the most
recent photos of live events. Recently, it has become common to use
social-media content in news media for its unique value in terms of immediacy
and quality. Consequently, the number of images to consider and filter through
is now too large for one person to handle. To aid the news editor in
this process, we propose a framework designed to deliver high-quality,
news-press type photos to the user. The framework is composed of two parts: a
ranking algorithm tuned to rank professional media highly, and a visual SPAM
detection module designed to filter out low-quality media. The core ranking
algorithm leverages aesthetic, social, and deep-learning semantic features.
Evaluation showed that the proposed framework is effective at finding
high-quality photos (true-positive rate), achieving a retrieval MAP of 64.5%
and a classification precision of 70%.
Comment: To appear in ICMR'1
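The two-part design (SPAM filter plus tuned ranker) can be sketched as a filter-then-rank pipeline. Everything below is a minimal illustration: the feature names, weights, and SPAM threshold are assumptions, not the framework's learned model.

```python
# Hypothetical feature weights; the framework's learned weights are not given
# in the abstract, so these names and values are illustrative only.
WEIGHTS = {"aesthetic": 0.4, "semantic": 0.4, "social": 0.2}
SPAM_THRESHOLD = 0.1  # assumed cutoff for the SPAM-filter stand-in

def is_spam(photo):
    # Stand-in for the visual SPAM detection module: drop obviously
    # low-quality media before ranking.
    return photo["aesthetic"] < SPAM_THRESHOLD

def score(photo):
    # Linear combination of normalized feature scores in [0, 1].
    return sum(w * photo[k] for k, w in WEIGHTS.items())

def rank_photos(photos):
    # Part 1: filter out SPAM; part 2: rank the rest, professional-looking
    # (high-aesthetic, high-semantic) media first.
    return sorted((p for p in photos if not is_spam(p)), key=score, reverse=True)

photos = [
    {"id": "pro", "aesthetic": 0.9, "semantic": 0.8, "social": 0.5},
    {"id": "amateur", "aesthetic": 0.4, "semantic": 0.6, "social": 0.9},
    {"id": "spam", "aesthetic": 0.05, "semantic": 0.2, "social": 0.9},
]
ranked = rank_photos(photos)
```

In the actual framework the ranker is tuned on professional media and the SPAM detector is a learned visual classifier, not a fixed threshold.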
The ALHAMBRA Survey: Bayesian Photometric Redshifts with 23 bands for 3 squared degrees
The ALHAMBRA (Advanced Large Homogeneous Area Medium Band Redshift
Astronomical) survey has observed 8 different regions of the sky, including
sections of the COSMOS, DEEP2, ELAIS, GOODS-N, SDSS and Groth fields, using a
new photometric system with 20 contiguous, ~300A-wide filters covering the
optical range, combined with deep NIR (JHKs) imaging. The observations,
carried out with the Calar Alto 3.5m telescope using the wide field (0.25 sq.
deg FOV) optical camera LAICA and the NIR instrument Omega-2000, correspond to
~700hrs on-target science images. The photometric system was designed to
maximize the effective depth of the survey in terms of accurate spectral-type
and photo-z estimation, along with the capability of identifying relatively
faint emission lines. Here we present multicolor photometry and
photo-zs for ~438k galaxies, detected in synthetic F814W images, complete down
to I~24.5 AB, taking into account realistic noise estimates and correcting for
PSF and aperture effects with the ColorPro software. The photometric zero
points (ZPs) have been calibrated using stellar transformation equations and
refined internally,
using a new technique based on the highly robust photometric redshifts measured
for emission line galaxies. We calculate photometric redshifts with the BPZ2
code, which includes new empirically calibrated templates and priors. Our
photo-zs have a precision of ~1% in dz/(1+z) for I<22.5 and 1.4% for
22.5<I<24.5. Precisions of less than 0.5% are reached for the brighter
spectroscopic sample, showing the potential of medium-band photometric surveys.
The global redshift distribution shows a mean redshift <z>=0.56 for I<22.5 AB
and <z>=0.86 for I<24.5 AB. The data presented here cover an effective area of
2.79 sq. deg, split into 14 strips of 58.5'x15.5', and represent ~32 hrs of
on-target exposure.
Comment: The catalog data and a full-resolution version of this paper are
available at https://cloud.iaa.csic.es/alhambra
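The Bayesian photo-z computation that BPZ-style codes perform can be illustrated as a chi-squared template fit combined with a magnitude-dependent prior over a redshift grid. The toy templates and prior shape below are invented for illustration; BPZ2's templates and priors are empirically calibrated and far richer.

```python
import numpy as np

# Redshift grid over which the posterior is evaluated.
z_grid = np.linspace(0.01, 1.5, 300)

def template_flux(z):
    # Hypothetical 5-band template whose colors shift smoothly with redshift;
    # real codes use libraries of empirical galaxy templates.
    bands = np.arange(5)
    return 1.0 + 0.5 * np.sin(bands * (1.0 + z))

def photo_z(obs_flux, obs_err, m):
    # Likelihood from the chi^2 of observed vs. template fluxes at each z.
    chi2 = np.array([np.sum(((obs_flux - template_flux(z)) / obs_err) ** 2)
                     for z in z_grid])
    likelihood = np.exp(-0.5 * (chi2 - chi2.min()))
    # Toy magnitude-dependent prior p(z|m): fainter objects sit at higher z.
    z0 = max(0.3 + 0.05 * (m - 20.0), 0.05)
    prior = z_grid ** 2 * np.exp(-((z_grid / z0) ** 1.5))
    posterior = likelihood * prior
    posterior /= posterior.sum()
    return z_grid[np.argmax(posterior)], posterior

# Simulate an observation of a galaxy at z=0.56 with small flux errors.
z_true = 0.56
obs = template_flux(z_true) + 0.01
zb, post = photo_z(obs, np.full(5, 0.05), m=22.0)
```

The prior keeps the fit away from implausible redshifts when the likelihood alone is degenerate, which is the essential advantage of the Bayesian approach over pure template fitting.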
A PatchMatch-based Dense-field Algorithm for Video Copy-Move Detection and Localization
We propose a new algorithm for the reliable detection and localization of
video copy-move forgeries. Discovering well-crafted video copy-moves may be
very difficult, especially when some uniform background is copied to occlude
foreground objects. To reliably detect both additive and occlusive copy-moves
we use a dense-field approach, with invariant features that guarantee
robustness to several post-processing operations. To limit complexity, a
suitable video-oriented version of PatchMatch is used, with a multiresolution
search strategy, and a focus on volumes of interest. Performance assessment
relies on a new dataset, designed ad hoc, with realistic copy-moves and a wide
variety of challenging situations. Experimental results show the proposed
method to detect and localize video copy-moves with good accuracy even in
adverse conditions.
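At the core of the dense-field approach is a PatchMatch-style randomized nearest-neighbour search. The sketch below is a minimal single-image, single-scale variant on a toy example: it uses raw patch differences and uniform random re-seeding instead of the paper's invariant features, multiresolution strategy, and video volumes.

```python
import numpy as np

rng = np.random.default_rng(0)
P = 3  # toy patch size; the paper uses richer invariant dense features

def cost(img, y, x, dy, dx):
    # Sum of squared differences between the PxP patches at (y, x) and
    # (y+dy, x+dx); the trivial zero offset is excluded.
    H, W = img.shape
    ty, tx = y + dy, x + dx
    if (dy == 0 and dx == 0) or not (0 <= ty <= H - P and 0 <= tx <= W - P):
        return np.inf
    d = img[y:y + P, x:x + P] - img[ty:ty + P, tx:tx + P]
    return float((d * d).sum())

def patchmatch(img, iters=6, restarts=8):
    H, W = img.shape
    h, w = H - P + 1, W - P + 1
    nnf = np.zeros((h, w, 2), dtype=int)   # offset field, improved in place
    best = np.full((h, w), np.inf)
    for it in range(iters):
        fwd = it % 2 == 0                  # alternate scan direction
        step = 1 if fwd else -1
        for y in (range(h) if fwd else range(h - 1, -1, -1)):
            for x in (range(w) if fwd else range(w - 1, -1, -1)):
                # Propagation: adopt a scan-order neighbour's offset if better.
                for ny, nx in ((y - step, x), (y, x - step)):
                    if 0 <= ny < h and 0 <= nx < w:
                        c = cost(img, y, x, *nnf[ny, nx])
                        if c < best[y, x]:
                            best[y, x], nnf[y, x] = c, nnf[ny, nx]
                # Random search (simplified to uniform target re-seeding).
                for _ in range(restarts):
                    ty, tx = int(rng.integers(0, h)), int(rng.integers(0, w))
                    c = cost(img, y, x, ty - y, tx - x)
                    if c < best[y, x]:
                        best[y, x], nnf[y, x] = c, (ty - y, tx - x)
    return nnf, best

# Simulate an additive copy-move: paste a textured block 8 columns away.
img = rng.random((16, 16))
img[4:12, 1:8] = img[4:12, 9:16]
nnf, best = patchmatch(img)
# Zero-cost self-matches reveal the duplicated region and its displacement.
hits = np.argwhere(best < 1e-12)
```

Propagation is what makes the search cheap: once one patch inside the duplicated region finds the true displacement, its neighbours inherit it in the next scans.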
An Evaluation of Popular Copy-Move Forgery Detection Approaches
A copy-move forgery is created by copying and pasting content within the same
image, and potentially post-processing it. In recent years, the detection of
copy-move forgeries has become one of the most actively researched topics in
blind image forensics. A considerable number of different algorithms have been
proposed focusing on different types of postprocessed copies. In this paper, we
aim to answer which copy-move forgery detection algorithms and processing steps
(e.g., matching, filtering, outlier detection, affine transformation
estimation) perform best in various postprocessing scenarios. The focus of our
analysis is to evaluate the performance of previously proposed feature sets. We
achieve this by casting existing algorithms in a common pipeline. In this
paper, we examined the 15 most prominent feature sets. We analyzed the
detection performance on a per-image basis and on a per-pixel basis. We created
a challenging real-world copy-move dataset, and a software framework for
systematic image manipulation. Experiments show that the keypoint-based
features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA and
Zernike features, perform very well. These feature sets exhibit the best
robustness against various noise sources and downsampling, while reliably
identifying the copied regions.
Comment: Main paper: 14 pages, supplemental material: 12 pages; main paper
appeared in IEEE Transactions on Information Forensics and Security
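The common pipeline into which the evaluation casts all algorithms can be illustrated with its simplest block-based variant: dense features, lexicographic matching, and shift-consistency filtering. Raw pixel blocks stand in here for the DCT/DWT/PCA/Zernike features, and the block size and vote threshold are illustrative.

```python
import numpy as np
from collections import Counter

B = 4  # toy block size; evaluations typically use 16x16 blocks

def detect_copy_move(img, min_votes=8):
    H, W = img.shape
    feats, pos = [], []
    # 1) Feature extraction: one feature vector per overlapping block
    #    (raw pixels here; DCT/PCA/Zernike coefficients in the literature).
    for y in range(H - B + 1):
        for x in range(W - B + 1):
            feats.append(img[y:y + B, x:x + B].ravel())
            pos.append((y, x))
    feats = np.array(feats)
    # 2) Matching: lexicographic sort puts identical features side by side.
    order = np.lexsort(feats.T[::-1])
    shifts, pairs = Counter(), []
    for a, b in zip(order[:-1], order[1:]):
        if np.array_equal(feats[a], feats[b]):
            s = (pos[b][0] - pos[a][0], pos[b][1] - pos[a][1])
            s = s if s >= (0, 0) else (-s[0], -s[1])  # canonical sign
            shifts[s] += 1
            pairs.append((pos[a], pos[b], s))
    # 3) Filtering: keep pairs that agree on a dominant shift vector,
    #    discarding isolated accidental matches as outliers.
    good = {s for s, n in shifts.items() if n >= min_votes}
    return [(pa, pb) for pa, pb, s in pairs if s in good]

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (32, 32))
img[5:15, 3:13] = img[5:15, 18:28]  # copy-move a 10x10 region 15 columns over
matches = detect_copy_move(img)
```

Keypoint-based variants replace step 1 with SIFT/SURF descriptors and step 3 with affine-transformation estimation, but the extract-match-filter structure is the same.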
Two-View Matching with View Synthesis Revisited
We consider wide-baseline matching, focusing on problems with extreme
viewpoint change. We introduce the use of view synthesis with affine-covariant
detectors to solve such problems and show that matching with the Hessian-Affine
or MSER detectors outperforms the state-of-the-art ASIFT.
To minimise the loss of speed caused by view synthesis, we propose the
Matching On Demand with view Synthesis algorithm (MODS) that uses progressively
more synthesized images and more (time-consuming) detectors until reliable
estimation of geometry is possible. We show experimentally that the MODS
algorithm solves problems beyond the state-of-the-art and yet is comparable in
speed to standard wide-baseline matchers on simpler problems.
Minor contributions include an improved method for tentative correspondence
selection, applicable both with and without view synthesis, and a
view-synthesis setup that greatly improves MSER robustness to blur and scale
change while increasing its running time by only 10%.
Comment: 25 pages, 14 figures
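A sketch of the tentative-correspondence idea: the classic ratio test compares the nearest and second-nearest descriptor distances, and one way to improve it, in the spirit of the paper's selection method, is to skip second neighbours that are spatially near-duplicates of the best match. The descriptors, positions, and thresholds below are all illustrative.

```python
import numpy as np

def tentative_matches(d1, d2, p2, ratio=0.8, min_sep=5.0):
    # For each descriptor in d1, find its nearest neighbour in d2, then run
    # the ratio test against the first neighbour that lies at least min_sep
    # away from it, so near-duplicate detections do not cause false rejection.
    matches = []
    for i, d in enumerate(d1):
        dist = np.linalg.norm(d2 - d, axis=1)
        order = np.argsort(dist)
        best = int(order[0])
        second = next((j for j in order[1:]
                       if np.linalg.norm(p2[j] - p2[best]) > min_sep), None)
        if second is not None and dist[best] < ratio * dist[second]:
            matches.append((i, best))
    return matches

desc1 = np.array([[1.0, 0.0, 0.0]])
desc2 = np.array([[0.9, 0.10, 0.0],   # true match
                  [0.9, 0.11, 0.0],   # near-duplicate detection close by
                  [0.0, 0.0, 1.0]])
pos2 = np.array([[10.0, 10.0], [10.4, 10.0], [40.0, 40.0]])

# With min_sep=0 this degenerates to the plain second-nearest ratio test,
# which the near-duplicate detection causes to reject the correct match.
matches = tentative_matches(desc1, desc2, pos2)
```

This matters under view synthesis because synthesized views multiply the number of near-identical detections around each physical feature.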
Anatomy-specific classification of medical images using deep convolutional nets
Automated classification of human anatomy is an important prerequisite for
many computer-aided diagnosis systems. The spatial complexity and variability
of anatomy throughout the human body makes classification difficult. "Deep
learning" methods such as convolutional networks (ConvNets) outperform other
state-of-the-art methods in image classification tasks. In this work, we
present a method for organ- or body-part-specific anatomical classification of
medical images acquired using computed tomography (CT) with ConvNets. We train
a ConvNet, using 4,298 separate axial 2D key-images to learn 5 anatomical
classes. Key-images were mined from a hospital PACS archive, using a set of
1,675 patients. We show that a data augmentation approach can help to enrich
the data set and improve classification performance. Using ConvNets and data
augmentation, we achieve an anatomy-specific classification error of 5.9% and
an average area-under-the-curve (AUC) value of 0.998 in testing. We
demonstrate that deep learning can be used to train very reliable and accurate
classifiers that could initialize further computer-aided diagnosis.
Comment: Presented at: 2015 IEEE International Symposium on Biomedical
Imaging, April 16-19, 2015, New York Marriott at Brooklyn Bridge, NY, US
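The data-augmentation idea can be sketched as generating several randomly shifted, intensity-jittered crops per 2D key-image. The crop size, shift range, and jitter scale below are assumptions for illustration; the paper's exact augmentation settings are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, n_copies=4, crop=56, max_shift=4):
    # Enlarge the training set by taking n_copies randomly translated crops
    # of each key-image, each with a small global intensity jitter.
    out = []
    for _ in range(n_copies):
        dy = int(rng.integers(0, 2 * max_shift + 1))
        dx = int(rng.integers(0, 2 * max_shift + 1))
        patch = image[dy:dy + crop, dx:dx + crop].astype(float)
        patch += rng.normal(0.0, 2.0)  # jitter in (assumed) intensity units
        out.append(patch)
    return np.stack(out)

# A stand-in 64x64 "CT key-image"; real inputs come from the PACS archive.
image = rng.integers(0, 256, (64, 64))
batch = augment(image)
```

Each training image thus yields several slightly different examples, which is what lets 4,298 key-images train a ConvNet without severe overfitting.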