Sequential Optimization for Efficient High-Quality Object Proposal Generation
We are motivated by the need for a generic object proposal generation
algorithm that achieves a good balance between object detection recall, proposal
localization quality, and computational efficiency. We propose a novel object
proposal algorithm, BING++, which inherits the virtue of good computational
efficiency of BING but significantly improves its proposal localization
quality. At a high level, we formulate the problem of object proposal generation
from a novel probabilistic perspective, based on which our BING++ manages to
improve the localization quality by employing edges and segments to estimate
object boundaries and update the proposals sequentially. We propose learning
the parameters efficiently by searching for approximate solutions in a
quantized parameter space for complexity reduction. We demonstrate the
generalization of BING++ with the same fixed parameters across different object
classes and datasets. Empirically, BING++ runs at roughly half the speed of
BING on CPU but improves localization quality significantly, by 18.5% and 16.7%
on the VOC2007 and Microsoft COCO datasets, respectively. Compared with other
state-of-the-art approaches, BING++ achieves comparable performance but runs
significantly faster. Comment: Accepted by TPAMI
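The core refinement idea — tightening proposal boundaries using an edge map — can be sketched as follows. This is an illustrative toy, not the authors' implementation: the function name `refine_proposal`, the `margin` search window, and the snap-to-strongest-edge rule are our own simplifications of the "employ edges to estimate object boundaries" step.

```python
import numpy as np

def refine_proposal(box, edge_map, margin=4):
    """Snap one (x1, y1, x2, y2) proposal to nearby strong edges.

    Illustrative only: each side of the box is moved, within a small
    window, to the row/column with the largest summed edge response,
    mimicking the idea of using edges to tighten object boundaries.
    """
    x1, y1, x2, y2 = box
    h, w = edge_map.shape

    def best_col(lo, hi, y_lo, y_hi):
        lo, hi = max(0, lo), min(w, hi)
        strengths = edge_map[y_lo:y_hi, lo:hi].sum(axis=0)
        return lo + int(np.argmax(strengths))

    def best_row(lo, hi, x_lo, x_hi):
        lo, hi = max(0, lo), min(h, hi)
        strengths = edge_map[lo:hi, x_lo:x_hi].sum(axis=1)
        return lo + int(np.argmax(strengths))

    # Refine the vertical sides first, then the horizontal sides
    # using the already-updated x coordinates (sequential updates).
    x1 = best_col(x1 - margin, x1 + margin + 1, y1, y2)
    x2 = best_col(x2 - margin, x2 + margin + 1, y1, y2)
    y1 = best_row(y1 - margin, y1 + margin + 1, x1, x2)
    y2 = best_row(y2 - margin, y2 + margin + 1, x1, x2)
    return (x1, y1, x2, y2)
```

On a synthetic edge map containing a rectangular outline, a loose box snaps onto the outline; the real algorithm additionally uses segments and a learned, quantized parameter search.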
Wormholes and masses for Goldstone bosons
There exist non-trivial stationary points of the Euclidean action for an
axion particle minimally coupled to Einstein gravity, dubbed wormholes. They
explicitly break the continuous global shift symmetry of the axion in a
non-perturbative way, and generate an effective potential that may compete with
QCD depending on the value of the axion decay constant. In this paper, we
explore both theoretical and phenomenological aspects of this issue. On the
theory side, we address the problem of stability of the wormhole solutions, and
we show that the spectrum of the quadratic action features only positive
eigenvalues. On the phenomenological side, we discuss, beside the obvious
application to the QCD axion, relevant consequences for models with ultralight
dark matter, black hole superradiance, and the relaxation of the electroweak
scale. We conclude discussing wormhole solutions for a generic coset and the
potential they generate. Comment: 50 pages, 15 figures. v2: minor changes, refs added
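The mechanism described above is commonly summarized by a schematic formula; the notation below is ours, not necessarily the paper's, and only the parametric form is intended:

```latex
% Schematic wormhole-induced axion potential: the shift symmetry is
% broken by terms exponentially suppressed by the wormhole action S_w,
% which grows as the decay constant f decreases.
V(\phi) \;\sim\; \Lambda^4 \, e^{-S_w}
  \left[ 1 - \cos\!\left(\frac{\phi}{f}\right) \right],
\qquad S_w \sim \frac{M_{\mathrm{Pl}}}{f}.
```

Whether this competes with the QCD-generated potential then depends sensitively on f, which is the phenomenological question the paper addresses.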
Sample selection via clustering to construct support vector-like classifiers
This paper explores the possibility of constructing RBF classifiers which, somewhat like support vector machines, use a reduced number of samples as centroids, by selecting those samples directly. Because sample selection is a hard computational problem, the selection is performed after a preliminary vector quantization; in this way we also obtain other, similar machines whose centroids are selected from those learned in a supervised manner. Several ways of designing these machines are considered, in particular with respect to sample selection, as well as different criteria to train them. Simulation results for well-known classification problems show very good performance of the corresponding designs, improving on that of support vector machines while substantially reducing the number of units. This justifies our interest in selecting samples (or centroids) efficiently. Many new research avenues emerge from these experiments and discussions, as suggested in our conclusions.
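The two-stage design — vector-quantize first, then build an RBF machine on the resulting centroids — can be sketched roughly as follows. This is a generic illustration under our own assumptions (plain k-means with deterministic initialization, least-squares output weights), not the authors' exact selection schemes or training criteria:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means; the centroids play the role of selected 'samples'.

    Initialised from the first k samples for simplicity (deterministic).
    """
    centres = X[:k].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(0)
    return centres

class RBFClassifier:
    """RBF network whose centres come from clustering, not from all samples."""

    def __init__(self, k=4, gamma=1.0):
        self.k, self.gamma = k, gamma

    def _phi(self, X):
        # Gaussian basis responses to each centre.
        d = ((X[:, None, :] - self.centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d)

    def fit(self, X, y):  # y in {-1, +1}
        self.centres = kmeans(X, self.k)
        # Output layer fitted by least squares (one of many possible criteria).
        self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self

    def predict(self, X):
        return np.sign(self._phi(X) @ self.w)
```

On two well-separated clusters this classifies perfectly with only two units, which is the kind of unit-count reduction relative to SVMs that the paper reports.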
Object Detection in 20 Years: A Survey
Object detection, as one of the most fundamental and challenging problems in
computer vision, has received great attention in recent years. Its development
in the past two decades can be regarded as an epitome of computer vision
history. If we think of today's object detection as a technical aesthetics
under the power of deep learning, then turning back the clock 20 years we would
witness the wisdom of the cold-weapon era. This paper extensively reviews 400+
papers of object detection in the light of its technical evolution, spanning
over a quarter-century's time (from the 1990s to 2019). A number of topics have
been covered in this paper, including the milestone detectors in history,
detection datasets, metrics, fundamental building blocks of the detection
system, speed up techniques, and the recent state of the art detection methods.
This paper also reviews some important detection applications, such as
pedestrian detection, face detection, text detection, etc., and makes an in-depth
analysis of their challenges as well as their technical improvements in recent years. Comment: This work has been submitted to the IEEE TPAMI for possible
publication
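Among the fundamental building blocks such a survey covers, the most ubiquitous is the Intersection-over-Union (IoU) overlap between a predicted and a ground-truth box, which underlies both evaluation metrics and non-maximum suppression. A minimal sketch of the standard definition, with boxes as (x1, y1, x2, y2):

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle (may be empty).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5 (VOC) or a range of thresholds (COCO).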
Testing quantum mechanics: a statistical approach
As experiments continue to push the quantum-classical boundary using
increasingly complex dynamical systems, the interpretation of experimental data
becomes more and more challenging: when the observations are noisy, indirect,
and limited, how can we be sure that we are observing quantum behavior? This
tutorial highlights some of the difficulties in such experimental tests of
quantum mechanics, using optomechanics as the central example, and discusses
how the issues can be resolved using techniques from statistics and insights
from quantum information theory. Comment: v1: 2 pages; v2: invited tutorial for Quantum Measurements and
Quantum Metrology, substantial expansion of v1, 19 pages; v3: accepted; v4:
corrected some errors, published
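As a toy instance of the statistical techniques such a tutorial draws on, one can compare two candidate models of a noisy measurement record with a log-likelihood ratio. The Gaussian hypotheses below are our own illustrative choice, not taken from the tutorial:

```python
import math

def log_likelihood(data, mean, var):
    """Gaussian log-likelihood of i.i.d. samples under one hypothesis."""
    return sum(-0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)
               for x in data)

def llr(data, h0, h1):
    """Log-likelihood ratio; positive values favour hypothesis h1.

    h0 and h1 are (mean, variance) pairs, standing in for a 'classical'
    and a 'quantum' model of the same noisy record.
    """
    return log_likelihood(data, *h1) - log_likelihood(data, *h0)
```

In a real test the two hypotheses would be full dynamical models of the experiment, and the ratio (or a Bayesian analogue) quantifies how strongly limited, indirect data favour the quantum description.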
Dynamical Boson Stars
The idea of stable, localized bundles of energy has strong appeal as a model
for particles. In the 1950s John Wheeler envisioned such bundles as smooth
configurations of electromagnetic energy that he called {\em geons}, but none
were found. Instead, particle-like solutions were found in the late 1960s with
the addition of a scalar field, and these were given the name {\em boson
stars}. Since then, boson stars find use in a wide variety of models as sources
of dark matter, as black hole mimickers, in simple models of binary systems,
and as a tool in finding black holes in higher dimensions with only a single
Killing vector. We discuss important varieties of boson stars, their dynamic
properties, and some of their uses, concentrating on recent efforts. Comment: 79 pages, 25 figures, invited review for Living Reviews in
Relativity; major revision in 201
DiverGet: A Search-Based Software Testing Approach for Deep Neural Network Quantization Assessment
Quantization is one of the most applied Deep Neural Network (DNN) compression
strategies, when deploying a trained DNN model on an embedded system or a cell
phone. This is owing to its simplicity and adaptability to a wide range of
applications and circumstances, as opposed to specific Artificial Intelligence
(AI) accelerators and compilers that are often designed only for certain
specific hardware (e.g., Google Coral Edge TPU). With the growing demand for
quantization, ensuring the reliability of this strategy is becoming a critical
challenge. Traditional testing methods, which gather more and more genuine data
for better assessment, are often not practical because of the large size of the
input space and the high similarity between the original DNN and its quantized
counterpart. As a result, advanced assessment strategies have become of
paramount importance. In this paper, we present DiverGet, a search-based
testing framework for quantization assessment. DiverGet defines a space of
metamorphic relations that simulate naturally-occurring distortions on the
inputs. Then, it optimally explores these relations to reveal the disagreements
among DNNs of different arithmetic precision. We evaluate the performance of
DiverGet on state-of-the-art DNNs applied to hyperspectral remote sensing
images. We chose the remote sensing DNNs as they are increasingly deployed
at the edge (e.g., high-lift drones) in critical domains like climate change
research and astronomy. Our results show that DiverGet successfully challenges
the robustness of established quantization techniques against
naturally-occurring shifted data, and outperforms its most recent competitor,
DiffChaser, with a success rate that is (on average) four times higher. Comment: Accepted for publication in The Empirical Software Engineering
Journal (EMSE)
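The disagreement-hunting idea can be sketched in miniature: quantize a toy model's weights, apply input transformations standing in for metamorphic relations, and measure how often full-precision and quantized predictions diverge. Everything here — `quantize`, the transforms, the linear "model" — is our own simplification, not DiverGet's actual search machinery:

```python
import numpy as np

def quantize(w, bits=2):
    """Uniform weight quantization to 2**bits levels (illustrative)."""
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    return lo + np.round((w - lo) / (hi - lo) * levels) / levels * (hi - lo)

def predict(w, X):
    """Toy linear classifier standing in for a DNN."""
    return (X @ w > 0).astype(int)

def disagreement(w, wq, X, transforms):
    """Fraction of transformed inputs on which the two models disagree.

    Each transform plays the role of a metamorphic relation: a
    naturally-plausible distortion under which predictions are probed.
    """
    total = hits = 0
    for t in transforms:
        Xt = t(X)
        d = predict(w, Xt) != predict(wq, Xt)
        hits += d.sum()
        total += d.size
    return hits / total
```

A search-based tool like DiverGet would then optimize over the transform parameters to maximize this disagreement, rather than sampling transforms at random.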