Exploring strong-field deviations from general relativity via gravitational waves
Two new observational windows have been opened to strong gravitational
physics: gravitational waves, and very long baseline interferometry. This
suggests observational searches for new phenomena in this regime, and in
particular for those necessary to make black hole evolution consistent with
quantum mechanics. We describe possible features of "compact quantum objects"
that replace classical black holes in a consistent quantum theory, and
approaches to observational tests for these using gravitational waves. This is
an example of a more general problem of finding consistent descriptions of
deviations from general relativity, which can be tested via gravitational wave
detection. Simple models for compact modifications to classical black holes are
described via an effective stress tensor, possibly with an effective equation
of state. A general discussion is given of possible observational signatures,
and of their dependence on properties of the colliding objects. The possibility
that departures from classical behavior are restricted to the near-horizon
regime raises the question of whether these will be obscured in gravitational
wave signals, due to their mutual interaction in a binary coalescence being
deep in the mutual gravitational well. Numerical simulation with such simple
models will be useful to clarify the sensitivity of gravitational wave
observation to such highly compact departures from classical black holes.Comment: 20 pages, 9 figures. v2: references and CERN preprint number adde
Video matching using DC-image and local features
This paper presents a framework for video matching based on local features extracted from the DC-image of MPEG compressed videos, without decompression. The relevant arguments and supporting evidence are discussed for developing video similarity techniques that work directly on compressed videos, without decompression, especially utilising small-size images. Two experiments are carried out to support the above. The first compares the DC-image and the I-frame in terms of matching performance and the corresponding computational complexity. The second experiment compares using local features against global features in video matching, especially in the compressed domain and with small-size images. The results confirmed that the use of the DC-image, despite its highly reduced size, is promising, as it produces at least similar (if not better) matching precision compared to the full I-frame. Also, using SIFT as a local feature outperforms the precision of most standard global features. On the other hand, its computational complexity is relatively higher, but still within the real-time margin. Various optimisations can also be applied to improve this computational complexity.
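The DC-image idea above is easy to illustrate: the DC coefficient of each 8x8 DCT block is proportional to that block's pixel mean, so a DC-image is essentially a block-averaged thumbnail of the frame. The numpy sketch below (our own illustration, not the paper's code; `dc_image` is a hypothetical helper) builds that thumbnail from an uncompressed frame; in the compressed-domain setting of the paper, the DC coefficients are instead read directly from the MPEG stream without decompression.

```python
import numpy as np

def dc_image(frame, block=8):
    """Approximate the DC-image of a frame: one value per 8x8 block.

    The DC coefficient of an 8x8 DCT block equals 8x the block mean,
    so block-averaging reproduces the DC-image up to a constant scale,
    without running a full DCT.
    """
    h, w = frame.shape
    h, w = h - h % block, w - w % block  # crop to a multiple of the block size
    blocks = frame[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))

frame = np.arange(64 * 64, dtype=float).reshape(64, 64)
dc = dc_image(frame)
print(dc.shape)  # (8, 8): a 64x reduction in pixel count
```

The 8x8 result here shows the size reduction the paper exploits: matching (e.g. SIFT on the DC-image) then operates on two orders of magnitude fewer pixels than the full I-frame.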
Review of Person Re-identification Techniques
Person re-identification across different surveillance cameras with disjoint
fields of view has become one of the most interesting and challenging subjects
in the area of intelligent video surveillance. Although several methods have
been developed and proposed, certain limitations and unresolved issues remain.
In all of the existing re-identification approaches, feature vectors are
extracted from segmented still images or video frames. Different similarity or
dissimilarity measures have been applied to these vectors. Some methods have
used simple constant metrics, whereas others have utilised models to obtain
optimised metrics. Some have created models based on local colour or texture
information, and others have built models based on the gait of people. In
general, the main objective of all these approaches is to achieve a
higher accuracy rate and lower computational costs. This study summarises
several developments in recent literature and discusses the various available
methods used in person re-identification. Specifically, their advantages and
disadvantages are mentioned and compared.
Comment: Published 201
Phenomenology of D-Brane Inflation with General Speed of Sound
A characteristic of D-brane inflation is that fluctuations in the inflaton
field can propagate at a speed significantly less than the speed of light. This
yields observable effects that are distinct from those of single-field slow
roll inflation, such as a modification of the inflationary consistency relation
and a potentially large level of non-Gaussianities. We present a numerical
algorithm that extends the inflationary flow formalism to models with general
speed of sound. For an ensemble of D-brane inflation models parameterized by
the Hubble parameter and the speed of sound as polynomial functions of the
inflaton field, we give qualitative predictions for the key inflationary
observables. We discuss various consistency relations for D-brane inflation,
and compare the qualitative shapes of the warp factors we derive from the
numerical models with analytical warp factors considered in the literature.
Finally, we derive and apply a generalized microphysical bound on the inflaton
field variation during brane inflation. While a large number of models are
consistent with current cosmological constraints, almost all of these models
violate the compactification constraint on the field range in four-dimensional
Planck units. If the field range bound is to hold, then models with a
detectable level of non-Gaussianity predict a blue scalar spectral index, and a
tensor component that is far below the detection limit of any future
experiment.
Comment: 23 pages, 11 figures. v2: version accepted by PRD; minor clarifications and references added to the text. Higher resolution figures are available in the published version. v3: post-publication correction of typo in Eq. 87. No results/conclusions change.
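The "modification of the inflationary consistency relation" mentioned in the abstract has a standard lowest-order form in the literature (this is the general result for a small, slowly varying sound speed, not a reproduction of the paper's own equations):

```latex
% Standard single-field slow-roll consistency relation: r = -8 n_t.
% With a general sound speed c_s, to lowest order in slow roll,
r = 16\,\epsilon\,c_s, \qquad n_t = -2\epsilon
\quad\Longrightarrow\quad r = -8\,c_s\,n_t .
```

Since $c_s < 1$ in D-brane models, a measured violation of $r = -8 n_t$ is one of the distinct observable effects the abstract refers to.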
Fast and Accurate 3D Face Recognition Using Registration to an Intrinsic Coordinate System and Fusion of Multiple Region classifiers
In this paper, we present a new robust approach for 3D face registration to an intrinsic coordinate system of the face. The intrinsic coordinate system is defined by the vertical symmetry plane through the nose, the tip of the nose, and the slope of the bridge of the nose. In addition, we propose a 3D face classifier based on the fusion of many dependent region classifiers for overlapping face regions. The region classifiers use PCA-LDA for feature extraction and the likelihood ratio as a matching score. Fusion is realised using straightforward majority voting for the identification scenario. For verification, a voting approach is used as well, and the decision is made by comparing the number of votes to a threshold. Using the proposed registration method combined with a classifier consisting of 60 fused region classifiers, we obtain a 99.0% identification rate on the all-vs-first identification test of the FRGC v2 data. A verification rate of 94.6% at FAR=0.1% was obtained for the all-vs-all verification test on the FRGC v2 data using fusion of 120 region classifiers. The first is the highest reported performance, and the second is in the top 5 of best-performing systems on these tests. In addition, our approach is much faster than other methods, taking only 2.5 seconds per image for registration and less than 0.1 ms per comparison. Because we apply feature extraction using PCA and LDA, the resulting template size is also very small: 6 kB for 60 region classifiers.
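The two voting rules described above are simple enough to sketch. Assuming each region classifier emits a likelihood-ratio score (the names `identify`, `verify`, and the toy scores and thresholds below are our own illustration, not the paper's code), fusion by voting looks like:

```python
import numpy as np

def identify(score_matrix):
    """Identification by majority voting: each region classifier votes for
    its best-matching gallery identity; the identity with most votes wins.

    score_matrix: (num_regions, num_gallery) likelihood-ratio scores.
    """
    votes = np.argmax(score_matrix, axis=1)  # each region's top gallery match
    return np.bincount(votes).argmax()

def verify(scores, score_threshold, vote_threshold):
    """Verification: each region votes when its likelihood-ratio score for
    the claimed identity clears score_threshold; accept when enough agree."""
    votes = sum(s >= score_threshold for s in scores)
    return votes >= vote_threshold

scores = np.array([[0.2, 0.9, 0.1],
                   [0.8, 0.3, 0.4],
                   [0.1, 0.7, 0.2]])
print(identify(scores))  # identity 1 wins with 2 of 3 region votes
print(verify([1.2, 0.4, 2.0], score_threshold=1.0, vote_threshold=2))  # True
```

A design point worth noting from the abstract: because each vote only needs a small PCA-LDA template per region, per-comparison cost stays below 0.1 ms even with 60-120 fused regions.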
3D Registration of Aerial and Ground Robots for Disaster Response: An Evaluation of Features, Descriptors, and Transformation Estimation
Global registration of heterogeneous ground and aerial mapping data is a
challenging task. This is especially difficult in disaster response scenarios
when we have no prior information on the environment and cannot assume the
regular order of man-made environments or meaningful semantic cues. In this
work we extensively evaluate different approaches to globally register UGV
generated 3D point-cloud data from LiDAR sensors with UAV generated point-cloud
maps from vision sensors. The approaches are realizations of different
selections for: a) local features: key-points or segments; b) descriptors:
FPFH, SHOT, or ESF; and c) transformation estimations: RANSAC or FGR.
Additionally, we compare the results against standard approaches like applying
ICP after a good prior transformation has been given. The evaluation criteria
include the distance which a UGV needs to travel to successfully localize, the
registration error, and the computational cost. In this context, we report our
findings on effectively performing the task on two new Search and Rescue
datasets. Our results have the potential to help the community take informed
decisions when registering point-cloud maps from ground robots to those from
aerial robots.
Comment: Awarded Best Paper at the 15th IEEE International Symposium on Safety, Security, and Rescue Robotics 2017 (SSRR 2017).
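Whichever combination of features, descriptors, and estimator (RANSAC or FGR) is evaluated above, the pipeline ultimately produces a rigid transform aligning the UGV and UAV point clouds, and the closed-form ingredient common to such estimators is the least-squares rigid transform from a set of putative correspondences. A minimal numpy sketch of that step (the standard Kabsch/Umeyama SVD solution, shown as a general technique rather than the paper's evaluation code):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding 3D points.
    Returns rotation R (3x3) and translation t (3,) minimising
    sum ||R @ src_i + t - dst_i||^2 (Kabsch/Umeyama).
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper solution so R is a rotation, not a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy check: recover a known rotation and translation from exact matches.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R, t = rigid_transform(pts, pts @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

In the evaluated pipelines this estimate is not applied to raw correspondences directly: RANSAC runs it on sampled subsets and keeps the hypothesis with most inliers, while ICP re-runs it on re-computed nearest-neighbour matches until convergence.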