Flight Dynamics-based Recovery of a UAV Trajectory using Ground Cameras
We propose a new method to estimate the 6-dof trajectory of a flying object
such as a quadrotor UAV within a 3D airspace monitored using multiple fixed
ground cameras. It is based on a new structure from motion formulation for the
3D reconstruction of a single moving point with known motion dynamics. Our main
contribution is a new bundle adjustment procedure which in addition to
optimizing the camera poses, regularizes the point trajectory using a prior
based on motion dynamics (or specifically flight dynamics). Furthermore, we can
infer the underlying control input sent to the UAV's autopilot that determined
its flight trajectory.
Our method requires neither perfect single-view tracking nor appearance
matching across views. For robustness, we allow the tracker to generate
multiple detections per frame in each video. The true detections and the data
association across videos are estimated using robust multi-view triangulation
and subsequently refined during our bundle adjustment procedure. Quantitative
evaluation on simulated data and experiments on real videos from indoor and
outdoor scenes demonstrate the effectiveness of our method.
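The role of the dynamics prior in the bundle adjustment can be illustrated in miniature. The sketch below (not the paper's formulation) regularizes a noisy 3D point trajectory with a constant-acceleration penalty, solved as a linear least-squares problem; the weight `lam` and the second-difference operator are illustrative stand-ins for a full flight-dynamics prior.

```python
import numpy as np

def smooth_trajectory(observations, lam=10.0):
    """Regularize a noisy 3D point trajectory with a constant-acceleration
    prior: minimize ||x - z||^2 + lam * ||D2 @ x||^2, where D2 takes second
    differences (a crude stand-in for a flight-dynamics prior)."""
    z = np.asarray(observations, dtype=float)   # (T, 3) noisy positions
    T = z.shape[0]
    # Second-difference operator (T-2, T): penalizes acceleration.
    D2 = np.zeros((T - 2, T))
    for t in range(T - 2):
        D2[t, t:t + 3] = [1.0, -2.0, 1.0]
    # Normal equations: (I + lam * D2^T D2) x = z, solved per coordinate.
    A = np.eye(T) + lam * D2.T @ D2
    return np.linalg.solve(A, z)

# Noisy samples of a straight-line flight: the smoother should shrink noise.
rng = np.random.default_rng(0)
truth = np.linspace([0.0, 0.0, 1.0], [10.0, 5.0, 2.0], 50)
noisy = truth + 0.3 * rng.standard_normal(truth.shape)
smoothed = smooth_trajectory(noisy, lam=50.0)
print(np.linalg.norm(smoothed - truth) < np.linalg.norm(noisy - truth))
```

Because the true trajectory here is linear, its second differences vanish and the prior pulls the estimate toward it; the paper's procedure additionally optimizes camera poses and data association jointly.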
Matching and recovering 3D people from multiple views
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
This paper introduces an approach to simultaneously match and recover 3D people from multiple calibrated cameras. To this end, we present an affinity measure between 2D detections across different views that enforces geometric consistency while accounting for uncertainty. This similarity is then exploited by a novel multi-view matching algorithm to cluster the detections; the algorithm is robust against partial observations as well as bad detections, and it does not assume any prior on the number of people in the scene. The multi-view correspondences are then used to efficiently infer the 3D pose of each body by means of a 3D pictorial structure model combined with physico-geometric constraints. Our algorithm is thoroughly evaluated on challenging scenarios in which several human bodies perform different activities involving complex motions, producing large occlusions in some views and noisy observations. We outperform the state of the art in terms of matching and 3D reconstruction.
Peer Reviewed. Postprint (author's final draft).
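A cross-view geometric affinity of this kind can be sketched with a standard epipolar residual; the fundamental-matrix formulation, Gaussian kernel, and width `sigma` below are illustrative assumptions, not the paper's exact uncertainty-aware measure.

```python
import numpy as np

def epipolar_affinity(x1, x2, F, sigma=5.0):
    """Affinity between 2D detections in two calibrated views, based on the
    symmetric distance to the epipolar lines induced by the fundamental
    matrix F. A Gaussian kernel (width sigma, an assumed parameter) maps
    the geometric residual, in pixels, to (0, 1]."""
    p1 = np.append(np.asarray(x1, float), 1.0)  # homogeneous coordinates
    p2 = np.append(np.asarray(x2, float), 1.0)
    l2 = F @ p1           # epipolar line of x1 in view 2
    l1 = F.T @ p2         # epipolar line of x2 in view 1
    d2 = abs(p2 @ l2) / np.hypot(l2[0], l2[1])
    d1 = abs(p1 @ l1) / np.hypot(l1[0], l1[1])
    return np.exp(-(d1 + d2) ** 2 / (2.0 * sigma ** 2))

# Rectified stereo pair: epipolar lines are horizontal, so matching
# detections share the same row coordinate.
F = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
a_match = epipolar_affinity([100, 50], [140, 50], F)   # same row: high
a_miss = epipolar_affinity([100, 50], [140, 90], F)    # off-row: low
print(a_match > 0.99, a_miss < 0.01)
```

In a matching algorithm, such pairwise affinities populate the matrix that the clustering step partitions into per-person groups.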
3D-Matched-Filter Galaxy Cluster Finder I: Selection Functions and CFHTLS Deep Clusters
We present an optimised galaxy cluster finder, 3D-Matched-Filter (3D-MF),
which utilises galaxy cluster radial profiles, luminosity functions and
redshift information to detect galaxy clusters in optical surveys. This method
is an improvement over other matched-filter methods, most notably through
implementing redshift slicing of the data to significantly reduce line-of-sight
projections and related false positives. We apply our method to the
Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) Deep fields, finding ~170
galaxy clusters per square degree in the 0.2 <= z <= 1.0 redshift range. Future
surveys such as LSST and JDEM can exploit 3D-MF's automated methodology to
produce complete and reliable galaxy cluster catalogues. We determine the
reliability and accuracy of the statistical approach of our method through a
thorough analysis of mock data from the Millennium Simulation. We detect
clusters with 100% completeness for M_200 >= 3.0x10^(14)M_sun, 88% completeness
for M_200 >= 1.0x10^(14)M_sun, and 72% completeness well into the 10^(13)M_sun
cluster mass range. We show a 36% multiple detection rate for cluster masses >=
1.5x10^(13)M_sun and a 16% false detection rate for galaxy clusters >~
5x10^(13)M_sun, reporting that for clusters with masses <~ 5x10^(13)M_sun false
detections may increase up to ~24%. Utilising these selection functions we
conclude that our galaxy cluster catalogue is the most complete CFHTLS Deep
cluster catalogue to date.
Comment: 18 pages, 17 figures, 5 tables; v2: added Fig 5, minor edits to match version published in MNRAS
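The matched-filter detection step can be illustrated in one dimension: within a single redshift slice, convolve galaxy counts with a cluster radial profile and keep local maxima above a threshold. The profile shape, threshold, and injected cluster below are toy choices, not 3D-MF's calibrated ones.

```python
import numpy as np

def radial_kernel(size=7, r_c=2.0):
    # Truncated 1/sqrt(1 + (r/r_c)^2) projected profile (a toy choice),
    # normalized to unit sum.
    r = np.abs(np.arange(size) - size // 2)
    k = 1.0 / np.sqrt(1.0 + (r / r_c) ** 2)
    return k / k.sum()

def detect_clusters(counts, threshold):
    """Matched-filter a 1D galaxy-count strip within one redshift slice and
    return indices of local maxima above threshold (illustrative sketch)."""
    k = radial_kernel()
    score = np.convolve(counts - counts.mean(), k, mode="same")
    peaks = [i for i in range(1, len(score) - 1)
             if score[i] > threshold
             and score[i] >= score[i - 1] and score[i] >= score[i + 1]]
    return peaks, score

# Uniform Poisson background (~2 galaxies per cell) plus one injected
# overdensity of ~60 galaxies centered at index 100.
rng = np.random.default_rng(1)
counts = rng.poisson(2.0, 200).astype(float)
counts[95:106] += radial_kernel(11, 2.0) * 60.0
peaks, score = detect_clusters(counts, threshold=2.0)
print(any(abs(p - 100) <= 2 for p in peaks))
```

Redshift slicing corresponds to running this filter slice by slice, which is what suppresses the line-of-sight projections the abstract mentions.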
Achieving Low-Complexity Maximum-Likelihood Detection for the 3D MIMO Code
The 3D MIMO code is a robust and efficient space-time block code (STBC) for
distributed MIMO broadcasting, but it suffers from high maximum-likelihood (ML)
decoding complexity. In this paper, we first analyze some properties of the 3D
MIMO code to show that the 3D MIMO code is fast-decodable. It is proved that
the ML decoding performance can be achieved with a complexity of O(M^{4.5})
instead of O(M^8) in quasi-static channels with M-ary square QAM modulations.
Consequently, we propose a simplified ML decoder exploiting the unique
properties of the 3D MIMO code. Simulation results show that the proposed
simplified ML decoder achieves much lower processing latency than the
classical sphere decoder with Schnorr-Euchner enumeration.
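The complexity at issue comes from exhaustive ML detection, which scores all M^n candidate symbol vectors; fast-decodable codes like the one analyzed here avoid this by conditioning on a subset of symbols. Below is a minimal exhaustive-ML baseline (QPSK, two streams, noiseless channel; all parameters illustrative), not the paper's simplified decoder.

```python
import numpy as np
from itertools import product

# Unit-energy QPSK constellation (M = 4).
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def ml_detect(y, H, constellation=QPSK):
    """Exhaustive maximum-likelihood detection for an n-stream MIMO system:
    scores all M^n symbol vectors — exactly the cost that fast-decodable
    codes reduce by solving part of the search in closed form."""
    n = H.shape[1]
    best, best_cost = None, np.inf
    for s in product(constellation, repeat=n):
        s = np.array(s)
        cost = np.linalg.norm(y - H @ s) ** 2   # Euclidean ML metric
        if cost < best_cost:
            best, best_cost = s, cost
    return best

rng = np.random.default_rng(2)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
s_true = QPSK[[0, 3]]
y = H @ s_true                      # noiseless received vector
print(np.allclose(ml_detect(y, H), s_true))
```

For n = 8 symbols this loop would visit M^8 candidates, which is the baseline complexity the O(M^{4.5}) result improves upon.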
Photon-Efficient Computational 3D and Reflectivity Imaging with Single-Photon Detectors
Capturing depth and reflectivity images at low light levels from active
illumination of a scene has wide-ranging applications. Conventionally, even
with single-photon detectors, hundreds of photon detections are needed at each
pixel to mitigate Poisson noise. We develop a robust method for estimating
depth and reflectivity using on the order of 1 detected photon per pixel
averaged over the scene. Our computational imager combines physically accurate
single-photon counting statistics with exploitation of the spatial correlations
present in real-world reflectivity and 3D structure. Experiments conducted in
the presence of strong background light demonstrate that our computational
imager is able to accurately recover scene depth and reflectivity, while
traditional maximum-likelihood based imaging methods lead to estimates that are
highly noisy. Our framework increases photon efficiency 100-fold over
traditional processing and also modestly improves upon first-photon imaging
under a total acquisition-time constraint in raster-scanned operation. Thus our
new imager will be useful for rapid, low-power, and noise-tolerant active
optical imaging, and its fixed dwell time will facilitate parallelization
through use of a detector array.
Comment: 11 pages, 8 figures
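The gap between pixelwise ML and regularized estimation can be sketched as follows: with roughly one detected photon per pixel, the per-pixel ML depth estimate is ruined by background detections, while even a crude spatial prior (here a 3x3 median filter standing in for the paper's regularization) suppresses them. The scene, timing jitter, and background model below are illustrative assumptions.

```python
import numpy as np

def pixelwise_ml(counts, times):
    """Per-pixel ML estimates under a Poisson model: reflectivity is
    proportional to the photon count; depth comes from the time of flight
    (times in seconds, speed of light c = 3e8 m/s, round trip halved)."""
    c = 3e8
    refl = counts.astype(float)
    depth = np.where(counts > 0, times, np.nan) * c / 2.0
    return refl, depth

def median3(img):
    # 3x3 median filter with edge replication: a crude spatial prior that
    # exploits the correlations present in real-world 3D structure.
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stacked = np.stack([pad[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return np.nanmedian(stacked, axis=0)

# Flat scene at 10 m depth, one photon per pixel, small timing jitter, and
# 10% of pixels replaced by uniformly-timed background detections.
rng = np.random.default_rng(3)
shape = (32, 32)
d_true = np.full(shape, 10.0)
t = 2.0 * d_true / 3e8 + 1e-10 * rng.standard_normal(shape)
bg = rng.random(shape) < 0.1
t[bg] = rng.random(bg.sum()) * 1e-7
counts = np.ones(shape, dtype=int)
refl, depth = pixelwise_ml(counts, t)
depth_s = median3(depth)
print(np.abs(depth_s - d_true).mean() < np.abs(depth - d_true).mean())
```

The paper's imager uses physically accurate single-photon statistics and a richer spatial model than a median filter, but the qualitative effect is the same: spatial structure rescues estimates that are hopeless pixel by pixel.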