CMS Pixel Detector Upgrade
The present Compact Muon Solenoid silicon pixel tracking system has been
designed for a peak luminosity of 10^34 cm^-2 s^-1 and total dose corresponding to
two years of the Large Hadron Collider (LHC) operation. With the steady
increase of the luminosity expected at the LHC, a new pixel detector with four
barrel layers and three endcap disks is being designed. We will present the key
points of the design: the new geometry, which minimizes the material budget and
increases the number of tracking points, and the development of a fast digital readout
architecture, which ensures readout efficiency even at high rate. The expected
tracking and vertexing performance of the new pixel detector is also
addressed.

Comment: 5 pages, 7 figures, Proceedings of the DPF-2011 Conference,
Providence, RI, August 8-13, 2011
Attention and Anticipation in Fast Visual-Inertial Navigation
We study a Visual-Inertial Navigation (VIN) problem in which a robot needs to
estimate its state using an on-board camera and an inertial sensor, without any
prior knowledge of the external environment. We consider the case in which the
robot can allocate limited resources to VIN, due to tight computational
constraints. Therefore, we answer the following question: under limited
resources, what are the most relevant visual cues to maximize the performance
of visual-inertial navigation? Our approach has four key ingredients. First, it
is task-driven, in that the selection of the visual cues is guided by a metric
quantifying the VIN performance. Second, it exploits the notion of
anticipation, since it uses a simplified model for forward-simulation of robot
dynamics, predicting the utility of a set of visual cues over a future time
horizon. Third, it is efficient and easy to implement, since it leads to a
greedy algorithm for the selection of the most relevant visual cues. Fourth, it
provides formal performance guarantees: we leverage submodularity to prove that
the greedy selection cannot be far from the optimal (combinatorial) selection.
Simulations and real experiments on agile drones show that our approach ensures
state-of-the-art VIN performance while maintaining a lean processing time. In
the easy scenarios, our approach outperforms appearance-based feature selection
in terms of localization errors. In the most challenging scenarios, it enables
accurate visual-inertial navigation while appearance-based feature selection
fails to track the robot's motion during aggressive maneuvers.

Comment: 20 pages, 7 figures, 2 tables
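The abstract does not give the algorithm itself, but the class of method it describes — greedy maximization of a monotone submodular utility over candidate visual cues, with the classical (1 - 1/e) approximation guarantee — can be sketched generically. The function and variable names below are hypothetical illustrations, not the paper's API; the utility metric stands in for the VIN-performance metric the paper defines.

```python
def greedy_select(candidates, utility, budget):
    """Greedily pick up to `budget` cues to maximize a set-function `utility`.

    For a monotone submodular `utility`, this greedy selection achieves at
    least (1 - 1/e) of the optimal value, which is the kind of formal
    guarantee the abstract refers to. `candidates` and `utility` here are
    placeholders for the paper's visual cues and VIN-performance metric.
    """
    selected = []
    remaining = list(candidates)
    for _ in range(budget):
        if not remaining:
            break
        # Pick the candidate with the largest marginal gain in utility.
        best = max(remaining,
                   key=lambda c: utility(selected + [c]) - utility(selected))
        if utility(selected + [best]) - utility(selected) <= 0:
            break  # no candidate improves the utility further
        selected.append(best)
        remaining.remove(best)
    return selected
```

As a toy stand-in for the VIN metric, a coverage utility (size of the union of covered items) is monotone submodular, so the guarantee applies.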