Computation-Communication Trade-offs and Sensor Selection in Real-time Estimation for Processing Networks
Recent advances in electronics are enabling substantial processing to be
performed at each node (robots, sensors) of a networked system. Local
processing enables data compression and may mitigate measurement noise, but it
is slower than processing at a central computer (it entails a larger
computational delay). However, while nodes can process data in parallel,
centralized computation is sequential in nature. On the other hand, if a
node sends raw data to a central computer for processing, it incurs
communication delay. This leads to a fundamental communication-computation
trade-off, where each node has to decide on the optimal amount of preprocessing
in order to maximize the network performance. We consider a network in charge
of estimating the state of a dynamical system and provide three contributions.
First, we provide a rigorous problem formulation for optimal real-time
estimation in processing networks in the presence of delays. Second, we show
that, in the case of a homogeneous network (where all sensors have the same
computation capabilities) that monitors a continuous-time scalar linear system, the optimal
amount of local preprocessing maximizing the network estimation performance can
be computed analytically. Third, we consider the realistic case of a
heterogeneous network monitoring a discrete-time multi-variate linear system
and provide algorithms to decide on suitable preprocessing at each node, and to
select a sensor subset when computational constraints make using all sensors
suboptimal. Numerical simulations show that selecting the sensors is crucial.
Moreover, we show that if the nodes apply the preprocessing policy suggested by
our algorithms, they can largely improve the network estimation performance.
Comment: 15 pages, 16 figures. Accepted journal version.
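The computation-communication trade-off above can be illustrated with a toy delay model (all constants and functional forms below are hypothetical, chosen only so that an interior optimum exists; the paper derives the optimum rigorously for its system model):

```python
import numpy as np

# Hypothetical toy model (not from the paper): each of n nodes picks a
# preprocessing fraction x in [0, 1].  Local computation cost grows
# superlinearly with x (a modeling assumption: deeper compression costs
# more); the transmitted volume shrinks to (1 - 0.8*x) of the raw size;
# the central computer finishes the remaining (1 - x) fraction of the
# work SEQUENTIALLY for all n nodes.
n, C_local, C_comm, C_central = 10, 0.50, 0.20, 0.05

def total_delay(x):
    local = C_local * x**2                 # parallel: one node's share
    comm = (1.0 - 0.8 * x) * C_comm        # compressed transmission
    central = n * (1.0 - x) * C_central    # sequential at the fusion center
    return local + comm + central

xs = np.linspace(0.0, 1.0, 101)
delays = np.array([total_delay(x) for x in xs])
best = xs[delays.argmin()]
print(f"optimal preprocessing fraction: {best:.2f}, delay: {delays.min():.4f}s")
```

Under this toy model neither extreme is optimal: sending raw data (x = 0) pays full communication and sequential central delay, while full local processing (x = 1) pays the full local computation cost.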
On Multi-Step Sensor Scheduling via Convex Optimization
Effective sensor scheduling requires the consideration of long-term effects
and thus optimization over long time horizons. Determining the optimal sensor
schedule, however, is equivalent to solving a binary integer program, which is
computationally demanding for long time horizons and many sensors. For linear
Gaussian systems, two efficient multi-step sensor scheduling approaches are
proposed in this paper. The first approach determines approximate but close to
optimal sensor schedules via convex optimization. The second approach combines
convex optimization with a branch-and-bound search for efficiently determining the optimal
sensor schedule.
Comment: 6 pages, appeared in the proceedings of the 2nd International Workshop on Cognitive Information Processing (CIP), Elba, Italy, June 201
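The binary integer program behind multi-step scheduling can be made concrete with a small exhaustive search (toy matrices, not from the paper; the paper's convex and branch-and-bound methods exist precisely to avoid this enumeration, which grows exponentially with the horizon):

```python
import itertools
import numpy as np

# Toy 2-state linear Gaussian system; exactly one of two sensors is used
# per time step.  All matrices are illustrative placeholders.
A = np.array([[1.0, 0.1], [0.0, 1.0]])        # state transition
Q = 0.01 * np.eye(2)                          # process noise covariance
H = [np.array([[1.0, 0.0]]),                  # sensor 0 observes position
     np.array([[0.0, 1.0]])]                  # sensor 1 observes velocity
R = [np.array([[0.04]]), np.array([[0.01]])]  # measurement noise variances

def schedule_cost(schedule, P0=np.eye(2)):
    """Sum of trace(P) along the horizon under a given sensor schedule."""
    P, cost = P0, 0.0
    for s in schedule:
        P = A @ P @ A.T + Q                   # predict
        S = H[s] @ P @ H[s].T + R[s]
        K = P @ H[s].T @ np.linalg.inv(S)
        P = (np.eye(2) - K @ H[s]) @ P        # update
        cost += np.trace(P)
    return cost

horizon = 6
# the binary integer program, solved by brute force: 2^horizon candidates
best = min(itertools.product([0, 1], repeat=horizon), key=schedule_cost)
print("optimal schedule:", best)
```

For 2 sensors and horizon 6 this is only 64 Riccati recursions, but the count doubles per step, which is why relaxing the binary variables to a convex program (and bounding the search) pays off for long horizons.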
Tracking Target Signal Strengths on a Grid using Sparsity
Multi-target tracking is mainly challenged by the nonlinearity present in the
measurement equation, and the difficulty in fast and accurate data association.
To overcome these challenges, the present paper introduces a grid-based model
in which the state captures target signal strengths on a known spatial grid
(TSSG). This model leads to \emph{linear} state and measurement equations,
which bypass data association and can afford state estimation via
sparsity-aware Kalman filtering (KF). Leveraging the grid-induced sparsity of
the novel model, two types of sparsity-cognizant TSSG-KF trackers are
developed: one effects sparsity through $\ell_1$-norm regularization, and the
other invokes sparsity as an extra measurement. Iterative extended KF and
Gauss-Newton algorithms are developed for reduced-complexity tracking, along
with accurate error covariance updates for assessing performance of the
resultant sparsity-aware state estimators. Based on TSSG state estimates, more
informative target position and track estimates can be obtained in a follow-up
step, ensuring that track association and position estimation errors do not
propagate back into TSSG state estimates. The novel TSSG trackers do not
require knowing the number of targets or their signal strengths, and exhibit
considerably lower complexity than the benchmark hidden Markov model filter,
especially for a large number of targets. Numerical simulations demonstrate
that sparsity-cognizant trackers enjoy improved root mean-square error
performance at reduced complexity when compared to their sparsity-agnostic
counterparts.
Comment: Submitted to IEEE Trans. on Signal Processing
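The idea of a sparsity-aware KF can be sketched with a standard Kalman update followed by soft-thresholding of the grid state, a simple stand-in for the regularized TSSG-KF described above (illustrative only; the paper's exact iterations and covariance updates differ):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1 (shrinks small entries to zero)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_kf_step(x, P, z, A, Q, H, R, lam=0.05):
    """One predict/update cycle, then a sparsity-promoting shrinkage step."""
    x = A @ x                                 # predict
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R                       # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return soft_threshold(x, lam), P          # zero out empty grid cells

# toy 5-cell grid, one target of strength 1.0 in cell 2, identity dynamics
rng = np.random.default_rng(0)
A, Q = np.eye(5), 0.001 * np.eye(5)
H, R = np.eye(5), 0.01 * np.eye(5)
x, P = np.zeros(5), np.eye(5)
truth = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
for _ in range(20):
    z = truth + 0.1 * rng.standard_normal(5)
    x, P = sparse_kf_step(x, P, z, A, Q, H, R)
print(np.round(x, 2))
```

The shrinkage drives the estimates of empty grid cells to exactly zero, which is what makes the TSSG state directly interpretable as "which cells hold targets" without explicit data association.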
Robust Photogeometric Localization over Time for Map-Centric Loop Closure
Map-centric SLAM is emerging as an alternative to conventional graph-based
SLAM for its accuracy and efficiency in long-term mapping problems. However, in
map-centric SLAM, the process of loop closure differs from that of conventional
SLAM, and an incorrect loop closure is more destructive and not
reversible. In this paper, we present a tightly coupled photogeometric metric
localization for the loop closure problem in map-centric SLAM. In particular,
our method combines complementary constraints from LiDAR and camera sensors,
and validates loop closure candidates with sequential observations. The
proposed method provides a visual evidence-based outlier rejection where
failures caused by either place recognition or localization outliers can be
effectively removed. We demonstrate that the proposed method is not only more
accurate than the conventional global ICP methods but is also robust to
incorrect initial pose guesses.
Comment: To appear in IEEE Robotics and Automation Letters, accepted January 201
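The "conventional global ICP" baseline the abstract compares against can be sketched in a few lines of 2-D point-to-point ICP (illustration only; the paper's photogeometric method additionally fuses camera constraints and validates candidates over sequential observations):

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: nearest-neighbour matching + closed-form rigid fit."""
    # brute-force nearest neighbour in dst for each src point
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Kabsch: best rotation/translation aligning src onto its matches
    mu_s, mu_m = src.mean(0), matched.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return src @ R.T + t

# reference scan: a 10x10 planar grid; query scan: the same grid rotated
# about its centroid by 0.05 rad and shifted (a small, recoverable error)
dst = np.array([[i, j] for i in range(10) for j in range(10)], dtype=float)
c = dst.mean(0)
th = 0.05
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
src = (dst - c) @ R_true.T + c + np.array([0.10, -0.05])
for _ in range(10):
    src = icp_step(src, dst)
print("max residual:", np.abs(src - dst).max())
```

With a good initial guess this converges; with a poor one, nearest-neighbour matching locks onto wrong correspondences, which is exactly the failure mode that motivates the abstract's evidence-based outlier rejection.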
An Adaptive Design Methodology for Reduction of Product Development Risk
An embedded system's interaction with its environment inherently complicates
the understanding of requirements and their correct implementation. However,
product uncertainty is highest during the early stages of development. Design
verification is an essential step in the development of any system, especially
for embedded systems. This paper introduces a novel adaptive design methodology
that incorporates step-wise prototyping and verification. With each adaptive
step, the product-realization level is enhanced while the level of product
uncertainty decreases, thereby reducing the overall costs. The backbone of this
framework is the development of a Domain Specific Operational (DOP) Model and
the associated Verification Instrumentation for Test and Evaluation, developed
based on the DOP model. Together they generate functionally valid test
sequences for carrying out prototype evaluation. The application of this method
is sketched with the help of the case study 'Multimode Detection Subsystem'.
The design
methodologies can be compared by defining and computing a generic performance
criterion like Average design-cycle Risk. For the case study, by computing
Average design-cycle Risk, it is shown that the adaptive method reduces the
product development risk for a small increase in the total design cycle time.
Comment: 21 pages, 9 figures
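A metric in the spirit of Average design-cycle Risk can be sketched with a toy calculation (the formula, numbers, and cost profiles below are hypothetical illustrations for exposition, not the paper's actual definition):

```python
# Hypothetical metric: at each development cycle k, exposure is
# risk_k = (residual product uncertainty) * (cumulative cost committed);
# the average over cycles summarizes risk across the whole design cycle.

def average_cycle_risk(uncertainties, cumulative_costs):
    risks = [u * c for u, c in zip(uncertainties, cumulative_costs)]
    return sum(risks) / len(risks)

# conventional flow: uncertainty stays high until one late verification
conventional = average_cycle_risk([0.9, 0.9, 0.9, 0.2], [10, 30, 60, 100])
# adaptive flow: step-wise prototyping burns uncertainty down early,
# at the price of a slightly longer and costlier design cycle
adaptive = average_cycle_risk([0.9, 0.5, 0.3, 0.1], [12, 35, 65, 105])
print(conventional, adaptive)
```

Even with the adaptive flow's higher per-cycle costs, early uncertainty reduction lowers the average exposure, mirroring the abstract's conclusion that adaptive prototyping trades a small increase in design-cycle time for lower development risk.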