Gossip and Distributed Kalman Filtering: Weak Consensus under Weak Detectability
This paper presents the gossip interactive Kalman filter (GIKF) for
distributed Kalman filtering in networked systems and sensor networks, where
inter-sensor communication and observations occur on the same time scale. The
communication among sensors is random: each sensor occasionally exchanges its
filtering state information with a neighbor, depending on the availability of
the appropriate network link. We show that, under a weak distributed
detectability condition:
1. the GIKF error process remains stochastically bounded, irrespective of the
instability properties of the random process dynamics; and
2. the network achieves \emph{weak consensus}: the conditional
estimation error covariance at a (uniformly) randomly selected sensor converges
in distribution to a unique invariant measure on the space of positive
semi-definite matrices, independent of the initial state.
To prove these results, we interpret the filtered states (estimates and error
covariances) at each node in the GIKF as stochastic particles with local
interactions. We analyze the asymptotic properties of the error process by
studying the associated switched (random) Riccati equation as a random
dynamical system, with the switching dictated by a non-stationary Markov chain
on the network graph.
Comment: Submitted to the IEEE Transactions, 30 pages
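The flavor of the switched Riccati analysis can be caricatured in a two-node sketch. Everything here is an illustrative assumption, not the paper's construction: a scalar unstable plant, one observing node and one blind node (a caricature of weak distributed detectability), and a 50/50 gossip swap of error variances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar plant x_{k+1} = a x_k + w_k with |a| > 1 (unstable).
# Only node 0 takes a measurement y_k = x_k + v_k; node 1 is blind, so
# detectability holds only somewhere in the network. Values are illustrative.
a, q, r = 1.2, 1.0, 1.0

def update_node(p, observes):
    """Switched (random) Riccati step: prediction, plus a measurement
    update only at the observing node."""
    pred = a * p * a + q                  # prediction: a^2 P + q
    if not observes:
        return pred                       # blind node: variance inflates
    gain = pred / (pred + r)
    return (1.0 - gain) * pred            # observing node: variance contracts

# Gossip: with probability 1/2 the two nodes swap their filter states (here,
# just the conditional error variances) before each local Riccati update.
p = np.array([1.0, 1.0])
for _ in range(500):
    if rng.random() < 0.5:
        p = p[::-1]
    p = np.array([update_node(p[0], True), update_node(p[1], False)])
```

Without gossip, the blind node's variance grows like a^{2k}; with random swaps each variance visits the contracting measurement update often enough to stay stochastically bounded, which is the qualitative content of the GIKF result (in the paper the switching is driven by a Markov chain on the network graph, not a coin flip).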
A distributed networked approach for fault detection of large-scale systems
Networked systems present key new challenges in the development of fault diagnosis architectures. This paper proposes a novel distributed networked fault detection methodology for large-scale interconnected systems. The proposed formulation combines a synchronization methodology with a filtering approach in order to reduce the effect of measurement noise and time delays on fault detection performance. The approach allows the monitoring of multi-rate systems, where asynchronous and delayed measurements are available. This is achieved through the development of a virtual sensor scheme with a model-based re-synchronization algorithm and a delay compensation strategy for distributed fault diagnostic units. The monitoring architecture exploits an adaptive approximator with learning capabilities to handle uncertainties in the interconnection dynamics. A consensus-based estimator with time-varying weights is introduced to improve fault detectability in the case of variables shared among more than one subsystem. Furthermore, time-varying threshold functions are designed to prevent false-positive alarms. Sufficient analytical conditions for fault detectability are derived, and extensive simulation results illustrate the effectiveness of the distributed fault detection technique.
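The role of a time-varying threshold can be sketched as follows. The decaying-exponential threshold shape, the constants, and the residual magnitudes are all illustrative assumptions, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(1)

def threshold(t, eps_bar=0.6, eps0=3.0, lam=0.5):
    """Time-varying detection threshold: conservative during the start-up
    transient, decaying to a steady-state bound eps_bar chosen to dominate
    the healthy residual. All constants are illustrative."""
    return eps_bar + (eps0 - eps_bar) * np.exp(-lam * t)

def first_alarm(residuals):
    """Return the first time index where |r_t| exceeds the threshold, or None."""
    for t, r in enumerate(residuals):
        if abs(r) > threshold(t):
            return t
    return None

# Healthy residuals bounded by 0.5 < eps_bar: no false alarm is raised,
# even during the initial transient.
healthy = rng.uniform(-0.5, 0.5, size=50)

# A fault of magnitude 2.0 appearing at t = 30 exceeds the decayed threshold.
faulty = healthy.copy()
faulty[30:] += 2.0
```

The design trade-off is the usual one: a threshold that bounds the healthy residual at every time instant (rather than a constant worst-case bound) avoids false positives during transients without permanently sacrificing detectability.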
Constraining the Black Hole Mass Spectrum with Gravitational Wave Observations I: The Error Kernel
Many scenarios have been proposed for the origin of the supermassive black
holes (SMBHs) that are found in the centres of most galaxies. Many of these
formation scenarios predict a high-redshift population of intermediate-mass
black holes (IMBHs), with masses in the range of 100 to 100,000 times that of
the Sun. A powerful way to observe these IMBHs is via the gravitational waves
that the black
holes emit as they merge. The statistics of the observed black hole population
should, in principle, allow us to discriminate between competing astrophysical
scenarios for the origin and formation of SMBHs. However, gravitational wave
detectors such as LISA will not be able to detect all such mergers, nor to
assign precise black hole parameters to each merger, due to weak gravitational
wave
signal strengths. In order to use LISA observations to infer the statistics of
the underlying population, these errors must be taken into account. We describe
here a method for folding the LISA gravitational wave parameter error estimates
into an `error kernel' designed for use at the population model level. The
effects of this error function are demonstrated by applying it to several
recent models of black hole mergers, and some tentative conclusions are made
about LISA's ability to test scenarios of the origin and formation of
supermassive black holes.
Comment: 22 pages, 4 figures. Various clarifications, typo corrections, and
other changes have been made, partly in response to referee comments. This
second arXiv version has been switched to AASTeX preprint format for better
compatibility with the arXiv. Accepted for publication in MNRAS
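The idea of folding parameter errors into an "error kernel" at the population level can be sketched with a discretized toy model. The binning, the Gaussian mass-scatter width, and the detection efficiencies below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical discretization: an intrinsic merger rate over mass bins, and
# an error kernel K[i, j] = P(observe in bin i | true bin j) that folds
# together detection probability and mass-measurement scatter.
bins = 5
true_rate = np.array([10.0, 8.0, 6.0, 4.0, 2.0])  # mergers per bin (toy numbers)

# Gaussian smearing between neighbouring bins models measurement scatter.
K = np.zeros((bins, bins))
for j in range(bins):
    for i in range(bins):
        K[i, j] = np.exp(-0.5 * (i - j) ** 2 / 0.5 ** 2)
K /= K.sum(axis=0)                                 # columns sum to 1: smearing
                                                   # conserves probability
efficiency = np.array([0.2, 0.5, 0.9, 0.9, 0.7])   # assumed P(detect | true bin)
K = K * efficiency                                 # fold detection probability in

observed_rate = K @ true_rate  # expected observed population per measured bin
```

The point of working at the kernel level is that `K` is computed once from the detector's parameter-error estimates, after which any candidate population model can be pushed through it cheaply for comparison with the observed catalogue.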
Learning-based attacks in cyber-physical systems
We introduce the problem of learning-based attacks in a simple abstraction of
cyber-physical systems: a discrete-time, linear, time-invariant
plant that may be subject to an attack that overrides the sensor readings and
the controller actions. The attacker attempts to learn the dynamics of the
plant and subsequently override the controller's actuation signal, to destroy
the plant without being detected. The attacker can feed fictitious sensor
readings to the controller using its estimate of the plant dynamics and mimic
the legitimate plant operation. The controller, on the other hand, is
constantly on the lookout for an attack; once the controller detects an attack,
it immediately shuts the plant off. In the case of scalar plants, we derive an
upper bound on the attacker's deception probability for any measurable control
policy when the attacker uses an arbitrary learning algorithm to estimate the
system dynamics. We then derive lower bounds for the attacker's deception
probability for both scalar and vector plants by assuming a specific
authentication test that inspects the empirical variance of the system
disturbance. We also show how the controller can improve the security of the
system by superimposing a carefully crafted privacy-enhancing signal on top of
the "nominal control policy." Finally, for nonlinear scalar dynamics that
belong to a reproducing kernel Hilbert space (RKHS), we investigate the
performance of attacks based on nonlinear Gaussian-process (GP) learning
algorithms.
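A variance-based authentication test of the kind described can be sketched for a scalar plant. The plant gain, the attacker's misestimate, and the stabilizing control law are illustrative assumptions; the sketch only shows why a wrong dynamics estimate shifts the empirical disturbance variance seen by the controller:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical scalar plant x_{k+1} = a x_k + u_k + w_k, w_k ~ N(0, sigma2).
a, sigma2, T = 1.5, 1.0, 2000

def empirical_disturbance_variance(x, u, a_model):
    """Controller-side test statistic: variance of the implied disturbance
    w_k = x_{k+1} - a_model * x_k - u_k under the controller's model.
    Under legitimate operation this should be close to sigma2."""
    w = x[1:] - a_model * x[:-1] - u
    return w.var()

# Legitimate operation with a dead-beat control law u_k = -a x_k.
x = np.zeros(T + 1)
u = np.empty(T)
for k in range(T):
    u[k] = -a * x[k]
    x[k + 1] = a * x[k] + u[k] + rng.normal(0.0, np.sqrt(sigma2))
legit_stat = empirical_disturbance_variance(x, u, a)

# Attack: fictitious sensor readings generated with a misestimated gain
# a_hat != a. The implied disturbance variance then deviates from sigma2.
a_hat = 2.3
xf = np.zeros(T + 1)
uf = np.empty(T)
for k in range(T):
    uf[k] = -a * xf[k]          # controller still acts on the true model
    xf[k + 1] = a_hat * xf[k] + uf[k] + rng.normal(0.0, np.sqrt(sigma2))
attack_stat = empirical_disturbance_variance(xf, uf, a)
```

Here the attacker's estimation error leaves the fed-back readings correlated in a way the i.i.d. disturbance cannot explain, inflating the test statistic; this is the intuition behind bounding the deception probability by the quality of the attacker's learned model.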
On the gravitational wave background from compact binary coalescences in the band of ground-based interferometers
This paper reports a comprehensive study on the gravitational wave (GW)
background from compact binary coalescences. We consider in our calculations
newly available observation-based neutron star and black hole mass
distributions and complete analytical waveforms that include post-Newtonian
amplitude corrections. Our results show that: (i) post-Newtonian effects cause
a small reduction in the GW background signal; (ii) below 100 Hz the background
depends primarily on the local coalescence rate and the average chirp
mass and is independent of the chirp mass distribution; (iii) the effects of
cosmic star formation rates and delay times between the formation and merger of
binaries are linear below 100 Hz and can be represented by a single parameter
within a factor of ~2; (iv) a simple power-law model of the energy density
parameter up to 50-100 Hz is sufficient for use as a search template for
ground-based interferometers. In terms of the
detection prospects of the background signal, we show that: (i) detection (a
signal-to-noise ratio of 3) within one year of observation by the Advanced LIGO
detectors (H1-L1) requires the local coalescence rate of binary neutron stars
(binary black holes) to exceed a threshold value; (ii) this required rate
could be reduced 3-fold for two co-located detectors, whereas the
currently proposed worldwide network of advanced instruments gives only ~ 30%
improvement in detectability; (iii) the improved sensitivity of the planned
Einstein Telescope allows not only confident detection of the background but
also the high frequency components of the spectrum to be measured. Finally we
show that sub-threshold binary neutron star merger events produce a strong
foreground, which could be an issue for future terrestrial stochastic searches
of primordial GWs.
Comment: A few typos corrected to match the published version in MNRAS
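The power-law search template mentioned in point (iv) can be written down directly. The slope alpha = 2/3 is the standard inspiral-dominated value for compact binaries; the reference amplitude and frequency below are placeholders, not the paper's fitted values:

```python
import numpy as np

def omega_gw(f, omega_ref, f_ref=100.0, alpha=2.0 / 3.0):
    """Power-law template for the energy-density parameter of a
    compact-binary background: Omega_gw(f) = omega_ref * (f / f_ref)**alpha.
    omega_ref, f_ref, alpha are free template parameters (values here are
    illustrative assumptions)."""
    return omega_ref * (f / f_ref) ** alpha

f = np.logspace(0.0, 2.0, 200)           # 1-100 Hz, the band discussed above
omega = omega_gw(f, omega_ref=1e-9)      # assumed reference amplitude
```

A one-parameter template of this form is attractive for cross-correlation searches precisely because, per the abstract, the low-frequency background is controlled by the local rate and average chirp mass alone.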
Data Transmission Over Networks for Estimation and Control
We consider the problem of controlling a linear time-invariant process when the controller is located remotely from where the sensor measurements are generated. The communication from the sensor to the controller is supported by a communication network of arbitrary topology composed of analog erasure channels. Using a separation principle, we prove that the optimal linear-quadratic-Gaussian (LQG) controller consists of an LQ optimal regulator along with an estimator that estimates the state of the process across the communication network. We then determine the optimal information-processing strategy to be followed by each node in the network so that the estimator can compute the best possible estimate in the minimum mean squared error sense. The algorithm is optimal for any packet-dropping process and at every time step, even though it is recursive and hence requires only a constant amount of memory, processing, and transmission at every node per time step. For the case when the packet-drop processes are memoryless and independent across links, we analyze the stability properties and performance of the closed-loop system. The algorithm is an attempt to move beyond the viewpoint of treating a network of communication links as a single end-to-end link, with the probability of successful transmission determined by some measure of the reliability of the network.
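The constant-memory, per-node processing idea can be sketched on a toy line network. The topology, the i.i.d. drop model, and the "forward the freshest timestamp" rule below are simplifying assumptions; the actual algorithm propagates full state estimates and is MMSE-optimal for any drop process:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy line network, sensor -> relay -> controller, with i.i.d. packet drops.
# Each node stores only the freshest (timestamp, value) pair it has heard,
# so per-node memory and per-step transmission are constant. Scalar plant
# x_{k+1} = a x_k + w_k; for simplicity the sensor observes x_k exactly.
a, p_drop, T = 0.9, 0.3, 50

x = 0.0
relay = (-1, 0.0)            # (timestamp of freshest info, state value then)
ctrl = (-1, 0.0)

for k in range(T):
    x = a * x + rng.normal()
    sensor = (k, x)
    if rng.random() > p_drop:          # sensor -> relay link delivers
        relay = max(relay, sensor)     # keep whichever info is fresher
    if rng.random() > p_drop:          # relay -> controller link delivers
        ctrl = max(ctrl, relay)
    t_last, value = ctrl
    x_hat = value * a ** (k - t_last)  # open-loop prediction from freshest info
```

The sketch captures the key structural point of the abstract: each node needs only a fixed-size summary of what it has heard, yet the receiver ends up with the best information any routing scheme could have delivered over the same drop realization.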