Bibliographic Review on Distributed Kalman Filtering
In recent years, a compelling need has arisen to understand the effects of distributed information structures on estimation and filtering. This paper provides a bibliographic review of distributed Kalman filtering (DKF). The paper contains a classification of the different approaches and methods involved in DKF. The applications of DKF are also discussed and explained separately, and a brief comparison of the different approaches is carried out. Contemporary research directions are also addressed, with emphasis on practical applications of the techniques. An exhaustive list of publications, linked directly or indirectly to DKF in the open literature, is compiled to provide an overall picture of the different developing aspects of this area.
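The consensus-based family of DKF methods surveyed in reviews like this one can be sketched minimally: each sensor runs a local Kalman filter on its own measurement and then averages its estimate with its neighbors'. The scalar random-walk model, all parameter values, and the fully connected averaging matrix below are illustrative assumptions, not taken from the survey.

```python
# Minimal sketch of consensus-based distributed Kalman filtering for a
# scalar random-walk state x_k = x_{k-1} + w_k observed by N sensors.
# All parameters and the averaging matrix are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, T = 4, 200          # number of sensors, number of time steps
q, r = 0.01, 1.0       # process / measurement noise variances

x = 0.0                # true state
xhat = np.zeros(N)     # per-sensor state estimates
P = np.ones(N)         # per-sensor error variances
A = np.ones((N, N)) / N   # consensus weights (fully connected graph)

for _ in range(T):
    x += rng.normal(scale=np.sqrt(q))                 # state evolves
    z = x + rng.normal(scale=np.sqrt(r), size=N)      # local measurements
    # Local Kalman time and measurement updates at each sensor
    P = P + q
    K = P / (P + r)
    xhat = xhat + K * (z - xhat)
    P = (1 - K) * P
    # One consensus step: average estimates with neighbors
    xhat = A @ xhat
```

After the run, all sensors hold a common estimate close to the true state; with a sparse neighbor graph, `A` would instead be a stochastic matrix matching the network topology.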
On the genericity properties in networked estimation: Topology design and sensor placement
In this paper, we consider networked estimation of linear, discrete-time dynamical systems monitored by a network of agents. In order to minimize the power requirement at the (possibly battery-operated) agents, we require that the agents exchange information with their neighbors only \emph{once per dynamical-system time-step}, in contrast to consensus-based estimation, where the agents exchange information until they reach a consensus. It can be verified that with this restriction on information exchange, measurement fusion alone results in an unbounded estimation error at every agent that does not have an observable set of measurements in its neighborhood. To overcome this challenge, state-estimate fusion has been proposed to recover system observability. However, we show that adding state-estimate fusion may not recover observability when the system matrix is structurally rank ($S$-rank) deficient. In this context, we characterize state-estimate fusion and measurement fusion under both full $S$-rank and $S$-rank-deficient system matrices.

Comment: submitted for IEEE journal publication
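The local-observability issue the abstract describes can be illustrated with a small hypothetical example (not the paper's construction): an agent measuring only one state of a two-state system has a rank-deficient observability matrix, while fusing a neighborhood's measurements restores full rank.

```python
# Hypothetical illustration: local measurements alone can be unobservable,
# while fused neighborhood measurements recover observability. The system
# and output matrices below are assumptions for demonstration only.
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])         # discrete-time system matrix
C_local = np.array([[0.0, 1.0]])   # this agent sees only the second state

def obs_rank(A, C):
    """Rank of the observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(O)

print(obs_rank(A, C_local))   # 1 < 2: local measurements are not enough
C_full = np.eye(2)            # fused measurements from the neighborhood
print(obs_rank(A, C_full))    # 2: observability recovered by fusion
```

An agent with the rank-1 observability matrix above accumulates unbounded error under measurement fusion alone, matching the abstract's claim.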
Regulatory approval of new medical devices: a cross sectional study
Objective To investigate the regulatory approval of new medical devices.
Design Cross sectional study of new medical devices reported in the biomedical literature.
Data sources PubMed was searched between 1 January 2000 and 31 December 2004 to identify clinical studies of new medical devices. The search was carried out during this period to allow time for regulatory approval.
Eligibility criteria for study selection Articles were included if they reported a clinical study of a new medical device and there was no evidence of a previous clinical study in the literature. We defined a medical device according to the US Food and Drug Administration as an “instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article.”
Main outcome measures Type of device, target specialty, and involvement of academia or of industry for each clinical study. The FDA medical databases were then searched for clearance or approval relevant to the device.
Results 5574 titles and abstracts were screened, 493 full text articles assessed for eligibility, and 218 clinical studies of new medical devices included. In all, 99/218 (45%) of the devices described in clinical studies ultimately received regulatory clearance or approval. These included 510(k) clearance for devices determined to be “substantially equivalent” to another legally marketed device (78/99; 79%), premarket approval for high risk devices (17/99; 17%), and others (4/99; 4%). Of these, 43 devices (43/99; 43%) were actually cleared or approved before a clinical study was published.
Conclusions We identified a multitude of new medical devices in clinical studies, almost half of which received regulatory clearance or approval. The 510(k) pathway was most commonly used, and clearance often preceded the first published clinical study.
Noise thresholds for optical cluster-state quantum computation
In this paper we carry out a detailed numerical investigation of the fault-tolerant threshold for optical cluster-state quantum computation. Our noise model allows both photon loss and depolarizing noise, the latter as a general proxy for all types of local noise other than photon loss. We obtain a threshold region of allowed pairs of values for the two types of noise. Roughly speaking, our results show that scalable optical quantum computing is possible for photon loss probabilities less than 0.003 and for depolarization probabilities less than 0.0001. Our fault-tolerant protocol involves a number of innovations, including a method for syndrome extraction known as telecorrection, whereby repeated syndrome measurements are guaranteed to agree. This paper is an extended version of [Dawson et al., Phys. Rev. Lett. 96, 020501].

Comment: 28 pages. Corrections made to Table I.
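The depolarizing noise used as the local-noise proxy above can be sketched for a single qubit. One standard parametrization of the channel is $\rho \mapsto (1-p)\rho + p\,I/2$; the specific state and the use of the quoted 0.0001 threshold value below are illustrative only, not the paper's simulation.

```python
# Single-qubit depolarizing channel, one standard parametrization of the
# local noise used as a proxy in threshold studies: rho -> (1-p) rho + p I/2.
# The input state and noise strength here are illustrative assumptions.
import numpy as np

def depolarize(rho, p):
    """Apply depolarizing noise of strength p to a density matrix rho."""
    return (1 - p) * rho + p * np.eye(2) / 2

rho0 = np.array([[1.0, 0.0],
                 [0.0, 0.0]])       # pure state |0><0|
rho = depolarize(rho0, 0.0001)      # noise at the quoted threshold level
fidelity = rho[0, 0].real           # <0| rho |0> = 1 - p/2
```

At the quoted depolarization threshold the single-step fidelity loss is only $p/2$; the difficulty in the paper's setting comes from accumulating such errors across a large cluster state.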
Cram\'er-Rao Bounds for Polynomial Signal Estimation using Sensors with AR(1) Drift
We seek to characterize the estimation performance of a sensor network where the individual sensors exhibit the phenomenon of drift, i.e., a gradual change of the bias. Though estimation in the presence of random errors has been extensively studied in the literature, the loss of estimation performance due to systematic errors like drift has rarely been examined. In this paper, we derive the closed-form Fisher information matrix and, subsequently, Cram\'er-Rao bounds (up to a reasonable approximation) for the estimation accuracy of drift-corrupted signals. We assume a polynomial time-series as the representative signal and an autoregressive process model for the drift. When the Markov parameter for drift satisfies $\rho < 1$, we show that the first-order effect of drift is asymptotically equivalent to scaling the measurement noise by an appropriate factor. For $\rho = 1$, i.e., when the drift is non-stationary, we show that the constant part of a signal can only be estimated inconsistently (non-zero asymptotic variance). Practical usage of the results is demonstrated through the analysis of 1) networks with multiple sensors and 2) bandwidth-limited networks communicating only quantized observations.

Comment: 14 pages, 6 figures. This paper will appear in the Oct/Nov 2012 issue of IEEE Transactions on Signal Processing.
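The noise-scaling effect for $\rho < 1$ can be checked numerically under an assumed setup (a constant signal, the simplest polynomial, with stationary AR(1) drift). The Monte Carlo sketch below compares the empirical variance of the sample-mean estimator against the white-noise variance inflated by a drift-dependent factor; all parameter values are illustrative, not the paper's.

```python
# Monte Carlo sketch (assumed setup, not the paper's derivation): estimate a
# constant theta from y_k = theta + d_k + v_k, where d_k = rho*d_{k-1} + w_k
# is AR(1) drift. For rho < 1, the sample-mean variance matches a scaled
# white-noise level, illustrating the first-order noise-scaling effect.
import numpy as np

rng = np.random.default_rng(1)
rho, sw, sv = 0.5, 0.1, 1.0      # AR(1) coefficient, drift/measurement stds
theta, T, trials = 2.0, 500, 2000

est = np.empty(trials)
for i in range(trials):
    w = rng.normal(scale=sw, size=T)
    d = np.zeros(T)
    for k in range(1, T):
        d[k] = rho * d[k - 1] + w[k]          # AR(1) drift sample path
    y = theta + d + rng.normal(scale=sv, size=T)
    est[i] = y.mean()                         # sample-mean estimator of theta

emp_var = est.var()
# Large-T approximation: the drift inflates the effective noise variance by
# sd2 * (1 + rho) / (1 - rho), where sd2 is the stationary drift variance.
sd2 = sw**2 / (1 - rho**2)
approx_var = (sv**2 + sd2 * (1 + rho) / (1 - rho)) / T
```

Here `emp_var` and `approx_var` agree closely, consistent with drift acting asymptotically as a rescaling of the measurement noise; at $\rho = 1$ the drift variance diverges and no such finite scaling exists.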
Next Steps for Hydrogen - physics, technology and the future
Hydrogen has been proposed as a future energy carrier for more than 40 years. In recent decades, impetus has been given by the need to reduce global greenhouse gas emissions from vehicles. In addition, hydrogen has the potential to facilitate the large-scale deployment of variable renewables in the electricity system. Despite such drivers, the long-anticipated hydrogen economy is proving to be slow to emerge. This report stresses the role that physics and physics-based technology could play in accelerating the large-scale deployment of hydrogen in the energy system.
Emphasis is given to the potential of cryogenic liquid hydrogen and the opportunities afforded by developments in nanoscience for hydrogen storage and use. The use of low-temperature liquid hydrogen opens up a technological opportunity separate from, but complementary to, energy applications. The new opportunity is the ability to cool novel materials into the superconducting state without the need for significant quantities of expensive liquid helium. Two of the authors have previously coined the term “hydrogen cryomagnetics” for the use of liquid hydrogen in high-field and high-efficiency magnets. The opportunity for liquid hydrogen to displace liquid helium may be a relatively small business opportunity compared to global transport energy demands, but it potentially affords a way to kick-start the wider commercial use of hydrogen.
The report considers various important factors shaping the future of hydrogen, such as competing production methods and the importance of safety. Throughout, it is clear that science and engineering are of central importance to hydrogen innovation and that physics has an important role to play.