Goal-oriented adaptive sampling under random field modelling of response probability distributions
In the study of natural and artificial complex systems, responses that are
not completely determined by the considered decision variables are commonly
modelled probabilistically, resulting in response distributions varying across
decision space. We consider cases where the spatial variation of these response distributions concerns not only their mean and/or variance but also other features, including for instance shape or unimodality versus multimodality. Our contributions build upon a non-parametric Bayesian approach to modelling the induced fields of probability distributions, and in particular upon a spatial extension of the logistic Gaussian model.
The considered models deliver probabilistic predictions of response
distributions at candidate points, allowing one, for instance, to perform
(approximate) posterior simulations of probability density functions, to
jointly predict multiple moments and other functionals of target distributions,
as well as to quantify the impact of collecting new samples on the state of
knowledge of the distribution field of interest. In particular, we introduce
adaptive sampling strategies leveraging the potential of the considered random
distribution field models to guide system evaluations in a goal-oriented way,
with a view towards parsimoniously addressing calibration and related problems
from non-linear (stochastic) inversion and global optimisation.
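As context for how such random fields of densities can be simulated, here is a minimal sketch of the logistic Gaussian construction (exponentiate-and-normalise a Gaussian process path); the kernel choice, lengthscale, and grid are illustrative assumptions, and this is not the paper's spatial extension.

```python
import numpy as np

def rbf_kernel(x, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance matrix for 1-D inputs."""
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_logistic_gaussian_densities(grid, n_samples=5, jitter=1e-6, seed=0):
    """Draw random probability densities on `grid` by pushing Gaussian
    process sample paths through the logistic transform:
    p(t) = exp(f(t)) / integral of exp(f)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(rbf_kernel(grid) + jitter * np.eye(len(grid)))
    f = L @ rng.standard_normal((len(grid), n_samples))  # GP paths, one per column
    unnorm = np.exp(f)
    dx = grid[1] - grid[0]                               # uniform grid spacing
    return unnorm / (unnorm.sum(axis=0) * dx)            # each column integrates to 1

grid = np.linspace(0.0, 1.0, 200)
pdfs = sample_logistic_gaussian_densities(grid)          # shape (200, 5)
print(pdfs.sum(axis=0) * (grid[1] - grid[0]))            # ~1.0 for each density
```

Repeating such draws conditioned on data is what allows the joint prediction of moments and other functionals of the target distributions described above.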
A Naive Bayes Source Classifier for X-ray Sources
The Chandra Carina Complex Project (CCCP) provides a sensitive X-ray survey
of a nearby starburst region over >1 square degree in extent. Thousands of
faint X-ray sources are found, many concentrated into rich young stellar
clusters. However, significant contamination from unrelated Galactic and
extragalactic sources is present in the X-ray catalog. We describe the use of a
naive Bayes classifier to assign membership probabilities to individual
sources, based on source location, X-ray properties, and visual/infrared
properties. For the particular membership decision rule adopted, 75% of CCCP
sources are classified as members, 11% are classified as contaminants, and 14%
remain unclassified. The resulting sample of stars likely to be Carina members
is used in several other studies, which appear in a Special Issue of the ApJS
devoted to the CCCP.
Comment: Accepted for the ApJS Special Issue on the Chandra Carina Complex Project (CCCP), scheduled for publication in May 2011. All 16 CCCP Special Issue papers are available at http://cochise.astro.psu.edu/Carina_public/special_issue.html through 2011 at least. 19 pages, 7 figures.
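To make the classification rule concrete, here is a minimal sketch of a naive Bayes membership classifier in the spirit of the one described; the class prior and the per-feature likelihood tables below are hypothetical illustrations, not the CCCP study's calibrated values.

```python
import numpy as np

# Hypothetical prior and per-class likelihoods for two discretised features
# (the CCCP classifier uses location, X-ray and visual/infrared properties).
log_prior = {"member": np.log(0.7), "contaminant": np.log(0.3)}
log_like = {
    "in_cluster":    {"member":      {True: np.log(0.8), False: np.log(0.2)},
                      "contaminant": {True: np.log(0.2), False: np.log(0.8)}},
    "hard_spectrum": {"member":      {True: np.log(0.3), False: np.log(0.7)},
                      "contaminant": {True: np.log(0.7), False: np.log(0.3)}},
}

def membership_probability(source):
    """Naive Bayes: combine the class prior with per-feature likelihoods
    (assumed conditionally independent given the class), then normalise."""
    log_post = {}
    for c in log_prior:
        log_post[c] = log_prior[c] + sum(
            log_like[f][c][v] for f, v in source.items())
    m = max(log_post.values())
    Z = sum(np.exp(lp - m) for lp in log_post.values())
    return {c: np.exp(lp - m) / Z for c, lp in log_post.items()}

print(membership_probability({"in_cluster": True, "hard_spectrum": False}))
```

A decision rule such as the one in the paper then thresholds these posterior probabilities, leaving sources near the threshold unclassified.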
On the use of autonomous unmanned vehicles in response to hazardous atmospheric release incidents
Recent events have driven a surge of interest in methods of responding to releases of hazardous materials or gases into the atmosphere. In the last decade there has been particular interest in mapping and quantifying emissions for regulatory purposes, emergency response, and environmental monitoring. Examples include responding to events such as gas leaks, nuclear accidents, or chemical, biological or radiological (CBR) accidents or attacks, and even exploring sources of methane emissions on the planet Mars. This thesis presents a review of the potential responses to hazardous releases, which includes source localisation, boundary tracking, mapping and source term estimation. [Continues.]
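As a flavour of what source localisation involves, here is a minimal sketch of grid-based Bayesian source localisation under a deliberately simplified dispersion model; the plume formula, its parameters, and the noise model are illustrative assumptions rather than the thesis's method.

```python
import numpy as np

def plume(sensor_xy, src_xy, q=1.0, u=2.0, a=0.15):
    """Toy steady-state plume: source strength q, wind speed u along +x,
    lateral spread growing linearly downwind (sigma = a * dx).
    Returns zero concentration upwind of the source."""
    dx = sensor_xy[:, 0] - src_xy[0]
    dy = sensor_xy[:, 1] - src_xy[1]
    sig = a * np.maximum(dx, 1e-9)
    c = q / (2 * np.pi * u * sig**2) * np.exp(-dy**2 / (2 * sig**2))
    return np.where(dx > 0, c, 0.0)

def source_posterior(sensor_xy, readings, candidates, noise_sd=0.05):
    """Evaluate a Gaussian measurement likelihood for every candidate
    source location on a grid and normalise into a posterior."""
    log_post = np.array([
        -0.5 * np.sum((readings - plume(sensor_xy, g))**2) / noise_sd**2
        for g in candidates])
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

# Synthetic example: four downwind sensors, true source at (1, 2).
sensors = np.array([[3.0, 1.5], [4.0, 2.5], [5.0, 2.0], [6.0, 3.0]])
obs = plume(sensors, np.array([1.0, 2.0])) \
      + 0.05 * np.random.default_rng(1).standard_normal(4)
cand = np.array([[x, y] for x in np.linspace(0, 3, 16)
                        for y in np.linspace(0, 4, 16)])
post = source_posterior(sensors, obs, cand)
print(cand[post.argmax()])  # posterior mode, near the true source
```

In an autonomous-vehicle setting, such a posterior would be updated online as the platform collects new concentration measurements along its path.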
Enabling Resilience in Cyber-Physical-Human Water Infrastructures
Rapid urbanization and growth in urban populations have forced community-scale infrastructures (e.g., water, power and natural gas distribution systems, and transportation networks) to operate at their limits. Aging (and failing) infrastructures around the world are becoming increasingly vulnerable to operational degradation, extreme weather, natural disasters and cyber attacks/failures. These trends have wide-ranging socioeconomic consequences and raise public safety concerns. In this thesis, we introduce the notion of cyber-physical-human infrastructures (CPHIs): smart community-scale infrastructures that bridge technologies with physical infrastructures and people. CPHIs are highly dynamic stochastic systems characterized by complex physical models that exhibit regionwide variability and uncertainty under disruptions. Failures in these distributed settings tend to be difficult to predict and estimate, and expensive to repair. Real-time fault identification is crucial to ensure continuity of lifeline services to customers at adequate levels of quality. Emerging smart community technologies have the potential to transform our failing infrastructures into robust and resilient future CPHIs.

In this thesis, we explore one such CPHI: community water infrastructures. Current urban water infrastructures, which are decades (sometimes over 100 years) old, encompass diverse geophysical regimes. Water stress concerns include the scarcity of supply and an increase in demand due to urbanization. Deterioration and damage to the infrastructure can disrupt water service; contamination events can result in economic and public health consequences. Unfortunately, little investment has gone into modernizing this key lifeline.

To enhance the resilience of water systems, we propose an integrated middleware framework for quick and accurate identification of failures in complex water networks that exhibit uncertain behavior. Our proposed approach integrates IoT-based sensing, domain-specific models and simulations with machine learning methods to identify failures (pipe breaks, contamination events). The composition of techniques results in cost-accuracy-latency tradeoffs in fault identification, inherent in CPHIs due to the constraints imposed by cyber components, physical mechanics and human operators. Three key resilience problems are addressed in this thesis: isolation of multiple faults under a small number of failures, state estimation of water systems under extreme events such as earthquakes, and contaminant source identification in water networks using human-in-the-loop sensing.

By working with real-world water agencies (WSSC, DC and LADWP, LA), we first develop an understanding of the operations of water CPHI systems. We design and implement a sensor-simulation-data integration framework, AquaSCALE, and apply it to localize multiple concurrent pipe failures. We use a mixture of infrastructure measurements (i.e., historical and live water pressure/flow), environmental data (i.e., weather) and human inputs (i.e., Twitter feeds), combined and enhanced with the domain model and supervised learning techniques, to locate multiple failures at fine levels of granularity (individual pipeline level) with detection time reduced by orders of magnitude (from hours/days to minutes).
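A schematic sketch of this kind of supervised fault-localisation step (feature fusion followed by a classifier over candidate pipes) is given below; the feature layout, dimensions, synthetic data and classifier choice are illustrative assumptions, not AquaSCALE's actual implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Schematic training set: each row fuses hydraulic simulation outputs,
# environmental covariates and a human-report indicator for one failure
# scenario; the label is the index of the failed pipe. All data here are
# synthetic placeholders for simulator-generated scenarios.
rng = np.random.default_rng(0)
n_scenarios, n_sensors, n_pipes = 2000, 12, 40
X_pressure = rng.normal(size=(n_scenarios, n_sensors))   # pressures/flows
X_weather = rng.normal(size=(n_scenarios, 2))            # e.g. temperature, rainfall
X_reports = rng.integers(0, 2, size=(n_scenarios, 1))    # geotagged report nearby?
X = np.hstack([X_pressure, X_weather, X_reports])
y = rng.integers(0, n_pipes, size=n_scenarios)           # failed-pipe label

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# At run time, the same fused feature vector built from live data yields a
# ranked list of suspect pipes rather than a single hard guess.
probs = clf.predict_proba(X[:1])[0]
print(np.argsort(probs)[::-1][:3])  # top-3 candidate pipes
```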
We next consider the resilience of water infrastructures under extreme events (i.e., earthquakes); the challenge here is the lack of a priori knowledge and the increased number and severity of damage to infrastructures. We present a graphical-model-based approach for efficient online state estimation, in which an offline graph factorization partitions a given network into disjoint subgraphs, and belief-propagation-based inference is executed on-the-fly in a distributed manner on those subgraphs. Our proposed approach can isolate 80% of broken pipes and 99% of loss-of-service to end-users during an earthquake.

Finally, we address issues of water quality. Today this is a human-in-the-loop process in which operators need to gather water samples for lab tests. We incorporate the necessary abstractions with event processing methods into a workflow, which iteratively selects and refines the set of potential failure points via human-driven grab sampling. Our approach utilizes hidden Markov model (HMM) based representations for event inference, along with reinforcement learning methods for further refining event locations and reducing the cost of human effort.

The proposed techniques are integrated into a middleware architecture, which enables components to communicate and collaborate with one another. We validate our approaches through a prototype implementation with multiple real-world water networks, supply-demand patterns from water utilities, and policies set by the U.S. EPA. While our focus here is on water infrastructures in a community, the developed end-to-end solution is applicable to other infrastructures and community services that operate in disruptive and resource-constrained environments.
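As a minimal illustration of the HMM filtering step mentioned above for contamination-event inference, here is a two-state forward-algorithm sketch; the states, transition probabilities and observation model are invented for illustration, not taken from the thesis.

```python
import numpy as np

# Minimal HMM filter: is a monitored node clean or contaminated?
states = ["clean", "contaminated"]
T = np.array([[0.95, 0.05],    # state transition probabilities
              [0.10, 0.90]])
E = np.array([[0.9, 0.1],      # P(grab-sample result | state): cols = neg, pos
              [0.2, 0.8]])
belief = np.array([0.99, 0.01])  # prior: contamination is rare

def update(belief, obs):
    """One step of the HMM forward algorithm: predict through the
    transition model, then condition on the latest grab-sample result."""
    predicted = belief @ T
    posterior = predicted * E[:, obs]
    return posterior / posterior.sum()

for obs in [1, 1, 0, 1]:       # 1 = positive sample, 0 = negative
    belief = update(belief, obs)
    print(dict(zip(states, belief.round(3))))
```

In the workflow described above, such beliefs would guide where the next human-collected grab sample is worth taking.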
Implicit Deep Adaptive Design: Policy-Based Experimental Design without Likelihoods
We introduce implicit Deep Adaptive Design (iDAD), a new method for
performing adaptive experiments in real-time with implicit models. iDAD
amortizes the cost of Bayesian optimal experimental design (BOED) by learning a
design policy network upfront, which can then be deployed quickly at the time
of the experiment. The iDAD network can be trained on any model which simulates
differentiable samples, unlike previous design policy work that requires a
closed form likelihood and conditionally independent experiments. At
deployment, iDAD allows design decisions to be made in milliseconds, in
contrast to traditional BOED approaches that require heavy computation during
the experiment itself. We illustrate the applicability of iDAD on a number of
experiments, and show that it provides a fast and effective mechanism for
performing adaptive design with implicit models.
Comment: 33 pages, 8 figures. Published as a conference paper at NeurIPS 2021.
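To sketch the amortization idea, here is a toy design-policy network: a permutation-invariant encoder over past (design, outcome) pairs followed by a head that emits the next design. The architecture, dimensions and pooling choice are illustrative assumptions, not iDAD's exact network.

```python
import torch
import torch.nn as nn

class DesignPolicy(nn.Module):
    """Amortised design policy sketch: encode the experiment history with
    sum pooling, then map the pooled code to the next design."""
    def __init__(self, d_design=1, d_outcome=1, d_hidden=64):
        super().__init__()
        self.d_hidden = d_hidden
        self.encoder = nn.Sequential(
            nn.Linear(d_design + d_outcome, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden))
        self.head = nn.Sequential(
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_design))

    def forward(self, designs, outcomes):
        # designs, outcomes: (batch, t, dim); an empty history gives a zero code.
        if designs.shape[1] == 0:
            code = designs.new_zeros(designs.shape[0], self.d_hidden)
        else:
            pairs = torch.cat([designs, outcomes], dim=-1)
            code = self.encoder(pairs).sum(dim=1)  # permutation-invariant pooling
        return self.head(code)                     # next design: one forward pass

policy = DesignPolicy()
designs = torch.randn(8, 3, 1)    # 3 experiments already run, batch of 8
outcomes = torch.randn(8, 3, 1)
print(policy(designs, outcomes).shape)  # torch.Size([8, 1])
```

In iDAD itself, such a policy is trained offline by maximising lower bounds on expected information gain through the differentiable simulator; only the fast forward pass used at deployment time is sketched here.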
Deep Gaussian Processes: Advances in Models and Inference
Hierarchical models are certainly in fashion these days. It seems difficult to navigate the field of machine learning without encountering `deep' models of one sort or another. The popularity of the deep learning revolution has been driven by some striking empirical successes, prompting both intense rapture and intense criticism. The criticisms often centre around the lack of model uncertainty, leading to sometimes drastically overconfident predictions. Others point to the lack of a mechanism for incorporating prior knowledge, and the reliance on large datasets. A widely held hope is that a Bayesian approach might overcome these problems.
The deep Gaussian process presents a paradigm for building deep models from a Bayesian perspective. A Gaussian process is a prior for functions. A deep Gaussian process uses several Gaussian process functions and combines them hierarchically through composition (that is, the output of one is the input to the next). The deep Gaussian process promises to capture the compositional nature of deep learning while mitigating some of the disadvantages through a Bayesian approach.
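A toy sketch of this composition, drawing a two-layer deep GP prior sample by feeding one GP draw into another (the kernel and lengthscale are illustrative assumptions):

```python
import numpy as np

def rbf(x, ls=0.3):
    """Squared-exponential Gram matrix for 1-D inputs."""
    return np.exp(-0.5 * ((x[:, None] - x[None, :]) / ls) ** 2)

def sample_gp(x, rng, jitter=1e-6):
    """One draw f(x) from a zero-mean GP with a squared-exponential kernel."""
    L = np.linalg.cholesky(rbf(x) + jitter * np.eye(len(x)))
    return L @ rng.standard_normal(len(x))

# Two-layer deep GP prior sample by composition: the output of the first
# GP becomes the input of the second.
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 300)
h = sample_gp(x, rng)   # layer 1: h = f1(x)
y = sample_gp(h, rng)   # layer 2: y = f2(h); non-Gaussian as a function of x
```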
The thesis develops deep Gaussian process modelling in a number of ways. The model is first interpreted differently from previous work, not as a `hierarchical prior' but as a factorized prior with an hierarchical likelihood. Mean functions are suggested to avoid issues of degeneracy and to aid initialization. The main contribution is a new method of inference that avoids the burden of representing the function values directly through an application of sparse variational inference. This method scales to arbitrarily large data and is shown to work well in practice through experiments.
The use of variational inference recasts (approximate) inference as optimization of Gaussian distributions. This optimization has an exploitable geometry via the natural gradient. The natural gradient is shown to be advantageous for single layer non-conjugate models, and for the (final layer of a) deep Gaussian process model.
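For context, the identity being exploited (a standard result for exponential-family variational distributions such as the Gaussian, stated here in generic notation rather than as the thesis's derivation) is that the natural gradient, i.e. the Fisher-preconditioned gradient in the natural parameters $\theta$, equals the ordinary gradient in the expectation parameters $\eta$:

```latex
% For a Gaussian q = N(mu, Sigma): theta = (Sigma^{-1} mu, -Sigma^{-1}/2),
% eta = (mu, Sigma + mu mu^T); F(theta) is the Fisher information of q.
\tilde{\nabla}_{\theta}\mathcal{L}
  \;=\; F(\theta)^{-1}\,\nabla_{\theta}\mathcal{L}
  \;=\; \nabla_{\eta}\mathcal{L},
\qquad
\theta_{t+1} \;=\; \theta_{t} \;+\; \gamma\,\nabla_{\eta}\mathcal{L}\big|_{\theta_{t}}
```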
Deep Gaussian processes can be a model both for complex associations between variables and complex marginal distributions of single variables. Incorporating noise in the hierarchy leads to complex marginal distributions through the non-linearities of the mappings at each layer. The inference required for noisy variables cannot be handled with sparse methods, as sparse methods rely on correlations between variables, which are absent for noisy variables. Instead, a more direct approach is developed, using an importance weighted variational scheme.
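The importance-weighted variational objective referred to here has the familiar multi-sample form (stated for context in generic notation, with latent noisy variables $w$ and variational proposal $q$):

```latex
\mathcal{L}_K \;=\;
\mathbb{E}_{w_1,\dots,w_K \sim q(w)}
\left[\, \log \frac{1}{K} \sum_{k=1}^{K} \frac{p(y, w_k)}{q(w_k)} \right]
\;\le\; \log p(y)
```

The bound is non-decreasing in the number of importance samples $K$ and becomes tight as $K \to \infty$.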
Multilevel Delayed Acceptance MCMC with Applications to Hydrogeological Inverse Problems
Quantifying the uncertainty of model predictions is a critical task for engineering decision support systems. This is a particularly challenging effort in the context of statistical inverse problems, where the model parameters are unknown or poorly constrained, and where the data are often scarce. Many such problems emerge in the fields of hydrology and hydro-environmental engineering in general, and in hydrogeology in particular. While methods for rigorously quantifying the uncertainty of such problems exist, they are often prohibitively computationally expensive, particularly when the forward model is high-dimensional and expensive to evaluate.

In this thesis, I present a Metropolis-Hastings algorithm, namely the Multilevel Delayed Acceptance (MLDA) algorithm, which exploits a hierarchy of forward models of increasing computational cost to significantly reduce the total cost of quantifying the uncertainty of high-dimensional, expensive forward models. The algorithm is shown to be in detailed balance with the posterior distribution of parameters, and its computational gains are demonstrated on multiple examples. Additionally, I present an approach for exploiting a deep neural network as an ultra-fast model approximation in an MLDA model hierarchy. This method is demonstrated in the context of both 2D and 3D groundwater flow modelling. Finally, I present a novel approach to adaptive optimal design of groundwater surveying, in which MLDA is employed to construct the posterior Monte Carlo estimates. This method utilises the posterior uncertainty of the primary problem in conjunction with the expected solution to an adjoint problem to sequentially determine the optimal location of the next datapoint.

Funding: Engineering and Physical Sciences Research Council (EPSRC); Alan Turing Institute.
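To give a flavour of the delayed-acceptance mechanism at the heart of MLDA, here is a minimal two-level sketch with symmetric random-walk proposals; the toy targets are illustrative, and the full MLDA algorithm with multilevel subchains is more involved.

```python
import numpy as np

def two_level_da(log_post_fine, log_post_coarse, x0, n_steps=5000,
                 step=0.5, seed=0):
    """Two-level delayed-acceptance Metropolis-Hastings (symmetric
    random-walk proposals). Proposals are first screened with the cheap
    coarse posterior; only survivors pay for a fine-model evaluation.
    The second stage corrects for the screening, so the chain targets
    the fine posterior exactly."""
    rng = np.random.default_rng(seed)
    x, lf, lc = x0, log_post_fine(x0), log_post_coarse(x0)
    chain = []
    for _ in range(n_steps):
        y = x + step * rng.standard_normal()
        lcy = log_post_coarse(y)
        # Stage 1: accept/reject against the coarse (approximate) posterior.
        if np.log(rng.uniform()) < lcy - lc:
            lfy = log_post_fine(y)
            # Stage 2: fine-model correction ratio.
            if np.log(rng.uniform()) < (lfy - lf) - (lcy - lc):
                x, lf, lc = y, lfy, lcy
        chain.append(x)
    return np.array(chain)

# Toy check: the coarse model is a shifted, widened version of the fine target.
fine = lambda x: -0.5 * x**2                    # N(0,1), unnormalised log-density
coarse = lambda x: -0.5 * ((x - 0.3) / 1.2)**2
samples = two_level_da(fine, coarse, x0=0.0)
print(samples.mean(), samples.std())            # ~0, ~1
```

The savings come from rejecting most poor proposals at the coarse level; the better the coarse model approximates the fine one, the fewer fine evaluations are wasted.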
Radiation Sensing: Design and Deployment of Sensors and Detectors
Radiation detection is important in many fields, and it poses significant challenges for instrument designers. Radiation detection instruments, particularly for nuclear decommissioning and security applications, are required to operate in unknown environments and should detect and characterise radiation fields in real time. This book covers both theory and practice, collecting recent advances in radiation detection, with a particular focus on radiation detection instrument design, real-time data processing, radiation simulation and experimental work, robot design, control systems, task planning and radiation shielding.