Graphical Models and Symmetries: Loopy Belief Propagation Approaches
Whenever a person or an automated system has to reason in uncertain domains, probability theory is necessary. Probabilistic graphical models allow us to build statistical models that capture complex dependencies between random variables. Inference in these models, however, can easily become intractable. Typical ways to address this scaling issue are approximate message-passing, stochastic gradients, and MapReduce, among others. Exploiting the symmetries of graphical models, however, has not yet been considered for scaling statistical machine learning applications. One class of inherently symmetric graphical models is statistical relational models. These have recently gained traction within the machine learning and AI communities; they combine probability theory with first-order logic, thereby allowing for an efficient representation of structured relational domains. The formalisms they provide for compactly representing complex real-world domains enable us to describe large problem instances effectively. Inference within and training of graphical models, however, have not been able to keep pace with the increased representational power. This thesis tackles two major aspects of graphical models and shows that both inference and training can indeed benefit from exploiting symmetries. It first deals with efficient inference exploiting symmetries in graphical models for various query types. We introduce lifted loopy belief propagation (lifted LBP), the first lifted parallel inference approach for relational as well as propositional graphical models. Lifted LBP can effectively speed up marginal inference, but cannot straightforwardly be applied to other types of queries. Thus we also demonstrate efficient lifted algorithms for MAP inference and higher-order marginals, as well as the efficient handling of multiple inference tasks. Then we turn to the training of graphical models and introduce the first lifted online training for relational models.
Our training procedure and the MapReduce lifting for loopy belief propagation combine lifting with traditional statistical approaches to scaling, thereby bridging the gap between statistical relational learning and traditional statistical machine learning.
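The key step behind lifted LBP is often described as colour passing: variables whose local factor structure is indistinguishable are grouped, and each BP message is then computed once per group instead of once per variable. The following is only an illustrative sketch of that grouping step on an invented toy factor graph; the variable names, potentials, and API are assumptions, not the thesis's algorithm or code.

```python
# A minimal sketch of the "colour passing" idea behind lifted belief
# propagation: variables with identical local factor structure end up in
# the same group, so messages need only be computed once per group.
from collections import defaultdict

def color_passing(variables, factors, rounds=3):
    """factors: list of (potential_id, tuple_of_variables)."""
    color = {v: 0 for v in variables}  # start: all variables look alike
    for _ in range(rounds):
        signature = {}
        for v in variables:
            # A variable's new colour is determined by the potentials it
            # participates in and the current colours of its co-arguments.
            signature[v] = tuple(sorted(
                (pot, tuple(sorted(color[u] for u in vs if u != v)))
                for pot, vs in factors if v in vs))
        # Relabel signatures with compact integer colours.
        fresh = {}
        for v in variables:
            color[v] = fresh.setdefault(signature[v], len(fresh))
    groups = defaultdict(list)
    for v, c in color.items():
        groups[c].append(v)
    return list(groups.values())

# Three variables with identical priors, but evidence attached only to x_a.
variables = ["x_a", "x_b", "x_c"]
factors = [("prior", ("x_a",)), ("prior", ("x_b",)), ("prior", ("x_c",)),
           ("evidence", ("x_a",))]
print(color_passing(variables, factors))  # → [['x_a'], ['x_b', 'x_c']]
```

Here the evidence factor breaks the symmetry for `x_a`, so a lifted message-passing scheme would keep it separate while sharing all computations for `x_b` and `x_c`.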
3D Robotic Sensing of People: Human Perception, Representation and Activity Recognition
The robots are coming. Their presence will eventually bridge the digital-physical divide and dramatically impact human life by taking over tasks where our current society has shortcomings (e.g., search and rescue, elderly care, and child education). Human-centered robotics (HCR) is a vision to address how robots can coexist with humans and help people live safer, simpler and more independent lives.
As humans, we have a remarkable ability to perceive the world around us, perceive people, and interpret their behaviors. Endowing robots with these critical capabilities in highly dynamic human social environments is a significant but very challenging problem in practical human-centered robotics applications.
This research focuses on robotic sensing of people, that is, how robots can perceive and represent humans and understand their behaviors, primarily through 3D robotic vision. In this dissertation, I begin with a broad perspective on human-centered robotics by discussing its real-world applications and significant challenges. Then, I introduce a real-time perception system, based on the concept of Depth of Interest, to detect and track multiple individuals using a color-depth camera installed on moving robotic platforms. In addition, I discuss human representation approaches based on local spatio-temporal features, including new “CoDe4D” features that incorporate both color and depth information, a new “SOD” descriptor to efficiently quantize 3D visual features, and the novel AdHuC features, which are capable of representing the activities of multiple individuals. Several new algorithms to recognize human activities are also discussed, including the RG-PLSA model, which allows us to discover activity patterns without supervision, the MC-HCRF model, which can explicitly investigate certainty in latent temporal patterns, and the FuzzySR model, which is used to segment continuous data into events and probabilistically recognize human activities. Cognition models based on recognition results are also implemented for decision making, allowing robotic systems to react to human activities. Finally, I conclude with a discussion of future directions that will accelerate the upcoming technological revolution of human-centered robotics.
Dynamic safety analysis of decommissioning and abandonment of offshore oil and gas installations
The global oil and gas industry has seen an increase in the number of installations moving towards decommissioning. Offshore decommissioning is a complex, challenging, and costly activity, making safety one of the major concerns. The decommissioning operation is, therefore, riskier than capital projects, partly because of the uniqueness of every offshore installation, and mainly because these installations were not designed for removal during their development phases. The associated risks are deep and wide-ranging due to limited data and incomplete knowledge of the equipment conditions. For this reason, it is important to capture every uncertainty that can be introduced at the operational level, as well as existing hazards due to the hostile environment, technical difficulties, and the timing of the decommissioning operations. Conventional accident modelling techniques cannot capture the complex interactions among contributing elements. To assess the safety risks, a dynamic safety analysis of the accident is, thus, necessary. In this thesis, a dynamic integrated safety analysis model is proposed and developed to capture both planned and evolving risks during the various stages of decommissioning. First, failure data are obtained from multiple sources and processed using Hierarchical Bayesian Analysis. Then, the system failure and potential accident scenarios are built on a bowtie model, which is mapped into a Bayesian network with advanced relaxation techniques. The Dynamic Integrated Safety Analysis (DISA) framework combines reliability tools to identify safety-critical causal factors and their evolution into a single undesirable failure, utilising source-to-source variability, time-dependent prediction, diagnostics, and economic risk assessment to support effective recommendations and decision-making. The DISA framework is applied to the Elgin platform well abandonment and the Brent Alpha jacket structure decommissioning, and the results are validated through sensitivity analysis. Through a dynamic-diagnostic and multi-factor regression analysis, the loss values of accident contributory factors are also presented. The study shows that integrating Hierarchical Bayesian Analysis (HBA) and dynamic Bayesian networks (DBN) to model time-variant risks is essential to achieving a well-informed decommissioning decision through the identification of safety-critical barriers that can be mitigated to drive down the cost of remediation.
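To see how a bowtie's fault-tree side maps onto a probabilistic network, consider a toy OR gate over three hypothetical basic events. This is only an illustrative sketch: the event names and probabilities are invented, and exact inference is done by brute-force enumeration rather than the bowtie-to-Bayesian-network relaxation techniques the thesis actually applies.

```python
# A minimal sketch of turning a fault-tree gate into a probability query:
# each basic event is a root node with a prior failure probability, and a
# gate is a deterministic child node over the basic events. Numbers and
# event names below are illustrative, not from the thesis.
from itertools import product

priors = {"corrosion": 0.02, "dropped_object": 0.01, "valve_failure": 0.05}

def or_gate(states):
    return int(any(states))   # top event occurs if any basic event occurs

def and_gate(states):
    return int(all(states))   # top event occurs only if all occur

def top_event_probability(gate):
    """Exact inference by enumerating the (small) joint distribution."""
    events = list(priors)
    total = 0.0
    for states in product([0, 1], repeat=len(events)):
        p = 1.0
        for e, s in zip(events, states):
            p *= priors[e] if s else 1 - priors[e]
        if gate(states):
            total += p
    return total

p_top = top_event_probability(or_gate)
print(round(p_top, 6))  # equals 1 - (0.98 * 0.99 * 0.95) → 0.07831
```

Enumeration is exponential in the number of basic events; Bayesian-network engines make the same query tractable on realistic bowties, and conditioning the network on an observed top event additionally yields the diagnostic (cause-posterior) quantities the thesis reports.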
Formally justified and modular Bayesian inference for probabilistic programs
Probabilistic modelling offers a simple and coherent framework to describe the
real world in the face of uncertainty. Furthermore, by applying Bayes' rule
it is possible to use probabilistic models to make inferences about the state of
the world from partial observations. While traditionally probabilistic models
were constructed on paper, more recently the approach of probabilistic
programming enables users to write the models in executable languages resembling
computer programs and to freely mix them with deterministic code.
It has long been recognised that the semantics of programming languages is
complicated and the intuitive understanding that programmers have is often
inaccurate, resulting in hard-to-understand bugs and unexpected program
behaviours. Programming languages are therefore studied in a rigorous way using
formal languages with mathematically defined semantics. Traditionally, formal
semantics of probabilistic programs are defined using exact inference results,
but in practice exact Bayesian inference is not tractable and approximate
methods are used instead, posing a question of how the results of these
algorithms relate to the exact results. Correctness of such approximate methods
is usually argued somewhat less rigorously, without reference to a formal
semantics.
In this dissertation we formally develop denotational semantics for
probabilistic programs that correspond to popular sampling algorithms often used
in practice. The semantics is defined for an expressive typed lambda calculus
with higher-order functions and inductive types, extended with probabilistic
effects for sampling and conditioning, allowing continuous distributions and
unbounded likelihoods. It makes crucial use of the recently developed formalism
of quasi-Borel spaces to bring all these elements together. We provide semantics
corresponding to several variants of Markov chain Monte Carlo and Sequential
Monte Carlo methods and formally prove a notion of correctness for these
algorithms in the context of probabilistic programming.
We also show that the semantic construction can be directly mapped to an
implementation using established functional programming abstractions called
monad transformers. We develop a compact Haskell library for probabilistic
programming closely corresponding to the semantic construction, giving users a
high level of assurance in the correctness of the implementation. We also
demonstrate on a collection of benchmarks that the library offers performance
competitive with existing systems of similar scope.
An important property of our construction, both the semantics and the
implementation, is the high degree of modularity it offers. All the inference
algorithms are constructed by combining small building blocks in a setup where
the type system ensures correctness of compositions. We show that with basic
building blocks corresponding to vanilla Metropolis-Hastings and Sequential
Monte Carlo we can implement more advanced algorithms known in the literature,
such as Resample-Move Sequential Monte Carlo, Particle Marginal
Metropolis-Hastings, and Sequential Monte Carlo squared. These implementations
are very concise, reducing the effort required to produce them and the scope for
bugs. On top of that, our modular construction enables in some cases
deterministic testing of randomised inference algorithms, further increasing
reliability of the implementation.

Funding: Engineering and Physical Sciences Research Council, Cambridge Trust, Cambridge-Tuebingen programme.
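The library itself is written in Haskell on top of monad transformers. Purely as an illustration of the vanilla Metropolis-Hastings building block from which the modular algorithms are composed, here is a language-neutral sketch in Python; the model, its parameters, and all function names are assumptions for this example, not the dissertation's API.

```python
# A minimal sketch of vanilla Metropolis-Hastings over a probabilistic
# program: the program is represented by its unnormalised log-density,
# and the sampler random-walks over the latent variable, accepting
# proposals by the density ratio. Model and numbers are illustrative.
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def model(theta):
    """Unnormalised log-density: N(0,1) prior on theta,
    one observation y = 1.5 with an N(theta, 1) likelihood."""
    log_prior = -0.5 * theta ** 2
    log_lik = -0.5 * (1.5 - theta) ** 2
    return log_prior + log_lik

def metropolis_hastings(logdensity, steps=20000, scale=1.0):
    theta, samples = 0.0, []
    for _ in range(steps):
        proposal = theta + random.gauss(0.0, scale)  # symmetric proposal
        # Accept with probability min(1, density ratio).
        if math.log(random.random()) < logdensity(proposal) - logdensity(theta):
            theta = proposal
        samples.append(theta)
    return samples

samples = metropolis_hastings(model)
posterior_mean = sum(samples) / len(samples)
print(round(posterior_mean, 2))
```

For this conjugate Gaussian model the exact posterior is N(0.75, 0.5), which gives a quick sanity check on the sampler's output; the dissertation's point is that such checks can be replaced by formal correctness proofs and type-checked composition.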
Lifted graphical models: a survey
Lifted graphical models provide a language for expressing dependencies between different types of entities, their attributes, and their diverse relations, as well as techniques for probabilistic reasoning in such multi-relational domains. In this survey, we review a general form for a lifted graphical model, the par-factor graph, and show how a number of existing statistical relational representations map to this formalism. We discuss inference algorithms, including lifted inference algorithms, that efficiently compute the answers to probabilistic queries over such models. We also review work on learning lifted graphical models from data. There is a growing need for statistical relational models (whether they go by that name or another), as we are inundated with data that mix structured and unstructured content, with entities and relations extracted in a noisy manner from text, and with the need to reason effectively over such data. We hope that this synthesis of ideas from many different research groups will provide an accessible starting point for new researchers in this expanding field.
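As a reading aid for the par-factor formalism: a par-factor can be thought of as a triple of one shared potential, a list of parameterised atoms over logical variables, and a constraint; grounding substitutes domain constants for the logical variables and yields ordinary factors that all share that one potential. A minimal sketch, with invented predicate, potential, and domain names:

```python
# A minimal sketch of grounding one par-factor (potential, atoms,
# constraint) into ordinary factors. All names here are illustrative.
from itertools import product

domain = {"X": ["anna", "bob"], "Y": ["anna", "bob"]}

parfactor = {
    "potential": "phi_friends_smoke",          # one potential, shared by
    "atoms": ["friends(X,Y)", "smokes(X)", "smokes(Y)"],  # every grounding
    "constraint": lambda sub: sub["X"] != sub["Y"],
}

def ground(pf, domain):
    """Substitute domain constants for logical variables, keeping only
    substitutions that satisfy the par-factor's constraint."""
    logvars = sorted(domain)
    factors = []
    for values in product(*(domain[v] for v in logvars)):
        sub = dict(zip(logvars, values))
        if not pf["constraint"](sub):
            continue
        atoms = [a.replace("X", sub["X"]).replace("Y", sub["Y"])
                 for a in pf["atoms"]]
        factors.append((pf["potential"], atoms))
    return factors

for factor in ground(parfactor, domain):
    print(factor)
```

Both ground factors point at the same `phi_friends_smoke` potential; this parameter tying is exactly the structure that lifted inference exploits at query time and that lifted learning exploits when estimating parameters from data.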
Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future
Clark offers a powerful description of the brain as a prediction machine, one that promises progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).