
    Dynamic Bayesian Collective Awareness Models for a Network of Ego-Things

    A novel approach is proposed for multimodal collective awareness (CA) of multiple networked intelligent agents. Each agent is considered an Internet-of-Things (IoT) node equipped with machine learning capabilities; CA aims to provide the network with updated causal knowledge of the state of execution of each node's actions while performing a joint task, with particular attention to anomalies that can arise. Data-driven dynamic Bayesian models, learned from multisensory data recorded during the normal realization of a joint task (the agent network's experience), are used for distributed state estimation of agents and detection of abnormalities. A set of switching dynamic Bayesian network (DBN) models, collectively learned in a training phase and each related to a particular sensory modality, allows each agent in the network to perform synchronous estimation of possible abnormalities occurring when a new task of the same type is jointly performed. Collective DBN (CDBN) learning is performed by unsupervised clustering of generalized errors (GEs) obtained from a starting generalized model. A growing neural gas (GNG) algorithm is used as a basis to learn the discrete switching variables at the semantic level. Conditional probabilities linking nodes in the CDBN models are estimated using the obtained clusters. CDBN models are associated with a Bayesian inference method, namely the distributed Markov jump particle filter (D-MJPF), employed for joint state estimation and abnormality detection. The effects of networking protocols and of communications on the estimation of states and abnormalities are analyzed. Performance is evaluated on a small network of two autonomous vehicles performing joint navigation tasks in a controlled environment. In the proposed method, the sharing of observations is first considered under ideal conditions, and then the effects of a wireless communication channel on the collective abnormality estimation of the agents are analyzed. A Rician wireless channel and two protocols (IEEE 802.11p and IEEE 802.15.4) under different channel conditions are also considered.
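
    To make the inference stage concrete, the sketch below shows a single-agent Markov jump particle filter with an innovation-based abnormality score, roughly in the spirit of the D-MJPF described in the abstract. The number of modes, the transition matrix, the per-mode dynamics, the noise levels, and the score definition are all illustrative assumptions, not the paper's learned models (which are obtained from GNG clusters and shared across agents).

```python
import numpy as np

# Minimal sketch of a Markov jump particle filter (MJPF) for joint
# state estimation and abnormality detection. All dimensions, dynamics,
# and the abnormality score are illustrative assumptions; the paper's
# D-MJPF additionally shares observations across networked agents.

rng = np.random.default_rng(0)

N = 200                      # number of particles
K = 3                        # number of discrete modes (learned clusters)
T = np.full((K, K), 0.1)     # mode transition matrix (assumed)
np.fill_diagonal(T, 0.8)
A = [np.eye(2) * (1 + 0.01 * k) for k in range(K)]  # per-mode dynamics (assumed)
Q, R = 0.05, 0.1             # process / measurement noise std (assumed)

modes = rng.integers(0, K, size=N)
states = rng.normal(0.0, 1.0, size=(N, 2))

def mjpf_step(z):
    """One MJPF update: propagate modes and states, weight, resample.

    Returns the state estimate and an abnormality score (negative log
    of the average observation likelihood over the particle set)."""
    global modes, states
    # Discrete transition: sample the next mode for each particle.
    modes = np.array([rng.choice(K, p=T[m]) for m in modes])
    # Continuous prediction under each particle's mode-specific dynamics.
    states = np.einsum('nij,nj->ni',
                       np.stack([A[m] for m in modes]), states)
    states += rng.normal(0.0, Q, size=states.shape)
    # Observation likelihood (isotropic Gaussian around the measurement).
    d2 = np.sum((states - z) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / R ** 2)
    abnormality = -np.log(w.mean() + 1e-12)  # high when no mode explains z
    w = np.maximum(w, 1e-300)
    w /= w.sum()
    # Resample particles in proportion to their weights.
    idx = rng.choice(N, size=N, p=w)
    modes, states = modes[idx], states[idx]
    return states.mean(axis=0), abnormality
```

    Fed a trajectory consistent with the assumed dynamics, the abnormality signal stays low; an observation that no mode can explain makes it spike, which is the basic mechanism behind detecting never-experienced situations.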

    Learning probabilistic interaction models

    We live in a multi-modal world; it therefore comes as no surprise that the human brain is tailored for the integration of multi-sensory input. Inspired by the human brain, multi-sensory data are used in Artificial Intelligence (AI) to teach different concepts to computers. Autonomous Agents (AAs) are AI systems that sense and act autonomously in complex dynamic environments. Such agents can build up Self-Awareness (SA) by describing their experiences through multi-sensory information with appropriate models and correlating them incrementally with the currently perceived situation, continuously expanding their knowledge. This thesis proposes methods to learn such awareness models for AAs. These models include SA and situational-awareness models that allow an agent to perceive and understand itself (self variables) and its surrounding environment (external variables) at the same time. An agent is considered self-aware when it can dynamically observe and understand itself and its surroundings through different proprioceptive and exteroceptive sensors, which facilitates learning and maintaining a contextual representation by processing the observed multi-sensory data. We propose a probabilistic framework for generative and descriptive dynamic models that can lead to a computationally efficient SA system. In general, generative models facilitate the prediction of future states, while descriptive models enable selection of the representation that best fits the current observation. The proposed framework employs Probabilistic Graphical Models (PGMs), such as Dynamic Bayesian Networks (DBNs), that represent a set of variables and their conditional dependencies. Once this probabilistic representation is obtained, it allows the agent to model interactions between itself, as observed through proprioceptive sensors, and the environment, as observed through exteroceptive sensors. To develop an awareness system, an agent needs not only to recognize normal states and perform predictions accordingly, but also to detect abnormal states with respect to its previously learned knowledge. There is therefore a need to measure anomalies or irregularities in an observed situation: the agent should be aware that an abnormality (i.e., a non-stationary condition) never experienced before is currently present. Because the proposed representation models multi-sensory data in a uniform interaction model, this work not only improves predictions of future events but can also potentially be used to effectuate a transfer-learning process in which information related to the learned model is transferred to and interpreted by another body.
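
    The generative-prediction-plus-anomaly idea can be illustrated with the simplest DBN instance, a linear-Gaussian model (Kalman filter), where the normalized innovation serves as the abnormality measure. This is a rough sketch under assumed matrices A, C, Q, R, not the thesis's actual learned models, which handle multiple sensory modalities and switching behaviors.

```python
import numpy as np

# Illustrative sketch: a linear-Gaussian DBN (Kalman filter) used as a
# generative model; the squared Mahalanobis distance of the innovation
# acts as an abnormality measure. All matrices are assumptions.

A = np.array([[1.0, 1.0], [0.0, 1.0]])  # state transition (position, velocity)
C = np.array([[1.0, 0.0]])              # observe position only
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.1]])                   # measurement noise covariance

x = np.zeros(2)   # state mean
P = np.eye(2)     # state covariance

def step(z):
    """Predict, score the innovation-based abnormality, then update.

    `z` is a length-1 observation vector."""
    global x, P
    # Generative prediction of the next state and observation.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    z_pred = C @ x_pred
    S = C @ P_pred @ C.T + R                          # innovation covariance
    nu = z - z_pred                                   # innovation (prediction error)
    abnormality = float(nu @ np.linalg.solve(S, nu))  # squared Mahalanobis distance
    # Standard Kalman update with the new observation.
    K = P_pred @ C.T @ np.linalg.inv(S)
    x = x_pred + K @ nu
    P = (np.eye(2) - K @ C) @ P_pred
    return abnormality

# An observation far from what the learned dynamics predict yields a
# large abnormality value, signalling a never-experienced condition.
```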