    Interactive Sensing in Social Networks

    This paper presents models and algorithms for interactive sensing in social networks, where individuals act as sensors and the information exchange between individuals is exploited to optimize sensing. Social learning is used to model the interaction between individuals that aim to estimate an underlying state of nature. In this context, the following questions are addressed: How can self-interested agents that interact via social learning achieve a tradeoff between individual privacy and the reputation of the social group? How can protocols be designed to prevent data incest in online reputation blogs where individuals make recommendations? How can sensing by individuals that interact with each other be used by a global decision maker to detect changes in the underlying state of nature? When individual agents possess limited sensing, computation and communication capabilities, can a network of agents achieve sophisticated global behavior? Social and game-theoretic learning are natural settings for addressing these questions. This article presents an overview, insights and discussion of social learning models in the context of data incest propagation, change detection and coordination of decision making.
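
    The abstract above frames social learning as repeated Bayesian belief updates by interacting agents. The snippet below is a minimal sketch of one such update for a finite state of nature; the state space, prior, likelihood matrix and observation stream are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def social_learning_step(belief, likelihood, observation):
    """One Bayesian update of a belief over a finite state of nature.

    belief      : (num_states,) current public belief
    likelihood  : (num_states, num_obs) P(observation | state)
    observation : index of the observed symbol
    """
    posterior = belief * likelihood[:, observation]
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])            # two hypotheses, uniform prior
L = np.array([[0.8, 0.2],                # P(obs | state 0)
              [0.3, 0.7]])               # P(obs | state 1)
for obs in [0, 0, 1, 0]:                 # stream of (assumed) observations
    belief = social_learning_step(belief, L, obs)
print(belief)                            # mass shifts toward state 0
```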

    Distributed Learning for Cooperative Inference

    We study the problem of cooperative inference, where a group of agents interact over a network and seek to estimate a joint parameter that best explains a set of observations. Agents do not know the network topology or the observations of other agents. We explore a variational interpretation of the Bayesian posterior density, and its relation to the stochastic mirror descent algorithm, to propose a new distributed learning algorithm. We show that, under appropriate assumptions, the beliefs generated by the proposed algorithm concentrate around the true parameter exponentially fast. We provide explicit non-asymptotic bounds for the convergence rate. Moreover, we develop explicit and computationally efficient algorithms for observation models belonging to exponential families.
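
    As a rough illustration of the belief dynamics described above, the sketch below runs synchronous rounds of geometric (log-linear) averaging over neighbors followed by a local Bayesian update, a common form of distributed non-Bayesian learning related to stochastic mirror descent. The mixing matrix, likelihoods and hypotheses are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def distributed_update(beliefs, weights, likelihoods):
    """One synchronous round for all agents.

    beliefs     : (n_agents, n_params) current belief of each agent
    weights     : (n_agents, n_agents) row-stochastic mixing matrix
    likelihoods : (n_agents, n_params) likelihood of each agent's local data
    """
    mixed = np.exp(weights @ np.log(beliefs))        # geometric averaging
    updated = mixed * likelihoods                    # local Bayesian step
    return updated / updated.sum(axis=1, keepdims=True)

beliefs = np.full((3, 2), 0.5)                       # 3 agents, 2 hypotheses
W = np.array([[0.50, 0.50, 0.00],                    # path-graph mixing weights
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
lik = np.array([[0.9, 0.1], [0.6, 0.4], [0.7, 0.3]]) # all local data favor hypothesis 0
for _ in range(10):
    beliefs = distributed_update(beliefs, W, lik)
print(beliefs)                                       # beliefs concentrate on hypothesis 0
```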

    V2X System Architecture Utilizing Hybrid Gaussian Process-based Model Structures

    Scalable communication is of utmost importance for reliable dissemination of time-sensitive information in cooperative vehicular ad-hoc networks (VANETs), which is, in turn, an essential prerequisite for the proper operation of critical cooperative safety applications. Model-based communication (MBC) is a recently explored scalability solution proposed in the literature, which has shown promising potential to reduce channel congestion to a great extent. In this work, based on the MBC notion, a technology-agnostic hybrid model selection policy for Vehicle-to-Everything (V2X) communication is proposed that benefits from the characteristics of non-parametric Bayesian inference techniques, specifically Gaussian Processes. The results show the effectiveness of the proposed communication architecture in both reducing the required message exchange rate and increasing the remote agent tracking precision. Comment: Accepted for oral presentation at the 13th IEEE Systems Conference (SysCon 2019).
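
    To make the model-based communication idea concrete, here is a toy sketch in which a vehicle fits a Gaussian process to recent position samples so that a remote agent can extrapolate the trajectory from the model between messages. The kernel choice, sampling rate and trajectory are assumptions; this is not the paper's hybrid architecture.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.linspace(0.0, 4.0, 20).reshape(-1, 1)           # time stamps (s)
x = 12.0 * t.ravel() + 0.3 * np.sin(3.0 * t.ravel())   # longitudinal position (m)

# Sender side: fit a GP model to the recent trajectory samples.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(t, x)

# Receiver side: predict positions between message arrivals from the model.
t_future = np.array([[4.2], [4.4], [4.6]])
mean, std = gp.predict(t_future, return_std=True)
print(mean, std)                                        # predictions with uncertainty
```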

    Cooperative Hierarchical Dirichlet Processes: Superposition vs. Maximization

    The cooperative hierarchical structure is a common and significant data structure observed in, or adopted by, many research areas, such as text mining (author-paper-word) and multi-label classification (label-instance-feature). Renowned Bayesian approaches for cooperative hierarchical structure modeling are mostly based on topic models. However, these approaches suffer from a serious issue: the number of hidden topics/factors needs to be fixed in advance, and an inappropriate number may lead to overfitting or underfitting. One elegant way to resolve this issue is Bayesian nonparametric learning, but existing work in this area still cannot be applied to cooperative hierarchical structure modeling. In this paper, we propose a cooperative hierarchical Dirichlet process (CHDP) to fill this gap. Each node in a cooperative hierarchical structure is assigned a Dirichlet process to model its weights on the infinite hidden factors/topics. Together with measure inheritance from the hierarchical Dirichlet process, two kinds of measure cooperation, i.e., superposition and maximization, are defined to capture the many-to-many relationships in the cooperative hierarchical structure. Furthermore, two constructive representations for CHDP, i.e., stick-breaking and the international restaurant process, are designed to facilitate model inference. Experiments on synthetic and real-world data with cooperative hierarchical structures demonstrate the properties and abilities of CHDP for cooperative hierarchical structure modeling and its potential for practical application scenarios.
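
    The stick-breaking representation mentioned above builds on the standard Dirichlet process construction. The sketch below shows only that building block with a finite truncation; the concentration parameter and truncation level are assumptions, and the CHDP-specific superposition/maximization cooperation is not reproduced.

```python
import numpy as np

def stick_breaking(alpha, truncation, rng=np.random.default_rng(0)):
    """Truncated stick-breaking weights for a single Dirichlet process."""
    betas = rng.beta(1.0, alpha, size=truncation)                  # stick fractions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining                                       # atom weights

weights = stick_breaking(alpha=2.0, truncation=25)
print(weights.sum())   # close to 1; the residual mass lies beyond the truncation
```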

    Location-Based Reasoning about Complex Multi-Agent Behavior

    Recent research has shown that surprisingly rich models of human activity can be learned from GPS (positional) data. However, most effort to date has concentrated on modeling single individuals or statistical properties of groups of people. Moreover, prior work focused solely on modeling actual successful executions (and not failed or attempted executions) of the activities of interest. We, in contrast, take on the task of understanding human interactions, attempted interactions, and intentions from noisy sensor data in a fully relational multi-agent setting. We use a real-world game of capture the flag to illustrate our approach in a well-defined domain that involves many distinct cooperative and competitive joint activities. We model the domain using Markov logic, a statistical-relational language, and learn a theory that jointly denoises the data and infers occurrences of high-level activities, such as a player capturing an enemy. Our unified model combines constraints imposed by the geometry of the game area, the motion model of the players, and the rules and dynamics of the game in a probabilistically and logically sound fashion. We show that while it may be impossible to directly detect a multi-agent activity due to sensor noise or malfunction, the occurrence of the activity can still be inferred by considering both its impact on the future behaviors of the people involved and the events that could have preceded it. Further, we show that given a model of successfully performed multi-agent activities, along with a set of examples of failed attempts at the same activities, our system automatically learns an augmented model that is capable of recognizing success and failure, as well as the goals of people's actions, with high accuracy. We compare our approach with other alternatives and show that our unified model, which takes into account not only relationships among individual players but also relationships among activities over the entire length of a game, although more computationally costly, is significantly more accurate. Finally, we demonstrate that explicitly modeling unsuccessful attempts boosts performance on other important recognition tasks.
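
    The Markov logic formulation described above scores candidate interpretations of the noisy data with weighted logical formulas. The toy sketch below shows only that log-linear scoring flavor on two hand-built candidate worlds; the formulas, weights and predicates are illustrative assumptions and bear no relation to the actual learned theory.

```python
def log_score(world, weighted_formulas):
    """Sum the weights of the formulas satisfied by a candidate world."""
    return sum(w for formula, w in weighted_formulas if formula(world))

# Hypothetical soft constraints: capturing requires proximity, and a captured
# player stops moving.
formulas = [
    (lambda w: (not w["captured"]) or w["near_enemy"], 2.0),
    (lambda w: (not w["captured"]) or w["stopped"], 1.5),
]
worlds = [
    {"near_enemy": True, "stopped": True, "captured": True},
    {"near_enemy": False, "stopped": True, "captured": True},
]
best = max(worlds, key=lambda w: log_score(w, formulas))
print(best)   # the interpretation consistent with both constraints wins
```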

    Distributed Detection via Bayesian Updates and Consensus

    In this paper, we discuss a class of distributed detection algorithms that can be viewed as implementations of Bayes' law in distributed settings. Some of the algorithms have been proposed recently in the literature, while others are first developed in this paper. The common feature of these algorithms is that they all combine (i) certain kinds of consensus protocols with (ii) Bayesian updates. They differ mainly in the type of consensus protocol and in the order of the two operations. After discussing their similarities and differences, we compare these distributed algorithms through numerical examples. We focus on the rate at which these algorithms detect the underlying true state of an object. We find that (a) the algorithms that use consensus via geometric averaging are more efficient than those via arithmetic averaging; (b) the order of consensus aggregation and Bayesian updating does not appreciably influence the performance of the algorithms; (c) the existence of communication delay dramatically slows down the rate of convergence; (d) more communication between agents with different signal structures improves the rate of convergence. Comment: 6 pages, 3 figures. This paper has been submitted to the Chinese Control Conference 2015 at Hangzhou, People's Republic of China.
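
    The comparison in finding (a) is easy to state in code: beliefs are fused over neighbors by an arithmetic or a geometric (log-domain) average and then refined with local likelihoods. The mixing matrix, likelihoods and states below are illustrative assumptions, not the algorithms evaluated in the paper.

```python
import numpy as np

def arithmetic_consensus(beliefs, W):
    mixed = W @ beliefs                              # average probabilities
    return mixed / mixed.sum(axis=1, keepdims=True)

def geometric_consensus(beliefs, W):
    mixed = np.exp(W @ np.log(beliefs))              # average log-beliefs
    return mixed / mixed.sum(axis=1, keepdims=True)

def bayes_step(beliefs, likelihoods):
    post = beliefs * likelihoods
    return post / post.sum(axis=1, keepdims=True)

W = np.array([[0.6, 0.4], [0.4, 0.6]])               # two communicating agents
lik = np.array([[0.8, 0.2], [0.55, 0.45]])           # both observations favor state 0
beliefs = np.full((2, 2), 0.5)
for _ in range(5):
    beliefs = bayes_step(geometric_consensus(beliefs, W), lik)
print(beliefs)                                       # both agents settle on state 0
```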

    Intelligence and Cooperative Search by Coupled Local Minimizers

    We show how coupling of local optimization processes can lead to better solutions than multi-start local optimization consisting of independent runs. This is achieved by minimizing the average energy cost of the ensemble, subject to synchronization constraints between the state vectors of the individual local minimizers. From an augmented Lagrangian that incorporates the synchronization constraints as both soft and hard constraints, a network is derived wherein the local minimizers interact and exchange information through the synchronization constraints. From the viewpoint of neural networks, the array can be considered as a Lagrange programming network for continuous optimization and as a cellular neural network (CNN). The penalty weights associated with the soft state synchronization constraints follow from the solution to a linear program, which expresses that the energy cost of the ensemble should decrease as much as possible. In this way successful local minimizers can implicitly impose their state on the others through a mechanism of master-slave dynamics, resulting in a cooperative search mechanism. Improved information spreading within the ensemble is obtained by applying the concept of small-world networks. This work suggests, in an interdisciplinary context, the importance of information exchange and state synchronization within ensembles, towards issues such as evolution, collective behaviour, optimality and intelligence. Comment: 25 pages, 10 figures.
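
    A toy version of the coupling mechanism is sketched below: several gradient-based minimizers descend a double-well cost while a soft synchronization term pulls each state toward the ensemble mean. The cost function, coupling weight and step size are assumptions; the augmented Lagrangian network, master-slave dynamics and small-world coupling of the paper are not reproduced.

```python
import numpy as np

def cost_grad(x):
    """Gradient of the double-well cost f(x) = x**4 - 3*x**2 + x."""
    return 4.0 * x**3 - 6.0 * x + 1.0

rng = np.random.default_rng(1)
states = rng.uniform(-2.0, 2.0, size=8)          # ensemble of local minimizers
step, coupling = 0.01, 0.5                       # assumed step size and penalty weight
for _ in range(2000):
    pull = coupling * (states - states.mean())   # soft synchronization term
    states -= step * (cost_grad(states) + pull)
print(states)   # each state settles in a well, biased toward the ensemble mean
```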

    Intelligent Wireless Communications Enabled by Cognitive Radio and Machine Learning

    The ability to intelligently utilize resources to meet the needs of growing diversity in services and user behavior marks the future of wireless communication systems. Intelligent wireless communications aims at enabling the system to perceive and assess the available resources, to autonomously learn to adapt to the perceived wireless environment, and to reconfigure its operating mode to maximize the utility of the available resources. The perception capability and reconfigurability are the essential features of cognitive radio, while modern machine learning techniques show great potential for system adaptation. In this paper, we discuss the development of cognitive radio technology and machine learning techniques and emphasize their roles in improving the spectrum and energy utility of wireless communication systems. We describe the state of the art of relevant techniques, covering spectrum sensing and access approaches and powerful machine learning algorithms that enable spectrum- and energy-efficient communications in dynamic wireless environments. We also present practical applications of these techniques and identify further research challenges in cognitive radio and machine learning as applied to existing and future wireless communication systems.
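
    As one concrete example of the spectrum sensing approaches surveyed, the sketch below implements simple energy detection: a secondary user compares the received signal power against a threshold to decide whether the band is occupied. The noise level, threshold and signals are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, noise_var = 1000, 1.0
threshold = 1.2 * noise_var                      # assumed detection threshold

def channel_busy(received):
    """Energy detector: declare the band busy if sample power exceeds the threshold."""
    return np.mean(received**2) > threshold

noise_only = rng.normal(0.0, np.sqrt(noise_var), n_samples)
primary_signal = np.sin(2.0 * np.pi * 0.05 * np.arange(n_samples))
print(channel_busy(noise_only), channel_busy(noise_only + primary_signal))
# expected output in typical runs: False True
```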

    Decentralized Bayesian Learning over Graphs

    We propose a decentralized learning algorithm over a general social network. The algorithm leaves the training data distributed on the mobile devices while utilizing a peer-to-peer model aggregation method. The proposed algorithm allows agents with local data to learn a shared model explaining the global training data in a decentralized fashion. The proposed algorithm can be viewed as a Bayesian and peer-to-peer variant of federated learning in which each agent keeps a "posterior probability distribution" over the global model parameters. Each agent updates its "posterior" based on (1) its local training data and (2) asynchronous communication and model aggregation with its 1-hop neighbors. This Bayesian formulation allows for a systematic treatment of model aggregation over any arbitrary connected graph. Furthermore, it provides strong analytic guarantees on convergence in the realizable case as well as a closed-form characterization of the rate of convergence. We also show that our methodology can be combined with efficient Bayesian inference techniques to train Bayesian neural networks in a decentralized manner. Through empirical studies, we show that our theoretical analysis can guide the design of network/social interactions and data partitioning to achieve convergence.
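
    The sketch below illustrates the aggregation flavor described above under simplifying assumptions: each agent keeps a Gaussian "posterior" over a scalar parameter, refines it with local data via a conjugate update, and then gossips natural parameters with its 1-hop neighbors. The graph, prior, noise model and update schedule are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def local_bayes(mean, prec, data, noise_prec=1.0):
    """Conjugate Gaussian update of (mean, precision) with local observations."""
    new_prec = prec + noise_prec * len(data)
    new_mean = (prec * mean + noise_prec * np.sum(data)) / new_prec
    return new_mean, new_prec

def neighbor_average(means, precs, neighbors):
    """Average natural parameters over each agent's closed neighborhood."""
    precs = np.array(precs)
    etas = np.array(means) * precs
    out_means, out_precs = [], []
    for i, nbrs in enumerate(neighbors):
        idx = [i] + list(nbrs)
        lam = precs[idx].mean()
        out_precs.append(lam)
        out_means.append(etas[idx].mean() / lam)
    return out_means, out_precs

neighbors = [[1], [0, 2], [1]]                          # path graph, 3 agents
means, precs = [0.0, 0.0, 0.0], [0.1, 0.1, 0.1]         # weak common prior
data = [np.array([1.9, 2.1]), np.array([2.2]), np.array([1.8, 2.0, 2.1])]
means, precs = zip(*[local_bayes(m, p, d) for m, p, d in zip(means, precs, data)])
for _ in range(5):                                      # gossip rounds
    means, precs = neighbor_average(means, precs, neighbors)
print(means)   # the agents agree on a posterior mean near the shared value ~1.9
```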

    Dependency Networks for Collaborative Filtering and Data Visualization

    We describe a graphical model for probabilistic relationships, called a dependency network, which is an alternative to the Bayesian network. The graph of a dependency network, unlike a Bayesian network, is potentially cyclic. The probability component of a dependency network, like a Bayesian network, is a set of conditional distributions, one for each node given its parents. We identify several basic properties of this representation and describe a computationally efficient procedure for learning the graph and probability components from data. We describe the application of this representation to probabilistic inference, collaborative filtering (the task of predicting preferences), and the visualization of acausal predictive relationships. Comment: Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI 2000).
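
    A minimal sketch of the dependency-network idea follows: one conditional model per variable given all the others, learned independently, then an ordered pseudo-Gibbs sweep to fill in preferences. The synthetic data and the choice of logistic regression as the conditional learner are assumptions; the paper's learning procedure is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = (rng.random((200, 4)) > 0.5).astype(int)     # binary "preference" matrix
noisy = rng.random(200) < 0.1
X[:, 3] = np.where(noisy, 1 - X[:, 0], X[:, 0])  # item 3 mostly follows item 0

# Learn one conditional distribution per variable given all the others.
conditionals = [LogisticRegression().fit(np.delete(X, j, axis=1), X[:, j])
                for j in range(X.shape[1])]

def pseudo_gibbs(x, sweeps=5):
    """Ordered pseudo-Gibbs sweep through the per-variable conditionals."""
    x = x.copy()
    for _ in range(sweeps):
        for j, clf in enumerate(conditionals):
            others = np.delete(x, j).reshape(1, -1)
            x[j] = int(clf.predict_proba(others)[0, 1] > 0.5)
    return x

print(pseudo_gibbs(np.array([1, 0, 1, 0])))      # items 0 and 3 come out consistent
```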