
    Non Parametric Distributed Inference in Sensor Networks Using Box Particles Messages

    This paper deals with the problem of inference in distributed systems where the probability model is stored in a distributed fashion. Graphical models provide powerful tools for modeling this kind of problem. Inspired by the box particle filter, which combines interval analysis with particle filtering to solve temporal inference problems, this paper introduces a belief propagation-like message-passing algorithm that uses bounded-error methods to solve the inference problem defined on an arbitrary graphical model. We show the theoretical derivation of the novel algorithm and test its performance on the problem of calibration in wireless sensor networks, that is, the positioning of a number of randomly deployed sensors with respect to a reference defined by a set of anchor nodes whose positions are known a priori. The new algorithm, while achieving better or similar performance, offers an impressive reduction in the amount of information circulating in the network and in the required computation time.
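
To make the idea of bounded-error messages concrete, here is a minimal 1-D sketch, not the authors' algorithm: beliefs are weighted intervals (boxes), and a message from a neighbour dilates that neighbour's boxes by the measured range interval and intersects them with the receiver's boxes. The function names and the anchor/unknown example are illustrative assumptions.

```python
# Hypothetical box-particle style message update for 1-D localization.
# Assumptions: beliefs are lists of (interval, weight); a range measurement
# between two nodes is known only within +/- err.

def intersect(a, b):
    """Intersection of two intervals; None if they do not overlap."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def message_update(receiver_boxes, sender_boxes, dist, err):
    """Contract the receiver's boxes using the sender's belief and the
    bounded-error range measurement dist +/- err."""
    updated = []
    for rb, rw in receiver_boxes:
        for sb, sw in sender_boxes:
            # The receiver lies at distance dist +/- err from the sender,
            # i.e. inside one of two dilated copies of the sender's box.
            for dilated in ((sb[0] - dist - err, sb[1] - dist + err),
                            (sb[0] + dist - err, sb[1] + dist + err)):
                box = intersect(rb, dilated)
                if box is not None:
                    updated.append((box, rw * sw))
    # Normalise weights so the belief remains a proper distribution.
    total = sum(w for _, w in updated) or 1.0
    return [(b, w / total) for b, w in updated]

# Example: an anchor at x = 0 (known exactly), an unknown node with prior
# interval [0, 10], and a measured range of 3 +/- 0.2.
anchor = [((0.0, 0.0), 1.0)]
unknown = [((0.0, 10.0), 1.0)]
print(message_update(unknown, anchor, dist=3.0, err=0.2))
# -> [((2.8, 3.2), 1.0)]: the unknown node's position interval contracts
```

The point of the interval representation is visible in the output: a single compact box replaces what would otherwise be a cloud of weighted point particles, which is where the reported savings in communicated information come from.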

    A Metamodel for Jason BDI Agents

    In this paper, a metamodel that can be used for modeling Belief-Desire-Intention (BDI) agents working on the Jason platform is introduced. The metamodel supports the modeling of agents including their belief bases, plans, sets of events, rules and actions. We believe that the work presented herein contributes to current multi-agent system (MAS) metamodeling efforts by taking into account another BDI agent platform which is not considered in the existing platform-specific MAS modeling approaches. A graphical concrete syntax and a modeling tool based on the proposed metamodel are also developed in this study. MAS models can be checked against the constraints originating from the Jason metamodel definitions, and hence conformance of the instance models is ensured by the tool. Use of the syntax and the modeling tool is demonstrated with the design of a cleaning robot, a well-known example of the Jason BDI architecture.
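
As a rough illustration of the containment relations the abstract lists (an agent owning a belief base, plans, events, rules and actions), here is a sketch in Python dataclasses. The class and field names are hypothetical and are not taken from the authors' metamodel or from Jason's AgentSpeak syntax.

```python
# Illustrative sketch of BDI metamodel concepts; names are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Belief:
    predicate: str              # e.g. "dirty(slot1)"

@dataclass
class Rule:
    head: str                   # derived belief
    body: List[str]             # conditions under which it holds

@dataclass
class Action:
    name: str                   # environment action, e.g. "suck"

@dataclass
class Plan:
    triggering_event: str       # e.g. "+dirty(X)"
    context: List[str]          # beliefs required for the plan to apply
    body: List[Action]          # actions executed when the plan is selected

@dataclass
class Agent:
    name: str
    belief_base: List[Belief] = field(default_factory=list)
    rules: List[Rule] = field(default_factory=list)
    plans: List[Plan] = field(default_factory=list)
    events: List[str] = field(default_factory=list)

# Instance model in the spirit of the cleaning-robot example.
robot = Agent(
    name="cleaner",
    belief_base=[Belief("at(robot, slot1)")],
    plans=[Plan("+dirty(X)", ["at(robot, X)"], [Action("suck")])],
)
```

A metamodel plays the same role as these class definitions: instance models (like `robot`) can be validated against the declared structure, which is what the conformance checking in the proposed tool provides.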

    On the Geometry of Message Passing Algorithms for Gaussian Reciprocal Processes

    Reciprocal processes are acausal generalizations of Markov processes introduced by Bernstein in 1932. In the literature, a significant amount of attention has been focused on developing dynamical models for reciprocal processes. Recently, probabilistic graphical models for reciprocal processes have been provided. This opens the way to applying efficient inference algorithms from the machine learning literature to solve the smoothing problem for reciprocal processes. Such algorithms are known to converge if the underlying graph is a tree. This is not the case for a reciprocal process, whose associated graphical model is a single-loop network. The contribution of this paper is twofold. First, we introduce belief propagation for Gaussian reciprocal processes. Second, we establish a link between convergence analysis of belief propagation for Gaussian reciprocal processes and stability theory for differentially positive systems.
    Comment: 15 pages; typos corrected; this paper introduces belief propagation for Gaussian reciprocal processes and extends the convergence analysis in arXiv:1603.04419 to the Gaussian case.
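
For readers unfamiliar with Gaussian belief propagation on a loopy graph, here is a minimal scalar sketch on a three-node cycle, the single-loop structure the abstract refers to. This is generic information-form Gaussian BP, not the authors' specific construction for reciprocal processes; the matrix J and vector h below are illustrative.

```python
# Scalar Gaussian BP on a 3-node cycle; model and values are assumptions.
import numpy as np

# Information form: p(x) proportional to exp(-0.5 x'Jx + h'x).
J = np.array([[3.0, 0.5, 0.4],
              [0.5, 3.0, 0.6],
              [0.4, 0.6, 3.0]])
h = np.array([1.0, 0.0, -1.0])
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

# Messages from i to j: a precision P and a potential H, initialised to zero.
P = {(i, j): 0.0 for i in neighbors for j in neighbors[i]}
H = {(i, j): 0.0 for i in neighbors for j in neighbors[i]}

for _ in range(50):                       # synchronous message updates
    P_new, H_new = {}, {}
    for i in neighbors:
        for j in neighbors[i]:
            # Aggregate node potential and all incoming messages except j's.
            Pi = J[i, i] + sum(P[(k, i)] for k in neighbors[i] if k != j)
            Hi = h[i] + sum(H[(k, i)] for k in neighbors[i] if k != j)
            P_new[(i, j)] = -J[i, j] ** 2 / Pi
            H_new[(i, j)] = -J[i, j] * Hi / Pi
    P, H = P_new, H_new

# Marginal means from the converged beliefs.
means_bp = np.array([
    (h[i] + sum(H[(k, i)] for k in neighbors[i]))
    / (J[i, i] + sum(P[(k, i)] for k in neighbors[i]))
    for i in neighbors
])
print(means_bp)                 # BP means
print(np.linalg.solve(J, h))    # exact means; Gaussian BP recovers them
                                # whenever the iteration converges
```

On a tree, these updates terminate exactly after one forward-backward sweep; on a single loop they become a fixed-point iteration, which is why the paper's convergence analysis (via differential positivity) is needed.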

    Modeling Scalability of Distributed Machine Learning

    Present-day machine learning is computationally intensive and processes large amounts of data. It is implemented in a distributed fashion in order to address these scalability issues, with the work parallelized across a number of computing nodes. It is usually hard to estimate in advance how many nodes to use for a particular workload. We propose a simple framework for estimating the scalability of distributed machine learning algorithms. We measure scalability by means of the speedup an algorithm achieves with more nodes. We propose time complexity models for gradient descent and graphical model inference. We validate our models with experiments on deep learning training and belief propagation. This framework was used to study the scalability of machine learning algorithms in Apache Spark.
    Comment: 6 pages, 4 figures, appears at ICDE 201
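
The flavour of such a time-complexity model can be conveyed with a small sketch. This is an illustrative model of the general kind the abstract describes, not the paper's actual model: per-iteration time splits into a compute part that is divided across n nodes and a synchronization part that grows with n, and scalability is reported as the speedup t(1)/t(n). The constants are made up.

```python
# Hypothetical scalability model; parameter values are assumptions.

def iteration_time(n, compute, comm_per_node):
    """Compute work is perfectly parallelised over n nodes, while the
    synchronisation cost (e.g. parameter aggregation in distributed
    gradient descent) is modelled as growing linearly with n."""
    return compute / n + comm_per_node * n

def speedup(n, compute, comm_per_node):
    """Speedup relative to running on a single node."""
    return iteration_time(1, compute, comm_per_node) / iteration_time(n, compute, comm_per_node)

for n in (1, 2, 4, 8, 16, 32):
    print(n, round(speedup(n, compute=100.0, comm_per_node=0.5), 2))
# Speedup peaks where compute/n and comm_per_node*n balance (here near n = 14),
# which is the "how many nodes should I use" estimate such a framework targets.
```

Fitting the model's constants from a few small runs and then extrapolating the speedup curve is the kind of workflow the proposed framework supports for gradient descent and belief propagation workloads.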