
    Randomized opinion dynamics over networks: influence estimation from partial observations

    In this paper, we propose a technique for the estimation of the influence matrix in a sparse social network in which n individuals communicate in a gossip fashion. At each step, a random subset of the social actors is active and interacts with randomly chosen neighbors. The opinions evolve according to a Friedkin and Johnsen mechanism, in which each individual updates their belief to a convex combination of their current belief, the beliefs of the agents they interact with, and their initial belief, or prejudice. Leveraging recent results on the estimation of vector autoregressive processes, we reconstruct the social network topology and the strength of the interconnections from partial observations of the interactions, thus removing one of the main drawbacks of finite-horizon techniques. The effectiveness of the proposed method is shown on randomly generated networks.
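The gossip-style Friedkin-Johnsen update described above can be sketched as follows. This is a minimal illustrative simulation, not the paper's estimation method: the network size, activation probability, mixing weight, and susceptibility values are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5                              # number of agents (hypothetical small example)
lam = np.full(n, 0.7)              # susceptibility; 1 - lam weights the prejudice
u = rng.random(n)                  # initial beliefs (prejudices)
x = u.copy()                       # current opinions

for _ in range(200):
    active = rng.random(n) < 0.3   # a random subset of agents is active (gossip)
    for i in np.flatnonzero(active):
        # interact with one randomly chosen neighbor (here: any other agent)
        j = rng.choice(np.delete(np.arange(n), i))
        w = 0.5                    # hypothetical interaction weight
        # Friedkin-Johnsen update: convex combination of the current belief,
        # the neighbor's belief, and the initial prejudice.
        x[i] = lam[i] * ((1 - w) * x[i] + w * x[j]) + (1 - lam[i]) * u[i]
```

Because every update is a convex combination, the opinions remain within the convex hull of the initial beliefs, a standard property of Friedkin-Johnsen dynamics.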

    Cloud-assisted Distributed Nonlinear Optimal Control for Dynamics over Graph

    Dynamics over graphs are large-scale systems in which the dynamic coupling among subsystems is modeled by a graph. Examples arise in spatially distributed systems (such as discretized PDEs), multi-agent control systems, and social dynamics. In this paper, we propose a cloud-assisted distributed algorithm to solve optimal control problems for nonlinear dynamics over graphs. Inspired by Hauser's centralized projection operator approach for optimal control, our main contribution is the design of a descent method in which, at each step, agents of a network compute a local descent direction and then obtain a new system trajectory through a distributed feedback controller. Such a controller, iteratively designed by a cloud, allows agents of the network to use only information from neighboring agents, thus resulting in a distributed projection operator over the graph. The main advantages of our globally convergent algorithm are dynamic feasibility at each iteration and numerical robustness (thanks to the closed-loop updates) even for unstable dynamics. To show the effectiveness of our strategy, we present numerical computations on a discretized model of Burgers' nonlinear partial differential equation.
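The projection operator idea underlying the method can be illustrated on a toy system. In a minimal sketch, assuming a hypothetical scalar unstable linear system and a fixed stabilizing gain (the paper treats nonlinear dynamics over graphs with a distributed, cloud-designed controller), a possibly infeasible state-input curve is mapped to a feasible trajectory by tracking it in closed loop:

```python
import numpy as np

# Hypothetical 1D discrete-time system x(t+1) = a*x(t) + b*u(t), unstable (a > 1).
a, b = 1.2, 1.0
K = 0.5          # stabilizing feedback gain: |a - b*K| = 0.7 < 1
T = 20

def project(alpha, mu, x0):
    """Hauser-style projection: track the (possibly infeasible) curve
    (alpha, mu) with the feedback u = mu + K*(alpha - x); the result is
    a trajectory that satisfies the dynamics at every step."""
    x = np.empty(T + 1)
    u = np.empty(T)
    x[0] = x0
    for t in range(T):
        u[t] = mu[t] + K * (alpha[t] - x[t])   # closed-loop input
        x[t + 1] = a * x[t] + b * u[t]         # propagate the true dynamics
    return x, u

# An arbitrary desired curve (here: rest at the origin) is mapped to a
# dynamically feasible trajectory despite the open-loop instability.
alpha = np.zeros(T + 1)
mu = np.zeros(T)
x, u = project(alpha, mu, x0=1.0)
```

The closed-loop update is what gives the numerical robustness mentioned in the abstract: even though the open-loop system diverges, the projected trajectory contracts toward the desired curve.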

    Interpretability and Explainability: A Machine Learning Zoo Mini-tour

    In this review, we examine the problem of designing interpretable and explainable machine learning models. Interpretability and explainability lie at the core of many machine learning and statistical applications in medicine, economics, law, and the natural sciences. Although interpretability and explainability have escaped a clear universal definition, many techniques motivated by these properties have been developed over the past 30 years, with the focus currently shifting towards deep learning methods. In this review, we emphasise the divide between interpretability and explainability and illustrate these two different research directions with concrete examples of the state of the art. The review is intended for a general machine learning audience interested in exploring the problems of interpretation and explanation beyond logistic regression or random forest variable importance. This work is not an exhaustive literature survey, but rather a primer focusing selectively on certain lines of research which the authors found interesting or informative.

    Learning Influence Structure in Sparse Social Networks

    No full text