
    Distributed Robust Learning

    We propose a framework for distributed robust statistical learning on {\em big contaminated data}. The Distributed Robust Learning (DRL) framework can reduce the computational time of traditional robust learning methods by several orders of magnitude. We analyze the robustness property of DRL, showing that DRL not only preserves the robustness of the base robust learning method, but also tolerates contamination of a constant fraction of results from computing nodes (node failures). More precisely, even in the presence of the most adversarial outlier distribution over computing nodes, DRL still achieves a breakdown point of at least $\lambda^*/2$, where $\lambda^*$ is the breakdown point of the corresponding centralized algorithm. This is in stark contrast with a naive division-and-averaging implementation, which may reduce the breakdown point by a factor of $k$ when $k$ computing nodes are used. We then specialize the DRL framework for two concrete cases: distributed robust principal component analysis and distributed robust regression. We demonstrate the efficiency and the robustness advantages of DRL through comprehensive simulations and by predicting image tags on a large-scale image set.
    Comment: 18 pages, 2 figures
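    A minimal numerical sketch of the divide-and-robustly-aggregate idea, using a coordinate-wise median as a stand-in base estimator (the paper's actual instantiations are robust PCA and robust regression; the node count and contamination below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=(10_000, 3))   # clean data, true mean = 0

chunks = np.array_split(data, 10)               # 10 computing nodes
# Each node runs a robust base estimator on its share (here: median).
local = np.array([np.median(c, axis=0) for c in chunks])

local[:2] = 1e6    # simulate 2 adversarial/failed nodes returning garbage

# Naive averaging of node results: a few bad nodes ruin the estimate,
# shrinking the breakdown point by roughly a factor of k.
print(np.mean(local, axis=0))     # far from the truth
# Robust aggregation over nodes (median of local results) tolerates a
# constant fraction of node failures, as DRL's analysis guarantees.
print(np.median(local, axis=0))   # still close to 0
```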

    ALOHA Random Access that Operates as a Rateless Code

    Various applications of wireless Machine-to-Machine (M2M) communications have rekindled the research interest in random access protocols suitable to support a large number of connected devices. Slotted ALOHA and its derivatives represent a simple solution for distributed random access in wireless networks. Recently, a framed version of slotted ALOHA gained renewed interest due to the incorporation of successive interference cancellation (SIC) in the scheme, which resulted in substantially higher throughputs. Based on similar principles and inspired by the rateless coding paradigm, a frameless approach for distributed random access in the slotted ALOHA framework is described in this paper. The proposed approach shares an operational analogy with rateless coding, expressed both through the user access strategy and the adaptive length of the contention period, with the objective of ending the contention when the instantaneous throughput is maximized. The paper presents the related analysis, providing heuristic criteria for terminating the contention period and showing that very high throughputs can be achieved, even for a low number of contending users. The demonstrated results potentially have more direct practical implications compared to the approaches for coded random access that lead to high throughputs only asymptotically.
    Comment: Revised version submitted to IEEE Transactions on Communications
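    A toy collision-channel simulation of the frameless idea: users transmit in each slot with a fixed probability, singleton slots are decoded, SIC peels replicas of decoded users from collided slots, and the receiver ends contention with a simple heuristic. The access probability and termination threshold are illustrative assumptions, not the paper's optimized values:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100            # contending users
p = 3.0 / N        # per-slot access probability (assumed target degree ~3)

slots = []         # slot -> set of users that transmitted in it
resolved = set()
best_T, best_m = 0.0, 0

for m in range(1, 4 * N + 1):
    slots.append(set(np.nonzero(rng.random(N) < p)[0]))
    # Iterative SIC: a slot whose interference cancels down to a single
    # unresolved user yields that user; repeat until no progress.
    progress = True
    while progress:
        progress = False
        for users in slots:
            live = users - resolved
            if len(live) == 1:
                resolved.add(live.pop())
                progress = True
    T = len(resolved) / m               # instantaneous throughput
    if T > best_T:
        best_T, best_m = T, m
    if len(resolved) >= 0.9 * N:        # crude termination heuristic
        break

print(f"{len(resolved)}/{N} resolved; peak throughput {best_T:.2f} at slot {best_m}")
```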

    Decision and function problems based on boson sampling

    Boson sampling is a mathematical problem that is strongly believed to be intractable for classical computers, whereas passive linear interferometers can produce samples efficiently. So far, the problem remains a computational curiosity, and the possible usefulness of boson-sampling devices is mainly limited to the proof of quantum supremacy. The purpose of this work is to investigate whether boson sampling can be used as a resource for decision and function problems that are computationally hard, and may thus have cryptographic applications. After defining a rather general theoretical framework for the design of such problems, we discuss their solution by means of a brute-force numerical approach, as well as by means of non-boson samplers. Moreover, we estimate the sample sizes required for their solution by passive linear interferometers, and it is shown that they are independent of the size of the Hilbert space.
    Comment: Close to the version published in PR
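    The brute-force classical route the abstract alludes to reduces to evaluating matrix permanents: a photon-detection outcome has probability proportional to |perm(A)|^2 for a submatrix A of the interferometer's unitary, and the permanent's cost grows exponentially in the photon number. A small sketch using Ryser's formula (the interferometer and mode counts are illustrative):

```python
from itertools import combinations
import numpy as np

def permanent(A):
    """Ryser's formula; exponential in n, which is what makes exact
    classical simulation of boson sampling expensive."""
    n = A.shape[0]
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            total += (-1) ** r * np.prod(A[:, cols].sum(axis=1))
    return (-1) ** n * total

rng = np.random.default_rng(2)
m, n = 6, 3                      # modes, photons (toy sizes)
# Random unitary standing in for the interferometer (QR of a Ginibre matrix).
U = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))[0]
A = U[:n, :n]                    # submatrix fixed by input/output modes
print(abs(permanent(A)) ** 2)    # unnormalized outcome probability
```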

    On Counting Triangles through Edge Sampling in Large Dynamic Graphs

    Traditional frameworks for dynamic graphs have relied on processing only the stream of edges added into or deleted from an evolving graph, but not any additional related information such as the degrees or neighbor lists of nodes incident to the edges. In this paper, we propose a new edge sampling framework for big-graph analytics in dynamic graphs which enhances the traditional model by enabling the use of such additional information. To demonstrate the advantages of this framework, we present a new sampling algorithm, called Edge Sample and Discard (ESD). It generates an unbiased estimate of the total number of triangles, which can be continuously updated in response to both edge additions and deletions. We provide a comparative analysis of the performance of ESD against two current state-of-the-art algorithms in terms of accuracy and complexity. The results of the experiments performed on real graphs show that, with the help of the neighborhood information of the sampled edges, the accuracy achieved by our algorithm is substantially better. We also characterize the impact of graph properties on the performance of our algorithm by testing on several Barabási-Albert graphs.
    Comment: A short version of this article appeared in Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2017)
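    A hedged sketch of the sample-and-discard pattern: each streamed update is examined with probability q, the neighbor lists of its endpoints are queried (the extra information the framework permits), and a running estimate is adjusted by the number of triangles the edge closes or breaks, scaled by 1/q. The estimator weighting here follows a simplified reading, not necessarily ESD's exact rule:

```python
import random
from collections import defaultdict

q = 0.1                    # edge-sampling probability (assumed)
adj = defaultdict(set)     # evolving graph maintained by the graph system
estimate = 0.0

def process(op, u, v):
    """Handle one stream element: op is '+' (addition) or '-' (deletion)."""
    global estimate
    if random.random() < q:                  # sample the update ...
        closed = len(adj[u] & adj[v])        # ... and query neighbor lists
        estimate += closed / q if op == '+' else -closed / q
    if op == '+':                            # apply the update, then the
        adj[u].add(v); adj[v].add(u)         # sample itself is discarded
    else:
        adj[u].discard(v); adj[v].discard(u)

for u, v in [(1, 2), (2, 3), (1, 3), (3, 4), (1, 4), (2, 4)]:
    process('+', u, v)
print(estimate)    # unbiased estimate of the current triangle count (4 here)
```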

    Statistical inference framework for source detection of contagion processes on arbitrary network structures

    In this paper we introduce a statistical inference framework for estimating the contagion source from a partially observed contagion spreading process on an arbitrary network structure. The framework is based on maximum likelihood estimation of a partial epidemic realization and involves large-scale simulation of contagion spreading processes from the set of potential source locations. We present a number of different likelihood estimators that are used to determine the conditional probabilities associated with observing a partial epidemic realization under particular source-location candidates. This statistical inference framework is also applicable to arbitrary compartmental contagion spreading processes on networks. We compare the estimation accuracy of these approaches in a number of computational experiments performed with the SIR (susceptible-infected-recovered), SI (susceptible-infected) and ISS (ignorant-spreading-stifler) contagion spreading models on synthetic and real-world complex networks.
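    A minimal simulation-based sketch of the approach: simulate the spreading process from each candidate source many times and score candidates by how often the simulation reproduces the observation. The exact-match scoring is a crude stand-in for the paper's more refined likelihood estimators, and the epidemic parameters are assumptions:

```python
import random
import networkx as nx

def simulate_sir(G, source, beta=0.3, gamma=0.1, steps=10):
    """One discrete-time SIR realization; returns the ever-infected set."""
    infected, recovered = {source}, set()
    for _ in range(steps):
        new_inf = {v for u in infected for v in G[u]
                   if v not in infected and v not in recovered
                   and random.random() < beta}
        new_rec = {u for u in infected if random.random() < gamma}
        infected = (infected | new_inf) - new_rec
        recovered |= new_rec
    return infected | recovered

def ml_source(G, observed, candidates, n_sims=200):
    """Score candidates by the fraction of simulated realizations that
    match the observation; return the maximum-likelihood candidate."""
    scores = {s: sum(simulate_sir(G, s) == observed
                     for _ in range(n_sims)) / n_sims
              for s in candidates}
    return max(scores, key=scores.get)

random.seed(4)
G = nx.karate_club_graph()
observed = simulate_sir(G, source=0)     # hidden ground-truth source
print("estimated source:", ml_source(G, observed, list(G.nodes)))
```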

    Distributed Adaptive Networks: A Graphical Evolutionary Game-Theoretic View

    Distributed adaptive filtering has been considered an effective approach for data processing and estimation over distributed networks. Most existing distributed adaptive filtering algorithms focus on designing different information diffusion rules, while disregarding the natural evolutionary characteristics of a distributed network. In this paper, we study the adaptive network from a game-theoretic perspective and formulate the distributed adaptive filtering problem as a graphical evolutionary game. In the proposed formulation, the nodes in the network are regarded as players, and the local combination of estimation information from different neighbors is regarded as the selection among different strategies. We show that this graphical evolutionary game framework is very general and can unify the existing adaptive network algorithms. Based on this framework, as examples, we further propose two error-aware adaptive filtering algorithms. Moreover, we use graphical evolutionary game theory to analyze the information diffusion process over the adaptive networks and the evolutionarily stable strategy of the system. Finally, simulation results are shown to verify the effectiveness of our analysis and proposed methods.
    Comment: Accepted by IEEE Transactions on Signal Processing
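    A sketch of a diffusion (adapt-then-combine) LMS network in which each node weights neighbors by their recent error, loosely illustrating the error-aware flavor described in the abstract. The inverse-error combination rule, graph, and parameters are illustrative stand-ins, not the paper's algorithms:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, mu = 10, 4, 0.02                    # nodes, filter length, step size
w_true = rng.normal(size=M)               # common parameter to estimate
A = rng.random((N, N)) < 0.4              # random neighbor graph
A = A | A.T
np.fill_diagonal(A, True)

W = np.zeros((N, M))                      # per-node estimates
err = np.ones(N)                          # running squared error per node

for _ in range(2000):
    psi = np.empty_like(W)
    for k in range(N):                    # adapt: local LMS update
        u = rng.normal(size=M)
        d = u @ w_true + 0.1 * rng.normal()
        e = d - u @ W[k]
        err[k] = 0.95 * err[k] + 0.05 * e ** 2
        psi[k] = W[k] + mu * e * u
    for k in range(N):                    # combine: error-aware "strategy"
        nb = np.nonzero(A[k])[0]
        c = 1.0 / (err[nb] + 1e-8)        # trust low-error neighbors more
        W[k] = (c[:, None] * psi[nb]).sum(axis=0) / c.sum()

print(np.linalg.norm(W - w_true, axis=1))  # per-node estimation error
```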