
    Energy-aware Sparse Sensing of Spatial-temporally Correlated Random Fields

    This dissertation focuses on the development of theories and practices for energy-aware sparse sensing of random fields that are correlated in the space and/or time domains. The objective of sparse sensing is to reduce the number of sensing samples in the space and/or time domains, thereby reducing the energy consumption and complexity of the sensing system. Both centralized and decentralized sensing schemes are considered in this dissertation. First, we study the problem of energy-efficient level set estimation (LSE) of random fields correlated in time and/or space under a total power constraint. We consider uniform sampling schemes for a sensing system with a single sensor and for a linear sensor network with sensors distributed uniformly on a line, where sensors employ a fixed sampling rate to minimize the long-term LSE error probability. The exact analytical cost functions and their respective upper bounds for these sampling schemes are developed by using an optimum thresholding-based LSE algorithm. The design parameters of these sampling schemes are optimized by minimizing their respective cost functions. With the analytical results, we can identify the optimum sampling period and/or node distance that minimizes the LSE error probability. Second, we propose active sparse sensing schemes for LSE of a spatial-temporally correlated random field using a limited number of spatially distributed sensors. In these schemes a central controller dynamically selects a limited number of sensing locations according to the information revealed by past measurements, and the objective is to minimize the expected level set estimation error. The expected estimation error probability is explicitly expressed as a function of the selected sensing locations, and the result is used to formulate the optimal sensing location selection problem as a combinatorial problem. Two low-complexity greedy algorithms are developed by using analytical upper bounds of the expected estimation error probability. Lastly, we study distributed estimation of a spatially correlated random field with decentralized wireless sensor networks (WSNs). We propose a distributed iterative estimation algorithm that defines the procedures for both information propagation and local estimation in each iteration. The key parameters of the algorithm, including an edge weight matrix and a sample weight matrix, are designed following asymptotically optimum criteria. It is shown that the asymptotically optimum performance can be achieved by distributively projecting the measurement samples onto a subspace related to the covariance matrices of the data and noise samples.
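
    As a concrete illustration of the active sensing idea above, the sketch below greedily selects sensing locations for a Gaussian-process model of the field, using total posterior variance as a stand-in for the dissertation's upper bound on the expected LSE error probability. The covariance model, function names, and parameters are illustrative assumptions, not the dissertation's algorithm.

    ```python
    import numpy as np

    def rbf_cov(X, Y, length_scale=1.0):
        """Squared-exponential covariance between location sets X and Y."""
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale**2)

    def greedy_sensing_locations(candidates, k, noise_var=0.1, length_scale=1.0):
        """Greedily pick k sensing locations that most reduce posterior variance.

        A generic variance-reduction surrogate, used only to illustrate the
        flavor of greedy active sensing; the dissertation's schemes instead
        minimize upper bounds on the LSE error probability.
        """
        selected, remaining = [], list(range(len(candidates)))
        for _ in range(k):
            best_idx, best_post_var = None, np.inf
            for i in remaining:
                idx = selected + [i]
                S = candidates[idx]
                K = rbf_cov(S, S, length_scale) + noise_var * np.eye(len(idx))
                Kcs = rbf_cov(candidates, S, length_scale)
                # Remaining posterior variance over the whole candidate grid,
                # via the standard Gaussian-process posterior formula.
                post_var = len(candidates) - np.trace(Kcs @ np.linalg.solve(K, Kcs.T))
                if post_var < best_post_var:
                    best_idx, best_post_var = i, post_var
            selected.append(best_idx)
            remaining.remove(best_idx)
        return candidates[selected]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        grid = rng.uniform(0, 10, size=(60, 2))   # candidate sensing locations
        print(greedy_sensing_locations(grid, k=5))
    ```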

    Pac-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning

    This monograph deals with adaptive supervised classification, using tools borrowed from statistical mechanics and information theory, stemming from the PAC-Bayesian approach pioneered by David McAllester and applied to a conception of statistical learning theory forged by Vladimir Vapnik. Using convex analysis on the set of posterior probability measures, we show how to obtain local measures of the complexity of the classification model involving the relative entropy of posterior distributions with respect to Gibbs posterior measures. We then discuss relative bounds, comparing the generalization error of two classification rules, showing how the margin assumption of Mammen and Tsybakov can be replaced with some empirical measure of the covariance structure of the classification model. We show how to associate with any posterior distribution an effective temperature relating it to the Gibbs prior distribution with the same level of expected error rate, and how to estimate this effective temperature from data, resulting in an estimator whose expected error rate converges according to the best possible power of the sample size adaptively under any margin and parametric complexity assumptions. We describe and study an alternative selection scheme based on relative bounds between estimators, and present a two-step localization technique which can handle the selection of a parametric model from a family of such models. We show how to extend systematically all the results obtained in the inductive setting to transductive learning, and use this to improve Vapnik's generalization bounds, extending them to the case when the sample is made of independent non-identically distributed pairs of patterns and labels. Finally, we briefly review the construction of Support Vector Machines and show how to derive generalization bounds for them, measuring the complexity either through the number of support vectors or through the value of the transductive or inductive margin. Published in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/074921707000000391.
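
    For orientation, one standard bound in the PAC-Bayesian family (stated in a common McAllester/Maurer form rather than the monograph's sharper localized version) and the Gibbs posterior underlying the "effective temperature" viewpoint can be written as follows; the notation here is generic and not taken verbatim from the monograph.

    ```latex
    % With probability at least 1 - \delta over an i.i.d. sample of size n,
    % simultaneously for all posteriors \rho (for any data-independent prior \pi):
    \[
      \mathbb{E}_{\rho}[R(h)] \;\le\; \mathbb{E}_{\rho}[r(h)]
      + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}},
    \]
    % where R(h) and r(h) are the true and empirical error rates of classifier h.
    % The Gibbs posterior at inverse temperature \beta, to which any posterior is
    % related through an effective temperature, reweights the prior by the
    % empirical error:
    \[
      \pi_{-\beta}(d\theta) \;\propto\; \exp\{-\beta\, r(\theta)\}\, \pi(d\theta).
    \]
    ```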

    Self-interested service-oriented agents based on trust and QoS for dynamic reconfiguration

    The progressively increasing complexity of dynamic environments, in which services and applications are demanded by potential clients, requires a high level of reconfiguration of the offer to better match that ever-changing demand. In particular, dynamically changing and increasingly demanding client needs may require smart and flexible automatic composition of more elementary services. By leveraging the benefits of service-oriented architectures and multi-agent systems, the paper proposes a method to explore the flexibility of decision support for service reconfiguration based on several pillars, such as trust, reputation, and QoS models, which allows selection based on the measured expected performance of the agents. Preliminary experimental results, extracted from a real case scenario, highlight the benefits of the proposed distributed and flexible solution in balancing the workload of service providers in a simple and fast manner. The proposed solution includes the agents' intelligent decision-making capability to dynamically and autonomously change service selection on the fly, switching to more trustworthy services with better quality when unexpected events happen, e.g. broken machines. We then propose the use of competitive self-interested agents to provide the services that best suit the client through dynamic service composition.
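
    A minimal sketch of the kind of trust/reputation/QoS-based selection described above is given below; the scoring weights, attribute names, and re-selection logic are illustrative assumptions rather than the paper's actual model.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Provider:
        name: str
        trust: float       # agent's own experience with the provider, in [0, 1]
        reputation: float  # aggregated third-party opinions, in [0, 1]
        qos: float         # normalized QoS estimate (latency, availability, ...)
        available: bool = True

    def score(p: Provider, w_trust=0.4, w_rep=0.3, w_qos=0.3) -> float:
        """Weighted expected-performance score; the weights are illustrative."""
        return w_trust * p.trust + w_rep * p.reputation + w_qos * p.qos

    def select_provider(providers):
        """Pick the best currently available provider; re-run after a failure."""
        candidates = [p for p in providers if p.available]
        return max(candidates, key=score) if candidates else None

    if __name__ == "__main__":
        fleet = [
            Provider("machine-A", trust=0.9, reputation=0.7, qos=0.8),
            Provider("machine-B", trust=0.6, reputation=0.9, qos=0.9),
        ]
        chosen = select_provider(fleet)
        print("selected:", chosen.name)
        chosen.available = False        # e.g. the machine breaks down
        print("reselected:", select_provider(fleet).name)
    ```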

    An exploratory survey of current practice in the medical device industry

    Purpose – This study seeks to examine the extent to which mainstream tools and strategies are applied in the medical devices sector, which is highly fragmented and contains a high percentage of small companies, and to determine whether company size impacts manufacturing strategy selection. Design/methodology/approach – A questionnaire was developed and disseminated through a number of channels. Responses were received from 38 companies in the UK and Ireland, describing 68 products taken to market in the past five years. Findings – Because of the limited scope of the survey, the findings are indicative rather than conclusive, but interesting trends have emerged. New-to-the-world products were much more likely to exceed company expectations of market success than derivative products. It was found that the majority of these innovative products were developed by small companies. Large companies appear to favour minor upgrades over major upgrades even though these prove – on the data presented – to be less successful overall. Practical implications – These results provide those engaged in this sector with comparative information and some insights for further improvement. The reported trends with respect to company size and product complexity (or degree of novelty) are particularly illuminating. Academically, this sets some expected trends on a firmer footing and unearths one or two unexpected findings. Originality/value – It is believed that this is the largest survey of determinants of success in UK medical device companies, and it provides a comparison with other sectors.

    Trade-offs between Selection Complexity and Performance when Searching the Plane without Communication

    We consider the ANTS problem [Feinerman et al.] in which a group of agents collaboratively search for a target in a two-dimensional plane. Because this problem is inspired by the behavior of biological species, we argue that in addition to studying the {\em time complexity} of solutions it is also important to study the {\em selection complexity}, a measure of how likely a given algorithmic strategy is to arise in nature due to selective pressures. In more detail, we propose a new selection complexity metric $\chi$, defined for algorithm ${\cal A}$ such that $\chi({\cal A}) = b + \log \ell$, where $b$ is the number of memory bits used by each agent and $\ell$ bounds the fineness of available probabilities (agents use probabilities of at least $1/2^\ell$). In this paper, we study the trade-off between the standard performance metric of speed-up, which measures how the expected time to find the target improves with $n$, and our new selection metric. In particular, consider $n$ agents searching for a treasure located at (unknown) distance $D$ from the origin (where $n$ is sub-exponential in $D$). For this problem, we identify $\log \log D$ as a crucial threshold for our selection complexity metric. We first prove a new upper bound that achieves a near-optimal speed-up of $(D^2/n + D) \cdot 2^{O(\ell)}$ for $\chi({\cal A}) \leq 3 \log \log D + O(1)$. In particular, for $\ell \in O(1)$, the speed-up is asymptotically optimal. By comparison, the existing results for this problem [Feinerman et al.] that achieve similar speed-up require $\chi({\cal A}) = \Omega(\log D)$. We then show that this threshold is tight by describing a lower bound showing that if $\chi({\cal A}) < \log \log D - \omega(1)$, then with high probability the target is not found within $D^{2-o(1)}$ moves per agent. Hence, there is a sizable gap to the straightforward $\Omega(D^2/n + D)$ lower bound in this setting.
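
    The selection complexity metric itself is elementary to compute; the sketch below restates it in code and checks whether a given algorithm falls in the regime where the near-optimal speed-up is proved. The logarithm base and the O(1) constant are unspecified in the abstract and are assumptions here.

    ```python
    import math

    def selection_complexity(memory_bits: int, ell: int) -> float:
        """chi(A) = b + log(ell): b memory bits per agent, probabilities >= 1/2**ell.

        Base-2 logarithm is an assumption; the abstract does not fix the base.
        """
        return memory_bits + math.log2(ell)

    def in_upper_bound_regime(memory_bits: int, ell: int, D: float, c: float = 1.0) -> bool:
        """Check chi(A) <= 3 log log D + c, the regime in which the paper proves a
        near-optimal speed-up of (D^2/n + D) * 2^{O(ell)}; c stands in for the
        unspecified O(1) term."""
        return selection_complexity(memory_bits, ell) <= 3 * math.log2(math.log2(D)) + c

    if __name__ == "__main__":
        D = 2 ** 20                                          # distance to the target
        print(selection_complexity(memory_bits=3, ell=2))    # 4.0
        print(in_upper_bound_regime(memory_bits=3, ell=2, D=D))
    ```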

    Essays in Problems in Sequential Decisions and Large-Scale Randomized Algorithms

    In the first part of this dissertation, we consider two problems in sequential decision making. The first problem we consider is sequential selection of a monotone subsequence from a random permutation. We find a two-term asymptotic expansion for the optimal expected value of a sequentially selected monotone subsequence from a random permutation of length $n$. The second problem we consider deals with the multiplicative relaxation or constriction of the classical problem of the number of records in a sequence of $n$ independent and identically distributed observations. In the relaxed case, we find a central limit theorem (CLT) with a different normalization than Rényi's classical CLT, and in the constricted case we find convergence in distribution to an unbounded random variable. In the second part of this dissertation, we put forward two large-scale randomized algorithms. We propose a two-step sensing scheme for the low-rank matrix recovery problem which requires far less storage space and has much lower computational complexity than other state-of-the-art methods based on nuclear norm minimization. We introduce a fast iterative reweighted least squares algorithm, \textit{Guluru}, based on the subsampled randomized Hadamard transform, to solve a wide class of generalized linear models.
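
    To make the first sequential-selection problem concrete, the sketch below runs a simple adaptive acceptance-window heuristic on i.i.d. uniform observations (equivalent to a random permutation, since only relative order matters) and compares its average selected length with the classical first-order benchmark sqrt(2n). The window rule is an illustrative heuristic, not the optimal policy whose two-term expansion the dissertation derives.

    ```python
    import math
    import random

    def greedy_monotone_selection(xs):
        """Sequentially select an increasing subsequence from the stream xs.

        Heuristic rule: with last accepted value s and k observations still to
        come (including the current one), accept x if s < x <= s + sqrt(2(1-s)/k).
        """
        s, count, n = 0.0, 0, len(xs)
        for i, x in enumerate(xs):
            k = n - i
            window = min(1.0 - s, math.sqrt(2.0 * (1.0 - s) / k))
            if s < x <= s + window:
                s, count = x, count + 1
        return count

    if __name__ == "__main__":
        random.seed(1)
        n, trials = 10_000, 200
        avg = sum(greedy_monotone_selection([random.random() for _ in range(n)])
                  for _ in range(trials)) / trials
        # The optimal expected length is known to grow like sqrt(2n) to first order.
        print(f"heuristic average: {avg:.1f}   sqrt(2n): {math.sqrt(2 * n):.1f}")
    ```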

    LAGC: Lazily Aggregated Gradient Coding for Straggler-Tolerant and Communication-Efficient Distributed Learning

    Gradient-based distributed learning in Parameter Server (PS) computing architectures is subject to random delays due to straggling worker nodes, as well as to possible communication bottlenecks between PS and workers. Solutions have recently been proposed to separately address these impairments based on the ideas of gradient coding, worker grouping, and adaptive worker selection. This paper provides a unified analysis of these techniques in terms of wall-clock time, communication, and computation complexity measures. Furthermore, in order to combine the benefits of gradient coding and grouping in terms of robustness to stragglers with the communication and computation load gains of adaptive selection, novel strategies, named Lazily Aggregated Gradient Coding (LAGC) and Grouped-LAG (G-LAG), are introduced. Analysis and results show that G-LAG provides the best wall-clock time and communication performance, while maintaining a low computational cost, for two representative distributions of the computing times of the worker nodes.
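
    As a toy illustration of the grouping idea discussed above (not of the LAGC/G-LAG schemes themselves), the sketch below shows a fractional-repetition style round in which data partitions are replicated within worker groups, so the parameter server can recover the full gradient as long as one worker per group responds. Names and the one-partition-per-worker layout are simplifying assumptions.

    ```python
    import numpy as np

    def grouped_gradient_round(grads_per_partition, group_size, straggler_mask):
        """One PS round with group-based replication for straggler tolerance.

        Each group's partitions are replicated on all of its workers, and every
        responding worker returns the same partial sum, so a single response per
        group suffices to reconstruct the full gradient.
        """
        n_partitions = len(grads_per_partition)
        assert n_partitions % group_size == 0
        full_gradient = np.zeros_like(grads_per_partition[0])
        for g in range(n_partitions // group_size):
            members = range(g * group_size, (g + 1) * group_size)
            partial = sum(grads_per_partition[i] for i in members)
            responders = [i for i in members if not straggler_mask[i]]
            if not responders:
                raise RuntimeError(f"group {g}: every worker straggled this round")
            full_gradient += partial          # one response per group is enough
        return full_gradient

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        grads = [rng.normal(size=4) for _ in range(6)]        # 6 partitions / workers
        stragglers = np.array([False, True, False, False, True, False])
        print(grouped_gradient_round(grads, group_size=2, straggler_mask=stragglers))
    ```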