
    A Consensus Approach to Distributed Convex Optimization in Multi-Agent Systems

    In this thesis we address the problem of distributed unconstrained convex optimization under separability assumptions, i.e., the framework where a network of agents, each endowed with a local private convex cost and subject to communication constraints, wants to collaborate to compute the minimizer of the sum of the local costs. We propose a design methodology that combines average consensus algorithms and separation of time-scales ideas. This strategy is proven, under suitable hypotheses, to be globally convergent to the true minimizer. Intuitively, the procedure lets the agents distributedly compute and sequentially update an approximated Newton-Raphson direction by means of suitable average consensus ratios. We consider both a scalar and a multidimensional scenario of the Synchronous Newton-Raphson Consensus, proposing some alternative strategies that trade off communication and computational requirements against convergence speed. We provide analytical proofs of convergence and show with numerical simulations that the speed of convergence of this strategy is comparable with that of alternative optimization strategies such as the Alternating Direction Method of Multipliers, the Distributed Subgradient Method and the Distributed Control Method. Moreover, we consider the convergence rates of the Synchronous Newton-Raphson Consensus and the Gradient Descent Consensus under the simplifying assumption of quadratic local cost functions. We derive sufficient conditions which guarantee the convergence of the algorithms. From these conditions we then obtain closed-form expressions that can be used to tune the parameters so as to maximize the rate of convergence. Although these formulas are derived under the assumption of quadratic local cost functions, they can be used as rules of thumb for tuning the parameters of the algorithms. Finally, we propose an asynchronous version of the Newton-Raphson Consensus. Besides having low computational complexity and low communication requirements, and being interpretable as a distributed Newton-Raphson algorithm, the technique also has the beneficial properties of requiring very little coordination and naturally supporting time-varying topologies. Again, we analytically prove that under suitable assumptions it exhibits either local or global convergence properties. Through numerical simulations we corroborate these results and compare the performance of the Asynchronous Newton-Raphson Consensus with that of other distributed optimization methods.
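    To make the mechanism concrete, here is a minimal sketch of how a scalar synchronous Newton-Raphson consensus of this kind can be realized, under simplifying assumptions not taken from the thesis: a fixed ring network with a doubly stochastic mixing matrix, quadratic local costs f_i(x) = a_i (x - b_i)^2 / 2, and a small constant step size. The agents run a dynamic average consensus on g_i = f_i''(x_i) x_i - f_i'(x_i) and h_i = f_i''(x_i), and push their local estimates toward the consensus ratio y/z, which approximates the Newton-Raphson update.

```python
import numpy as np

# Minimal sketch of a scalar synchronous Newton-Raphson consensus under
# illustrative assumptions: a fixed, doubly stochastic consensus matrix P,
# local quadratic costs f_i(x) = 0.5*a_i*(x - b_i)**2 (so the centralized
# minimizer is sum(a*b)/sum(a)), and a small step size eps.

rng = np.random.default_rng(0)
n = 10                                   # number of agents
a = rng.uniform(0.5, 2.0, n)             # local curvatures f_i''
b = rng.uniform(-5.0, 5.0, n)            # local minimizers of f_i

# doubly stochastic mixing matrix on a ring graph (Metropolis-like weights)
P = np.zeros((n, n))
for i in range(n):
    P[i, (i - 1) % n] = P[i, (i + 1) % n] = 0.25
    P[i, i] = 0.5

def grad(x):  return a * (x - b)         # f_i'(x_i)
def hess(x):  return a                   # f_i''(x_i), constant for quadratics

eps = 0.1
x = np.zeros(n)
g_old = hess(x) * x - grad(x)            # g_i = f_i'' * x_i - f_i'
h_old = hess(x)
y, z = g_old.copy(), h_old.copy()        # consensus states tracking avg(g), avg(h)

for k in range(300):
    x = (1 - eps) * x + eps * (y / z)    # move toward the consensus Newton estimate y/z
    g_new = hess(x) * x - grad(x)
    h_new = hess(x)
    # dynamic average consensus: mix with neighbors, then track the new inputs
    y = P @ y + (g_new - g_old)
    z = P @ z + (h_new - h_old)
    g_old, h_old = g_new, h_new

print(x)                                 # all entries approach the centralized minimizer
print(np.sum(a * b) / np.sum(a))
```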

    Newton-Raphson Consensus for Distributed Convex Optimization

    We address the problem of distributed unconstrained convex optimization under separability assumptions, i.e., the framework where each agent of a network is endowed with a local private multidimensional convex cost, is subject to communication constraints, and wants to collaborate to compute the minimizer of the sum of the local costs. We propose a design methodology that combines average consensus algorithms and separation of time-scales ideas. This strategy is proved, under suitable hypotheses, to be globally convergent to the true minimizer. Intuitively, the procedure lets the agents distributedly compute and sequentially update an approximated Newton-Raphson direction by means of suitable average consensus ratios. We show with numerical simulations that the speed of convergence of this strategy is comparable with alternative optimization strategies such as the Alternating Direction Method of Multipliers. Finally, we propose some alternative strategies which trade off communication and computational requirements against convergence speed. Comment: 18 pages, preprint with proofs.
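    For comparison, here is a minimal sketch of the global-consensus ADMM baseline mentioned in the abstract, applied to the same kind of separable problem. The quadratic local costs, the penalty parameter rho, and the centralized averaging step (which a fully distributed implementation would itself realize via a consensus protocol) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of global-consensus ADMM for minimize sum_i 0.5*a_i*(x - b_i)^2.
# With quadratic costs the local x_i-update has a closed form; rho is an
# illustrative choice.

rng = np.random.default_rng(1)
n = 10
a = rng.uniform(0.5, 2.0, n)
b = rng.uniform(-5.0, 5.0, n)

rho = 1.0
x = np.zeros(n)          # local copies
u = np.zeros(n)          # scaled dual variables
z = 0.0                  # global consensus variable

for k in range(100):
    # local step: x_i = argmin f_i(x) + (rho/2)*(x - z + u_i)^2
    x = (a * b + rho * (z - u)) / (a + rho)
    # global averaging step (performed by consensus in a fully distributed setting)
    z = np.mean(x + u)
    # dual update
    u = u + x - z

print(z, np.sum(a * b) / np.sum(a))   # z approaches the centralized minimizer
```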

    Multi-Agent Distributed Optimization and Estimation over Lossy Networks

    Nowadays, optimization is a pervasive tool employed in many different fields. Thanks to its flexibility, it can be used to solve many diverse problems, some of which do not appear at first to require an optimization framework, so research on this topic is always active and copious. Another very interesting and current field of investigation involves multi-agent systems, that is, systems composed of many (possibly different) agents. Research on cyber-physical systems, believed to be one of the challenges of the 21st century, is very extensive and comprises very complex systems like smart cities and smart power grids, but also much simpler ones, like wireless sensor networks or camera networks. In a multi-agent context, the optimization framework is extensively used; as a consequence, optimization in multi-agent systems is an attractive topic to investigate. The contents of this thesis focus on distributed optimization within a multi-agent scenario, i.e., optimization performed by a set of peers, among which there is no leader. Accordingly, when these agents have to perform a task, formulated as an optimization problem, they have to collaborate to solve it, all using the same kind of update rule. Collaboration clearly implies the need for message exchange among the agents, and the focus of the thesis is on the criticalities related to the communication step. In particular, no reliability of this step is assumed, meaning that the packets exchanged between two agents can sometimes be lost. Also, the sought-for solution does not have to employ an acknowledgement protocol; that is, when an agent has to send a packet, it just sends it and goes on with its computation, without waiting for a confirmation that the receiver has actually received the packet. Almost all works in the existing literature deal with packet losses by employing an acknowledgement (ACK) system; the effort in this thesis is to avoid the use of an ACK system, since it can slow down the communication step. However, this choice of avoiding ACKs makes the development of optimization algorithms, and especially their convergence proofs, more involved. Apart from robustness to packet losses, the algorithms developed in this dissertation are also asynchronous, that is, the agents do not need to be synchronized to perform the update and communication steps. Three types of optimization problems are analyzed in the thesis. The first one is the patrolling problem for camera networks. The algorithm developed to solve this problem has restricted applicability, since it is very task-dependent. The other two problems are more general, because both concern the minimization of the sum of cost functions, one for each agent in the system. In the first case, the local cost functions have a particular form: they are locally coupled, in the sense that the cost function of an agent depends on the variables of the agent itself and on those of its direct neighbors. The sought-for algorithm has to satisfy two properties (apart from asynchronicity and robustness to packet losses): it must require a single communication exchange per iteration (which also reduces the need for synchronicity), and communication must take place only between direct neighbors. In the second case, the local functions all depend on the same variables. The analysis first focuses on the special case of local quadratic cost functions and their strong relationship with the consensus problem. Besides the development of a robust and asynchronous algorithm for the average consensus problem, a comparison among algorithms for minimizing the sum of quadratic cost functions is carried out. Finally, the distributed minimization of the sum of more general local cost functions is tackled, leading to the development of a robust version of the Newton-Raphson consensus. The theoretical tools employed in the thesis to prove convergence of the algorithms mainly rely on Lyapunov theory and the theory of separation of time scales.
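    As an illustration of the kind of packet-loss-robust, ACK-free consensus discussed above, the sketch below implements a generic ratio-consensus scheme in which nodes broadcast running sums of the mass they have sent, so that anything lost to a dropped packet is recovered at the next successful reception. The graph, the loss probability, and the weights are illustrative assumptions, and this is a standard construction rather than the specific algorithm developed in the thesis.

```python
import numpy as np

# Ratio consensus over lossy links without acknowledgements: each node splits
# its "mass" equally among itself and its out-neighbors, and broadcasts the
# RUNNING SUM of what it has sent so far; a receiver adds everything it has
# not yet accounted for, so lost packets are recovered later.

rng = np.random.default_rng(2)
n = 8
x0 = rng.uniform(0.0, 10.0, n)             # initial values; goal: their average

# directed ring with an extra chord, given as out-neighbor lists (illustrative)
out_nbrs = [[(i + 1) % n, (i + 3) % n] for i in range(n)]
in_nbrs = [[j for j in range(n) if i in out_nbrs[j]] for i in range(n)]
deg = np.array([len(o) for o in out_nbrs])

y = x0.copy()                              # "numerator" mass
z = np.ones(n)                             # "denominator" mass
sig_y = np.zeros(n); sig_z = np.zeros(n)   # running sums broadcast by each node
rho_y = np.zeros((n, n)); rho_z = np.zeros((n, n))  # rho[j, i]: last sum i received from j

p_loss = 0.3                               # each broadcast reaches each receiver w.p. 0.7

for k in range(400):
    share_y = y / (deg + 1)
    share_z = z / (deg + 1)
    sig_y += share_y                       # accumulate the mass "sent" this round
    sig_z += share_z
    new_y = share_y.copy()                 # each node keeps its own share
    new_z = share_z.copy()
    for i in range(n):
        for j in in_nbrs[i]:
            if rng.random() > p_loss:      # packet from j to i arrives
                new_y[i] += sig_y[j] - rho_y[j, i]   # everything not yet delivered
                new_z[i] += sig_z[j] - rho_z[j, i]
                rho_y[j, i] = sig_y[j]
                rho_z[j, i] = sig_z[j]
            # on a loss nothing is added now; the mass is recovered later
    y, z = new_y, new_z

print(y / z)            # each ratio approaches the true average
print(np.mean(x0))
```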

    Adaptive Robust Distributed Learning in Diffusion Sensor Networks

    In this paper, the problem of adaptive distributed learning in diffusion networks is considered. The algorithms are developed within the convex set-theoretic framework. More specifically, they are based on computationally simple geometric projections onto closed convex sets. The paper suggests a novel combine-project-adapt protocol for cooperation among the nodes of the network; such a protocol fits naturally with the philosophy that underlies the projection-based rationale. Moreover, the possibility that some of the nodes may fail is also considered; this is addressed by employing loss functions from robust statistics. Such loss functions can easily be accommodated in the adopted algorithmic framework; all that is required of a loss function is convexity. Under some mild assumptions, the proposed algorithms enjoy monotonicity, asymptotic optimality, asymptotic consensus, strong convergence and linear complexity with respect to the number of unknown parameters. Finally, experiments in the context of the system-identification task verify the validity of the proposed algorithmic schemes, which are compared to other recent algorithms that have been developed for adaptive distributed learning.
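    A minimal sketch of a combine-then-project step of the kind described above follows: each node first averages its neighbors' estimates and then projects the result onto a hyperslab (a closed convex set) defined by its newest measurement. The ring network, the hyperslab width eps, and the use of a single projection per step rather than a window of them are simplifying assumptions, not details of the paper.

```python
import numpy as np

# Combine-then-project adaptation for distributed system identification:
# average over neighbors, then project onto the hyperslab consistent with
# the newest local datum. Topology and parameters are illustrative.

rng = np.random.default_rng(3)
n_nodes, dim, T = 6, 5, 2000
w_true = rng.normal(size=dim)              # unknown parameter to identify

# fixed ring topology with Metropolis-like combination weights
C = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    C[i, (i - 1) % n_nodes] = C[i, (i + 1) % n_nodes] = 0.25
    C[i, i] = 0.5

def project_hyperslab(w, x, d, eps):
    """Project w onto the closed convex set {h : |d - x.h| <= eps}."""
    e = d - x @ w
    if abs(e) <= eps:
        return w
    shift = e - np.sign(e) * eps
    return w + (shift / (x @ x)) * x

eps = 0.05
W = np.zeros((n_nodes, dim))               # one estimate per node
for t in range(T):
    W = C @ W                              # combine: average over neighbors
    for i in range(n_nodes):               # project/adapt with the local datum
        x = rng.normal(size=dim)
        d = x @ w_true + 0.01 * rng.normal()
        W[i] = project_hyperslab(W[i], x, d, eps)

print(np.linalg.norm(W - w_true, axis=1))  # every node ends up close to w_true
```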

    Active Contours and Image Segmentation: The Current State Of the Art

    Image segmentation is a fundamental task in image analysis responsible for partitioning an image into multiple sub-regions based on a desired feature. Active contours have been widely used as attractive image segmentation methods because they always produce sub-regions with continuous boundaries, whereas kernel-based edge detection methods, e.g. Sobel edge detectors, often produce discontinuous boundaries. The use of level set theory has provided more flexibility and convenience in the implementation of active contours. However, traditional edge-based active contour models are applicable only to relatively simple images whose sub-regions are uniform and have no internal edges. In this paper we briefly review the taxonomy and current state of the art in image segmentation and the usage of active contours.
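    As a toy illustration of the level-set machinery mentioned above, the sketch below propagates a front on a synthetic image with a speed modulated by an edge-stopping function, so the contour slows down near object boundaries. The curvature regularization and reinitialization steps used in practical active-contour implementations are deliberately omitted, and the image and parameters are illustrative only.

```python
import numpy as np

# Edge-based level-set front propagation on a synthetic image: the zero level
# set of phi grows outward with speed v * g * |grad phi|, where g is small at
# strong image gradients, so the front roughly stalls at the object boundary.

H = W = 100
yy, xx = np.mgrid[0:H, 0:W]
img = ((xx - 50) ** 2 + (yy - 50) ** 2 < 30 ** 2).astype(float)   # bright disk

gy, gx = np.gradient(img)
g = 1.0 / (1.0 + (gx ** 2 + gy ** 2) / 1e-3)      # edge-stopping function

# signed distance to a small circle placed inside the object
phi = np.sqrt((xx - 50.0) ** 2 + (yy - 50.0) ** 2) - 5.0

dt, v = 0.5, 1.0
for _ in range(100):
    py, px = np.gradient(phi)
    phi -= dt * v * g * np.sqrt(px ** 2 + py ** 2)   # outward motion, slowed at edges

# zero crossings of phi along the middle row: roughly columns 20 and 80,
# i.e., near the disk boundary
row = phi[50]
print(np.where(np.sign(row[:-1]) != np.sign(row[1:]))[0])
```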

    Group-Lasso on Splines for Spectrum Cartography

    The unceasing demand for continuous situational awareness calls for innovative and large-scale signal processing algorithms, complemented by collaborative and adaptive sensing platforms, to accomplish the objectives of layered sensing and control. Towards this goal, the present paper develops a spline-based approach to field estimation, which relies on a basis expansion model of the field of interest. The model entails known bases, weighted by generic functions estimated from the field's noisy samples. A novel field estimator is developed based on a regularized variational least-squares (LS) criterion that yields finitely-parameterized (function) estimates spanned by thin-plate splines. Robustness considerations motivate the adoption of an overcomplete set of (possibly overlapping) basis functions, while a sparsifying regularizer augmenting the LS cost endows the estimator with the ability to select a few of these bases that "better" explain the data. This parsimonious field representation becomes possible because the sparsity-aware spline-based method of this paper induces a group-Lasso estimator for the coefficients of the thin-plate spline expansions per basis. A distributed algorithm is also developed to obtain the group-Lasso estimator using a network of wireless sensors, or using multiple processors to balance the load of a single computational unit. The novel spline-based approach is motivated by a spectrum cartography application, in which a set of sensing cognitive radios collaborate to estimate the distribution of RF power in space and frequency. Simulated tests corroborate that the estimated power spectrum density atlas yields the desired RF state awareness, since the maps reveal spatial locations where idle frequency bands can be reused for transmission, even when fading and shadowing effects are pronounced. Comment: Submitted to IEEE Transactions on Signal Processing.
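    To illustrate the group-sparsity mechanism, here is a minimal sketch of a group-Lasso estimator computed by proximal gradient descent with block soft-thresholding, which is how entire groups of coefficients (one group per basis) can be switched off. The random design, group sizes, and regularization weight are illustrative assumptions; the paper's estimator additionally involves thin-plate spline bases and a variational LS criterion.

```python
import numpy as np

# Group Lasso via proximal gradient (ISTA) with block soft-thresholding:
# minimize 0.5*||y - X b||^2 + lam * sum_g ||b_g||_2.
# Design, group sizes and lam are illustrative.

rng = np.random.default_rng(4)
n, n_groups, g_size = 200, 10, 5
X = rng.normal(size=(n, n_groups * g_size))
beta_true = np.zeros(n_groups * g_size)
beta_true[:2 * g_size] = rng.normal(size=2 * g_size)    # only 2 active groups
y = X @ beta_true + 0.1 * rng.normal(size=n)

lam = 10.0
step = 1.0 / np.linalg.norm(X, 2) ** 2                  # 1 / Lipschitz constant of the gradient

def block_soft_threshold(b, thr):
    """Proximal operator of thr * ||.||_2 applied to one group of coefficients."""
    norm = np.linalg.norm(b)
    return np.zeros_like(b) if norm <= thr else (1.0 - thr / norm) * b

beta = np.zeros_like(beta_true)
for _ in range(500):
    grad = X.T @ (X @ beta - y)                         # gradient of the LS term
    beta = beta - step * grad
    for g in range(n_groups):                           # groupwise shrinkage
        sl = slice(g * g_size, (g + 1) * g_size)
        beta[sl] = block_soft_threshold(beta[sl], step * lam)

active = [g for g in range(n_groups)
          if np.linalg.norm(beta[g * g_size:(g + 1) * g_size]) > 1e-8]
print(active)    # typically recovers exactly the two active groups
```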

    Improving statistical inference for gene expression profiling data by borrowing information

    Gene expression profiling experiments, in particular microarray experiments, are popular in genomics research. However, in addition to the great opportunities provided by such experiments, statistical challenges also arise in the analysis of expression profiling data. The current thesis discusses statistical issues associated with gene expression profiling experiments and develops new statistical methods to tackle some of these problems. In Chapter 2, we consider the insufficient sample size problem in detecting differential gene expression. We address the problem by developing and evaluating methods for variance model selection. The idea is that information about error variances might be learned from related datasets to improve the estimation of error variances. We develop a modified multiresponse permutation procedure (MRPP), modified cross-validation procedures, and the right AICc (corrected Akaike’s information criterion) for choosing a variance model. Through realistic simulations based on three real microarray studies, we evaluate the proposed methods and suggest practical recommendations for data analysis. In Chapter 3, we address the multiple testing problem by improving the estimation of the distribution of noncentrality parameters given a large number of two-sample t-tests. We provide parametric, nonparametric and semiparametric estimators for the distribution of noncentrality parameters, as well as false discovery rates (FDR) and local FDR. Simulations show that our density estimates are closer to the underlying truth and that our estimates of FDR are also improved relative to competing methods under a variety of situations. In Chapter 4, we develop a novel combination of two statistical techniques with the aim of bypassing the curse of dimensionality in detecting differential expression of genes. We accept the fact that, in “small N, large p” situations, the data are not sufficient to provide enough information about dependency across genes. Hence, we suggest using a priori biological knowledge to assist statistical inference. We first use multidimensional scaling (MDS) methods to summarize prior knowledge about inter-gene relationships into a set of pseudo-covariates. Then, we develop a hierarchical additive logistic regression model conditional upon the generated pseudo-covariates. Simulations and analysis of real microarray data suggest that our strategy is more powerful than methods that do not use a priori information. Future research directions are discussed at the end of the thesis.
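    For reference, the sketch below runs the standard multiple-testing baseline touched on in Chapter 3: gene-wise two-sample t-tests followed by Benjamini-Hochberg FDR control. The simulated expression matrix and the 5% FDR level are illustrative assumptions; the thesis improves on this kind of baseline by borrowing information across genes.

```python
import numpy as np
from scipy import stats

# Gene-wise two-sample t-tests plus Benjamini-Hochberg step-up FDR control
# on a simulated expression matrix (illustrative data, not from the thesis).

rng = np.random.default_rng(5)
n_genes, n_per_group = 2000, 4
effect = np.zeros(n_genes)
effect[:200] = 2.0                                     # 10% truly differential genes

group_a = rng.normal(0.0, 1.0, size=(n_genes, n_per_group))
group_b = rng.normal(effect[:, None], 1.0, size=(n_genes, n_per_group))

_, pvals = stats.ttest_ind(group_a, group_b, axis=1)   # one t-test per gene

def benjamini_hochberg(p, q=0.05):
    """Return a boolean mask of rejections at FDR level q (step-up rule)."""
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

reject = benjamini_hochberg(pvals, q=0.05)
print(reject.sum(), reject[:200].sum())   # total rejections vs true positives among them
```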

    Using Regularization to Evaluate Differential Item Functioning Among Multiple Covariates: A Penalized Expectation-Maximization Algorithm via Coordinate Descent and Soft-Thresholding

    Testing for differential item functioning (DIF) has undergone rapid statistical development in recent years. Namely, the moderated nonlinear factor analysis (MNLFA) model allows for simultaneous testing of DIF across multiple categorical and continuous covariates (e.g., age, gender, ethnicity). Recent work has also implemented a LASSO regularization approach to identify DIF and select anchor items for model identification. Although regularized MNLFA provides greater flexibility to evaluate DIF, less progress has been made in efficiently estimating model parameters. Most previous implementations of MNLFA have directly maximized the observed marginal likelihood function, which limits the method to only a few items and covariates. Additionally, penalization in the MNLFA model has only been performed outside of the optimization routine, which results in a non-standard method for setting estimates to zero. To overcome these difficulties, I introduce a penalized expectation-maximization (EM) algorithm that efficiently estimates many more item parameters than previous implementations and performs regularization during optimization. I extend the regularized MNLFA model to include not just soft-thresholding for LASSO penalization, but also firm-thresholding for the MCP approach. A Monte Carlo simulation study and an empirical data analysis evaluate this new algorithm, comparing the LASSO and MCP approaches against previous work. Finally, a discussion of future research directions concludes the dissertation.
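    To make the two penalties concrete, here is a minimal sketch of the soft-thresholding (LASSO) and firm-thresholding (MCP) operators as they arise in a coordinate-descent update with a standardized coordinate. The values of lam and gamma are illustrative, not those used in the dissertation.

```python
import numpy as np

# Thresholding rules used in penalized coordinate descent with a standardized
# coordinate: soft-thresholding for the LASSO penalty, firm-thresholding for
# the MCP penalty. lam and gamma below are illustrative choices.

def soft_threshold(z, lam):
    """LASSO update: shrink toward zero by lam, truncating at zero."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def firm_threshold(z, lam, gamma=3.0):
    """MCP update (gamma > 1): soft-threshold and rescale below gamma*lam,
    leave larger coefficients untouched (no shrinkage bias)."""
    z = np.asarray(z, dtype=float)
    small = np.abs(z) <= gamma * lam
    return np.where(small,
                    soft_threshold(z, lam) / (1.0 - 1.0 / gamma),
                    z)

lam = 1.0
z = np.linspace(-4, 4, 9)                    # candidate unpenalized updates
print(np.round(z, 2))
print(np.round(soft_threshold(z, lam), 2))   # constant bias of lam once |z| > lam
print(np.round(firm_threshold(z, lam), 2))   # equals z once |z| exceeds gamma*lam
```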