
    Fast network configuration in Software Defined Networking

    Software Defined Networking (SDN) provides a framework to dynamically adjust and re-program the data plane with the use of flow rules. The realization of highly adaptive SDNs that can respond to changing demands or recover from a network failure in a short period of time hinges on efficient updates of flow rules. We model the time to deploy a set of flow rules by the update time at the bottleneck switch, and formulate the problem of selecting paths to minimize the deployment time under feasibility constraints as a mixed integer linear program (MILP). To reduce the computation time of determining flow rules, we propose efficient heuristics designed to approximate the minimum-deployment-time solution by relaxing the MILP or selecting the paths sequentially. Through extensive simulations we show that our algorithms outperform current shortest-path-based solutions, reducing the total network configuration time by up to 55% while incurring similar packet loss in the considered scenarios. We also demonstrate that in a networked environment with a certain fraction of failed links, our algorithms are able to reduce the average time to re-establish disrupted flows by 40%.
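    A minimal sketch of how such a path-selection MILP could be written down, using the PuLP modeling library. The candidate paths, switch names and per-rule update time below are illustrative assumptions, not the authors' formulation or data; the objective variable T models the update time at the bottleneck switch.

```python
# Hedged sketch (not the authors' code): choose one candidate path per flow so
# that the busiest switch installs as few rules as possible.
import pulp

flows = ["f1", "f2"]
# candidate paths per flow, each path given as the list of switches it touches
paths = {
    "f1": [["s1", "s2", "s4"], ["s1", "s3", "s4"]],
    "f2": [["s2", "s4"], ["s2", "s3", "s4"]],
}
rule_update_time = 1.0  # assumed time to install one flow rule on a switch

prob = pulp.LpProblem("min_deployment_time", pulp.LpMinimize)

# x[f, i] = 1 if flow f is routed over its i-th candidate path
x = {
    (f, i): pulp.LpVariable(f"x_{f}_{i}", cat="Binary")
    for f in flows for i in range(len(paths[f]))
}
# T bounds the deployment time, i.e. the update time at the bottleneck switch
T = pulp.LpVariable("T", lowBound=0)
prob += T  # objective: minimize T

# every flow must use exactly one of its candidate paths
for f in flows:
    prob += pulp.lpSum(x[f, i] for i in range(len(paths[f]))) == 1

# T >= rule_update_time * (number of rules installed at switch s), for each s
switches = {s for f in flows for p in paths[f] for s in p}
for s in switches:
    prob += T >= rule_update_time * pulp.lpSum(
        x[f, i]
        for f in flows
        for i, p in enumerate(paths[f]) if s in p
    )

prob.solve()
print("deployment time:", pulp.value(T))
```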

    Connectivity-guaranteed and obstacle-adaptive deployment schemes for mobile sensor networks

    Mobile sensors can relocate and self-deploy into a network. While focusing on the problem of coverage, existing deployment schemes largely over-simplify the conditions for network connectivity: they either assume that the communication range is large enough for sensors in geometric neighborhoods to obtain location information through local communication, or they assume a dense network that remains connected. In addition, an obstacle-free field or full knowledge of the field layout is often assumed. We present new schemes that are not governed by these assumptions, and thus adapt to a wider range of application scenarios. The schemes are designed to maximize sensing coverage and also guarantee connectivity for a network with arbitrary sensor communication/sensing ranges or node densities, at the cost of a small moving distance. The schemes do not need any knowledge of the field layout, which can be irregular and have obstacles/holes of arbitrary shape. Our first scheme is an enhanced form of the traditional virtual-force-based method, which we term the Connectivity-Preserved Virtual Force (CPVF) scheme. We show that the localized communication, which is the very reason for its simplicity, results in poor coverage in certain cases. We then describe a Floor-based scheme which overcomes the difficulties of CPVF and, as a result, significantly outperforms it and other state-of-the-art approaches. Throughout the paper our conclusions are corroborated by results from extensive simulations.
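    A rough sketch of a single virtual-force iteration with a connectivity check, in the spirit of (but not reproducing) the CPVF idea described above. The positions, ranges, force law and step size are assumptions chosen only for illustration: a move is kept only if the sensor still has at least one neighbor within communication range.

```python
# Hedged sketch, not the authors' CPVF implementation.
import numpy as np

R_c = 10.0          # communication range
d_target = 6.0      # preferred spacing between neighbors
step = 0.5          # gain applied to the virtual force

positions = np.array([[0.0, 0.0], [4.0, 0.0], [8.0, 1.0], [3.0, 5.0]])

def virtual_force(i, pos):
    """Repel from closer-than-target neighbors, attract toward farther ones."""
    force = np.zeros(2)
    for j, q in enumerate(pos):
        if j == i:
            continue
        d = np.linalg.norm(pos[i] - q)
        if 1e-9 < d < R_c:
            force += (d - d_target) * (q - pos[i]) / d
    return force

new_positions = positions.copy()
for i in range(len(positions)):
    candidate = positions[i] + step * virtual_force(i, positions)
    # connectivity check: keep the move only if some neighbor stays in range
    dists = np.linalg.norm(positions - candidate, axis=1)
    dists[i] = np.inf
    if np.min(dists) <= R_c:
        new_positions[i] = candidate
positions = new_positions
print(positions)
```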

    Solution Path Clustering with Adaptive Concave Penalty

    Fast accumulation of large amounts of complex data has created a need for more sophisticated statistical methodologies to discover interesting patterns and better extract information from these data. The large scale of the data often results in challenging high-dimensional estimation problems where only a minority of the data shows specific grouping patterns. To address these emerging challenges, we develop a new clustering methodology that introduces the idea of a regularization path into unsupervised learning. A regularization path for a clustering problem is created by varying the degree of sparsity constraint that is imposed on the differences between objects via the minimax concave penalty with adaptive tuning parameters. Instead of providing a single solution represented by a cluster assignment for each object, the method produces a short sequence of solutions that determines not only the cluster assignment but also a corresponding number of clusters for each solution. The optimization of the penalized loss function is carried out through an MM algorithm with block coordinate descent. The advantages of this clustering algorithm compared to other existing methods are as follows: it does not require the input of the number of clusters; it is capable of simultaneously separating irrelevant or noisy observations that show no grouping pattern, which can greatly improve data interpretation; it is a general methodology that can be applied to many clustering problems. We test this method on various simulated datasets and on gene expression data, where it shows better or competitive performance compared with several clustering methods. Comment: 36 pages.
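    As a concrete illustration, a small sketch of the minimax concave penalty (MCP) applied to pairwise differences of cluster centers, the kind of penalized loss the abstract refers to. This is not the authors' MM/block-coordinate-descent implementation; the toy data and tuning parameters lam and gamma are placeholders.

```python
# Hedged sketch: MCP on pairwise center differences plus a squared-error fit.
import numpy as np

def mcp(t, lam, gamma):
    """MCP: lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam, else gamma*lam^2/2."""
    t = np.abs(t)
    return np.where(t <= gamma * lam,
                    lam * t - t**2 / (2 * gamma),
                    0.5 * gamma * lam**2)

def penalized_loss(x, mu, lam, gamma):
    """Squared-error fit plus MCP penalty on all pairwise center differences."""
    fit = 0.5 * np.sum((x - mu) ** 2)
    n = len(mu)
    pen = sum(mcp(abs(mu[i] - mu[j]), lam, gamma)
              for i in range(n) for j in range(i + 1, n))
    return fit + pen

x = np.array([0.1, 0.2, 5.0, 5.1])   # toy 1-D data
mu = x.copy()                        # centers initialized at the data points
print(penalized_loss(x, mu, lam=1.0, gamma=3.0))
```

    Varying lam along a grid traces out the solution path: larger values fuse more centers together, so each solution carries its own implied number of clusters.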

    General maximum likelihood empirical Bayes estimation of normal means

    We propose a general maximum likelihood empirical Bayes (GMLEB) method for the estimation of a mean vector based on observations with i.i.d. normal errors. We prove that under mild moment conditions on the unknown means, the average mean squared error (MSE) of the GMLEB is within an infinitesimal fraction of the minimum average MSE among all separable estimators which use a single deterministic estimating function on individual observations, provided that the risk is of greater order than $(\log n)^5/n$. We also prove that the GMLEB is uniformly approximately minimax in regular and weak $\ell_p$ balls when the order of the length-normalized norm of the unknown means is between $(\log n)^{\kappa_1}/n^{1/(p\wedge 2)}$ and $n/(\log n)^{\kappa_2}$. Simulation experiments demonstrate that the GMLEB outperforms the James-Stein and several state-of-the-art threshold estimators in a wide range of settings without much downside. Comment: Published at http://dx.doi.org/10.1214/08-AOS638 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
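    A hedged sketch of the general idea behind such an estimator: approximate the nonparametric maximum likelihood estimate of the prior on a grid with a few EM iterations, then take posterior means as the empirical Bayes estimates. The grid size, iteration count and simulated data below are assumptions, not the paper's procedure or experiments.

```python
# Hedged sketch: grid-based EM for the prior, then posterior-mean estimates.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
theta = np.concatenate([np.zeros(80), np.full(20, 3.0)])   # sparse true means
x = theta + rng.standard_normal(theta.size)                # X_i ~ N(theta_i, 1)

grid = np.linspace(x.min(), x.max(), 200)      # support points for the prior
w = np.full(grid.size, 1.0 / grid.size)        # prior weights, start uniform
lik = norm.pdf(x[:, None], loc=grid[None, :])  # n x m likelihood matrix

for _ in range(200):                           # EM updates of the mixing weights
    post = lik * w
    post /= post.sum(axis=1, keepdims=True)
    w = post.mean(axis=0)

# posterior mean of theta_i given x_i under the fitted prior
post = lik * w
post /= post.sum(axis=1, keepdims=True)
theta_hat = post @ grid
print("average MSE:", np.mean((theta_hat - theta) ** 2))
```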

    Information-based complexity, feedback and dynamics in convex programming

    We study the intrinsic limitations of sequential convex optimization through the lens of feedback information theory. In the oracle model of optimization, an algorithm queries an oracle for noisy information about the unknown objective function, and the goal is to (approximately) minimize every function in a given class using as few queries as possible. We show that, in order for a function to be optimized, the algorithm must be able to accumulate enough information about the objective. This, in turn, puts limits on the speed of optimization under specific assumptions on the oracle and the type of feedback. Our techniques are akin to the ones used in the statistical literature to obtain minimax lower bounds on the risks of estimation procedures; the notable difference is that, unlike in the case of i.i.d. data, a sequential optimization algorithm can gather observations in a controlled manner, so that the amount of information at each step is allowed to change in time. In particular, we show that optimization algorithms often obey the law of diminishing returns: the signal-to-noise ratio drops as the optimization algorithm approaches the optimum. To underscore the generality of the tools, we use our approach to derive fundamental lower bounds for a certain active learning problem. Overall, the present work connects the intuitive notions of information in optimization, experimental design, estimation, and active learning to the quantitative notion of Shannon information. Comment: final version; to appear in IEEE Transactions on Information Theory.
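    A toy numerical illustration (not a proof, and not the paper's construction) of the diminishing-returns effect mentioned above: with a noisy first-order oracle and a quadratic objective, the true gradient shrinks as the iterate approaches the optimum while the oracle noise stays fixed, so the per-query signal-to-noise ratio decays. All quantities below are assumptions.

```python
# Hedged sketch: noisy gradient descent on f(x) = x^2 / 2.
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.1                 # oracle noise standard deviation
x = 5.0                     # start far from the minimizer at x = 0
for t in range(1, 51):
    grad = x                                        # true gradient of f
    noisy_grad = grad + sigma * rng.standard_normal()
    x -= 0.1 * noisy_grad                           # step on the oracle output
    if t % 10 == 0:
        snr = abs(grad) / sigma                     # per-query signal-to-noise
        print(f"iter {t:2d}  x = {x:+.4f}  signal-to-noise = {snr:.2f}")
```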
    • 

    corecore