
    Lying Your Way to Better Traffic Engineering

    To optimize the flow of traffic in IP networks, operators do traffic engineering (TE), i.e., tune routing-protocol parameters in response to traffic demands. TE in IP networks typically involves configuring static link weights and splitting traffic between the resulting shortest paths via the Equal-Cost-MultiPath (ECMP) mechanism. Unfortunately, ECMP is a notoriously cumbersome and indirect means for optimizing traffic flow, often leading to poor network performance. In addition, obtaining accurate knowledge of traffic demands as the input to TE is elusive, and traffic conditions can be highly variable, further complicating TE. We leverage recently proposed schemes for increasing ECMP's expressiveness via carefully disseminated bogus information ("lies") to design COYOTE, a readily deployable TE scheme for robust and efficient network utilization. COYOTE leverages new algorithmic ideas to configure (static) traffic splitting ratios that are optimized with respect to all (even adversarially chosen) traffic scenarios within the operator's "uncertainty bounds". Our experimental analyses show that COYOTE significantly outperforms today's prevalent TE schemes in a manner that is robust to traffic uncertainty and variation. We also discuss experiments with a prototype implementation of COYOTE.
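The gap between equal splitting and worst-case-optimized splitting can be sketched on a toy two-path example. All numbers below are invented for illustration, and the grid search merely stands in for COYOTE's actual optimization, which the paper formulates exactly:

```python
# Toy scenario: one flow, two parallel paths with capacities c1 and c2,
# demand known only to lie in [0, d_max]. Utilization is worst at d_max,
# so we minimize max(x*d_max/c1, (1-x)*d_max/c2) over the split ratio x.

c1, c2 = 10.0, 30.0   # path capacities (hypothetical units)
d_max = 24.0          # upper bound of the demand uncertainty interval

def worst_case_utilization(x):
    """Max link utilization when a fraction x of the demand takes path 1."""
    return max(x * d_max / c1, (1.0 - x) * d_max / c2)

# Plain ECMP splits equally across the two equal-cost paths.
ecmp_util = worst_case_utilization(0.5)

# Grid-search the best static split ratio instead.
best_x = min((i / 100.0 for i in range(101)), key=worst_case_utilization)
best_util = worst_case_utilization(best_x)

print(ecmp_util, best_x, best_util)  # 1.2 0.25 0.6
```

Here the capacity-proportional split halves the worst-case utilization relative to the 50/50 ECMP split, which is the kind of gain that motivates "lying" to ECMP about link weights.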

    A Mathematical Model for Supermarket Order Picking

    Order picking consists of retrieving products from storage locations to satisfy independent orders from multiple customers. It is generally recognized as one of the most significant activities in a warehouse (Koster et al, 2007). In fact, order picking accounts for up to 50% (Frazelle, 2001) or even 80% (Van den Berg, 1999) of the total warehouse operating costs. The critical issue in today's business environment is to simultaneously reduce the cost and increase the speed of order picking. In this paper, we address the order picking process in one of the largest Portuguese companies in the grocery business. This problem was proposed at the 92nd European Study Group with Industry (ESGI92). In this setting, each operator steers a trolley on the shop floor in order to select items for multiple customers. The objective is to improve their grocery e-commerce and bring it up to the level of the best international practices. In particular, the company wants to improve the routing tasks in order to decrease distances. For this purpose, a mathematical model for a faster open shop picking was developed. In this paper, we describe the problem, our proposed solution, as well as some preliminary results and conclusions.
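The potential of routing optimization can be illustrated on a tiny invented instance (this is not the paper's model): compare the trolley's travel distance for the given pick sequence against the best sequence, using rectilinear (aisle-style) distances.

```python
# Toy pick-routing instance: depot plus four storage locations to visit,
# Manhattan distances. Brute force is fine at this size.
from itertools import permutations

depot = (0, 0)
picks = [(2, 3), (5, 1), (1, 4), (4, 4)]  # invented storage locations

def dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def route_length(order):
    """Depot -> picks in the given order -> back to the depot."""
    path = [depot] + list(order) + [depot]
    return sum(dist(p, q) for p, q in zip(path, path[1:]))

naive = route_length(picks)  # visit items in list order
best = min(route_length(p) for p in permutations(picks))
print(naive, best)  # 28 20
```

Even on four locations, resequencing the picks cuts the walked distance by almost 30%; real instances need a model rather than enumeration, which is what the paper develops.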

    A robust and reliable method for detecting signals of interest in multiexponential decays

    The concept of rejecting the null hypothesis for definitively detecting a signal was extended to relaxation spectrum space for multiexponential reconstruction. The novel test was applied to the problem of detecting the myelin signal, which is believed to have a time constant below 40 ms, in T2 decays from MRIs of the human brain. It was demonstrated that the test allowed the detection of a signal in a relaxation spectrum using only the information in the data, thus avoiding any potentially unreliable prior information. The test was implemented both explicitly and implicitly for simulated T2 measurements. For the explicit implementation, the null hypothesis was that a relaxation spectrum existed that had no signal below 40 ms and that was consistent with the T2 decay. The confidence level by which the null hypothesis could be rejected gave the confidence level that there was signal below the 40 ms time constant. The explicit implementation assessed the test's performance with and without prior information, where the prior information was the nonnegative relaxation spectrum assumption. The test was also implemented implicitly with a data-conserving multiexponential reconstruction algorithm that uses left invertible matrices and that has been published previously. The implicit and explicit implementations demonstrated similar characteristics in detecting the myelin signal in both the simulated and experimental T2 decays, providing additional evidence to support the close link between the two tests.
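The spirit of "reject the null to detect a signal" can be seen in a much simpler, generic setting than the paper's test: compare nested multiexponential fits of a simulated decay. Everything below is illustrative (invented echo times, amplitudes, and T2 grid), not the paper's method:

```python
# Simulate a noise-free T2 decay with a fast (20 ms) and a slow (80 ms)
# component, then compare the best mono-exponential fit against the best
# bi-exponential fit over a fixed grid of candidate time constants.
import math
from itertools import combinations

times = [5.0 * (i + 1) for i in range(40)]  # echo times in ms
decay = [0.3 * math.exp(-t / 20.0) + 0.7 * math.exp(-t / 80.0) for t in times]
grid = [20.0, 40.0, 60.0, 80.0, 100.0]      # candidate T2 values in ms

def column(t2):
    return [math.exp(-t / t2) for t in times]

def rss(t2s):
    """Residual sum of squares of the least-squares amplitude fit."""
    cols = [column(t2) for t2 in t2s]
    n = len(cols)
    # Normal equations G a = b for the amplitudes a.
    G = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(n)]
         for i in range(n)]
    b = [sum(u * y for u, y in zip(cols[i], decay)) for i in range(n)]
    if n == 1:
        a = [b[0] / G[0][0]]
    else:  # 2x2 system solved via Cramer's rule
        det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
        a = [(b[0] * G[1][1] - b[1] * G[0][1]) / det,
             (G[0][0] * b[1] - G[1][0] * b[0]) / det]
    return sum((y - sum(a[k] * cols[k][m] for k in range(n))) ** 2
               for m, y in enumerate(decay))

rss_mono = min(rss([t2]) for t2 in grid)
rss_bi = min(rss(list(p)) for p in combinations(grid, 2))
print(rss_mono, rss_bi)  # the bi-exponential misfit is essentially zero
```

Because the bi-exponential model nests the mono-exponential one, a large drop in misfit is evidence against the "single component only" null; the paper's contribution is a rigorous version of this idea posed directly in relaxation spectrum space.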

    A filter algorithm: comparison with NLP solvers

    The purpose of this work is to present an algorithm to solve nonlinear constrained optimization problems, using the filter method with the inexact restoration (IR) approach. In the IR approach, two independent phases are performed in each iteration: the feasibility phase and the optimality phase. The first directs the iterative process into the feasible region, i.e., it finds a point with less constraint violation. The optimality phase starts from this point, and its goal is to optimize the objective function within the space of satisfied constraints. To evaluate the solution approximations in each iteration, a scheme based on the filter method is used in both phases of the algorithm. This method replaces the merit functions that are based on penalty schemes, avoiding the related difficulties such as penalty parameter estimation and the non-differentiability of some of them. The filter method is implemented in the context of the line search globalization technique. A set of more than two hundred AMPL test problems is solved. The developed algorithm is compared with the LOQO and NPSOL software packages.
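The core filter mechanism mentioned above can be sketched generically (this is the textbook acceptance rule, without the margins and envelope refinements practical filter methods add): a filter stores (constraint violation, objective) pairs, and a trial point is acceptable only if no stored pair dominates it.

```python
# Minimal filter-method bookkeeping: a point is a pair
# (constraint violation h, objective f); lower is better in both.

def dominates(p, q):
    """p dominates q if p is no worse in both violation and objective."""
    return p[0] <= q[0] and p[1] <= q[1]

def acceptable(filter_pairs, point):
    """A trial point is accepted if no filter entry dominates it."""
    return not any(dominates(p, point) for p in filter_pairs)

def add_to_filter(filter_pairs, point):
    """Insert an accepted point and drop entries it now dominates."""
    kept = [p for p in filter_pairs if not dominates(point, p)]
    kept.append(point)
    return kept

f = []
f = add_to_filter(f, (0.5, 10.0))
f = add_to_filter(f, (0.1, 12.0))
print(acceptable(f, (0.6, 11.0)))   # False: dominated by (0.5, 10.0)
print(acceptable(f, (0.05, 11.0)))  # True: improves violation over all entries
```

This is what lets the filter replace a penalty-based merit function: a step may trade objective decrease against violation decrease without ever choosing a penalty parameter.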

    Symmetric vs asymmetric protection levels in SDC methods for tabular data

    Protection levels on sensitive cells, which are key parameters of any statistical disclosure control method for tabular data, are related to the difficulty for any attacker of recomputing a good estimate of the true cell values. These protection levels are two numbers (one for the lower protection, the other for the upper protection) imposing a safety interval around the cell value; that is, no attacker should be able to recompute an estimate within this safety interval. In the symmetric case the lower and upper protection levels are equal; otherwise they are referred to as asymmetric protection levels. In this work we empirically study the effect of symmetry in protection levels for three protection methods: the cell suppression problem (CSP), controlled tabular adjustment (CTA), and interval protection (IP). Since CSP and CTA are mixed integer linear optimization problems, it is seen that the symmetry (or not) of the protection levels affects the CPU time needed to compute a solution. For IP, a linear optimization problem, it is observed that symmetry heavily affects the quality of the solution rather than the solution time.
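The symmetric/asymmetric distinction is easy to make concrete with invented numbers: the two protection levels define the safety interval around a sensitive cell, and an attacker estimate landing inside that interval means the cell is insufficiently protected.

```python
# Safety interval induced by lower/upper protection levels (lpl, upl)
# around a sensitive cell value; all numbers are illustrative.

def safety_interval(value, lpl, upl):
    return (value - lpl, value + upl)

def estimate_inside(interval, estimate):
    lo, hi = interval
    return lo < estimate < hi

value = 100.0
sym = safety_interval(value, 15.0, 15.0)   # symmetric: (85.0, 115.0)
asym = safety_interval(value, 5.0, 25.0)   # asymmetric: (95.0, 125.0)

# The same attacker estimate can violate one choice of levels but not
# the other, which is why symmetry matters to the protection problem.
print(estimate_inside(sym, 90.0))   # True: estimate falls in the interval
print(estimate_inside(asym, 90.0))  # False: estimate is outside it
```

The paper's question is then how this modeling choice propagates to the optimization: into solve time for the integer formulations (CSP, CTA) and into solution quality for the linear one (IP).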

    Computational Experiments with Minimum-Distance Controlled Perturbation Methods

    Minimum-distance controlled perturbation is a recent family of methods for the protection of statistical tabular data. These methods are both efficient and versatile, since they can deal with large tables of any structure and dimension, and in practice only need the solution of a linear or quadratic optimization problem. The purpose of this paper is to give insight into the behaviour of such methods through some computational experiments. In particular, the paper (1) illustrates the theoretical results about the low disclosure risk of the method; (2) analyzes the solutions provided by the method on a standard set of seven difficult and complex instances; and (3) shows the behaviour of a new approach obtained by the combination of two existing ones.
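The quadratic flavour of these methods can be sketched on the smallest possible invented instance (two interior cells and their total): publish values as close as possible, in squared distance, to the true cells, keep the table additive, and push the sensitive cell at least a protection amount away from its true value.

```python
# Toy minimum-distance perturbation: minimize (z1-a1)^2 + (z2-a2)^2
# subject to z1 + z2 == total and |z1 - a1| >= p. All numbers invented.

a1, a2 = 40.0, 60.0
total = a1 + a2
p = 5.0  # required protection deviation for the sensitive cell a1

# Substituting the additivity constraint z2 = total - z1 gives
# z2 - a2 = a1 - z1, so the objective is 2*(z1 - a1)^2, which is
# minimized on the boundary of the protection constraint:
candidates = [a1 - p, a1 + p]
z1 = min(candidates, key=lambda z: 2.0 * (z - a1) ** 2)  # tie: keeps first
z2 = total - z1

print(z1, z2)  # 35.0 65.0
```

At realistic scale the same idea becomes a linear or quadratic optimization problem over the whole table, which is exactly the setting the paper's experiments probe.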

    Feasibility and dominance rules in the electromagnetism-like algorithm for constrained global optimization

    This paper presents the use of a constraint-handling technique, known as the feasibility and dominance rules, in an electromagnetism-like (ELM) mechanism for solving constrained global optimization problems. Since the original ELM algorithm is specifically designed for solving bound constrained problems, only the inequality and equality constraint violations, together with the objective function value, are used to select points and to progress towards feasibility and optimality. Numerical experiments are presented, including a comparison with other methods recently reported in the literature.
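The selection rules described above can be sketched as a pairwise comparison (a generic version of the widely used feasibility rules; the paper's exact variant may differ in details such as violation aggregation):

```python
# Feasibility-and-dominance comparison of two candidate points, each given
# as (objective value, list of inequality-constraint values g(x) <= 0).

def violation(g_values):
    """Total violation of the inequality constraints g(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def better(a, b):
    """True if point a is preferred over point b."""
    fa, va = a[0], violation(a[1])
    fb, vb = b[0], violation(b[1])
    if va == 0.0 and vb == 0.0:
        return fa < fb      # both feasible: lower objective wins
    if va == 0.0 or vb == 0.0:
        return va == 0.0    # any feasible point beats any infeasible one
    return va < vb          # both infeasible: smaller violation wins

feasible_good = (1.0, [-0.5])  # feasible, low objective
feasible_bad = (3.0, [0.0])    # feasible, higher objective
infeasible = (0.1, [2.0])      # great objective, but violates a constraint

print(better(feasible_good, feasible_bad))  # True
print(better(feasible_good, infeasible))    # True
print(better(infeasible, feasible_bad))     # False
```

This is what lets a bound-constrained search mechanism like ELM handle general constraints without a penalty function: the comparison itself encodes the preference for feasibility.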