
    Polynomial algorithms for p-dispersion problems in a 2d Pareto Front

    Since bi-objective optimization problems can have many best-compromise solutions, this paper studies p-dispersion problems to select p ≥ 2 representative points in the Pareto front (PF). Four standard variants of p-dispersion are considered. A novel variant, denoted Max-Sum-Neighbor p-dispersion, is introduced for the specific case of a 2d PF. Firstly, it is proven that the 2-dispersion and 3-dispersion problems are solvable in O(n) time in a 2d PF. Secondly, dynamic programming algorithms are designed for three p-dispersion variants, proving polynomial complexities in a 2d PF. The Max-Min p-dispersion problem is proven solvable in O(pn log n) time and O(n) memory space. The Max-Sum-Min p-dispersion problem is proven solvable in O(pn^3) time and O(pn^2) space. The Max-Sum-Neighbor p-dispersion problem is proven solvable in O(pn^2) time and O(pn) space. Complexity results and parallelization issues are discussed with regard to practical implementation.
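    The abstract's O(pn log n) dynamic program is not reproduced here, but the Max-Min variant on a 2d PF has a simpler classic solution that illustrates the problem: because both objectives are monotone along the front, pairwise distances grow with index separation, so the minimum pairwise distance of any selection is attained between consecutive selected points, and a binary search over candidate distances with a greedy feasibility sweep solves it. The sketch below assumes Euclidean distance and points sorted by the first objective; the function name is illustrative.

```python
import math
from itertools import combinations

def max_min_dispersion(points, p):
    """Select p points from a 2d Pareto front maximizing the minimum
    pairwise Euclidean distance; returns that optimal distance.
    `points` must be sorted by the first objective (so the second
    objective is monotone in the opposite direction)."""
    def feasible(d):
        # Greedy sweep: take the leftmost point, then repeatedly the
        # next point at distance >= d from the last one selected.
        count, last = 1, points[0]
        for q in points[1:]:
            if math.dist(last, q) >= d:
                count += 1
                last = q
        return count >= p

    # The optimum is one of the pairwise distances: binary-search them.
    cands = sorted(math.dist(a, b) for a, b in combinations(points, 2))
    lo, hi = 0, len(cands) - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if feasible(cands[mid]):
            lo = mid
        else:
            hi = mid - 1
    return cands[lo]
```

    This runs in O(n^2 log n) time, far from the paper's O(pn log n) bound, but the greedy feasibility check is only valid because of the monotone structure of a 2d PF, which is the same structure the dynamic programs exploit.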

    Min Max Generalization for Two-stage Deterministic Batch Mode Reinforcement Learning: Relaxation Schemes

    We study the min-max optimization problem introduced in [22] for computing policies for batch mode reinforcement learning in a deterministic setting. First, we show that this problem is NP-hard. In the two-stage case, we provide two relaxation schemes. The first relaxation scheme works by dropping some constraints in order to obtain a problem that is solvable in polynomial time. The second relaxation scheme, based on a Lagrangian relaxation where all constraints are dualized, leads to a conic quadratic programming problem. We also theoretically prove and empirically illustrate that both relaxation schemes provide better results than those given in [22].
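    The paper's conic quadratic program is not reconstructed here; the following toy example only illustrates the general mechanism behind the second scheme, i.e. why dualizing constraints yields a bound on the original problem. The problem instance (minimize x^2 subject to x ≥ 1) is invented for illustration.

```python
def lagrangian_bound(lam):
    """Lagrangian relaxation of the toy problem: minimize x^2
    subject to x >= 1 (written as the constraint 1 - x <= 0).
    Dualizing gives L(x, lam) = x**2 + lam * (1 - x), which is
    minimized over x at x = lam / 2, so the dual function is
    g(lam) = lam - lam**2 / 4.  For any lam >= 0 this is a lower
    bound on the primal optimum, which here equals 1."""
    return lam - lam ** 2 / 4
```

    Maximizing the bound over lam recovers the optimum at lam = 2 in this convex toy case; for the NP-hard problem in the abstract, the dual only provides a tractable relaxation, not an exact solution.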

    On central tendency and dispersion measures for intervals and hypercubes

    The uncertainty or the variability of the data may be treated by considering, rather than a single value for each observation, the interval of values in which it may fall. This paper studies the derivation of basic descriptive statistics for interval-valued datasets. We propose a geometrical approach to the determination of summary statistics (central tendency and dispersion measures) for interval-valued variables.
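    One common geometric representation of interval data (not necessarily the construction used in this paper) maps each interval [a, b] to the point (midpoint, radius); central tendency and dispersion then follow from ordinary statistics on those points. A minimal sketch under that assumption:

```python
def interval_stats(intervals):
    """Summary statistics for interval-valued data, representing each
    interval [a, b] by its midpoint (a + b) / 2 and radius (b - a) / 2.
    Illustrative convention only; the paper's geometric approach may
    differ.  Returns (mean_interval, dispersion)."""
    n = len(intervals)
    mids = [(a + b) / 2 for a, b in intervals]
    rads = [(b - a) / 2 for a, b in intervals]
    mean_mid = sum(mids) / n
    mean_rad = sum(rads) / n
    # Central tendency: the "mean interval" rebuilt from mean
    # midpoint and mean radius.
    mean_interval = (mean_mid - mean_rad, mean_mid + mean_rad)
    # Dispersion: mean squared distance of the (midpoint, radius)
    # points to their centre, i.e. variance of midpoints plus
    # variance of radii.
    dispersion = sum((m - mean_mid) ** 2 + (r - mean_rad) ** 2
                     for m, r in zip(mids, rads)) / n
    return mean_interval, dispersion
```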

    Dispersion for Data-Driven Algorithm Design, Online Learning, and Private Optimization

    Data-driven algorithm design, that is, choosing the best algorithm for a specific application, is a crucial problem in modern data science. Practitioners often optimize over a parameterized algorithm family, tuning parameters based on problems from their domain. These procedures have historically come with no guarantees, though a recent line of work studies algorithm selection from a theoretical perspective. We advance the foundations of this field in several directions: we analyze online algorithm selection, where problems arrive one-by-one and the goal is to minimize regret, and private algorithm selection, where the goal is to find good parameters over a set of problems without revealing sensitive information contained therein. We study important algorithm families, including SDP-rounding schemes for problems formulated as integer quadratic programs, and greedy techniques for canonical subset selection problems. In these cases, the algorithm's performance is a volatile and piecewise Lipschitz function of its parameters, since tweaking the parameters can completely change the algorithm's behavior. We give a sufficient and general condition, dispersion, defining a family of piecewise Lipschitz functions that can be optimized online and privately, which includes the functions measuring the performance of the algorithms we study. Intuitively, a set of piecewise Lipschitz functions is dispersed if no small region contains many of the functions' discontinuities. We present general techniques for online and private optimization of the sum of dispersed piecewise Lipschitz functions. We improve over the best-known regret bounds for a variety of problems, prove regret bounds for problems not previously studied, and give matching lower bounds. We also give matching upper and lower bounds on the utility loss due to privacy. Moreover, we uncover dispersion in auction design and pricing problems.
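    A standard full-information learner for this online setting is the exponentially weighted forecaster over a discretized parameter space; the dispersion condition is what makes its regret guarantees survive the discontinuities of piecewise Lipschitz utilities. The sketch below is a generic exp-weights routine, not the paper's exact algorithm, and for determinism it reports the expected per-round utility rather than sampling a parameter.

```python
import math

def exp_weights(utilities, grid, eta=0.5):
    """Exponentially weighted forecaster over a finite parameter grid.
    `utilities` is a sequence of per-round utility functions revealed
    one-by-one; `eta` is the learning rate.  Returns the expected
    utility collected in each round when the parameter is sampled
    proportionally to the current weights.  Illustrative sketch only."""
    weights = [1.0] * len(grid)
    expected = []
    for u in utilities:
        z = sum(weights)
        vals = [u(x) for x in grid]
        # Expected utility of sampling a parameter proportionally
        # to the current weights.
        expected.append(sum(w * v for w, v in zip(weights, vals)) / z)
        # Multiplicative update: parameters that did well gain weight.
        weights = [w * math.exp(eta * v) for w, v in zip(weights, vals)]
    return expected
```

    With utilities that consistently favor one grid point, the expected utility climbs toward the best fixed parameter's utility, which is exactly the no-regret behavior the dispersion analysis certifies for piecewise Lipschitz losses.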

    Validity of particle size analysis techniques for measurement of the attrition that occurs during vacuum agitated powder drying of needle-shaped particles

    Analysis of needle-shaped particles of cellobiose octaacetate (COA) obtained from vacuum agitated drying experiments was performed using three particle size analysis techniques: laser diffraction (LD), focused beam reflectance measurements (FBRM) and dynamic image analysis. Comparative measurements were also made for various size fractions of granular particles of microcrystalline cellulose. The study demonstrated that the light scattering particle size methods (LD and FBRM) can be used qualitatively to study the attrition that occurs during drying of needle-shaped particles; however, for full quantitative analysis, image analysis is required. The algorithm used in analysis of LD data assumes the scattering particles are spherical regardless of the actual shape of the particles under evaluation. FBRM measures a chord length distribution (CLD) rather than the particle size distribution (PSD), which in the case of needles is weighted towards the needle width rather than their length. Dynamic image analysis allowed evaluation of the particles based on attributes of the needles such as length (e.g. the maximum Feret diameter) or width (e.g. the minimum Feret diameter) and, as such, was the most informative of the techniques for the analysis of attrition that occurred during drying.
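    The Feret diameters the abstract refers to are caliper widths of the particle outline. A simple way to approximate them (a rotating-projection sketch, not the instrument vendor's algorithm) is to sweep projection directions and record the extreme widths; for a needle, the maximum tracks the length and the minimum tracks the width.

```python
import math

def feret_diameters(points, n_angles=360):
    """Approximate the maximum and minimum Feret diameter (caliper
    width) of a 2d particle outline given as (x, y) points, by
    projecting onto n_angles evenly spaced directions.  Accuracy
    improves with n_angles; illustrative sketch only."""
    best_max, best_min = 0.0, float("inf")
    for k in range(n_angles):
        t = math.pi * k / n_angles
        c, s = math.cos(t), math.sin(t)
        proj = [x * c + y * s for x, y in points]
        width = max(proj) - min(proj)
        best_max = max(best_max, width)
        best_min = min(best_min, width)
    return best_max, best_min
```

    For a 10 x 1 rectangular "needle" outline, the maximum Feret diameter is its diagonal and the minimum is its width, matching the length/width roles described in the abstract.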