
    Relax-and-fix heuristics applied to a real-world lot-sizing and scheduling problem in the personal care consumer goods industry

    This paper addresses an integrated lot-sizing and scheduling problem in the personal care consumer goods industry, a highly competitive market in which customer service levels and cost management are decisive competitive factors. The research studies a complex operational environment composed of unrelated parallel machines with limited production capacity and sequence-dependent setup times and costs. There is also a limited finished-goods storage capacity, a characteristic not found in the literature. Backordering is allowed but extremely undesirable. The problem is formulated as a mixed-integer linear program. Since the problem is NP-hard, relax-and-fix heuristics with hybrid partitioning strategies are investigated. Computational experiments with randomly generated and real-world instances are presented, and the results show the efficacy and efficiency of the proposed approaches. Compared to the solutions currently used by the company, the best proposed strategies yield substantially lower costs, primarily through reduced inventory levels and better allocation of production batches to the machines.
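    The relax-and-fix idea, solving a sequence of MIPs in which only one block of binary variables is kept integer while later blocks are relaxed and earlier ones are frozen, can be sketched generically. Below is a minimal illustration in Python with PuLP on a deliberately simplified single-machine lot-sizing model; the data, time-based block partition, and variable names are hypothetical, and the paper's formulation (parallel machines, sequence-dependent setups, storage limits) is not reproduced.

```python
# A minimal relax-and-fix sketch for a toy single-machine lot-sizing model.
# Requires PuLP (pip install pulp). All data and names are illustrative only.
import pulp

T = 6                                  # planning periods
d = [40, 60, 30, 80, 20, 50]           # hypothetical demands
cap, setup_cost, hold_cost = 120, 100.0, 1.0
blocks = [[0, 1], [2, 3], [4, 5]]      # time-based partition of the binaries

fixed = {}                             # setup decisions fixed so far: t -> 0/1

for k, block in enumerate(blocks):
    m = pulp.LpProblem(f"relax_and_fix_{k}", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x_{t}", lowBound=0) for t in range(T)]        # production
    s = [pulp.LpVariable(f"s_{t}", lowBound=0) for t in range(T)]        # inventory
    # Binary only inside the current block; relaxed afterwards; fixed before.
    y = []
    for t in range(T):
        if t in fixed:
            y.append(pulp.LpVariable(f"y_{t}", lowBound=fixed[t], upBound=fixed[t]))
        elif t in block:
            y.append(pulp.LpVariable(f"y_{t}", cat="Binary"))
        else:
            y.append(pulp.LpVariable(f"y_{t}", lowBound=0, upBound=1))   # LP relaxation

    m += pulp.lpSum(setup_cost * y[t] + hold_cost * s[t] for t in range(T))
    for t in range(T):
        prev = s[t - 1] if t > 0 else 0
        m += prev + x[t] == d[t] + s[t]        # inventory balance
        m += x[t] <= cap * y[t]                # production only if set up
    m.solve(pulp.PULP_CBC_CMD(msg=False))

    for t in block:                            # fix the block just solved
        fixed[t] = int(round(y[t].value()))

print("fixed setup pattern:", [fixed[t] for t in range(T)])
```

    Practical relax-and-fix implementations also guard against infeasibility after fixing, for instance by overlapping blocks or partially unfixing earlier decisions; the sketch omits this.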

    On complexity and convergence of high-order coordinate descent algorithms

    Coordinate descent methods with high-order regularized models for box-constrained minimization are introduced. Asymptotic convergence to high-order stationarity and worst-case evaluation complexity bounds for first-order stationarity are established. The computational work needed to obtain first-order $\varepsilon$-stationarity with respect to the variables of each coordinate-descent block is $O(\varepsilon^{-(p+1)/p})$, whereas the work needed for first-order $\varepsilon$-stationarity with respect to all the variables simultaneously is $O(\varepsilon^{-(p+1)})$. Numerical examples involving multidimensional scaling problems are presented. The numerical performance of the methods is enhanced by means of coordinate-descent strategies for choosing initial points.
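    As a rough illustration of the regularized block-coordinate idea, and only for the simplest case $p=1$ in which the regularized model step reduces to a projected gradient step on the block, the sketch below minimizes a box-constrained quadratic by cyclic block coordinate descent. All names and the fixed regularization parameter are hypothetical; the paper's methods use $p$th-order models with adaptive regularization.

```python
# Block coordinate descent with a quadratically regularized (p = 1) model
# on a box-constrained quadratic; an illustrative toy, not the paper's method.
import numpy as np

def block_cd(grad, x0, lo, hi, blocks, sigma=10.0, iters=200):
    """Minimize a smooth f over the box [lo, hi] by cycling through blocks.

    Each block step minimizes the regularized model
        m(s) = g_B^T s + (sigma / 2) * ||s||^2   over the block's box,
    whose minimizer is a projected gradient step with step length 1 / sigma.
    """
    x = x0.copy()
    for _ in range(iters):
        for B in blocks:                        # cyclic sweep over the blocks
            g = grad(x)[B]
            step = x[B] - g / sigma             # unconstrained model minimizer
            x[B] = np.clip(step, lo[B], hi[B])  # project onto the block's box
    return x

# Tiny example: convex quadratic f(x) = 0.5 * x^T A x - b^T x on [0, 1]^4.
A = np.array([[4.0, 1, 0, 0], [1, 3, 1, 0], [0, 1, 2, 1], [0, 0, 1, 2]])
b = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda x: A @ x - b
x = block_cd(grad, np.zeros(4), np.zeros(4), np.ones(4),
             blocks=[np.array([0, 1]), np.array([2, 3])])
print("approximate box-constrained minimizer:", np.round(x, 3))
```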

    Parameter Estimation and Quantitative Parametric Linkage Analysis with GENEHUNTER-QMOD

    Objective: We present a parametric method for linkage analysis of quantitative phenotypes. The method provides a test for linkage as well as estimates of the phenotype parameters. We have implemented the new method in the program GENEHUNTER-QMOD and evaluated its properties by simulation. Methods: The phenotype is modeled as a normally distributed variable with a separate distribution for each genotype. Parameter estimates are obtained by maximizing the LOD score over the normal distribution parameters with a gradient-based optimization routine, the PGRAD method. Results: The PGRAD method has lower power to detect linkage than variance components analysis (VCA) in the case of a normal distribution and small pedigrees. However, it outperforms the VCA and Haseman-Elston regression for extended pedigrees, non-randomly ascertained data, and non-normally distributed phenotypes. Moreover, its higher power is accompanied by conservativeness, whereas the VCA has an inflated type I error. Parameter estimation tends to underestimate residual variances but performs better for the expected values of the phenotype distributions. Conclusion: GENEHUNTER-QMOD provides a powerful new tool to explicitly model quantitative phenotypes in the context of linkage analysis. It is freely available at http://www.helmholtz-muenchen.de/genepi/downloads. Copyright (C) 2012 S. Karger AG, Basel
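    The fitting step described here, maximizing a likelihood over genotype-specific normal distribution parameters with a gradient-based optimizer, can be mimicked on a toy scale. The sketch below fits one mean per genotype and a common standard deviation by maximum likelihood with SciPy's L-BFGS-B on simulated data; it is not the GENEHUNTER-QMOD LOD-score computation (which works on pedigree likelihoods), and every name in it is hypothetical.

```python
# Toy maximum-likelihood fit of a genotype-dependent normal phenotype model.
# Illustrative only; real linkage analysis maximizes a pedigree-based LOD score.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=300)               # 0, 1 or 2 copies of an allele
true_means = np.array([0.0, 0.8, 1.6])
phenotype = rng.normal(true_means[genotypes], 1.0)     # simulated quantitative trait

def neg_log_likelihood(theta):
    """theta = (mu0, mu1, mu2, log_sd): one mean per genotype, common sd."""
    mu = theta[:3]
    sd = np.exp(theta[3])                              # keep the sd positive
    return -np.sum(norm.logpdf(phenotype, loc=mu[genotypes], scale=sd))

fit = minimize(neg_log_likelihood, x0=np.zeros(4), method="L-BFGS-B")
print("estimated means:", np.round(fit.x[:3], 2),
      " sd:", round(float(np.exp(fit.x[3])), 2))
```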

    Guaranteed clustering and biclustering via semidefinite programming

    Identifying clusters of similar objects in data plays a significant role in a wide range of applications. As a model problem for clustering, we consider the densest k-disjoint-clique problem, whose goal is to identify the collection of k disjoint cliques of a given weighted complete graph maximizing the sum of the densities of the complete subgraphs induced by these cliques. In this paper, we establish conditions ensuring exact recovery of the densest k cliques of a given graph from the optimal solution of a particular semidefinite program. In particular, the semidefinite relaxation is exact for input graphs corresponding to data consisting of k large, distinct clusters and a smaller number of outliers. This approach also yields a semidefinite relaxation for the biclustering problem with similar recovery guarantees. Given a set of objects and a set of features exhibited by these objects, biclustering seeks to simultaneously group the objects and features according to their expression levels. This problem may be posed as partitioning the nodes of a weighted complete bipartite graph such that the sum of the densities of the resulting complete bipartite subgraphs is maximized. As in our analysis of the densest k-disjoint-clique problem, we show that the correct partition of the objects and features can be recovered from the optimal solution of a semidefinite program in the case that the given data consists of several disjoint sets of objects exhibiting similar features. Empirical evidence from numerical experiments supporting these theoretical guarantees is also provided.
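    A minimal sketch of a semidefinite relaxation in this spirit is given below, assuming a constraint set of a commonly used form (X positive semidefinite, entrywise nonnegative, trace equal to k, row sums at most one); the paper's exact formulation, recovery conditions, and rounding procedure may differ. The similarity matrix and all names are hypothetical.

```python
# A hedged sketch of a semidefinite relaxation in the spirit of the densest
# k-disjoint-clique problem; the constraint set here is illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)

# Hypothetical similarity matrix with two planted clusters plus noise.
n, k = 12, 2
labels = np.array([0] * 6 + [1] * 6)
W = 0.1 * rng.random((n, n))
W[np.equal.outer(labels, labels)] = 1.0
W = (W + W.T) / 2

X = cp.Variable((n, n), symmetric=True)
constraints = [
    X >> 0,                      # positive semidefinite
    X >= 0,                      # entrywise nonnegative
    cp.trace(X) == k,            # "k clusters" normalization
    cp.sum(X, axis=1) <= 1,      # row sums bounded by one
]
prob = cp.Problem(cp.Maximize(cp.trace(W @ X)), constraints)
prob.solve()

# In an exact-recovery regime, X is (close to) block diagonal, so a simple
# threshold on its entries reveals the planted clusters.
print(np.round(X.value, 1))
```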

    Implementation of an Optimal First-Order Method for Strongly Convex Total Variation Regularization

    We present a practical implementation of an optimal first-order method, due to Nesterov, for large-scale total variation regularization in tomographic reconstruction, image deblurring, etc. The algorithm applies to $\mu$-strongly convex objective functions with $L$-Lipschitz continuous gradient. In Nesterov's framework both $\mu$ and $L$ are assumed known -- an assumption that is seldom satisfied in practice. We propose to incorporate mechanisms that estimate locally sufficient $\mu$ and $L$ during the iterations. These mechanisms also allow the method to be applied to non-strongly convex functions. We discuss the iteration complexity of several first-order methods, including the proposed algorithm, and we use a 3D tomography problem to compare their performance. The results show that for ill-conditioned problems solved to high accuracy, the proposed method significantly outperforms state-of-the-art first-order methods, as also suggested by theoretical results.
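    For reference, here is a minimal sketch of Nesterov's accelerated scheme for a $\mu$-strongly convex function with $L$-Lipschitz gradient, using the constant momentum $(\sqrt{L}-\sqrt{\mu})/(\sqrt{L}+\sqrt{\mu})$. It assumes $\mu$ and $L$ are known and omits the paper's local estimation mechanisms and the total-variation-specific machinery; the quadratic test problem and all names are hypothetical.

```python
# Nesterov's accelerated gradient method for a mu-strongly convex function
# with L-Lipschitz gradient (mu and L known), tested on a toy quadratic.
import numpy as np

def nesterov_strongly_convex(grad, x0, mu, L, iters=500):
    """Constant momentum beta = (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu))."""
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    x, y = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = y - grad(y) / L          # gradient step from the extrapolated point
        y = x_next + beta * (x_next - x)  # momentum (extrapolation) step
        x = x_next
    return x

# Toy ill-conditioned quadratic f(x) = 0.5 * x^T A x - b^T x.
d = np.geomspace(1.0, 1e4, 50)            # eigenvalues: mu = 1, L = 1e4
A, b = np.diag(d), np.ones(50)
x_star = b / d                            # exact minimizer for the diagonal A
x = nesterov_strongly_convex(lambda z: A @ z - b, np.zeros(50),
                             mu=d.min(), L=d.max())
print("error:", np.linalg.norm(x - x_star))
```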

    An artificial fish swarm filter-based method for constrained global optimization

    Ana Maria A.C. Rocha, M. Fernanda P. Costa and Edite M.G.P. Fernandes, An Artificial Fish Swarm Filter-Based Method for Constrained Global Optimization, in: B. Murgante, O. Gervasi, S. Misra, N. Nedjah, A.M. Rocha, D. Taniar, B. Apduhan (Eds.), Lecture Notes in Computer Science, Part III, LNCS 7335, pp. 57–71, Springer, Heidelberg, 2012. An artificial fish swarm algorithm based on a filter methodology for the acceptance of trial solutions is analyzed for general constrained global optimization problems. The new method uses the filter-set concept to accept, at each iteration, a population of trial solutions whenever they improve either the constraint violation or the objective function relative to the current solutions. Preliminary numerical experiments with a well-known benchmark set of engineering design problems show the effectiveness of the proposed method. Fundação para a Ciência e a Tecnologia (FCT)
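    The filter acceptance rule itself, keeping a set of non-dominated (constraint violation, objective) pairs and accepting a trial solution only if no stored pair dominates it, can be sketched independently of the fish-swarm movements. The code below illustrates that rule under a common dominance convention; it is not the authors' algorithm, and all names are hypothetical.

```python
# A generic filter for constrained optimization: a trial point (h, f), where h is
# the constraint violation and f the objective, is accepted if no stored pair
# dominates it (smaller-or-equal in both coordinates). Illustrative sketch only.
from typing import List, Tuple

class Filter:
    def __init__(self) -> None:
        self.entries: List[Tuple[float, float]] = []

    def acceptable(self, h: float, f: float) -> bool:
        """Accept if no filter entry is at least as good in both h and f."""
        return not any(he <= h and fe <= f for he, fe in self.entries)

    def add(self, h: float, f: float) -> None:
        """Insert (h, f), dropping entries it dominates, so the filter stays non-dominated."""
        self.entries = [(he, fe) for he, fe in self.entries if not (h <= he and f <= fe)]
        self.entries.append((h, f))

# Usage: accept a swarm member's trial solution only if the filter admits it.
flt = Filter()
flt.add(h=2.0, f=10.0)
print(flt.acceptable(1.0, 12.0))   # True: less infeasible, worse objective -> non-dominated
print(flt.acceptable(2.5, 11.0))   # False: dominated by (2.0, 10.0)
```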

    Difficulties in encoding, relating, and categorizing algebraic word problems: two studies with secondary school students and pre-service teachers

    In word-problem solving by transfer, the activation of previously known problems that can serve as a guide depends on the analogies perceived between those problems and the problem to be solved. Two related studies were carried out to analyze which features students rely on to encode problems and to detect analogies among them in categorization (sorting) tasks. Combined quantitative and qualitative techniques were used. The first study analyzed how secondary school students are influenced by different characteristic variables of science problems. A large proportion of participants were unable to perceive the appropriate analogies and differences between problems. The second study sought to advance an explanation for these results. Academic level and familiarity with the subject matter were significant factors, but the participating pre-service teachers still showed considerable difficulties, pointing to the need to revisit some common instructional assumptions.