170 research outputs found
Relax-and-fix heuristics applied to a real-world lot-sizing and scheduling problem in the personal care consumer goods industry
This paper addresses an integrated lot-sizing and scheduling problem in the
personal care consumer goods industry, a highly competitive market in which
customer service level and cost management are decisive in the competition
for clients. In this research, a complex operational
environment composed of unrelated parallel machines with limited production
capacity and sequence-dependent setup times and costs is studied. There is also
a limited finished-goods storage capacity, a characteristic not found in the
literature. Backordering is allowed but it is extremely undesirable. The
problem is described through a mixed integer linear programming formulation.
Since the problem is NP-hard, relax-and-fix heuristics with hybrid partitioning
strategies are investigated. Computational experiments with randomly generated
and also with real-world instances are presented. The results show the efficacy
and efficiency of the proposed approaches. Compared to current solutions used
by the company, the best proposed strategies yield results with substantially
lower costs, primarily from the reduction in inventory levels and better
allocation of production batches on the machines.
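As a rough illustration of the relax-and-fix idea described above (not the paper's exact formulation or hybrid partitioning strategies), the following Python sketch iterates over windows of time periods: integer variables inside the current window keep their integrality, later windows are relaxed to continuous, and solved windows are fixed before moving on. The builder function build_model and the 'name_machine_period' variable-naming convention are hypothetical placeholders, and PuLP is assumed as the modeling layer.

    import pulp

    def relax_and_fix(build_model, periods, window=2):
        """Solve a lot-sizing MILP window by window: keep integrality only
        inside the active window, relax later periods, fix solved ones."""
        fixed = {}                                    # frozen decisions so far
        for start in range(0, len(periods), window):
            active = {str(t) for t in periods[start:start + window]}
            prob = build_model()                      # fresh pulp.LpProblem
            for var in prob.variables():
                period = var.name.rsplit('_', 1)[-1]  # assumes 'y_m_t' names
                if var.name in fixed:                 # freeze earlier windows
                    var.lowBound = var.upBound = fixed[var.name]
                elif var.cat == pulp.LpInteger and period not in active:
                    var.cat = pulp.LpContinuous       # relax future integrality
            prob.solve()
            for var in prob.variables():              # record the new window
                if (var.cat == pulp.LpInteger
                        and var.name.rsplit('_', 1)[-1] in active
                        and var.name not in fixed):
                    fixed[var.name] = round(var.varValue)
        return fixed

Overlapping windows, or partitions over machines rather than periods as in the hybrid strategies investigated here, would change only how the active set is constructed.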
On complexity and convergence of high-order coordinate descent algorithms
Coordinate descent methods with high-order regularized models for
box-constrained minimization are introduced. High-order stationarity asymptotic
convergence and first-order stationarity worst-case evaluation complexity
bounds are established. Worst-case bounds are given on the computer work
required to obtain first-order ε-stationarity with respect to the variables
of each coordinate-descent block, and with respect to all the variables
simultaneously. Numerical examples
involving multidimensional scaling problems are presented. The numerical
performance of the methods is enhanced by means of coordinate-descent
strategies for choosing initial points.
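A minimal sketch of the block structure, assuming NumPy/SciPy and a (p+1)-th power regularization term on the block step; unlike the method analyzed above, each block subproblem here minimizes the true objective (via L-BFGS-B) rather than a p-th order Taylor model, so this conveys only the overall scheme.

    import numpy as np
    from scipy.optimize import minimize

    def block_cd(f, x0, blocks, lower, upper, sigma=1.0, p=2, sweeps=20):
        """Cyclic coordinate descent over index blocks on l <= x <= u,
        with a sigma * ||step||^(p+1) regularization on each block step."""
        x = np.clip(np.asarray(x0, dtype=float), lower, upper)
        for _ in range(sweeps):
            for B in blocks:                      # B: array of variable indices
                xB0 = x[B].copy()
                def model(xB):                    # regularized block objective
                    z = x.copy()
                    z[B] = xB
                    return f(z) + sigma * np.linalg.norm(xB - xB0) ** (p + 1)
                res = minimize(model, xB0, method='L-BFGS-B',
                               bounds=list(zip(lower[B], upper[B])))
                x[B] = res.x
        return x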
Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
The evaluation complexity of general nonlinear, possibly nonconvex, constrained optimization is analyzed. It is shown that, under suitable smoothness conditions, an ε-approximate first-order critical point of the problem can be computed in order O(ε^(1-2(p+1)/p)) evaluations of the problem's functions and their first p derivatives. This is achieved by using a two-phase algorithm inspired by Cartis, Gould, and Toint [SIAM J. Optim., 21 (2011), pp. 1721-1739; SIAM J. Optim., 23 (2013), pp. 1553-1574]. It is also shown that strong guarantees (in terms of handling degeneracies) on the possible limit points of the sequence of iterates generated by this algorithm can be obtained at the cost of increased complexity. At variance with previous results, the ε-approximate first-order criticality is defined by satisfying a version of the KKT conditions with an accuracy that does not depend on the size of the Lagrange multipliers.
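For concreteness, instantiating the exponent of the stated bound for particular derivative orders p:

    O(ε^(1-2(p+1)/p)):  p = 1 gives O(ε^(-3)),  p = 2 gives O(ε^(-2)),
    and the exponent tends to -1 as p → ∞,

so using higher-order derivative information improves the worst-case evaluation count.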
Guaranteed clustering and biclustering via semidefinite programming
Identifying clusters of similar objects in data plays a significant role in a
wide range of applications. As a model problem for clustering, we consider the
densest k-disjoint-clique problem, whose goal is to identify the collection of
k disjoint cliques of a given weighted complete graph maximizing the sum of the
densities of the complete subgraphs induced by these cliques. In this paper, we
establish conditions ensuring exact recovery of the densest k cliques of a
given graph from the optimal solution of a particular semidefinite program. In
particular, the semidefinite relaxation is exact for input graphs corresponding
to data consisting of k large, distinct clusters and a smaller number of
outliers. This approach also yields a semidefinite relaxation for the
biclustering problem with similar recovery guarantees. Given a set of objects
and a set of features exhibited by these objects, biclustering seeks to
simultaneously group the objects and features according to their expression
levels. This problem may be posed as partitioning the nodes of a weighted
bipartite complete graph such that the sum of the densities of the resulting
bipartite complete subgraphs is maximized. As in our analysis of the densest
k-disjoint-clique problem, we show that the correct partition of the objects
and features can be recovered from the optimal solution of a semidefinite
program in the case that the given data consists of several disjoint sets of
objects exhibiting similar features. Empirical evidence from numerical
experiments supporting these theoretical guarantees is also provided.
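A hedged cvxpy sketch of a relaxation of this type follows (the paper's exact constraint set may differ): maximize the inner product of the weight matrix W with a matrix X that is positive semidefinite, entrywise nonnegative, has trace k, and has row sums at most one, the last condition leaving room for outliers.

    import cvxpy as cp

    def cluster_sdp(W, k):
        """Relaxation sketch: maximize <W, X> over psd, nonnegative X with
        trace k and row sums at most 1 (slack allows for outliers)."""
        n = W.shape[0]
        X = cp.Variable((n, n), PSD=True)         # PSD implies symmetry
        constraints = [X >= 0,                    # entrywise nonnegative
                       cp.trace(X) == k,          # one unit of trace per clique
                       cp.sum(X, axis=1) <= 1]    # row sums bounded by one
        prob = cp.Problem(cp.Maximize(cp.trace(W @ X)), constraints)
        prob.solve(solver=cp.SCS)                 # SCS handles SDPs
        return X.value

When recovery is exact, the returned matrix is, up to a permutation of the nodes, block diagonal with one block per clique, so the clusters can be read off by thresholding its entries.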
Filter-based DIRECT method for constrained global optimization
This paper presents a DIRECT-type method that uses a filter methodology to ensure convergence to a feasible and optimal solution of nonsmooth and nonconvex constrained global optimization problems. The filter methodology aims to give priority to the selection of hyperrectangles with feasible center points, followed by those with infeasible and non-dominated center points, and finally by those with infeasible and dominated center points. The convergence properties of the algorithm are analyzed. Preliminary numerical experiments show that the proposed filter-based DIRECT algorithm gives competitive results when compared with other DIRECT-type methods. The authors thank two anonymous referees and the Associate Editor for their valuable comments and suggestions. This work has been supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT - Fundação para a Ciência e a Tecnologia within the projects UID/CEC/00319/2013 and UID/MAT/00013/2013.
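The selection priority described above can be made concrete with a small, illustrative classification routine (the names are not from the paper's code): each center point carries an objective value and a constraint-violation measure, and infeasible points are compared by filter dominance in that (objective, violation) space.

    def classify_centers(centers, tol=1e-8):
        """centers: list of (objective, violation) pairs for the center
        points; returns a priority class per center (0 best, 2 worst)."""
        infeasible = [(f, t) for f, t in centers if t > tol]
        def dominated(f0, t0):                    # filter dominance test
            return any(f <= f0 and t <= t0 and (f, t) != (f0, t0)
                       for f, t in infeasible)
        ranked = []
        for f, t in centers:
            if t <= tol:
                ranked.append(0)                  # feasible center
            elif not dominated(f, t):
                ranked.append(1)                  # infeasible, non-dominated
            else:
                ranked.append(2)                  # infeasible, dominated
        return ranked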
Investigation of eighth-grade students' understanding of the slope of the linear function
This study aimed to investigate eighth-grade students' difficulties and misconceptions and their performance of translation between the different representation modes related to the slope of linear functions. The participants were 115 Turkish eighth-grade students in a city in the eastern part of the Black Sea region of Turkey. Data was collected with an instrument consisting of seven written questions and a semi-structured interview protocol conducted with six students. Students' responses to questions were categorized and scored. Quantitative data was analyzed using the SPSS 17.0 statistical package with cross tables and one-way ANOVA. Qualitative data obtained from interviews was analyzed using descriptive analytical techniques. It was found that students' performance in articulating the slope of the linear function using its algebraic representation form was higher than their performance in using transformation between graphical and algebraic representation forms. It was also determined that some of them had difficulties with, and misunderstandings of, linear function equations, graphs, and slopes, and could not comprehend the connection between slope and the x- and y-intercepts.
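For reference, the connection the study probes can be stated in one line: a line through the intercepts (a, 0) and (0, b), with a ≠ 0, has slope

    m = (b - 0) / (0 - a) = -b/a,

so in the algebraic form y = mx + b the y-intercept is b and the x-intercept is -b/m.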
The inverse problem of determining the filtration function and permeability reduction in flow of water with particles in porous media
Deep bed filtration of particle suspensions in porous media occurs during water injection into oil reservoirs, drilling fluid invasion of reservoir production zones, fines migration in oil fields, industrial filtering, and the transport of bacteria, viruses, or contaminants in groundwater. The basic features of the process are particle capture by the porous medium and consequent permeability reduction. Models for deep bed filtration contain two quantities that represent rock and fluid properties: the filtration function, which is the fraction of particles captured per unit particle path length, and the formation damage function, which is the ratio between reduced and initial permeabilities. These quantities cannot be measured directly in the laboratory or in the field; therefore, they must be calculated indirectly by solving inverse problems. The practical petroleum and environmental engineering purpose is to predict injectivity loss and particle penetration depth around wells. Reliable prediction requires precise knowledge of these two coefficients. In this work we determine these quantities from pressure drop and effluent concentration histories measured in one-dimensional laboratory experiments. The recovery method consists of optimizing deviation functionals in appropriate subdomains; if necessary, a Tikhonov regularization term is added to the functional. The filtration function is recovered by optimizing a non-linear functional with box constraints; this functional involves the effluent concentration history. The permeability reduction is recovered likewise, taking into account the filtration function already found, and the functional involves the pressure drop history. In both cases, the functionals are derived from least square formulations of the deviation between experimental data and quantities predicted by the model.
Alvarez, A. C., Hime, G., Marchesin, D., Bedrikovetski, P.
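Schematically, and assuming SciPy, the recovery step for the filtration function amounts to box-constrained least squares on the effluent concentration history with an optional Tikhonov term; here predict_effluent is a hypothetical placeholder for the forward deep bed filtration solver, and theta parameterizes the filtration function.

    import numpy as np
    from scipy.optimize import least_squares

    def recover_filtration(theta0, bounds, t_obs, c_obs,
                           predict_effluent, alpha=0.0):
        """Fit parameters theta of a filtration-function model to measured
        effluent concentrations by box-constrained least squares."""
        def residuals(theta):
            misfit = predict_effluent(theta, t_obs) - c_obs  # data deviation
            tikhonov = np.sqrt(alpha) * theta                # regularization
            return np.concatenate([misfit, tikhonov])
        return least_squares(residuals, theta0, bounds=bounds)

The permeability reduction would be recovered the same way, with the residual built from the pressure drop history and the already-identified filtration function held fixed.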
- …