
    Sustainable two stage supply chain management: A quadratic optimization approach with a quadratic constraint

    Designing a supply chain that complies with environmental policy requires awareness of how work and production methods affect the environment and of what must be done to reduce those impacts and make the company more sustainable. This is a dynamic process that takes place at both the strategic and the operational level. However, being environmentally friendly does not necessarily improve the efficiency of the system at the same time. Therefore, when allocating a production budget in a supply chain that adopts the green paradigm, it is necessary to determine how to properly recover costs so as to improve both sustainability and routine operations, offsetting the negative environmental impact of logistics and production without compromising the efficiency of the processes to be executed. In this paper, we study the latter problem in detail, focusing on the CO2 emissions generated by transportation from suppliers to production sites and by the production activities carried out in each plant. We do so using a novel mathematical model with a quadratic objective function and linear constraints except for one quadratic constraint, which limits the budget available for green investments as a function of the internal complexity created by large production flows at the production nodes of the supply network. To solve this model, we propose a multistart algorithm based on successive linear approximations. Computational results show the effectiveness of our proposal.
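
    The abstract does not reproduce the model itself, so the Python fragment below is only a minimal sketch of the algorithmic idea under stated assumptions: a quadratic objective x'Qx + c'x, linear constraints Ax <= b, and a single quadratic budget constraint x'Px <= budget, handled by successive linear approximations inside a shrinking trust region and restarted from several random points. All data and parameter names are illustrative placeholders, not the authors' supply chain formulation.

    # Hypothetical sketch: multistart successive linear approximation (SLA) for a
    # quadratic program with one quadratic constraint.  Q, c, A, b, P and `budget`
    # are placeholder data, not the paper's model.
    import numpy as np
    from scipy.optimize import linprog

    def sla_from(x0, Q, c, A, b, P, budget, iters=50, radius=1.0):
        """Linearize the objective and the quadratic constraint at x_k, solve the
        resulting LP in a shrinking trust region, repeat until iterates stabilize."""
        x = x0.copy()
        for _ in range(iters):
            g_obj = 2 * Q @ x + c                   # gradient of x'Qx + c'x
            g_con = 2 * P @ x                       # gradient of x'Px
            A_ub = np.vstack([A, g_con])            # linearized budget constraint
            b_ub = np.append(b, budget - x @ P @ x + g_con @ x)
            bounds = [(max(0.0, xi - radius), xi + radius) for xi in x]
            res = linprog(g_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            if not res.success or np.linalg.norm(res.x - x) < 1e-6:
                break
            x, radius = res.x, 0.9 * radius         # accept step, shrink region
        return x

    def multistart(n_starts, dim, Q, c, A, b, P, budget, seed=0):
        """Run SLA from random starting points and keep the best feasible point."""
        rng = np.random.default_rng(seed)
        best, best_val = None, np.inf
        for _ in range(n_starts):
            x = sla_from(rng.uniform(0.0, 1.0, dim), Q, c, A, b, P, budget)
            val = x @ Q @ x + c @ x
            feasible = np.all(A @ x <= b + 1e-8) and x @ P @ x <= budget + 1e-8
            if feasible and val < best_val:
                best, best_val = x, val
        return best, best_val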

    Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization

    Statistical preconditioning enables fast methods for distributed large-scale empirical risk minimization problems. In this approach, multiple worker nodes compute gradients in parallel, which are then used by the central node to update the parameter by solving an auxiliary (preconditioned) smaller-scale optimization problem. The recently proposed Statistically Preconditioned Accelerated Gradient (SPAG) method [1] has complexity bounds superior to those of other such algorithms, but requires an exact solution of a computationally intensive auxiliary optimization problem at every iteration. In this paper, we propose an Inexact SPAG (InSPAG) and explicitly characterize the accuracy to which the corresponding auxiliary subproblem needs to be solved to guarantee the same convergence rate as the exact method. We build our results by first developing an inexact adaptive accelerated Bregman proximal gradient method for general optimization problems under relative smoothness and strong convexity assumptions, which may be of independent interest. Moreover, we explore the properties of the auxiliary problem in the InSPAG algorithm assuming Lipschitz third-order derivatives and strong convexity. For this problem class, we develop a linearly convergent Hyperfast second-order method and estimate the total complexity of the InSPAG method with the hyperfast auxiliary-problem solver. Finally, we illustrate the proposed method's practical efficiency by performing large-scale numerical experiments on logistic regression models. To the best of our knowledge, these are the first empirical results on implementing high-order methods on large-scale problems, as we work with data where the dimension is of the order of 3 million and the number of samples is 700 million.
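
    As a rough illustration of the statistically preconditioned update that the paper makes inexact, the sketch below shows the structure of one central-node step: aggregate the workers' gradients and approximately minimize a linearized loss plus a Bregman divergence generated by the local (preconditioning) function, only up to a tolerance. The helper names (full_gradient, solve_subproblem) and the tolerance handling are assumptions made for illustration; the paper specifies the exact inexactness criterion and the hyperfast inner solver.

    # Schematic InSPAG-style outer step (names and tolerance rule are illustrative).
    import numpy as np

    def bregman(phi, grad_phi, x, y):
        """Bregman divergence D_phi(x, y) of the reference function phi."""
        return phi(x) - phi(y) - grad_phi(y) @ (x - y)

    def inexact_preconditioned_step(x_k, eta, full_gradient, phi, grad_phi,
                                    solve_subproblem, tol_k):
        """Central node: approximately solve
               min_x  <g_k, x> + (1/eta) * D_phi(x, x_k)
           to accuracy tol_k, where g_k aggregates the workers' gradients and phi
           is the local loss used as a statistical preconditioner."""
        g_k = full_gradient(x_k)                       # gathered from worker nodes
        subproblem = lambda x: g_k @ x + bregman(phi, grad_phi, x, x_k) / eta
        return solve_subproblem(subproblem, x0=x_k, tol=tol_k)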

    Training very large scale nonlinear SVMs using Alternating Direction Method of Multipliers coupled with the Hierarchically Semi-Separable kernel approximations

    Typically, nonlinear Support Vector Machines (SVMs) produce significantly higher classification quality than linear ones but, at the same time, their computational complexity is prohibitive for large-scale datasets: this drawback is essentially due to the need to store and manipulate large, dense and unstructured kernel matrices. Although at the core of training an SVM there is a simple convex optimization problem, the presence of kernel matrices is responsible for a dramatic performance reduction, making SVMs unworkably slow for large problems. Aiming at an efficient solution of large-scale nonlinear SVM problems, we propose the use of the Alternating Direction Method of Multipliers coupled with Hierarchically Semi-Separable (HSS) kernel approximations. As shown in this work, a detailed analysis of the interaction among their algorithmic components unveils a particularly efficient framework, and the presented experimental results demonstrate a significant speed-up over state-of-the-art nonlinear SVM libraries without significantly affecting classification accuracy.
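
    Building an HSS representation is beyond the scope of a short example, so the sketch below only illustrates the computational point made in the abstract: once the dense kernel matrix is replaced by a structured surrogate, the expensive linear algebra inside each ADMM iteration of the SVM dual becomes cheap. Here a Nystrom low-rank factor stands in for the HSS format and the shifted solve uses the Woodbury identity; all function names are hypothetical and not taken from the authors' code.

    # Illustrative only: a low-rank kernel surrogate (Nystrom, standing in for HSS)
    # makes the shifted solve appearing in an ADMM split of the SVM dual cheap.
    import numpy as np

    def nystrom_factor(X, kernel, landmarks):
        """Return U with K ~ U @ U.T, using U = K_nm @ K_mm^{-1/2}."""
        K_nm = kernel(X, X[landmarks])
        K_mm = kernel(X[landmarks], X[landmarks])
        w, V = np.linalg.eigh(K_mm + 1e-10 * np.eye(len(landmarks)))
        return K_nm @ V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T

    def solve_shifted(U, y, rho, rhs):
        """Solve (D K D + rho I) x = rhs with K ~ U U^T and D = diag(y),
        via the Woodbury identity: cost O(n r^2) instead of O(n^3)."""
        B = y[:, None] * U                              # B = D U
        r = B.shape[1]
        inner = np.linalg.solve(rho * np.eye(r) + B.T @ B, B.T @ rhs)
        return (rhs - B @ inner) / rho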

    A variable metric proximal stochastic gradient method: An application to classification problems

    Due to the continued success of machine learning, and of deep learning in particular, supervised classification problems are ubiquitous in numerous scientific fields. Training these models typically involves minimizing the empirical risk over large data sets together with a possibly non-differentiable regularization term. In this paper, we introduce a stochastic gradient method for the considered classification problem. To control the variance of the objective's gradients, we use an automatic sample size selection along with a variable metric to precondition the stochastic gradient directions. Further, we utilize a non-monotone line search to automate the step size selection. Convergence results are provided for both convex and non-convex objective functions. Extensive numerical experiments verify that the suggested approach performs on par with state-of-the-art methods for training both statistical models for binary classification and artificial neural networks for multi-class image classification. The code is publicly available at https://github.com/koblererich/lisavm.
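
    To make the algorithmic ingredients concrete, the fragment below sketches a single variable-metric proximal stochastic gradient step with an l1 regularizer: the stochastic gradient is preconditioned by a diagonal metric and the proximal operator is taken in that same metric. The mini-batch gradient, the diagonal D, the fixed step size, and the omission of the sample-size rule and the non-monotone line search are simplifications for illustration, not the method as published.

    # Sketch of one variable-metric proximal stochastic gradient step (l1 penalty).
    # D is a diagonal preconditioner; the paper's sample-size and line-search rules
    # are omitted here.
    import numpy as np

    def prox_l1_diag(v, D, lam, alpha):
        """argmin_x  lam*alpha*||x||_1 + 0.5*(x - v)' diag(D) (x - v):
        componentwise soft-thresholding at lam*alpha / D_i."""
        return np.sign(v) * np.maximum(np.abs(v) - lam * alpha / D, 0.0)

    def vm_prox_sgd_step(x, minibatch_grad, D, lam, alpha):
        """x+ = prox^{D}_{alpha*lam*||.||_1}( x - alpha * D^{-1} g ),
        with g a stochastic gradient of the empirical risk."""
        g = minibatch_grad(x)                   # gradient on the current sample
        v = x - alpha * g / D                   # preconditioned gradient step
        return prox_l1_diag(v, D, lam, alpha)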

    (Global) Optimization: Historical notes and recent developments

    Recent developments in (Global) Optimization are surveyed in this paper. We collect and comment on quite a large number of recent references which, in our opinion, well represent the vivacity, depth, and breadth of scope of current computational approaches and theoretical results on nonconvex optimization problems. Before presenting the recent developments, which are subdivided into two parts devoted to heuristic and exact approaches, respectively, we briefly sketch the origin of the discipline and observe what survived from the initial attempts, what was not considered at all, and a few approaches that have recently been rediscovered, mostly in connection with machine learning.