70 research outputs found
Generalized Affine Scaling Algorithms for Linear Programming Problems
Interior Point Methods are widely used to solve Linear Programming problems.
In this work, we present two primal affine scaling algorithms to achieve faster
convergence in solving Linear Programming problems. In the first algorithm, we
integrate Nesterov's restarting strategy in the primal affine scaling method
with an extra parameter, which in turn generalizes the original primal affine
scaling method. We prove convergence of the proposed generalized algorithm for
long step sizes. We also prove convergence of the primal and dual sequences
without the degeneracy assumption.
This convergence result generalizes the original convergence result for affine
scaling methods and hints at the existence of a new family of methods. We then
introduce a second algorithm that accelerates the convergence of the
generalized algorithm by integrating a non-linear series transformation
technique. Our numerical results show that the proposed algorithms outperform
the original primal affine scaling method.
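The affine scaling direction that both algorithms build on can be sketched as follows. This is a minimal baseline without the Nesterov restart or series transformation; the LP instance and step fraction `alpha` are illustrative, not from the paper.

```python
# A minimal sketch of one long-step primal affine scaling iteration for the LP
#   min c^T x  s.t.  A x = b, x > 0.
# The problem data and step parameter below are illustrative, not from the paper.
import numpy as np

def affine_scaling_step(A, c, x, alpha=0.6):
    """One long-step primal affine scaling update from a strictly feasible x."""
    X = np.diag(x)                          # scaling matrix X = diag(x)
    y = np.linalg.solve(A @ X @ X @ A.T, A @ X @ X @ c)   # dual estimate
    s = c - A.T @ y                         # reduced cost (dual slack estimate)
    d = -X @ X @ s                          # affine scaling direction
    neg = d < 0
    # Step length: fraction alpha of the distance to the boundary x > 0.
    t = alpha * np.min(-x[neg] / d[neg]) if neg.any() else 1.0
    return x + t * d

# Tiny example: min x1 + 2*x2 s.t. x1 + x2 = 1, x >= 0 (optimum at x = (1, 0)).
A = np.array([[1.0, 1.0]])
c = np.array([1.0, 2.0])
x = np.array([0.5, 0.5])                    # strictly feasible interior start
for _ in range(30):
    x = affine_scaling_step(A, c, x)
print(np.round(x, 4))                       # [1. 0.]
```

The scaling by X² pushes the search direction away from the boundary, which is what allows the long step without losing positivity.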
Sketch & Project Methods for Linear Feasibility Problems: Greedy Sampling & Momentum
We develop two greedy sampling rules for the Sketch & Project method for
solving linear feasibility problems. The proposed greedy sampling rules
generalize the existing max-distance sampling rule and uniform sampling rule
and generate faster variants of Sketch & Project methods. We also introduce
greedy capped sampling rules that improve the existing capped sampling rules.
Moreover, we incorporate the so-called heavy ball momentum technique to the
proposed greedy Sketch & Project method. By varying parameters such as the
sampling rule and the sketching vectors, we recover several well-known algorithms as
special cases, including Randomized Kaczmarz (RK), Motzkin Relaxation (MR),
Sampling Kaczmarz Motzkin (SKM). We also obtain several new methods such as
Randomized Coordinate Descent, Sampling Coordinate Descent, Capped Coordinate
Descent, etc. for solving linear feasibility problems. We provide global linear
convergence results for both the basic greedy method and the greedy method with
momentum. Under weaker conditions, we prove a convergence rate for the
Cesàro average of the sequences generated by both methods.
We extend the so-called certificate of feasibility result to the proposed
momentum method, generalizing several existing results. To back up the
proposed theoretical results, we carry out comprehensive numerical experiments
on randomly generated test instances as well as sparse real-world test
instances. The proposed greedy sampling methods significantly outperform the
existing sampling methods. Finally, the momentum variants designed in this
work improve the computational performance of the Sketch & Project methods for
all of the sampling rules.
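Two of the basic ingredients above, greedy max-distance sampling and heavy ball momentum, can be illustrated on a toy linear feasibility problem. The parameters `delta` and `beta` and the instance below are illustrative, not the tuned values analyzed in the work.

```python
# A minimal sketch of the max-distance (Motzkin-type) projection step for the
# feasibility problem A x <= b, combined with heavy ball momentum.
# Parameter values (delta, beta) and the test instance are illustrative.
import numpy as np

def greedy_momentum_step(A, b, x, x_prev, delta=1.0, beta=0.2):
    """Project onto the most violated constraint, then add a momentum term."""
    residual = A @ x - b                        # positive entries are violations
    i = int(np.argmax(residual))                # greedy max-distance sampling
    viol = max(residual[i], 0.0)
    x_new = x - delta * viol / np.dot(A[i], A[i]) * A[i]   # projection step
    return x_new + beta * (x - x_prev), x       # heavy ball momentum

# Feasibility instance: x1 >= 0, x2 >= 0, x1 + x2 <= 1, written as A x <= b.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
x, x_prev = np.array([3.0, 3.0]), np.array([3.0, 3.0])
for _ in range(60):
    x, x_prev = greedy_momentum_step(A, b, x, x_prev)
print(np.all(A @ x <= b + 1e-8))                # True: x is feasible
```

Choosing the row with the largest residual is the "max-distance" rule; replacing `argmax` with a random row index recovers a Kaczmarz-type method.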
A Distance Measuring Algorithm for Location Analysis
Approximating distance is one of the key challenges in a facility location
problem. Several algorithms have been proposed; however, none of them focuses
on estimating the distance between two concave regions. In this work, we present an
algorithm to estimate the distance between two irregular regions of a facility
location problem. The proposed algorithm can identify the distance between
concave shape regions. We also discuss some relevant properties of the proposed
algorithm. A distance-sensitive capacity location model is introduced to test
the algorithm. Moreover, several special geometric cases are discussed to show
the advantages and insights of the algorithm.
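The paper's algorithm itself is not reproduced here; as a point of comparison, a brute-force baseline for the same task (minimum distance between two disjoint simple polygons, concave shapes included) can be sketched as follows, with made-up coordinates.

```python
# A hedged brute-force baseline (not the paper's algorithm): the minimum
# distance between two disjoint simple polygons is attained between a vertex
# of one polygon and an edge of the other, so we check all such pairs.
# Polygon coordinates are illustrative.
import numpy as np

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def polygon_dist(P, Q):
    """Minimum distance between the boundaries of disjoint polygons P and Q."""
    best = np.inf
    for U, V in ((P, Q), (Q, P)):           # vertices of U vs edges of V
        for p in U:
            for i in range(len(V)):
                best = min(best, point_segment_dist(p, V[i], V[(i + 1) % len(V)]))
    return best

# A concave (L-shaped) region and a square, 1 unit apart.
P = np.array([[0, 0], [2, 0], [2, 1], [1, 1], [1, 2], [0, 2]], float)
Q = np.array([[3, 0], [4, 0], [4, 1], [3, 1]], float)
print(polygon_dist(P, Q))                   # 1.0
```

This quadratic-time check is exact for disjoint regions but scales poorly, which is precisely the gap a dedicated distance-measuring algorithm targets.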
Resilient Supplier Selection in Logistics 4.0 with Heterogeneous Information
The supplier selection problem has gained extensive attention in prior
studies. However, research based on the Fuzzy Multi-Attribute Decision Making
(F-MADM) approach to ranking resilient suppliers in Logistics 4.0 is still in its
infancy. The traditional MADM approach fails to address the resilient supplier
selection problem in Logistics 4.0, primarily because of the large amount of data
concerning some attributes that are quantitative, yet difficult to process
while making decisions. Besides, some qualitative attributes prevalent in
Logistics 4.0 entail imprecise perceptual or judgmental decision-relevant
information and are substantially different from those considered in
traditional supplier selection problems. This study develops a Decision Support
System (DSS) that will help the decision maker to incorporate and process such
imprecise heterogeneous data in a unified framework to rank a set of resilient
suppliers in the Logistics 4.0 environment. The proposed framework induces a
triangular fuzzy number from large-scale temporal data using the
probability-possibility consistency principle. A large amount of non-temporal
data presented graphically is processed by extracting granular information that
is imprecise in nature. Fuzzy linguistic variables are used to map the
qualitative attributes. Finally, a fuzzy-based TOPSIS method is adopted to
generate the ranking score of alternative suppliers. These ranking scores are
used as input in a Multi-Choice Goal Programming (MCGP) model to determine
optimal order allocation for the respective suppliers. A sensitivity analysis
then assesses how the Suppliers Cost versus Resilience Index (SCRI) changes
when differential priorities are set for the respective cost and resilience
attributes.
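The TOPSIS step is the computational core of the ranking stage. A sketch of the underlying crisp TOPSIS computation is shown below (the fuzzy variant replaces crisp entries with triangular fuzzy numbers); the decision matrix, weights, and attribute types are illustrative only.

```python
# A hedged sketch of the crisp TOPSIS core underlying the ranking step.
# The paper uses a fuzzy TOPSIS variant; data below are made up.
import numpy as np

def topsis(D, w, benefit):
    """Return TOPSIS closeness scores (higher = better) for each alternative."""
    R = D / np.linalg.norm(D, axis=0)           # vector-normalize each attribute
    V = R * w                                   # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to ideal solution
    d_neg = np.linalg.norm(V - anti,  axis=1)   # distance to anti-ideal
    return d_neg / (d_pos + d_neg)              # closeness coefficient

# 3 suppliers x 3 attributes: cost (lower is better), resilience, quality.
D = np.array([[250.0, 0.8, 7.0],
              [200.0, 0.6, 9.0],
              [300.0, 0.9, 6.0]])
w = np.array([0.4, 0.4, 0.2])
scores = topsis(D, w, benefit=np.array([False, True, True]))
print(np.round(scores, 3))
```

In the proposed DSS, scores like these feed the MCGP model as ranking inputs for the order allocation step.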
Accelerated Sampling Kaczmarz Motzkin Algorithm for The Linear Feasibility Problem
The Sampling Kaczmarz Motzkin (SKM) algorithm is a generalized method for
solving large scale linear systems of inequalities. Having its root in the
relaxation method of Agmon, Schoenberg, and Motzkin and the randomized Kaczmarz
method, SKM outperforms the state-of-the-art methods in solving large-scale
Linear Feasibility (LF) problems. Motivated by SKM's success, in this work, we
propose an Accelerated Sampling Kaczmarz Motzkin (ASKM) algorithm which
achieves better convergence compared to the standard SKM algorithm on
ill-conditioned problems. We provide a thorough convergence analysis for the
proposed accelerated algorithm and validate the results with various numerical
experiments. We compare the performance and effectiveness of the ASKM algorithm
with SKM, Interior Point Method (IPM) and Active Set Method (ASM) on randomly
generated instances as well as Netlib LPs. In most of the test instances, the
proposed ASKM algorithm outperforms the other state-of-the-art methods.
Comment: Journal of Global Optimization, Oct 201
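A minimal sketch of the basic SKM iteration (sample β rows, project onto the most violated constraint in the sample) follows; the instance and parameters are illustrative, and the accelerated ASKM variant adds a Nesterov-type extrapolation on top of this step.

```python
# A minimal sketch of the SKM iteration for the feasibility problem A x <= b:
# sample beta rows at random, then project onto the most violated constraint
# within the sample. The instance and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def skm_step(A, b, x, beta=2, delta=1.0):
    S = rng.choice(len(b), size=beta, replace=False)    # random row sample
    res = A[S] @ x - b[S]
    i = S[int(np.argmax(res))]                          # most violated in sample
    viol = max(A[i] @ x - b[i], 0.0)
    return x - delta * viol / np.dot(A[i], A[i]) * A[i]

A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])    # x1>=0, x2>=0, x1+x2<=1
b = np.array([0.0, 0.0, 1.0])
x = np.array([5.0, -2.0])
for _ in range(300):
    x = skm_step(A, b, x)
print(np.all(A @ x <= b + 1e-8))                        # True: feasible point
```

Setting `beta` to 1 recovers a randomized Kaczmarz-type method, while `beta` equal to the number of rows recovers the fully greedy Motzkin relaxation.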
Sampling Kaczmarz Motzkin Method for Linear Feasibility Problems: Generalization & Acceleration
Randomized Kaczmarz (RK), Motzkin Method (MM) and Sampling Kaczmarz Motzkin
(SKM) algorithms are commonly used iterative techniques for solving a system of
linear inequalities (i.e., $Ax \leq b$). As linear systems of equations
represent a modeling paradigm for solving many optimization problems, these
randomized and iterative techniques are gaining popularity among researchers in
different domains. In this work, we propose a Generalized Sampling Kaczmarz
Motzkin (GSKM) method that unifies the iterative methods into a single
framework. In addition to the general framework, we propose a Nesterov-type
acceleration scheme in the SKM method, called the Probably Accelerated Sampling
Kaczmarz Motzkin (PASKM). We prove the convergence theorems for both GSKM and
PASKM algorithms in the $L_2$ norm with respect to the proposed
sampling distribution. Furthermore, we prove sub-linear convergence for the
Cesàro average of iterates for the proposed GSKM and PASKM algorithms. From the
convergence theorem of the GSKM algorithm, we find the convergence results of
several well-known algorithms like the Kaczmarz method, Motzkin method and SKM
algorithm. We perform thorough numerical experiments using both randomly
generated and real-world (classification with support vector machine and Netlib
LP) test instances to demonstrate the efficiency of the proposed methods. We
compare the proposed algorithms with SKM, Interior Point Method (IPM) and
Active Set Method (ASM) in terms of computation time and solution quality. In
the majority of the problem instances, the proposed generalized and accelerated
algorithms significantly outperform the state-of-the-art methods.
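The acceleration idea can be illustrated with a generic Nesterov-type extrapolation wrapped around the greedy projection step; the extrapolation coefficient below is illustrative, not the tuned PASKM parameter from the paper.

```python
# A hedged sketch of Nesterov-type acceleration around a Kaczmarz/Motzkin
# projection step for A x <= b. The coefficient gamma and the instance are
# illustrative, not the tuned parameters analyzed in the paper.
import numpy as np

def project_most_violated(A, b, y):
    res = A @ y - b
    i = int(np.argmax(res))
    viol = max(res[i], 0.0)
    return y - viol / np.dot(A[i], A[i]) * A[i]

def accelerated_solve(A, b, x0, gamma=0.5, iters=300):
    x_prev, y = x0.copy(), x0.copy()
    for _ in range(iters):
        x = project_most_violated(A, b, y)   # projection step at extrapolated y
        y = x + gamma * (x - x_prev)         # Nesterov-type extrapolation
        x_prev = x
    return x

A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])   # x1>=0, x2>=0, x1+x2<=1
b = np.array([0.0, 0.0, 1.0])
x = accelerated_solve(A, b, np.array([4.0, -3.0]))
print(np.all(A @ x <= b + 1e-6))                       # True: feasible point
```

The extrapolated point `y` looks ahead along the previous direction of motion, which is what yields the improved rates on well-behaved instances.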
A Computational Framework for Solving Nonlinear Binary Optimization Problems in Robust Causal Inference
Identifying cause-effect relations among variables is a key step in the
decision-making process. While causal inference requires randomized
experiments, researchers and policymakers are increasingly using observational
studies to test causal hypotheses due to the wide availability of observational
data and the infeasibility of experiments. The matching method is the most widely used
technique to make causal inference from observational data. However, the pair
assignment process in one-to-one matching creates uncertainty in the inference
because of different choices made by the experimenter. Recently, discrete
optimization models have been proposed to tackle such uncertainty. Although a robust
inference is possible with discrete optimization models, they produce nonlinear
problems and lack scalability. In this work, we propose greedy algorithms to
solve the robust causal inference test instances from observational data with
continuous outcomes. We propose a unique framework to reformulate the nonlinear
binary optimization problems as feasibility problems. By leveraging the
structure of the feasibility formulation, we develop greedy schemes that are
efficient in solving robust test problems. In many cases, the proposed
algorithms achieve global optimal solutions. We perform experiments on three
real-world datasets to demonstrate the effectiveness of the proposed algorithms
and compare our result with the state-of-the-art solver. Our experiments show
that the proposed algorithms significantly outperform the exact method in terms
of computation time while achieving the same conclusion for causal tests. Both
numerical experiments and complexity analysis demonstrate that the proposed
algorithms ensure the scalability required for harnessing the power of big data
in the decision-making process.
A Primal-Dual Interior Point Method for a Novel Type-2 Second Order Cone Optimization Problem
In this paper, we define a new, special second order cone as a type-$k$
second order cone. We focus on the case of $k=2$, which can be viewed as SOCO
with an additional {\em complicating variable}. For this new problem, we
develop the necessary prerequisites, based on previous work for traditional
SOCO. We then develop a primal-dual interior point algorithm for solving a
type-2 second order conic optimization (SOCO) problem, based on a family of
kernel functions suitable for this type-2 SOCO. We finally derive the following
iteration bound for our framework: \[\frac{L^\gamma}{\theta \kappa \gamma}
\left[2N \psi\left( \frac{\varrho \left(\tau
/4N\right)}{\sqrt{1-\theta}}\right)\right]^\gamma\log \frac{3N}{\epsilon}.\]
Stochastic Steepest Descent Methods for Linear Systems: Greedy Sampling & Momentum
Recently proposed adaptive Sketch & Project (SP) methods connect several
well-known projection methods such as Randomized Kaczmarz (RK), Randomized
Block Kaczmarz (RBK), Motzkin Relaxation (MR), Randomized Coordinate Descent
(RCD), Capped Coordinate Descent (CCD), etc. into one framework for solving
linear systems. In this work, we first propose a Stochastic Steepest Descent
(SSD) framework that connects SP methods with the well-known Steepest Descent
(SD) method for solving positive-definite linear systems of equations. We then
introduce two greedy sampling strategies in the SSD framework that allow us to
obtain algorithms such as Sampling Kaczmarz Motzkin (SKM), Sampling Block
Kaczmarz (SBK), Sampling Coordinate Descent (SCD), etc. In doing so, we
generalize the existing sampling rules into one framework and develop an
efficient version of the SP methods. Furthermore, we incorporate the Polyak
momentum technique into the SSD method to accelerate the resulting algorithms.
We provide global convergence results for both the SSD method and the momentum
induced SSD method. Moreover, we prove a convergence rate for the Cesàro
average of iterates generated by both methods. By varying
parameters in the SSD method, we obtain classical convergence results of the SD
method as well as the SP methods as special cases. We design computational
experiments to demonstrate the performance of the proposed greedy sampling
methods as well as the momentum methods. The proposed greedy methods
significantly outperform the existing methods for a wide variety of datasets
such as random test instances as well as real-world datasets (LIBSVM, sparse
datasets from the Matrix Market collection). Finally, the momentum algorithms
designed in this work accelerate the algorithmic performance of the SSD
methods.
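The SSD step with Polyak momentum for a positive-definite system can be sketched as follows; the matrix, right-hand side, and momentum weight below are illustrative.

```python
# A minimal sketch of Steepest Descent with Polyak (heavy ball) momentum for
# a symmetric positive-definite system A x = b. Data and the momentum weight
# beta are illustrative.
import numpy as np

def sd_momentum(A, b, x0, beta=0.1, iters=200):
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        r = b - A @ x                       # residual (negative gradient)
        rr = r @ r
        if rr < 1e-30:                      # already converged
            break
        alpha = rr / (r @ (A @ r))          # exact line-search step size
        x, x_prev = x + alpha * r + beta * (x - x_prev), x
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])      # symmetric positive definite
b = np.array([5.0, 5.0])
x = sd_momentum(A, b, np.array([0.0, 0.0]))
print(np.round(x, 6))                       # solution of A x = b: [1. 2.]
```

Setting `beta` to 0 recovers classical Steepest Descent, matching the framework's claim that SD arises as a special case.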
A robust approach to quantifying uncertainty in matching problems of causal inference
Unquantified sources of uncertainty in observational causal analyses can
break the integrity of the results. One would never want another analyst to
repeat a calculation with the same dataset, using a seemingly identical
procedure, only to find a different conclusion. However, as we show in this
work, there is a typical source of uncertainty that is essentially never
considered in observational causal studies: the choice of match assignment for
matched groups, that is, which unit is matched to which other unit before a
hypothesis test is conducted. The choice of match assignment is anything but
innocuous, and can have a surprisingly large influence on the causal
conclusions. Given that a vast number of causal inference studies test
hypotheses on treatment effects after treatment cases are matched with similar
control cases, we should find a way to quantify how much this extra source of
uncertainty impacts results. What we would really like to be able to report is
that \emph{no matter} which match assignment is made, as long as the match is
sufficiently good, then the hypothesis test result still holds. In this paper,
we provide methodology based on discrete optimization to create robust tests
that explicitly account for this possibility. We formulate robust tests for
binary and continuous data based on common test statistics as integer linear
programs solvable with common methodologies. We study the finite-sample
behavior of our test statistic in the discrete-data case. We apply our methods
to simulated and real-world datasets and show that they can produce useful
results in practical applied settings.
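A toy illustration of the underlying concern (not the paper's integer programs): enumerating every one-to-one match assignment on a tiny example shows how much a mean-difference test statistic can vary with the assignment. The data are made up.

```python
# A hedged toy example: brute-force the range of the paired mean-difference
# statistic over all one-to-one match assignments between small treated and
# control groups. Outcome values are made up for illustration.
import numpy as np
from itertools import permutations

treated = np.array([5.1, 6.0, 7.2])         # outcomes of treated units
control = np.array([4.0, 5.5, 6.9, 5.0])    # outcomes of candidate controls

stats = []
for perm in permutations(range(len(control)), len(treated)):
    diffs = treated - control[list(perm)]   # paired treatment-control gaps
    stats.append(diffs.mean())              # mean-difference test statistic

print(round(min(stats), 3), round(max(stats), 3))   # 0.3 1.267
```

Even on three pairs the statistic spans a wide range; the integer programming formulations in the paper bound this range efficiently instead of enumerating assignments.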