The Voice of Optimization
We introduce the idea that, using optimal classification trees (OCTs) and
optimal classification trees with hyperplanes (OCT-Hs), interpretable machine
learning algorithms developed by Bertsimas and Dunn [2017, 2018], we can
obtain insight into the strategy behind the optimal solution of continuous and
mixed-integer convex optimization problems as a function of the key parameters
that affect the problem. In this way, optimization is no longer a black box.
Instead, we redefine optimization as a multiclass classification problem in
which the predictor gives insight into the logic behind the optimal solution.
In other words, OCTs and OCT-Hs give optimization a voice. We show on several
realistic examples that the accuracy of our method is in the 90%-100% range,
and that even when the predictions are not correct, the degree of suboptimality
or infeasibility is very low. We compare the optimal strategy predictions of
OCTs and OCT-Hs with those of feedforward neural networks (NNs) and conclude
that the performance of OCT-Hs and NNs is comparable; OCTs are somewhat weaker
but often competitive. Therefore, our approach provides a novel, insightful
understanding of the optimal strategies for solving a broad class of continuous
and mixed-integer optimization problems.
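The reformulation of optimization as multiclass classification can be sketched on a toy problem. The example below is an illustration under simplifying assumptions, not the authors' method: a hypothetical parametric 0/1 knapsack stands in for a general mixed-integer problem, the optimal item selection plays the role of a "strategy", and a 1-nearest-neighbor rule stands in for the OCT/OCT-H classifiers.

```python
from itertools import product

# Hypothetical parametric 0/1 knapsack: values and weights are fixed,
# the capacity is the key parameter that varies.
values = [6, 5, 4]
weights = [3, 2, 2]

def optimal_strategy(capacity):
    """Brute-force the optimal item selection; the selection IS the 'strategy'."""
    return max(
        (s for s in product([0, 1], repeat=len(values))
         if sum(w * x for w, x in zip(weights, s)) <= capacity),
        key=lambda s: sum(v * x for v, x in zip(values, s)),
    )

# Offline phase: sample parameter values and label each with its strategy.
train = [(c, optimal_strategy(c)) for c in [2, 3, 4, 5, 6, 7]]

def predict(capacity):
    """Online phase: a 1-nearest-neighbor classifier over the parameter space."""
    return min(train, key=lambda t: abs(t[0] - capacity))[1]

# Given the predicted strategy, recovering the full solution is immediate here.
print(predict(4.4))   # → (0, 1, 1)
```

Once the classifier is trained, answering a new parameter query requires no optimization solve at all, which is the source of both the speed and the interpretability the abstract describes.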
Data-driven Localization and Estimation of Disturbance in the Interconnected Power System
Identifying the location of a disturbance and its magnitude is an important
component for stable operation of power systems. We study the problem of
localizing and estimating a disturbance in the interconnected power system. We
take a model-free approach to this problem by using frequency data from
generators. Specifically, we develop a logistic regression based method for
localization and a linear regression based method for estimation of the
magnitude of disturbance. Our model-free approach does not require the
knowledge of system parameters such as inertia constants and topology, and is
shown to achieve highly accurate localization and estimation performance even
in the presence of measurement noise and missing data.
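The model-free localization step can be illustrated with a minimal sketch. The data below are synthetic and purely illustrative (they come from no power-system model): frequency deviations at two generators form the feature vector, the label is which of two buses was disturbed, and a plain gradient-descent logistic regression is trained, with no inertia constants or topology required.

```python
import math
import random

random.seed(0)

# Synthetic frequency deviations at two generators; the label is which of two
# buses (0 or 1) the disturbance hit. Illustrative data, not a real system.
def sample(bus):
    features = [random.gauss(0.0, 0.05) for _ in range(2)]
    features[bus] += 1.0          # the disturbance shows up most strongly nearby
    return features, bus

data = [sample(random.randint(0, 1)) for _ in range(200)]

# Plain stochastic-gradient logistic regression (no system parameters needed).
w, b = [0.0, 0.0], 0.0
for _ in range(200):
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        g = p - y                  # gradient of the log-loss w.r.t. the logit
        w = [wi - 0.1 * g * xi for wi, xi in zip(w, x)]
        b -= 0.1 * g

acc = sum(
    ((1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))) > 0.5) == (y == 1)
    for x, y in data
) / len(data)
print(f"training accuracy: {acc:.2f}")
```

For estimating the disturbance magnitude, the same features would feed a linear regression instead of the logistic classifier.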
Local-Aggregate Modeling for Big-Data via Distributed Optimization: Applications to Neuroimaging
Technological advances have led to a proliferation of structured big data
that have matrix-valued covariates. We are specifically motivated to build
predictive models for multi-subject neuroimaging data based on each subject's
brain imaging scans. This is an ultra-high-dimensional problem that consists of
a matrix of covariates (brain locations by time points) for each subject; few
methods currently exist to fit supervised models directly to this tensor data.
We propose a novel modeling and algorithmic strategy to apply generalized
linear models (GLMs) to this massive tensor data in which one set of variables
is associated with locations. Our method begins by fitting GLMs to each
location separately, and then builds an ensemble by blending information across
locations through regularization with what we term an aggregating penalty. Our
so-called Local-Aggregate Model can be fit in a completely distributed manner
over the locations using an Alternating Direction Method of Multipliers (ADMM)
strategy, and thus greatly reduces the computational burden. Furthermore, we
propose to select the appropriate model through a novel sequence of faster
algorithmic solutions that is similar to regularization paths. We will
demonstrate both the computational and predictive modeling advantages of our
methods via simulations and an EEG classification problem.
Comment: 41 pages, 5 figures, and 3 tables
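The distributed fitting pattern can be sketched in its simplest form. The example below is a stand-in, not the paper's model: each "location" fits a scalar least-squares subproblem f_i(x) = (x - a_i)^2, and consensus ADMM plays the role of the aggregating step that blends the local fits. The coupled problem min_x Σ_i f_i(x) has the closed-form solution mean(a), which the distributed iterations recover.

```python
# Consensus ADMM: local updates are independent (parallelizable across
# locations); only the aggregation step touches all locations.
a = [1.0, 2.0, 6.0, 7.0]             # per-location data (illustrative)
rho = 1.0                            # ADMM penalty parameter
x = [0.0] * len(a)                   # local variables, one per location
u = [0.0] * len(a)                   # scaled dual variables
z = 0.0                              # global (aggregated) variable

for _ in range(100):
    # Local step: each location minimizes (x - a_i)^2 + (rho/2)(x - z + u_i)^2.
    x = [(2.0 * ai + rho * (z - ui)) / (2.0 + rho) for ai, ui in zip(a, u)]
    # Aggregation step: average the local variables plus their duals.
    z = sum(xi + ui for xi, ui in zip(x, u)) / len(x)
    # Dual step: each location updates its own multiplier.
    u = [ui + xi - z for ui, xi in zip(u, x)]

print(round(z, 4))                   # converges to mean(a) = 4.0
```

In the paper's setting the local subproblems are per-location GLM fits and the coupling comes from the aggregating penalty rather than strict consensus, but the split between embarrassingly parallel local updates and a cheap global blend is the same.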
Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem
In this paper, we develop a Bayesian evidence maximization framework to solve
the sparse non-negative least squares (S-NNLS) problem. We introduce a family
of probability densities referred to as the Rectified Gaussian Scale Mixture
(R-GSM) to model the sparsity-enforcing prior distribution for the solution.
The R-GSM prior encompasses a variety of heavy-tailed densities such as the
rectified Laplacian and rectified Student-t distributions with a proper choice
of the mixing density. We utilize the hierarchical representation induced by
the R-GSM prior and develop an evidence maximization framework based on the
Expectation-Maximization (EM) algorithm. Using the EM based method, we estimate
the hyper-parameters and obtain a point estimate for the solution. We refer to
the proposed method as rectified sparse Bayesian learning (R-SBL). We provide
four R-SBL variants that offer a range of options for computational complexity
and the quality of the E-step computation. These methods include the Markov
chain Monte Carlo EM, linear minimum mean-square-error estimation, approximate
message passing and a diagonal approximation. Using numerical experiments, we
show that the proposed R-SBL method outperforms existing S-NNLS solvers in
terms of both signal and support recovery performance, and is also very robust
against the structure of the design matrix.
Comment: Under review by IEEE Transactions on Signal Processing
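The S-NNLS problem the abstract targets, min_x ||Ax − y||² subject to x ≥ 0 with x sparse, can be set up concretely. The sketch below is a simple projected-gradient baseline solver, not the R-SBL method; the matrix and signal are illustrative, and plain lists stand in for a linear-algebra library to keep it self-contained.

```python
# min_x ||Ax - y||^2  s.t.  x >= 0, with a sparse non-negative ground truth.
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

A = [[1.0, 0.2, 0.1],
     [0.1, 1.0, 0.3],
     [0.2, 0.1, 1.0],
     [0.3, 0.2, 0.1]]
x_true = [0.0, 2.0, 0.0]             # sparse, non-negative signal
y = matvec(A, x_true)                # noiseless measurements

x = [0.0, 0.0, 0.0]
step = 0.2                           # small enough for this well-conditioned A
for _ in range(500):
    r = [yi - axi for yi, axi in zip(y, matvec(A, x))]     # residual y - Ax
    grad = [-2.0 * sum(A[i][j] * r[i] for i in range(len(A)))
            for j in range(len(x))]                        # grad of ||Ax - y||^2
    # Gradient step followed by projection onto the non-negative orthant.
    x = [max(0.0, xj - step * g) for xj, g in zip(x, grad)]

print([round(v, 3) for v in x])      # recovers the sparse solution
```

Methods like R-SBL improve on this kind of baseline by placing a sparsity-inducing prior (the R-GSM) on x and maximizing the evidence, which tends to be far more robust when A is ill-conditioned or coherent.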