Single- and Multiple-Shell Uniform Sampling Schemes for Diffusion MRI Using Spherical Codes
In diffusion MRI (dMRI), a good sampling scheme is important for efficient
acquisition and robust reconstruction. The diffusion-weighted signal is normally
acquired on one or more shells in q-space, and the samples are typically
distributed uniformly over each shell so that the scheme is invariant to the
orientation of structures within tissue and to the laboratory coordinate frame.
The Electrostatic Energy Minimization (EEM) method, originally proposed for
single-shell sampling schemes in dMRI, was recently generalized to multi-shell
schemes as Generalized EEM (GEEM), which has been successfully used in the
Human Connectome Project (HCP). However, EEM does not directly address the goal
of optimal sampling, i.e., achieving large angular separation between sampling
points. In this paper, we propose a more natural formulation, called Spherical
Code (SC), to directly maximize the minimal angle between different samples in
single or multiple shells. We consider not only continuous problems to design
single or multiple shell sampling schemes, but also discrete problems to
uniformly extract sub-sampled schemes from an existing single or multiple shell
scheme, and to order samples in an existing scheme. We propose five algorithms
to solve the above problems, including an incremental SC (ISC), a sophisticated
greedy algorithm called Iterative Maximum Overlap Construction (IMOC), a 1-Opt
greedy method, a Mixed Integer Linear Programming (MILP) method, and a
Constrained Non-Linear Optimization (CNLO) method. To our knowledge, this is
the first work to use the SC formulation for single or multiple shell sampling
schemes in dMRI. Experimental results indicate that SC methods obtain larger
angular separation and better rotational invariance than the state-of-the-art
EEM and GEEM. The related codes and a tutorial have been released in DMRITool.
Comment: Accepted by IEEE Transactions on Medical Imaging. Codes have been released in DMRITool: https://diffusionmritool.github.io/tutorial_qspacesampling.htm
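The SC formulation — directly maximizing the minimal angle between samples — can be illustrated with a toy incremental construction in the spirit of ISC. This is a minimal sketch, assuming antipodally symmetric directions (standard in dMRI) and a Fibonacci-lattice candidate set; the paper's actual algorithms (IMOC, MILP, CNLO) are more sophisticated, and all function names here are illustrative:

```python
import math

def fibonacci_sphere(n):
    # Roughly uniform candidate unit vectors via the Fibonacci lattice.
    golden = math.pi * (3.0 - math.sqrt(5.0))
    pts = []
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n
        r = math.sqrt(1.0 - y * y)
        t = golden * i
        pts.append((math.cos(t) * r, y, math.sin(t) * r))
    return pts

def angle(u, v):
    # Angular distance with antipodal symmetry: q and -q sample the same
    # diffusion direction, so take |<u, v>| before acos.
    d = abs(sum(a * b for a, b in zip(u, v)))
    return math.acos(min(1.0, d))

def incremental_sc(candidates, n):
    # Greedy max-min construction: repeatedly add the candidate whose
    # minimal angle to the already-chosen set is largest.
    chosen = [candidates[0]]
    rest = list(candidates[1:])
    while len(chosen) < n:
        best = max(rest, key=lambda c: min(angle(c, s) for s in chosen))
        chosen.append(best)
        rest.remove(best)
    return chosen
```

Greedily adding the candidate farthest from the current set is exactly the discrete max-min objective; the continuous and MILP formulations in the paper refine this idea further.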
Distributed VNF Scaling in Large-scale Datacenters: An ADMM-based Approach
Network Functions Virtualization (NFV) is a promising network architecture
where network functions are virtualized and decoupled from proprietary
hardware. In modern datacenters, user network traffic requires a set of Virtual
Network Functions (VNFs) as a service chain to process traffic demands. Traffic
fluctuations in Large-scale DataCenters (LDCs) could result in overload and
underload phenomena in service chains. In this paper, we propose a distributed
approach based on Alternating Direction Method of Multipliers (ADMM) to jointly
load balance the traffic and horizontally scale up and down VNFs in LDCs with
minimum deployment and forwarding costs. First, we formulate the targeted
optimization problem as a Mixed Integer Linear Programming (MILP) model, which
is NP-complete. Second, we relax it into two Linear Programming (LP) models to
cope with over- and underloaded service chains. In small or medium-size
datacenters, the LP models can be run in a centralized fashion with low
time complexity. However, in LDCs, increasing the number of LP variables
results in additional computation time for the centralized algorithm. To
mitigate this, we propose a distributed approach based on ADMM. The
effectiveness of the proposed mechanism is validated in different scenarios.
Comment: IEEE International Conference on Communication Technology (ICCT), Chengdu, China, 201
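The abstract does not give the paper's LP decomposition, but the flavor of an ADMM-based distributed solve can be shown on a toy consensus problem, where each agent (think: one service chain) holds a local quadratic cost and all agents must agree on a shared variable. The cost form and names are illustrative assumptions, not the paper's model:

```python
def consensus_admm(targets, rho=1.0, iters=100):
    # Toy consensus problem: agent i minimizes f_i(x_i) = (x_i - a_i)^2
    # subject to x_i = z for all i.  Each x-update is purely local;
    # only z and the scaled duals are exchanged between agents.
    n = len(targets)
    u = [0.0] * n          # scaled dual variables
    z = 0.0
    for _ in range(iters):
        # Local x-update has a closed form for this quadratic cost.
        x = [(2.0 * a + rho * (z - ui)) / (2.0 + rho)
             for a, ui in zip(targets, u)]
        z = sum(xi + ui for xi, ui in zip(x, u)) / n   # gather/average step
        u = [ui + xi - z for ui, xi in zip(u, x)]      # dual ascent
    return z
```

For this separable quadratic objective the consensus value converges to the mean of the local targets, which is the analytic optimum; the point of the sketch is the communication pattern (local updates, one global averaging step), which is what makes ADMM attractive in LDCs.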
Mixed-Integer Convex Nonlinear Optimization with Gradient-Boosted Trees Embedded
Decision trees usefully represent sparse, high dimensional and noisy data.
Having learned a function from this data, we may want to thereafter integrate
the function into a larger decision-making problem, e.g., for picking the best
chemical process catalyst. We study a large-scale, industrially-relevant
mixed-integer nonlinear nonconvex optimization problem involving both
gradient-boosted trees and penalty functions mitigating risk. This
mixed-integer optimization problem with convex penalty terms broadly applies to
optimizing pre-trained regression tree models. Decision makers may wish to
optimize discrete models to repurpose legacy predictive models, or they may
wish to optimize a discrete model that particularly well-represents a data set.
We develop several heuristic methods to find feasible solutions, and an exact,
branch-and-bound algorithm leveraging structural properties of the
gradient-boosted trees and penalty functions. We computationally test our
methods on a concrete mixture design instance and a chemical catalysis
industrial instance.
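As a toy illustration of the problem class — optimizing over a pre-trained tree ensemble plus a convex penalty — the following sketch evaluates an ensemble of depth-one trees (stumps) and minimizes prediction plus a quadratic penalty by brute force over a discrete grid. The stump representation and the exhaustive search are stand-ins for the paper's gradient-boosted trees and branch-and-bound; all names are hypothetical:

```python
import itertools

# A depth-one regression tree ("stump"):
# (feature_index, threshold, value_if_below, value_if_above).
def ensemble_value(stumps, x):
    return sum(lo if x[f] <= t else hi for f, t, lo, hi in stumps)

def optimize_ensemble(stumps, grids, lam, center):
    # Minimize ensemble prediction + lam * ||x - center||^2 over the
    # Cartesian product of per-feature grids (brute force stands in
    # for the paper's exact branch-and-bound algorithm).
    best_x, best_v = None, float("inf")
    for x in itertools.product(*grids):
        v = ensemble_value(stumps, x)
        v += lam * sum((xi - ci) ** 2 for xi, ci in zip(x, center))
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v
```

The convex penalty pulls the solution toward a trusted operating point (mitigating the risk of extrapolating the trees), while the tree term is piecewise constant; the interplay of the two is what the paper's branch-and-bound exploits.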
Optimal management of bio-based energy supply chains under parametric uncertainty through a data-driven decision-support framework
This paper addresses the optimal management of a multi-objective bio-based energy supply chain network subject to multiple sources of uncertainty. The complexity of obtaining an optimal solution with traditional uncertainty management methods increases dramatically with the number of uncertain factors considered, so that the problem, when tractable at all, is solved only after a large computational effort. In this work, a data-driven decision-making framework is therefore proposed to address this issue. The framework exploits machine learning techniques to efficiently approximate the optimal management decisions, taking as input a set of uncertain parameters that continuously influence the process behavior. A design-of-computer-experiments technique is used to combine these parameters and produce a matrix of representative information. These data are used to optimize the deterministic multi-objective bio-based energy network problem through conventional optimization methods, leading to a detailed (but elementary) map of the optimal management decisions as a function of the uncertain parameters. The detailed data-driven relations are then described using an Ordinary Kriging meta-model. The parametric meta-models predict the optimal decision variables with very high accuracy compared with the traditional stochastic approach. More importantly, the computational effort required to obtain these optimal values in response to changes in the uncertain parameters is dramatically reduced. The proposed data-driven decision tool thus enables time-effective optimal decision making, a step forward in applying data-driven strategies to large-scale, complex industrial problems.
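A minimal 1-D Ordinary Kriging predictor illustrates the meta-modeling step: given sampled (parameter, optimal-decision) pairs from the offline optimizations, the meta-model interpolates the optimal decision at a new parameter value. The Gaussian covariance, length scale, and helper names are assumptions for illustration, not the paper's exact setup:

```python
import math

def gauss_solve(A, b):
    # Naive Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ordinary_kriging(xs, ys, x_star, length=1.0):
    # Ordinary Kriging: solve the augmented system whose last row/column
    # enforces the weights summing to one (unknown constant mean).
    n = len(xs)
    cov = lambda a, b: math.exp(-((a - b) ** 2) / (2.0 * length ** 2))
    A = [[cov(xs[i], xs[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    rhs = [cov(xs[i], x_star) for i in range(n)] + [1.0]
    sol = gauss_solve(A, rhs)
    return sum(w * y for w, y in zip(sol[:n], ys))
```

With a noiseless covariance the predictor interpolates the training data exactly, which is why a Kriging meta-model can reproduce the offline optimal-decision map while being orders of magnitude cheaper to evaluate than re-running the optimization.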
Portfolio selection problems in practice: a comparison between linear and quadratic optimization models
Several portfolio selection models take into account practical limitations on
the number of assets to include and on their weights in the portfolio. We
present a study of the Limited Asset Markowitz (LAM), the Limited Asset
Mean Absolute Deviation (LAMAD) and the Limited Asset Conditional
Value-at-Risk (LACVaR) models, in which the assets are limited through
quantity and cardinality constraints. We propose a completely
new approach for solving the LAM model, based on reformulation as a Standard
Quadratic Program and on some recent theoretical results. With this approach we
obtain optimal solutions both for some well-known financial data sets used by
several other authors, and for some previously unsolved large-size portfolio problems. We
also test our method on five new data sets involving real-world capital market
indices from major stock markets. Our computational experience shows that,
rather unexpectedly, it is easier to solve the quadratic LAM model with our
algorithm than to solve the linear LACVaR and LAMAD models with CPLEX, one of
the best commercial codes for mixed integer linear programming (MILP) problems.
Finally, on the new data sets we have also compared, using out-of-sample
analysis, the performance of the portfolios obtained by the Limited Asset
models with the performance provided by the unconstrained models and with that
of the official capital market indices.
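The effect of a cardinality constraint can be sketched with a toy brute-force search: enumerate all asset subsets of size at most K and keep the feasible, lowest-variance one. Equal weights within each subset stand in for solving the per-subset QP, so this illustrates the combinatorial constraint only, not the paper's Standard Quadratic Program reformulation:

```python
import itertools

def portfolio_variance(w, cov):
    n = len(w)
    return sum(w[i] * w[j] * cov[i][j] for i in range(n) for j in range(n))

def best_cardinality_portfolio(mu, cov, k, target):
    # Enumerate all subsets of at most k assets (the cardinality
    # constraint); within each subset use equal weights as a crude
    # stand-in for the per-subset QP; keep the feasible subset
    # (expected return >= target) with the lowest variance.
    n = len(mu)
    best, best_var = None, float("inf")
    for size in range(1, k + 1):
        for subset in itertools.combinations(range(n), size):
            w = [1.0 / size if i in subset else 0.0 for i in range(n)]
            if sum(wi * mi for wi, mi in zip(w, mu)) < target:
                continue
            v = portfolio_variance(w, cov)
            if v < best_var:
                best, best_var = subset, v
    return best, best_var
```

The number of subsets grows combinatorially in n and k, which is exactly why the exact MIQP/MILP formulations studied in the paper become hard at realistic market sizes.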
An Evolutionary Computational Approach for the Problem of Unit Commitment and Economic Dispatch in Microgrids under Several Operation Modes
In recent decades, new types of generation technologies have emerged and have been gradually integrated into existing power systems, moving their classical architectures toward distributed systems. Despite the positive features associated with this paradigm, new problems arise, such as coordination and uncertainty. In this context, microgrids constitute an effective solution for the coordination and operation of these distributed energy resources. This paper proposes a Genetic Algorithm (GA) to address the combined problem of Unit Commitment (UC) and Economic Dispatch (ED). To this end, a model of a microgrid is introduced together with all the control variables and physical constraints. To operate the microgrid optimally, three operation modes are introduced: the first two optimize economic and environmental factors, while the third accounts for the errors induced by uncertainty in the demand forecast, achieving a robust design that guarantees the power supply at different confidence levels. Finally, the algorithm was applied to an example scenario to illustrate its performance. The simulation results demonstrate the validity of the proposed approach.
Funding: Ministerio de Ciencia, Innovación y Universidades TEC2016-80242-P; Ministerio de Economía y Competitividad PCIN-2015-043; Universidad de Sevilla Programa propio de I+D+
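A minimal GA for the commitment part of the problem (single period, on/off bits only, with dispatch and network constraints omitted) can be sketched as follows; the population size, penalty weight, and operators are illustrative assumptions, not the paper's configuration:

```python
import random

def ga_unit_commitment(capacities, costs, demand, pop_size=40, gens=150, seed=1):
    # Chromosome: one on/off bit per generating unit.  Fitness is running
    # cost plus a large penalty when committed capacity misses demand.
    rng = random.Random(seed)
    n = len(capacities)

    def fitness(bits):
        cap = sum(c for b, c in zip(bits, capacities) if b)
        cost = sum(c for b, c in zip(bits, costs) if b)
        return cost + 1000.0 * max(0.0, demand - cap)

    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = min(population, key=fitness)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = min(rng.sample(population, 2), key=fitness)  # tournament
            p2 = min(rng.sample(population, 2), key=fitness)
            cut = rng.randrange(1, n)                          # 1-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                             # bit-flip mutation
                child[rng.randrange(n)] ^= 1
            nxt.append(child)
        population = nxt
        gen_best = min(population, key=fitness)
        if fitness(gen_best) < fitness(best):
            best = gen_best
    return best
```

The penalty term is one common way to fold the demand constraint into a GA fitness function; the robust operation mode described in the abstract would additionally inflate the demand figure according to the forecast-error confidence level.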
A Domain Specific Approach to High Performance Heterogeneous Computing
Users of heterogeneous computing systems face two problems: first,
understanding the trade-off relationships between the observable
characteristics of their applications, such as latency and quality of the
result; and second, exploiting knowledge of these characteristics to
allocate work to distributed computing platforms efficiently. A domain specific
approach addresses both of these problems. By considering a subset of
operations or functions, models of the observable characteristics or domain
metrics may be formulated in advance, and populated at run-time for task
instances. These metric models can then be used to express the allocation of
work as a constrained integer program, which can be solved using heuristics,
machine learning or Mixed Integer Linear Programming (MILP) frameworks. These
claims are illustrated using the example domain of derivatives pricing in
computational finance, with the domain metrics of workload latency or makespan
and pricing accuracy. For a large, varied workload of 128 Black-Scholes and
Heston model-based option pricing tasks, running on a diverse array of 16
multicore CPU, GPU and FPGA platforms, predictions made by models of both
the makespan and accuracy are generally within 10% of the run-time performance.
When these models are used as inputs to machine learning and MILP-based
workload allocation approaches, latency improvements of up to 24 and 270 times
over the heuristic approach are seen.
Comment: 14 pages, preprint draft, minor revisio
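The metric-model-driven allocation step can be illustrated with a toy greedy scheduler: given predicted per-platform latencies for each task (the "metric model" outputs), assign each task to the platform that minimizes its finish time. This greedy list-scheduling heuristic is a simple stand-in for the heuristic, ML, and MILP allocators compared in the paper:

```python
def greedy_allocate(task_costs):
    # task_costs[t][p]: predicted latency of task t on platform p, as
    # produced by a domain metric model.  Greedy list scheduling: give
    # each task to the platform whose current finish time plus the
    # task's predicted cost is smallest, approximating the
    # makespan-minimizing allocation.
    n_platforms = len(task_costs[0])
    finish = [0.0] * n_platforms
    assignment = []
    for costs in task_costs:
        p = min(range(n_platforms), key=lambda j: finish[j] + costs[j])
        finish[p] += costs[p]
        assignment.append(p)
    return assignment, max(finish)
```

An exact MILP would instead introduce binary assignment variables and minimize the makespan directly; the greedy rule above is cheap enough to run per workload at run-time, which is the trade-off the paper quantifies.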