8 research outputs found
Examination of optimizing information flow in networks
The central role of the Internet and the World-Wide-Web in global communications has refocused much attention on problems involving optimizing information flow through networks. The most basic formulation of the question is called the "max flow" optimization problem: given a set of channels with prescribed capacities that connect a set of nodes in a network, how should the materials or information be distributed among the various routes to maximize the total flow rate from the source to the destination? The theory of linear programming is well developed for solving the classic max flow problem. Modern contexts have demanded the examination of more complicated variations of the max flow problem that take new factors or constraints into consideration; these changes lead to more difficult problems where linear programming is insufficient.
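For concreteness, here is a minimal sketch of the classic problem (using the networkx library; the topology and capacities are made up for illustration):

    import networkx as nx

    # A small directed network with assumed channel capacities.
    G = nx.DiGraph()
    G.add_edge("source", "a", capacity=3.0)
    G.add_edge("source", "b", capacity=2.0)
    G.add_edge("a", "b", capacity=1.0)
    G.add_edge("a", "destination", capacity=2.0)
    G.add_edge("b", "destination", capacity=3.0)

    # Maximize the total flow rate from source to destination.
    flow_value, flow_dict = nx.maximum_flow(G, "source", "destination")
    print(flow_value)   # 5.0 for these capacities

A linear programming solver reaches the same optimum; the value 5.0 here equals the capacity of the minimum cut.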
In the workshop we examined models for information flow on networks that consider trade-offs between overall network utility (or flow rate) and path diversity, which ensures balanced usage of all parts of the network as well as stability and robustness against local disruptions.
While the linear programming solution of the basic max flow problem cannot handle these richer problems, its primal/dual formulation of the constrained optimization carries over to the current generation of problems, called network utility maximization (NUM) problems. In particular, primal/dual formulations have been used extensively in studies of such networks.
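For reference, the standard NUM statement (a textbook formulation, with U_s a concave utility for source s, R the route-link incidence matrix, and c the vector of channel capacities):

    \max_{x \ge 0} \; \sum_s U_s(x_s) \quad \text{subject to} \quad Rx \le c

The Lagrange multiplier \lambda_\ell on each capacity constraint acts as a congestion price for channel \ell, which is what makes the economic interpretation below natural.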
A key feature of the traffic-routing model we are considering is its formulation as an economic system governed by principles of supply and demand. Treating channel capacity as a commodity in limited supply, we might suspect that a system regulating traffic via a pricing scheme would assign prices to channels inversely proportional to their respective capacities.
Once an appropriate network optimization problem has been formulated, it remains to solve it; this must be done numerically, but the process can benefit greatly from simplifications and reductions that follow from analysis of the problem. Ideally, the form of the numerical solution scheme can give insight into the design of a distributed algorithm for a Transmission Control Protocol (TCP) that can be directly implemented on the network.
At the workshop we considered the optimization problems for two small prototype network topologies: the two-link network and the diamond network. These examples are small enough to be tractable during the workshop, but retain some of the key features relevant to larger networks (competing routes with different capacities from the source to the destination, and routes with overlapping channels, respectively). We studied a gradient descent method for obtaining the optimal solution via the dual problem. The numerical method was implemented in MATLAB, and further analysis of the dual problem and properties of the gradient method were carried out. Another thrust of the group's work was direct Monte Carlo simulation of information flow in these small networks, as a means of directly testing the efficiencies of various allocation strategies.
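A minimal sketch of such a dual gradient iteration (an illustration constructed here, not the workshop's MATLAB code; the log utilities, capacities, and step size are assumptions):

    import numpy as np

    # Two-link network: two parallel routes from source to destination,
    # each with its own capacity; each route's flow gets a log utility.
    R = np.eye(2)                  # route-link incidence matrix
    c = np.array([1.0, 2.0])       # assumed channel capacities
    lam = np.ones(2)               # dual variables: per-link prices
    step = 0.05

    for _ in range(5000):
        # For U(x) = log x, maximizing U(x) minus (price along route) * x
        # gives the closed-form rate x = 1 / price.
        x = 1.0 / (R.T @ lam)
        # Subgradient step on the dual: raise the price of any overloaded
        # link, lower it on underused links, and keep prices positive.
        lam = np.maximum(1e-6, lam + step * (R @ x - c))

    print(x)   # converges to the capacities [1.0, 2.0] here

The same loop handles the diamond network once R encodes routes that share a channel.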
Robust artificial neural networks and outlier detection. Technical report
Large outliers break down linear and nonlinear regression models. Robust regression methods allow one to filter out the outliers when building a model. By replacing the traditional least squares criterion with the least trimmed squares criterion, in which half of the data is treated as potential outliers, one can fit accurate regression models to strongly contaminated data. High-breakdown methods have become very well established in linear regression, but have only recently been applied to non-linear regression. In this work, we examine the problem of fitting artificial neural networks (ANNs) to contaminated data using the least trimmed squares criterion. We introduce a penalized least trimmed squares criterion which prevents unnecessary removal of valid data. Training of ANNs leads to a challenging non-smooth global optimization problem. We compare the efficiency of several derivative-free optimization methods in solving it, and show that our approach identifies the outliers correctly when ANNs are used for nonlinear regression.
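To fix ideas, here is a minimal sketch of a penalized trimmed criterion (the penalty mechanism, trim cap, and numbers are illustrative assumptions, not the paper's exact formulation):

    import numpy as np

    def penalized_lts_loss(residuals, penalty=1.0, max_trim=0.5):
        # Squared residuals, largest first.
        r2 = np.sort(residuals ** 2)[::-1]
        k_max = int(max_trim * len(r2))
        # Trimming a point only pays off if its squared residual exceeds
        # the penalty charged for excluding it; this is the assumed
        # mechanism that keeps valid data in the fit.
        k = min(k_max, int(np.sum(r2 > penalty)))
        return r2[k:].sum() + penalty * k

    # Residuals from some fitted model, with two gross outliers.
    r = np.array([0.1, -0.2, 0.05, 8.0, -9.5, 0.15])
    print(penalized_lts_loss(r))   # trims exactly the two outliers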
An Approximation Algorithm for the Facility Location Problem with Lexicographic Minimax Objective
We present a new approximation algorithm for the discrete facility location problem that provides solutions close to the lexicographic minimax optimum. The lexicographic minimax optimum is a concept that allows one to find an equitable location of facilities serving a large number of customers. The algorithm is independent of general-purpose solvers and instead uses algorithms originally designed to solve the p-median problem. Numerical experiments demonstrate that our algorithm increases the size of solvable problems and provides high-quality solutions. It found an optimal solution for all tested instances where we could compare the results with the exact algorithm.
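For reference, the standard definition (not specific to this paper): writing \theta(d(x)) for the vector of customer distances under location decision x, sorted in non-increasing order, x^* is lexicographically minimax optimal when

    \theta(d(x^*)) \preceq_{\text{lex}} \theta(d(x)) \quad \text{for every feasible } x,

that is, one first minimizes the largest distance, then the second largest, and so on.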
Using ℓp-norms for fairness in combinatorial optimisation
The issue of fairness has received attention from researchers in many fields, including combinatorial optimisation. One way to drive the solution toward fairness is to use a modified objective function that involves so-called ℓp-norms. If done in a naive way, this approach leads to large and symmetric mixed-integer nonlinear programs (MINLPs) that may be difficult to solve. We show that, for some problems, one can obtain alternative MINLP formulations that are much smaller, do not suffer from symmetry, and have a reasonably tight continuous relaxation. We give encouraging computational results for certain vehicle routing, facility location and network design problems.
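The underlying idea, in standard form (not this paper's exact formulation): given nonnegative shares y_1, ..., y_n among n parties, one minimizes

    \|y\|_p = \Big( \sum_{i=1}^{n} y_i^{\,p} \Big)^{1/p}, \qquad p \ge 1,

so that p = 1 recovers the pure efficiency objective, while letting p grow pushes the solution toward the minimax (most egalitarian) allocation.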
Development and calibration of a currency trading strategy using global optimization
We have developed a new financial indicator, called the Interest Rate Differentials Adjusted for Volatility (IRDAV) measure, to assist investors in currency markets. On a monthly basis, we rank currency pairs according to this measure and then select a basket of pairs with the highest IRDAV values. Under positive market conditions, an IRDAV based investment strategy (buying a currency with a high interest rate and simultaneously selling a currency with a low interest rate, after adjusting for volatility of the currency pairs in question) can generate significant returns. However, when the markets turn for the worse and crisis situations evolve, investors exit such money-making strategies suddenly, and as a result significant losses can occur. In an effort to minimize these potential losses, we also propose an aggregated Risk Metric that estimates the total risk by looking at various financial indicators across different markets. These risk indicators are used to get timely signals of evolving crises and to flip the strategy from long to short in a timely fashion, to prevent losses and make further gains even during crisis periods. Since our proprietary model is implemented in Excel as a highly nonlinear “black box” computational procedure, we use suitable global optimization methodology and software (the Lipschitz Global Optimizer solver suite linked to Excel) to maximize the performance of the currency basket, based on our selection of key decision variables. After introducing the new currency trading model and its implementation, we present numerical results based on actual market data. Our results clearly show the advantages of using global optimization based parameter settings, compared to the typically used “expert estimates” of the key model parameters.
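In general terms, the calibration step looks like the following sketch (scipy's differential evolution stands in here for the proprietary LGO suite, and the objective and parameter names are hypothetical placeholders):

    import numpy as np
    from scipy.optimize import differential_evolution

    def negative_performance(params):
        # Hypothetical black-box backtest: maps strategy parameters
        # (e.g., a volatility weight and a risk threshold) to the
        # negative of a performance score, so minimizing it maximizes
        # performance. This toy function stands in for the Excel model.
        vol_weight, risk_threshold = params
        return -(np.sin(3 * vol_weight) * np.exp(-risk_threshold ** 2)
                 + 0.1 * vol_weight)

    bounds = [(0.0, 2.0), (-1.0, 1.0)]       # assumed parameter ranges
    result = differential_evolution(negative_performance, bounds, seed=0)
    print(result.x, -result.fun)             # best parameters and score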
To be Fair or Efficient or a Bit of Both
Introducing a new concept of (α, β)-fairness, which allows for a bounded fairness compromise, so that a source is allocated a rate neither less than α times, nor more than β times, its fair share, this paper provides a framework to optimize efficiency (utilization, throughput or revenue) subject to fairness constraints in a general telecommunications network for an arbitrary fairness criterion and cost functions. We formulate a non-linear program (NLP) that finds the optimal bandwidth allocation by maximizing efficiency subject to (α, β)-fairness constraints. This leads to what we call an efficiency-fairness function, which shows the benefit in efficiency as a function of the extent to which fairness is compromised. To solve the NLP we use two algorithms. The first is a well known branch-and-bound-based algorithm called Lipschitz Global Optimization and the second is a recently developed algorithm called the Algorithm for Global Optimization Problems (AGOP). We demonstrate the applicability of the framework to a range of examples, from sharing a single link to efficiency-fairness issues associated with serving customers in remote communities. Index Terms: non-linear programming, utility optimization, fairness, efficiency-fairness tradeoff, bandwidth allocation.
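In outline, the NLP has the following shape (a sketch under the paper's stated definitions; E is the chosen efficiency measure, f_i source i's fair share under the chosen fairness criterion, R and c the usual routing matrix and capacities):

    \max_{x} \; E(x) \quad \text{subject to} \quad \alpha f_i \le x_i \le \beta f_i \ \ \forall i, \qquad Rx \le c.

Setting α = β = 1 enforces exact fairness; widening the interval traces out the efficiency-fairness function.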
Integrated Scheduling and Beam Steering for Spatial Reuse
This document describes an approach to integrating antenna selection and control into a time-division MAC scheduling process. I argue that through such integration it is possible to achieve greater spatial reuse and interference mitigation than by solving the two problems separately. Without coupling between the MAC scheduling and physical antenna configuration processes, a "chicken-and-egg" problem exists: if antenna decisions are made before scheduling, they cannot be optimized for the communication that will actually occur. If, on the other hand, the scheduling decisions are made first, the scheduler cannot know what the actual interference and communications properties of the network will be.
This dissertation presents algorithms for optimal spatial reuse TDMA scheduling with reconfigurable antennas. I present and solve the joint beam steering and scheduling problem for spatial reuse TDMA and describe an implemented system based on the algorithms developed. The algorithms described achieve up to a 600% speedup over TDMA in the experiments performed. This is based on using an optimization decomposition approach to arrive at a working distributed protocol that is equivalent to the original problem statement while producing optimal solutions in an amount of time that is at worst linear in the size of the input. This is, to the best of my knowledge, the first actually implemented STDMA scheduling system based on dual decomposition. This dissertation identifies and briefly addresses some of the challenges that arise in taking such a system from theory to reality.
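To make the chicken-and-egg coupling concrete, here is a toy comparison constructed for illustration (two links, two beams each, a two-slot frame; all gains and interference penalties are invented, and exhaustive search stands in for the dissertation's decomposition): choosing beams greedily before scheduling forgoes spatial reuse that a joint search finds.

    from itertools import product

    gain = {0: 3.0, 1: 2.5}          # beam 0: wide, high gain; beam 1: narrow
    interference = {0: 2.5, 1: 0.3}  # penalty a beam inflicts on a co-slot link
    SLOT_SETS = [(0,), (1,), (0, 1)] # possible active-slot assignments

    def frame_throughput(slots0, slots1, beam0, beam1):
        total = 0.0
        for t in (0, 1):
            a0, a1 = (t in slots0), (t in slots1)
            if a0:
                total += gain[beam0] - (interference[beam1] if a1 else 0.0)
            if a1:
                total += gain[beam1] - (interference[beam0] if a0 else 0.0)
        return total

    # Sequential: fix the highest-gain beams first, then schedule around them.
    seq = max(frame_throughput(s0, s1, 0, 0)
              for s0, s1 in product(SLOT_SETS, repeat=2))
    # Joint: search schedules and beams together.
    joint = max(frame_throughput(s0, s1, b0, b1)
                for s0, s1, b0, b1 in product(SLOT_SETS, SLOT_SETS, (0, 1), (0, 1)))
    print(seq, joint)   # 6.0 versus 8.8: joint search exploits spatial reuse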