
    Pre-Conditioners and Relations between Different Measures of Conditioning for Conic Linear Systems

    In recent years, new and powerful research into "condition numbers" for convex optimization has been developed, aimed at capturing the intuitive notion of problem behavior. This research has been shown to be important in studying the efficiency of algorithms, including interior-point algorithms, for convex optimization, as well as other behavioral characteristics of these problems such as problem geometry, deformation under data perturbation, etc. This paper studies measures of conditioning for a conic linear system of the form (FP_d): Ax = b, x ∈ C_X, whose data is d = (A, b). We present a new measure of conditioning, denoted µ_d, and we show implications of µ_d for problem geometry and algorithm complexity, and demonstrate that the value of µ_d is independent of the specific data representation of (FP_d). We then prove certain relations among a variety of condition measures for (FP_d), including µ_d, σ_d, χ̄_d, and C(d). We discuss some drawbacks of using the condition number C(d) as the sole measure of conditioning of a conic linear system, and we then introduce the notion of a "pre-conditioner" for (FP_d) which results in an equivalent formulation (FP_d̃) of (FP_d) with a better condition number C(d̃). We characterize the best such pre-conditioner and provide an algorithm for constructing an equivalent data instance d̃ whose condition number C(d̃) is within a known factor of the best possible.
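    The pre-conditioner above is an equivalence-preserving change of the data d = (A, b) that improves the condition number. Renegar's C(d) is a data-based measure and is not simply a matrix condition number, but the effect is easy to visualize with the ordinary spectral condition number as a stand-in. The sketch below is purely illustrative; the scaling rule and the example instance are assumptions, not the paper's construction.

```python
import numpy as np

def equilibrate(A, b):
    """Left-scale the system Ax = b by the row norms of A.

    The scaled system (DA)x = Db has exactly the same solutions as Ax = b,
    but the scaled matrix is often far better conditioned. This is only a
    stand-in for the paper's pre-conditioner, which targets Renegar's
    condition number C(d) rather than the spectral condition number.
    """
    D = np.diag(1.0 / np.linalg.norm(A, axis=1))
    return D @ A, D @ b

# A badly row-scaled instance: same solution set, very different conditioning.
A = np.array([[1.0, 1.0],
              [1e-6, 2e-6]])
b = np.array([1.0, 1e-6])

A_pre, b_pre = equilibrate(A, b)
print(np.linalg.cond(A), np.linalg.cond(A_pre))  # conditioning improves dramatically
```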

    Condition number complexity of an elementary algorithm for computing a reliable solution of a conic linear system

    A conic linear system is a system of the form (FP_d): Ax = b, x ∈ C_X, where A : X → Y is a linear operator between n- and m-dimensional linear spaces X and Y, b ∈ Y, and C_X ⊂ X is a closed convex cone. The data for the system is d = (A, b). The system is "well-posed" to the extent that (small) changes in the data d = (A, b) do not alter the status of the system (the system remains feasible or not). Renegar defined the "distance to ill-posedness," ρ(d), to be the smallest change in the data Δd = (ΔA, Δb) needed to create a data instance d + Δd that is "ill-posed," i.e., that lies in the intersection of the closures of the sets of feasible and infeasible instances d′ = (A′, b′) of (FP_(·)). Renegar also defined the condition number C(d) of the data instance d as the scale-invariant reciprocal of ρ(d): C(d) = ‖d‖/ρ(d). In this paper we develop an elementary algorithm that computes a solution of (FP_d) when it is feasible, or demonstrates that (FP_d) has no solution by computing a solution of the alternative system. The algorithm is based on a generalization of von Neumann's algorithm for solving linear inequalities. The number of iterations of the algorithm is essentially bounded by O(C(d)² ln(C(d))), where the constant depends only on the properties of the cone C_X and is independent of the data d. Each iteration of the algorithm performs a small number of matrix-vector and vector-vector multiplications (that take full advantage of the sparsity of the original data) plus a small number of other operations involving the cone C_X. The algorithm is "elementary" in the sense that it performs only a few relatively simple computations at each iteration. The solution x̂ of (FP_d) generated by the algorithm is "reliable" in the sense that the distance from x̂ to the boundary of the cone C_X, dist(x̂, ∂C_X), and the size of the solution, ‖x̂‖, satisfy the following inequalities: ‖x̂‖ ≤ c₁ C(d), dist(x̂, ∂C_X) ≥ c₂/C(d), and ‖x̂‖/dist(x̂, ∂C_X) ≤ c₃ C(d), where c₁, c₂, c₃ are constants that depend only on properties of the cone C_X and are independent of the data d (with analogous results for the alternative system when (FP_d) is infeasible).
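    The algorithm generalizes von Neumann's algorithm to an arbitrary closed convex cone C_X. For reference, a minimal sketch of the classical special case, where C_X is the unit simplex and feasibility of (FP_d) asks whether b lies in the convex hull of the columns of A, might look as follows; the tolerance, iteration cap, and termination handling are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def von_neumann(A, b, tol=1e-6, max_iter=100_000):
    """Classical von Neumann algorithm, sketched for the special case where
    C_X is the unit simplex: decide (approximately) whether b lies in the
    convex hull of the columns of A, i.e. Ax = b, x >= 0, sum(x) = 1.

    This is only the textbook special case that the paper generalizes; the
    paper's algorithm works with a general closed convex cone C_X.
    """
    m, n = A.shape
    x = np.zeros(n)
    x[0] = 1.0                       # start at an arbitrary vertex of the simplex
    y = A @ x                        # current point in the convex hull
    for _ in range(max_iter):
        r = b - y                    # residual direction
        if np.linalg.norm(r) <= tol:
            return x                 # approximate feasible solution found
        j = np.argmax(A.T @ r)       # column making the most progress toward b
        if (A[:, j] - b) @ r < 0:    # r strictly separates b from the hull
            return None              # certificate of infeasibility (tolerances omitted)
        # Exact line search between y and the chosen column a_j.
        d = A[:, j] - y
        lam = np.clip(r @ d / (d @ d), 0.0, 1.0)
        x = (1 - lam) * x
        x[j] += lam
        y = (1 - lam) * y + lam * A[:, j]
    return x                         # best iterate found within the iteration cap
```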

    Simplex Algorithm for Countable-state Discounted Markov Decision Processes

    We consider discounted Markov decision processes (MDPs) with countably infinite state spaces, finite action spaces, and unbounded rewards. Typical examples of such MDPs are inventory management and queueing control problems in which there is no specific limit on the size of the inventory or queue. Existing solution methods obtain a sequence of policies that converges to optimality in value but may not improve monotonically, i.e., a policy in the sequence may be worse than preceding policies. Our proposed approach considers countably infinite linear programming (CILP) formulations of the MDPs (a CILP is a linear program with countably infinitely many variables and constraints). Under standard assumptions for analyzing MDPs with countably infinite state spaces and unbounded rewards, we extend the major theoretical extreme-point and duality results to the resulting CILPs. Under an additional technical assumption, which is satisfied by several applications of interest, we present a simplex-type algorithm that is implementable in the sense that each of its iterations requires only a finite amount of data and computation. We show that the algorithm finds a sequence of policies that improves monotonically and converges to optimality in value. Unlike existing simplex-type algorithms for CILPs, our proposed algorithm solves a class of CILPs in which each constraint may contain an infinite number of variables and each variable may appear in an infinite number of constraints. A numerical illustration for inventory management problems is also presented.
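    For orientation, the finite-state analogue of the CILP is the standard linear-programming formulation of a discounted MDP in the value-function variables, which the sketch below solves directly; the countable-state setting of the paper is exactly what makes this naive formulation unusable and motivates the implementable simplex-type algorithm. Array shapes and names here are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def mdp_lp(P, r, gamma, alpha):
    """Solve a *finite* discounted MDP via its standard LP:
        min  sum_s alpha[s] * v[s]
        s.t. v[s] >= r[s, a] + gamma * sum_s' P[a, s, s'] * v[s']   for all s, a.

    This is only the finite-state LP that the paper's CILP formulation extends
    to countably infinite state spaces.
    P: array of shape (nA, nS, nS), r: array (nS, nA), alpha: (nS,) positive weights.
    """
    nA, nS, _ = P.shape
    # Rewrite each constraint as  gamma * P[a, s, :] @ v - v[s] <= -r[s, a].
    A_ub, b_ub = [], []
    for s in range(nS):
        for a in range(nA):
            row = gamma * P[a, s, :].copy()
            row[s] -= 1.0
            A_ub.append(row)
            b_ub.append(-r[s, a])
    res = linprog(c=alpha, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * nS)
    return res.x  # optimal value function

# Tiny example: 2 states, 2 actions, rows of P sum to one.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 0.5], [0.0, 2.0]])
v = mdp_lp(P, r, gamma=0.95, alpha=np.array([0.5, 0.5]))
```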

    Call Center Staffing with Simulation and Cutting Plane Methods

    We present an iterative cutting plane method for minimizing staffing costs in a service system, subject to satisfying acceptable service level requirements over multiple time periods. We assume that the service level cannot be easily computed and is instead evaluated using simulation. The simulation uses the method of common random numbers, so that the same sequence of random phenomena is observed when evaluating different staffing plans; in other words, we solve a sample average approximation problem. We establish convergence of the cutting plane method on a given sample average approximation. We also establish convergence, and the rate of convergence, of solutions of the sample average approximation to solutions of the original problem as the sample size increases. The cutting plane method relies on the service level functions being concave in the number of servers. We show how to verify this requirement as our algorithm proceeds. A numerical example showcases the properties of our method and sheds light on when the concavity requirement can be expected to hold.
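    A minimal sketch of the kind of cutting-plane loop described above, assuming a caller-supplied simulator that returns estimated per-period service levels together with subgradients; the master problem, the cut form, and the final rounding are stylized and are not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def cutting_plane_staffing(cost, simulate, target, x0, max_iter=50):
    """Generic cutting-plane loop for  min cost @ x  s.t.  g_p(x) >= target[p],
    where each period's service level g_p is concave in the staffing vector x
    and can only be estimated by simulation. `simulate(x)` must return
    (levels, subgrads): estimated levels and one subgradient per period.
    Schematic only; cut management and integrality handling are simplified.
    """
    n = len(cost)
    cuts_A, cuts_b = [], []              # accumulated cuts  a @ x >= b
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        levels, subgrads = simulate(x)   # one simulation run per candidate plan
        violated = [p for p in range(len(levels)) if levels[p] < target[p]]
        if not violated:
            return np.ceil(x)            # all service-level requirements met
        for p in violated:
            # Concavity gives g_p(y) <= g_p(x) + subgrad_p @ (y - x); requiring
            # the right-hand side to reach target[p] is a valid linear cut on y.
            cuts_A.append(subgrads[p])
            cuts_b.append(target[p] - levels[p] + subgrads[p] @ x)
        # Re-solve the master problem with all cuts (written as <= for linprog).
        res = linprog(c=cost,
                      A_ub=-np.array(cuts_A), b_ub=-np.array(cuts_b),
                      bounds=[(0, None)] * n)
        x = res.x
    return np.ceil(x)
```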

    A new column-generation-based algorithm for VMAT treatment plan optimization

    We study the treatment plan optimization problem for volumetric modulated arc therapy (VMAT). We propose a new column-generation-based algorithm that takes into account bounds on the gantry speed and dose rate, as well as an upper bound on the rate of change of the gantry speed, in addition to MLC constraints. The algorithm iteratively adds one aperture at each control point along the treatment arc. In each iteration, a restricted problem optimizing intensities at previously selected apertures is solved, and its solution is used to formulate a pricing problem, which selects an aperture at another control point that is compatible with previously selected apertures and leads to the largest rate of improvement in the objective function value of the restricted problem. Once a complete set of apertures is obtained, their intensities are optimized and the gantry speeds and dose rates are adjusted to minimize treatment time while satisfying all machine restrictions. Comparisons of treatment plans obtained by our algorithm to idealized IMRT plans of 177 beams on five clinical prostate cancer cases demonstrate high quality with respect to clinical dose-volume criteria. For all cases, our algorithm yields treatment plans that can be delivered in around 2 min. Implementation on a graphics processing unit enables us to finish the optimization of a VMAT plan in 25-55 s.
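    The overall flow of the algorithm can be summarized as a column-generation skeleton in which the restricted problem and the pricing problem are supplied as callbacks; the function and argument names below are illustrative assumptions, and the real pricing problem additionally enforces MLC, gantry-speed, and dose-rate compatibility.

```python
def column_generation(control_points, solve_restricted, price, max_apertures=None):
    """Schematic of the column-generation loop described in the abstract, with
    the problem-specific pieces supplied by the caller:

      solve_restricted(apertures) -> (intensities, duals)
          optimizes intensities of the apertures chosen so far and returns
          dual information for pricing;
      price(duals, control_point, apertures) -> (aperture, improvement_rate)
          returns the best deliverable aperture at that control point that is
          compatible with the apertures already chosen.

    Illustrative skeleton only, not the paper's implementation.
    """
    apertures = {}                                    # control point -> aperture
    remaining = set(control_points)
    while remaining and (max_apertures is None or len(apertures) < max_apertures):
        intensities, duals = solve_restricted(apertures)
        # Price an aperture at every uncovered control point and keep the one
        # with the largest rate of improvement of the restricted objective.
        best = max(((cp,) + price(duals, cp, apertures) for cp in remaining),
                   key=lambda t: t[2])
        cp, aperture, rate = best
        if rate <= 0:                                 # no improving column exists
            break
        apertures[cp] = aperture
        remaining.remove(cp)
    intensities, _ = solve_restricted(apertures)      # final intensity optimization
    return apertures, intensities
```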

    CoSIGN: A parallel algorithm for coordinated traffic signal control

    The problem of finding optimal coordinated signal timing plans for a large number of traffic signals is challenging because of the exponential growth in the number of joint timing plans that need to be explored as the network size grows. In this paper, we employ the game-theoretic paradigm of fictitious play to iteratively search for a coordinated signal timing plan that improves a system-wide performance criterion for a traffic network. The algorithm is robustly scalable to realistic-size networks modelled with high-fidelity simulations. We report results of a case study for the city of Troy, Michigan, which has 75 signalized intersections. Under normal traffic conditions, savings in average travel time of more than 20 percent are achieved against a static timing plan, and savings of more than 10 percent are achieved even against an aggressively tuned automatic signal re-timing algorithm. The efficiency of the algorithm stems from its parallel nature: with a thousand parallel CPUs available, our algorithm finds the plan above in under 10 minutes, while a version of a hill-climbing algorithm makes virtually no progress in the same amount of wall-clock computational time. Index Terms: coordinated traffic signal control, optimization, area traffic control.
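    A minimal sketch of a sampled fictitious-play loop for an identical-interest game conveys the structure of the approach; here each agent would correspond to a signalized intersection, each action to a timing plan, and the payoff to a simulation-based estimate of network performance. The sampling scheme, toy payoff, and parameters are assumptions for illustration, not the CoSIGN implementation.

```python
import numpy as np

def fictitious_play(n_agents, n_actions, payoff, n_rounds=100, rng=None):
    """Sampled fictitious play for an identical-interest game: each round,
    every agent best-responds to the other agents' actions sampled from their
    empirical play frequencies, and the shared payoff is evaluated by a
    caller-supplied function (a traffic simulator in the paper's setting).
    """
    rng = rng or np.random.default_rng(0)
    counts = np.ones((n_agents, n_actions))          # empirical play counts
    joint = counts.argmax(axis=1)
    for _ in range(n_rounds):
        for i in range(n_agents):
            # Sample the other agents' actions from their empirical frequencies.
            others = np.array([rng.choice(n_actions, p=counts[j] / counts[j].sum())
                               for j in range(n_agents)])
            # Best response of agent i holding the sampled profile fixed.
            values = []
            for a in range(n_actions):
                trial = others.copy()
                trial[i] = a
                values.append(payoff(trial))
            best = int(np.argmax(values))
            counts[i, best] += 1
            joint[i] = best
    return joint

# Toy payoff: neighbouring signals are "coordinated" when they pick the same plan.
payoff = lambda acts: -np.abs(np.diff(acts)).sum()
plan = fictitious_play(n_agents=5, n_actions=3, payoff=payoff)
```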

    Ideal spatial radiotherapy dose distributions subject to positional uncertainties

    In radiotherapy a common method used to compensate for patient setup error and organ motion is to enlarge the clinical target volume (CTV) by a ‘margin’ to produce a ‘planning target volume’ (PTV). Using weighted power loss functions as a measure of performance for a treatment plan, a simple method can be developed to calculate the ideal spatial dose distribution (one that minimizes expected loss) when there is uncertainty. The spatial dose distribution is assumed to be invariant to the displacement of the internal structures and the whole patient. The results provide qualitative insights into the suitability of using a margin at all, and (if one is to be used) how to select a ‘good’ margin size. The common practice of raising the power parameters in the treatment loss function, in order to enforce target dose requirements, is shown to be potentially counter-productive. These results offer insights into desirable dose distributions and could be used, in conjunction with well-established inverse radiotherapy planning techniques, to produce dose distributions that are robust against uncertainties.
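    One way to see why no margin recipe is needed is that, assuming the loss is a sum of per-tissue-element power penalties and the dose distribution is spatially invariant, the expected loss separates over spatial points, so the ideal dose at each point solves a one-dimensional problem weighted by the probability that the point is occupied by target tissue. The 1-D sketch below illustrates this; the target extent, setup-error distribution, weights, and powers are all assumed numbers, not values from the paper.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Assumed toy setup: a target occupying [-2, 2] cm in the patient frame, Gaussian
# setup error with sigma = 0.5 cm, prescription dose 1 (relative), and a weighted
# power loss of  w_T * max(0, 1 - d)**p  for target underdose and  w_N * d**q  for
# normal-tissue dose.
w_T, w_N, p, q, sigma = 50.0, 1.0, 2, 2, 0.5
y = np.linspace(-4, 4, 161)                      # spatial points (room frame)

# Probability that each spatial point is occupied by target tissue after the shift.
p_target = norm.cdf((y + 2) / sigma) - norm.cdf((y - 2) / sigma)

def expected_loss(d, pt):
    return pt * w_T * max(0.0, 1.0 - d) ** p + (1.0 - pt) * w_N * d ** q

# The expected loss separates over spatial points, so the ideal dose at each point
# is a one-dimensional minimization, with no margin recipe needed.
ideal_dose = np.array([minimize_scalar(expected_loss, bounds=(0.0, 1.5),
                                       args=(pt,), method="bounded").x
                       for pt in p_target])
```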

    A Dynamic Programming Approach to Achieving an Optimal End State along a Serial Production Line

    In modern production systems, it is critical to perform maintenance, calibration, installation, and upgrade tasks during planned downtime. Otherwise, the systems become unreliable and new product introductions are delayed. For reasons of safety, testing, and access, task performance often requires the vicinity of impacted equipment to be left in a specific “end state” when production halts. Therefore, planning the shutdown of a production system to balance production goals against enabling non-production tasks yields a challenging optimization problem. In this paper, we propose a mathematical formulation of this problem and a dynamic programming approach that efficiently finds optimal shutdown policies for deterministic serial production lines. An event-triggered re-optimization procedure, based on the proposed deterministic dynamic programming approach, is also introduced to handle uncertainties in the production line in the stochastic case. We demonstrate numerically that, in cases with random breakdowns and repairs, the re-optimization procedure is efficient and even obtains results that are optimal or nearly optimal.
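    The deterministic DP can be solved by standard backward induction over the shutdown horizon. The skeleton below shows that backbone with the line-specific state, action, transition, and end-state scoring left as abstract callables, since those details are the paper's contribution and are not reproduced here.

```python
def backward_induction(horizon, states, actions, transition, reward, terminal_value):
    """Generic deterministic finite-horizon dynamic program solved by backward
    induction. The serial-line shutdown DP has a specific state (machine and
    buffer status along the line) and action structure; here those are abstract:

      actions(t, s)       -> iterable of feasible actions
      transition(t, s, a) -> next state (assumed to stay within `states`)
      reward(t, s, a)     -> one-stage reward
      terminal_value(s)   -> value of ending in state s (the "end state" score)
    """
    V = {s: terminal_value(s) for s in states}       # value-to-go at the horizon
    policy = []
    for t in reversed(range(horizon)):
        Vt, pit = {}, {}
        for s in states:
            best_a, best_v = None, float("-inf")
            for a in actions(t, s):
                v = reward(t, s, a) + V[transition(t, s, a)]
                if v > best_v:
                    best_a, best_v = a, v
            Vt[s], pit[s] = best_v, best_a
        V = Vt
        policy.insert(0, pit)
    return V, policy                                  # optimal values and stage policies
```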

    Costlets: A Generalized Approach to Cost Functions for Automated Optimization of IMRT Treatment Plans

    We present the creation and use of a generalized cost function methodology based on costlets for automated optimization of conformal and intensity-modulated radiotherapy treatment plans. In our approach, cost functions are created by combining clinically relevant “costlets”. Each costlet is created by the user, using an “evaluator” of the plan or dose distribution which is incorporated into a function or “modifier” to create an individual costlet. Dose statistics, dose-volume points, biological model results, non-dosimetric parameters, and any other information can be converted into a costlet, and a wide variety of different types of costlets can be used concurrently. Individual costlet changes affect not only the results for that structure, but also all the other structures in the plan (e.g., a change in a normal tissue costlet can have large effects on target volume results as well as the normal tissue). Effective cost functions can be created from combinations of dose-based costlets, dose-volume costlets, biological model costlets, and other parameters. Generalized cost functions based on costlets have been demonstrated, and show potential for allowing input of numerous clinical issues into the optimization process, thereby helping to achieve clinically useful optimized plans. In this paper, we describe and illustrate the use of costlets in an automated planning system developed and used clinically at the University of Michigan Medical Center. We place particular emphasis on the flexibility of the system and its ability to discover a variety of plans making various trade-offs between clinical goals of the treatment that may be difficult to meet simultaneously.
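    The costlet structure, an "evaluator" composed with a "modifier" and combined with other costlets into an overall cost function, can be sketched schematically as follows; the class names, the weighted-sum combination, and the example thresholds are illustrative assumptions rather than the planning system's actual interface.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

# Schematic of the costlet idea: an evaluator extracts a scalar from the plan
# (a dose statistic, dose-volume point, biological index, ...), a modifier turns
# it into a cost, and the overall cost function combines many costlets.

@dataclass
class Costlet:
    evaluator: Callable[[dict], float]   # plan -> clinically relevant scalar
    modifier: Callable[[float], float]   # scalar -> cost contribution
    weight: float = 1.0

    def __call__(self, plan):
        return self.weight * self.modifier(self.evaluator(plan))

def total_cost(plan, costlets):
    return sum(c(plan) for c in costlets)

# Example: penalize mean target dose below 60 and mean OAR dose above 20 (assumed numbers).
mean = lambda v: float(np.mean(v))
costlets = [
    Costlet(lambda p: mean(p["target_dose"]), lambda m: max(0.0, 60.0 - m) ** 2, 10.0),
    Costlet(lambda p: mean(p["oar_dose"]),    lambda m: max(0.0, m - 20.0) ** 2,  1.0),
]
plan = {"target_dose": np.array([59.0, 61.0, 58.5]), "oar_dose": np.array([18.0, 22.0])}
print(total_cost(plan, costlets))
```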

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570