
    Pre-Conditioners and Relations between Different Measures of Conditioning for Conic Linear Systems

    In recent years, new and powerful research into "condition numbers" for convex optimization has been developed, aimed at capturing the intuitive notion of problem behavior. This research has been shown to be important in studying the efficiency of algorithms, including interior-point algorithms, for convex optimization as well as other behavioral characteristics of these problems such as problem geometry, deformation under data perturbation, etc. This paper studies measures of conditioning for a conic linear system of the form (FP_d): Ax = b, x ∈ C_X, whose data is d = (A, b). We present a new measure of conditioning, denoted μ_d, and we show implications of μ_d for problem geometry and algorithm complexity, and demonstrate that the value of μ_d is independent of the specific data representation of (FP_d). We then prove certain relations among a variety of condition measures for (FP_d), including μ_d, ρ(d), χ̄_d, and C(d). We discuss some drawbacks of using the condition number C(d) as the sole measure of conditioning of a conic linear system, and we then introduce the notion of a "pre-conditioner" for (FP_d) which results in an equivalent formulation (FP_d̃) of (FP_d) with a better condition number C(d̃). We characterize the best such pre-conditioner and provide an algorithm for constructing an equivalent data instance d̃ whose condition number C(d̃) is within a known factor of the best possible.

    Assignment of Swimmers to Dual Meet Events

    Every fall, thousands of high school swimming coaches across the country begin the arduous process of preparing their athletes for competition. With a grueling practice schedule and a dedicated group of athletes, a coach can hone the squad into a cohesive unit, poised for any competition. However, oftentimes all preparation is in vain, as coaches assign swimmers to events with a lineup that is far from optimal. This paper provides a model that may help a high school (or other level) swim team coach make these assignments. Following state and national guidelines for swim meets, we describe a binary integer model that determines an overall assignment that maximizes the total number of points scored by the squad based on the times for swimmers on the squad and for the expected opponent
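
    As a rough illustration of the kind of binary integer model described above, the following LaTeX formulation is a minimal sketch under assumed notation: x_{se} is 1 if swimmer s is assigned to event e, p_{se} is the expected points contributed (estimated from the squad's and the expected opponent's times), k is the per-swimmer event limit, and n_e is the number of entries allowed per event. The actual model also encodes state and national meet rules (for example relay composition) that are not reproduced here.

        \begin{aligned}
        \max_{x} \quad & \sum_{s}\sum_{e} p_{se}\, x_{se} \\
        \text{s.t.} \quad & \sum_{e} x_{se} \le k && \text{for each swimmer } s \text{ (event limit per swimmer)} \\
        & \sum_{s} x_{se} \le n_e && \text{for each event } e \text{ (entries allowed per event)} \\
        & x_{se} \in \{0,1\} && \text{for all swimmers } s \text{ and events } e
        \end{aligned}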

    Condition number complexity of an elementary algorithm for computing a reliable solution of a conic linear system

    A conic linear system is a system of the form

        (FP_d):  Ax = b,  x ∈ C_X,

    where A : X → Y is a linear operator between n- and m-dimensional linear spaces X and Y, b ∈ Y, and C_X ⊂ X is a closed convex cone. The data for the system is d = (A, b). This system is "well-posed" to the extent that (small) changes in the data d = (A, b) do not alter the status of the system (the system remains feasible or not). Renegar defined the "distance to ill-posedness," ρ(d), to be the smallest change in the data Δd = (ΔA, Δb) needed to create a data instance d + Δd that is "ill-posed," i.e., that lies in the intersection of the closures of the sets of feasible and infeasible instances d′ = (A′, b′) of (FP_(·)). Renegar also defined the condition number C(d) of the data instance d as the scale-invariant reciprocal of ρ(d): C(d) = ‖d‖/ρ(d).
    In this paper we develop an elementary algorithm that computes a solution of (FP_d) when it is feasible, or demonstrates that (FP_d) has no solution by computing a solution of the alternative system. The algorithm is based on a generalization of von Neumann's algorithm for solving linear inequalities. The number of iterations of the algorithm is essentially bounded by O(C(d)² ln(C(d))), where the constant depends only on the properties of the cone C_X and is independent of the data d. Each iteration of the algorithm performs a small number of matrix-vector and vector-vector multiplications (that take full advantage of the sparsity of the original data) plus a small number of other operations involving the cone C_X. The algorithm is "elementary" in the sense that it performs only a few relatively simple computations at each iteration.
    The solution x̂ of the system (FP_d) generated by the algorithm has the property of being "reliable" in the sense that the distance from x̂ to the boundary of the cone C_X, dist(x̂, ∂C_X), and the size of the solution, ‖x̂‖, satisfy the following inequalities:

        ‖x̂‖ ≤ c₁ C(d),  dist(x̂, ∂C_X) ≥ c₂/C(d),  and  ‖x̂‖/dist(x̂, ∂C_X) ≤ c₃ C(d),

    where c₁, c₂, c₃ are constants that depend only on properties of the cone C_X and are independent of the data d (with analogous results for the alternative system when the system (FP_d) is infeasible).
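
    As an illustration of the classical building block the abstract generalizes, here is a minimal sketch of von Neumann's algorithm for the special case in which C_X is the standard simplex, so that (FP_d) asks whether b lies in the convex hull of the columns of A. The tolerance, iteration cap, and absence of column normalization are illustrative choices; the paper's generalization to arbitrary cones and its complexity analysis are not reproduced here.

        import numpy as np

        def von_neumann(A, b, tol=1e-6, max_iter=10000):
            """Seek x in the unit simplex with Ax close to b (b in conv of A's columns).

            Classical von Neumann iteration: move toward the column of A that is
            most aligned with the direction b - Ax, using the step size that
            minimizes ||Ax - b|| along the segment. Returns (x, converged).
            """
            m, n = A.shape
            x = np.full(n, 1.0 / n)          # start at the barycenter of the simplex
            for _ in range(max_iter):
                r = A @ x - b                # current residual
                if np.linalg.norm(r) <= tol:
                    return x, True
                j = int(np.argmin(A.T @ r))  # column making the most acute angle with -r
                d = A[:, j] - A @ x          # update direction in the image space
                dd = d @ d
                if dd == 0.0:                # no further progress possible along this column
                    return x, False
                lam = np.clip(-(r @ d) / dd, 0.0, 1.0)
                x = (1.0 - lam) * x          # convex combination of x and the vertex e_j
                x[j] += lam
            return x, False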

    Ideal spatial radiotherapy dose distributions subject to positional uncertainties

    In radiotherapy a common method used to compensate for patient setup error and organ motion is to enlarge the clinical target volume (CTV) by a ‘margin’ to produce a ‘planning target volume’ (PTV). Using weighted power loss functions as a measure of performance for a treatment plan, a simple method can be developed to calculate the ideal spatial dose distribution (one that minimizes expected loss) when there is uncertainty. The spatial dose distribution is assumed to be invariant to the displacement of the internal structures and the whole patient. The results provide qualitative insights into the suitability of using a margin at all, and (if one is to be used) how to select a ‘good’ margin size. The common practice of raising the power parameters in the treatment loss function, in order to enforce target dose requirements, is shown to be potentially counter-productive. These results offer insights into desirable dose distributions and could be used, in conjunction with well-established inverse radiotherapy planning techniques, to produce dose distributions that are robust against uncertainties.
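
    The following is a minimal numerical sketch of the idea of scoring a margined dose profile by its expected loss under setup uncertainty. The one-dimensional geometry, the Gaussian setup error, and the particular weighted power loss (with assumed weights and powers) are illustrative assumptions, not the paper's actual model.

        import numpy as np

        # 1-D illustration: a clinical target on [-1, 1] cm surrounded by normal tissue.
        z = np.linspace(-5.0, 5.0, 1001)           # spatial grid (cm)
        target = np.abs(z) <= 1.0                  # CTV indicator
        prescription = 1.0                         # prescribed (relative) target dose

        def margined_dose(margin):
            """Uniform prescription dose delivered to the CTV expanded by `margin` cm."""
            return np.where(np.abs(z) <= 1.0 + margin, prescription, 0.0)

        def loss(dose, a=4.0, b=2.0, w_t=1.0, w_n=0.2):
            """Weighted power loss: penalize target underdose and normal-tissue dose."""
            underdose = np.maximum(prescription - dose[target], 0.0)
            return w_t * np.mean(underdose ** a) + w_n * np.mean(dose[~target] ** b)

        def expected_loss(margin, sigma=0.5, n_samples=2000, seed=0):
            """Average the loss over random rigid setup shifts drawn from N(0, sigma^2)."""
            rng = np.random.default_rng(seed)
            dose = margined_dose(margin)
            total = 0.0
            for shift in rng.normal(0.0, sigma, n_samples):
                # the dose distribution stays fixed in space; the anatomy shifts under it
                seen_by_anatomy = np.interp(z + shift, z, dose, left=0.0, right=0.0)
                total += loss(seen_by_anatomy)
            return total / n_samples

        for m in (0.0, 0.25, 0.5, 0.75, 1.0):
            print(f"margin {m:.2f} cm: expected loss {expected_loss(m):.4f}")

    Sweeping the margin this way exposes the basic trade-off the paper formalizes: too small a margin leaves expected target underdose, while too large a margin needlessly increases normal-tissue loss.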

    Necrotic myocardial cells release damage-associated molecular patterns that provoke fibroblast activation in vitro and trigger myocardial inflammation and fibrosis in vivo

    BACKGROUND: Tissue injury triggers inflammatory responses that promote tissue fibrosis; however, the mechanisms that couple tissue injury, inflammation, and fibroblast activation are not known. Given that dying cells release proinflammatory “damage-associated molecular patterns” (DAMPs), we asked whether proteins released by necrotic myocardial cells (NMCs) were sufficient to activate fibroblasts in vitro by examining fibroblast activation after stimulation with proteins released by necrotic myocardial tissue, as well as in vivo by injecting proteins released by necrotic myocardial tissue into the hearts of mice and determining the extent of myocardial inflammation and fibrosis at 72 hours. METHODS AND RESULTS: The freeze–thaw technique was used to induce myocardial necrosis in freshly excised mouse hearts. Supernatants from NMCs contained multiple DAMPs, including high mobility group box-1 (HMGB1), galectin-3, S100β, S100A8, S100A9, and interleukin-1α. NMCs provoked a significant increase in fibroblast proliferation, α–smooth muscle actin activation, and collagen 1A1 and 3A1 mRNA expression and significantly increased fibroblast motility in a cell-wounding assay in a Toll-like receptor 4 (TLR4)- and receptor for advanced glycation end products–dependent manner. NMC stimulation resulted in a significant 3- to 4-fold activation of Akt and Erk, whereas pretreatment with Akt (A6730) and Erk (U0126) inhibitors decreased NMC-induced fibroblast proliferation dose-dependently. The effects of NMCs on cell proliferation and collagen gene expression were mimicked by several recombinant DAMPs, including HMGB1 and galectin-3. Moreover, immunodepletion of HMGB1 in NMC supernatants abrogated NMC-induced cell proliferation. Finally, injection of NMC supernatants or recombinant HMGB1 into the heart provoked increased myocardial inflammation and fibrosis in wild-type mice but not in TLR4-deficient mice. CONCLUSIONS: These studies constitute the initial demonstration that DAMPs released by NMCs induce fibroblast activation in vitro, as well as myocardial inflammation and fibrosis in vivo, at least in part, through TLR4-dependent signaling

    The pancreas anatomy conditions the origin and properties of resident macrophages

    We examine the features, origin, turnover, and gene expression of pancreatic macrophages under steady state. The data distinguish macrophages within distinct intrapancreatic microenvironments and suggest how macrophage phenotype is imprinted by the local milieu. Macrophages in the islets of Langerhans and in the interacinar stroma are distinct in origin and phenotypic properties. In islets, macrophages are the only myeloid cells: they derive from definitive hematopoiesis, exchange minimally with blood cells, have a low level of self-replication, and depend on CSF-1. They express Il1b and Tnfa transcripts, indicating classical (M1) activation under steady state. The interacinar stroma contains two macrophage subsets. One derives from primitive hematopoiesis, shows no interchange with blood cells, and has an alternative (M2) activation profile, whereas the second derives from definitive hematopoiesis and exchanges with circulating myeloid cells but also shows an alternative activation profile. Complete replacement of islet and stromal macrophages by donor stem cells occurred after lethal irradiation, with profiles identical to those observed under steady state. The extraordinary plasticity of macrophages within the pancreatic organ and the distinct features imprinted by their anatomical localization set the basis for examining these cells in pathological conditions.

    Exposure to magnetic fields and childhood acute lymphocytic leukemia in São Paulo, Brazil

    Background: Epidemiological studies have identified increased risks of leukemia in children living near power lines and exposed to relatively high levels of magnetic fields. Results have been remarkably consistent, but there is still no explanation for this increase. In this study we evaluated the effect of 60 Hz magnetic fields on acute lymphocytic leukemia (ALL) in the State of São Paulo, Brazil. Methods: This case-control study included ALL cases (n = 162) recruited from eight hospitals between January 2003 and February 2009. Controls (n = 565) matched on gender, age, and city of birth were selected from the São Paulo Birth Registry. Exposure to extremely low frequency magnetic fields (ELF MF) was based on measurements inside the home and on distance to power lines. Results: For 24-hour measurements in children's rooms, ELF MF levels equal to or greater than 0.3 microtesla (μT), compared with levels below 0.1 μT, showed no increased risk of ALL (odds ratio [OR] 1.09; 95% confidence interval [95% CI] 0.33-3.61). When only nighttime measurements were considered, a higher odds ratio was observed (OR 1.52; 95% CI 0.46-5.01). Children living within 200 m of power lines presented an increased risk of ALL (OR 1.67; 95% CI 0.49-5.75) compared with children living 600 m or more from power lines. For those living within 50 m of power lines the OR was 3.57 (95% CI 0.41-31.44). Conclusions: Even though our results are consistent with the small risks reported in other studies on ELF MF and leukemia in children, overall they do not support an association between magnetic fields and childhood leukemia; small numbers and likely biases weaken the strength of this conclusion.

    Rescaled coordinate descent methods for linear programming

    We propose two simple polynomial-time algorithms to find a positive solution to Ax = 0. Both algorithms iterate between coordinate descent steps similar to von Neumann's algorithm and rescaling steps. In both cases, either the updating step leads to a substantial decrease in the norm, or we can infer that the condition measure is small and rescale in order to improve the geometry. We also show how the algorithms can be extended to find a solution of maximum support for the system Ax = 0, x ≥ 0. This is an extended abstract; the missing proofs will be provided in the full version.
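
    The abstract does not reproduce the paper's rescaling step. As a rough schematic of the general "first-order step, then rescale when progress stalls" pattern, the sketch below applies rescaled-perceptron-style updates (in the spirit of Dunagan and Vempala) to a system of strict homogeneous inequalities By > 0; it is not the coordinate descent method of this paper, and the rescaling trigger here is a heuristic rather than an analyzed rule.

        import numpy as np

        def rescaled_perceptron(B, outer=50, inner=1000, seed=0):
            """Heuristic sketch: seek y with B @ y > 0 (each row of B is one nonzero inequality).

            Alternates perceptron updates with a rescaling of the constraint matrix;
            a solution of the rescaled system is mapped back through the accumulated
            transform T. Returns y with B_original @ y > 0, or None on failure.
            """
            rng = np.random.default_rng(seed)
            B = B / np.linalg.norm(B, axis=1, keepdims=True)   # normalize the rows
            m = B.shape[1]
            T = np.eye(m)                                      # accumulated rescaling
            for _ in range(outer):
                y = rng.standard_normal(m)
                for _ in range(inner):                         # perceptron phase
                    violated = B @ y <= 0
                    if not violated.any():
                        return T @ y                           # solves the original system
                    j = int(np.argmax(violated))               # first violated inequality
                    y = y + B[j]
                # progress stalled: stretch space along the current direction,
                # which widens a narrow feasible cone by a constant factor
                u = y / np.linalg.norm(y)
                M = np.eye(m) + np.outer(u, u)                 # doubles the component along u
                B = B @ M
                B = B / np.linalg.norm(B, axis=1, keepdims=True)
                T = T @ M
            return None                                        # no solution found; system may be infeasible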

    Projective re-normalization for improving the behavior of a homogeneous conic linear system

    In this paper we study the homogeneous conic system F: Ax = 0, x ∈ C \ {0}. We choose a point s̄ ∈ int C* that serves as a normalizer and consider computational properties of the normalized system F_s̄: Ax = 0, s̄ᵀx = 1, x ∈ C. We show that the computational complexity of solving F via an interior-point method depends only on the complexity value ϑ of the barrier for C and on the symmetry of the origin in the image set H_s̄ := {Ax : s̄ᵀx = 1, x ∈ C}, where the symmetry of 0 in H_s̄ is sym(0, H_s̄) := max{α : y ∈ H_s̄ ⟹ −αy ∈ H_s̄}. We show that a solution of F can be computed in O(√ϑ ln(ϑ/sym(0, H_s̄))) interior-point iterations. In order to improve the theoretical and practical computation of a solution of F, we next present a general theory for projective re-normalization of the feasible region F_s̄ and the image set H_s̄ and prove the existence of a normalizer s̄ such that sym(0, H_s̄) ≥ 1/m provided that F has an interior solution. We develop a methodology for constructing a normalizer s̄ such that sym(0, H_s̄) ≥ 1/m with high probability, based on sampling on a geometric random walk with an associated probabilistic complexity analysis. While such a normalizer is not itself computable in strongly polynomial time, the normalizer yields a conic system that is solvable in O(√ϑ ln(mϑ)) iterations, which is strongly polynomial time. Finally, we implement this methodology on randomly generated homogeneous linear programming feasibility problems, constructed to be poorly behaved. Our computational results indicate that the projective re-normalization methodology holds the promise to markedly reduce the overall computation time for conic feasibility problems; for instance, we observe a 46% decrease in average IPM iterations for 100 randomly generated poorly-behaved problem instances of dimension 1000 × 5000.
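
    To make the symmetry measure concrete, the sketch below computes sym(0, H_s̄) for the special case in which C is the nonnegative orthant, where H_s̄ is simply the convex hull of the scaled columns a_j/s̄_j. It solves one small linear program per column with scipy and only illustrates the definition; it is not the paper's projective re-normalization procedure, and the data are made up.

        import numpy as np
        from scipy.optimize import linprog

        def symmetry_of_origin(points):
            """sym(0, conv(points)): largest alpha with -alpha*conv(points) inside conv(points).

            `points` is a (d, n) array whose columns generate the convex hull. For each
            column v we solve an LP for the largest alpha with -alpha*v in the hull and
            take the minimum over columns. Returns 0.0 if the origin is not in the hull.
            """
            d, n = points.shape
            c = np.zeros(n + 1)
            c[-1] = -1.0                                   # maximize alpha
            alphas = []
            for i in range(n):
                v = points[:, i]
                # constraints: points @ lam + alpha * v = 0,  sum(lam) = 1,  lam >= 0, alpha >= 0
                A_eq = np.vstack([np.hstack([points, v[:, None]]),
                                  np.hstack([np.ones((1, n)), np.zeros((1, 1))])])
                b_eq = np.append(np.zeros(d), 1.0)
                res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + 1))
                if res.status != 0:
                    return 0.0                             # origin is not in the hull
                alphas.append(res.x[-1])
            return float(min(alphas))

        # illustrative data: H_sbar = conv{ a_j / sbar_j } for C = the nonnegative orthant
        A = np.array([[1.0, -2.0, 0.5, 0.0],
                      [0.0, 1.0, -1.0, 1.5]])
        sbar = np.ones(A.shape[1])                          # a normalizer in int(C*), here the all-ones vector
        print("sym(0, H_sbar) =", symmetry_of_origin(A / sbar))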