
    Parallel software tool for decomposing and meshing of 3d structures

    An algorithm for the automatic parallel generation of three-dimensional unstructured computational meshes based on geometric domain decomposition is proposed in this paper. A software package built upon the proposed algorithm is described, and several practical examples of mesh generation on multiprocessor computational systems are given. It is shown that the developed parallel algorithm reduces mesh generation time significantly (by dozens of times). Moreover, it readily produces meshes with on the order of 5 · 10⁷ elements, whose construction on a single CPU is problematic. Questions of time consumption, computational efficiency, and quality of the generated meshes are also considered.
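    The abstract describes the approach only at a high level; the following is a minimal, hypothetical Python sketch of the geometric-decomposition idea (not the authors' software): the domain's bounding box is cut into slabs, each slab is meshed independently in a worker process, and the pieces are collected by the parent. scipy's Delaunay triangulation stands in for a production tetrahedral mesher, and stitching of the inter-slab interfaces is left out.

        import numpy as np
        from multiprocessing import Pool
        from scipy.spatial import Delaunay

        def mesh_subdomain(task):
            """Mesh one axis-aligned slab of the unit cube with a coarse
            Delaunay tetrahedralization of randomly scattered points."""
            (x_lo, x_hi), n_points, seed = task
            rng = np.random.default_rng(seed)
            pts = rng.uniform(low=[x_lo, 0.0, 0.0], high=[x_hi, 1.0, 1.0],
                              size=(n_points, 3))
            tets = Delaunay(pts).simplices        # (n_tets, 4) vertex indices
            return pts, tets

        def parallel_mesh(n_subdomains=4, points_per_subdomain=2000):
            # Geometric decomposition: cut the unit cube into equal slabs in x,
            # then mesh every slab in its own worker process.
            edges = np.linspace(0.0, 1.0, n_subdomains + 1)
            tasks = [((edges[i], edges[i + 1]), points_per_subdomain, i)
                     for i in range(n_subdomains)]
            with Pool(processes=n_subdomains) as pool:
                pieces = pool.map(mesh_subdomain, tasks)
            return pieces   # interface stitching between slabs is omitted here

        if __name__ == "__main__":
            for i, (pts, tets) in enumerate(parallel_mesh()):
                print(f"subdomain {i}: {len(pts)} vertices, {len(tets)} tetrahedra")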

    Service Center Location with Decision Dependent Utilities

    We study a service center location problem with ambiguous utility gains upon receiving service. The model is motivated by the problem of locating medical clinics/service centers, possibly in rural communities, where residents need to visit the clinics to receive health services. A resident gains utility based on travel distance, waiting time, and service features of the facility, all of which depend on the clinic location. The elicited location-dependent utilities are assumed to be ambiguously described by an expected value and a variance constraint. We show that, despite a non-convex nonlinearity given by a constraint specified by the maximum of two second-order conic functions, the model admits a mixed 0-1 second-order cone programming (MISOCP) formulation. We study the non-convex substructure of the problem and present methods for developing strengthened formulations using valid tangent inequalities. A computational study shows the effectiveness of solving the strengthened formulations. Examples are used to illustrate the importance of including decision-dependent ambiguity.
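    As a purely illustrative sketch (the notation below is ours, not the paper's), the "expected value and variance constraint" description of an ambiguous utility u_j at candidate location j amounts to a moment ambiguity set, while a mixed 0-1 second-order cone program combines binary location decisions with second-order cone constraints of the generic shape shown second.

        % Moment description of a location-dependent utility (illustrative only)
        \mathcal{U}_j = \bigl\{\, u_j : \mathbb{E}[u_j] = \mu_j,\ \operatorname{Var}(u_j) \le \sigma_j^2 \,\bigr\}

        % Generic mixed 0-1 SOCP with continuous variables x and binary variables y
        \min_{x \in \mathbb{R}^p,\ y \in \{0,1\}^q} \; c^\top x + d^\top y
        \quad \text{s.t.} \quad
        \bigl\lVert A_i x + B_i y + b_i \bigr\rVert_2 \;\le\; f_i^\top x + g_i^\top y + h_i,
        \qquad i = 1, \dots, m.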

    The radius of robust feasibility of uncertain mathematical programs: A survey and recent developments

    The radius of robust feasibility provides a numerical value for the largest uncertainty set that guarantees feasibility of the robust counterpart of a mathematical program with uncertain constraints. The objective of this review of the state of the art in this field is to present this useful tool of robust optimization to its potential users and to avoid undesirable overlap of research works on the topic, such as those we have recently detected. In this paper we overview the existing literature on the radius of robust feasibility in continuous and mixed-integer linearly constrained programs, linearly constrained semi-infinite programs, convexly constrained programs, and conic linearly constrained programs. We also analyze the connection between the radius of robust feasibility and the distance to ill-posedness for different types of uncertain mathematical programs. This research was partially supported by the Australian Research Council (Discovery Project grant) and by the Ministry of Science, Innovation and Universities of Spain and the European Regional Development Fund (ERDF) of the European Commission, Grant PGC2018-097960-B-C22.
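    As a rough formal rendering of the quantity being surveyed (our notation, assuming the common convention of a nominal datum perturbed within a scaled base set, which individual papers may vary), the radius of robust feasibility is the largest scaling of the uncertainty set for which the robust counterpart still has a feasible point:

        \rho \;=\; \sup \Bigl\{ \alpha \ge 0 \;:\; \{\, x : g(x, u) \le 0 \ \ \forall\, u \in U_\alpha \,\} \neq \emptyset \Bigr\},
        \qquad U_\alpha = \bar{u} + \alpha\, \mathcal{B},

    where \bar{u} denotes the nominal data and \mathcal{B} is a fixed compact, convex base set (for example, a unit ball) in the data space.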

    Distributionally robust optimization with applications to risk management

    Many decision problems can be formulated as mathematical optimization models. While deterministic optimization problems include only known parameters, real-life decision problems almost invariably involve parameters that are subject to uncertainty. Failure to take this uncertainty into consideration may yield decisions that lead to unexpected or even catastrophic results if certain scenarios are realized. While stochastic programming is a sound approach to decision making under uncertainty, it assumes that the decision maker has complete knowledge of the probability distribution that governs the uncertain parameters. This assumption is usually unjustified as, for most realistic problems, the probability distribution must be estimated from historical data and is therefore itself uncertain. Failure to take this distributional modeling risk into account can result in unduly optimistic risk assessments and suboptimal decisions. Furthermore, for most distributions, stochastic programs involving chance constraints cannot be solved using polynomial-time algorithms. In contrast to stochastic programming, distributionally robust optimization explicitly accounts for distributional uncertainty. In this framework, it is assumed that the decision maker has access to only partial distributional information, such as the first- and second-order moments as well as the support. The problem is then solved under the worst-case distribution that complies with this partial information. This worst-case approach effectively immunizes the problem against distributional modeling risk. The objective of this thesis is to investigate how robust optimization techniques can be used for quantitative risk management. In particular, we study how the risk of large-scale derivative portfolios can be computed as well as minimized, while making minimal assumptions about the probability distribution of the underlying asset returns. Our interest in derivative portfolios stems from the fact that careless investment in derivatives can yield large losses or even bankruptcy. We show that by employing robust optimization techniques we are able to capture the substantial risks involved in derivative investments. Furthermore, we investigate how distributionally robust chance-constrained programs can be reformulated or approximated as tractable optimization problems. Throughout the thesis, we aim to derive tractable models that are scalable to industrial-size problems.
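    To make the notion of a distributionally robust chance constraint concrete, here is a standard moment-based reformulation (a well-known general fact, stated in our notation and not necessarily the exact form used in the thesis): when only the mean \mu and covariance \Sigma of the uncertain vector \xi are specified, a single linear chance constraint enforced under the worst distribution with those moments is equivalent to a second-order cone constraint.

        \inf_{\mathbb{P}\,:\ \mathbb{E}_{\mathbb{P}}[\xi] = \mu,\ \operatorname{Cov}_{\mathbb{P}}(\xi) = \Sigma}
        \mathbb{P}\bigl( a^\top \xi \le b(x) \bigr) \;\ge\; 1 - \epsilon
        \quad \Longleftrightarrow \quad
        a^\top \mu \;+\; \sqrt{\tfrac{1 - \epsilon}{\epsilon}}\, \bigl\lVert \Sigma^{1/2} a \bigr\rVert_2 \;\le\; b(x),

    which is a convex constraint on the decision x whenever the right-hand side b(x) is concave (for example, affine) in x.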