
    On the Equivalence of Quadratic Optimization Problems Commonly Used in Portfolio Theory

    In this paper, we consider three quadratic optimization problems that are frequently applied in portfolio theory: the Markowitz mean-variance problem as well as the problems based on the mean-variance utility function and on quadratic utility. Conditions are derived under which the solutions of these three optimization procedures coincide and lie on the efficient frontier, the set of mean-variance optimal portfolios. It is shown that the solutions of the Markowitz optimization problem and of the quadratic utility problem are not always mean-variance efficient. The conditions for mean-variance efficiency of the solutions depend on the unknown parameters of the asset returns. We deal with the problem of parameter uncertainty in detail and derive the probabilities that the estimated solutions of the Markowitz problem and of the quadratic utility problem are mean-variance efficient. Because these probabilities deviate from one, the above-mentioned quadratic optimization problems are not stochastically equivalent. The results are illustrated by an empirical study. Comment: Revised preprint, to appear in the European Journal of Operational Research; 18 pages, 6 figures.
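    For reference, the three problems in their textbook form (a sketch in standard notation, with $w$ the weight vector, $\mu$ the mean vector, $\Sigma$ the covariance matrix, and $\gamma, \alpha > 0$ risk-aversion parameters; the exact constraint sets used in the paper may differ):

```latex
% Markowitz mean-variance problem: minimize risk for a target return R_0
\min_{w} \; w^\top \Sigma w
\quad \text{s.t.} \quad w^\top \mu = R_0, \qquad w^\top \mathbf{1} = 1

% Mean-variance utility problem: trade expected return off against risk
\max_{w} \; w^\top \mu - \frac{\gamma}{2}\, w^\top \Sigma w
\quad \text{s.t.} \quad w^\top \mathbf{1} = 1

% Quadratic utility problem: maximize expected quadratic utility of wealth
\max_{w} \; \mathbb{E}\!\left[ W - \frac{\alpha}{2} W^2 \right],
\qquad W = w^\top X, \qquad w^\top \mathbf{1} = 1
```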

    Emergence of social networks via direct and indirect reciprocity

    Many models of social network formation implicitly assume that network properties are static in steady state. In contrast, actual social networks are highly dynamic: allegiances and collaborations expire and may or may not be renewed at a later date. Moreover, empirical studies show that human social networks are dynamic at the individual level but static at the global level: individuals' degree rankings change considerably over time, whereas network-level metrics such as network diameter and clustering coefficient are relatively stable. There have been some attempts to explain these properties of empirical social networks using agent-based models in which agents play social dilemma games with their immediate neighbours but can also manipulate their network connections to strategic advantage. However, such models cannot straightforwardly account for reciprocal behaviour based on reputation scores ("indirect reciprocity"), which is known to play an important role in many economic interactions. In order to account for indirect reciprocity, we model the network in a bottom-up fashion: the network emerges from the low-level interactions between agents. By doing so we are able to account simultaneously for the effects of both direct reciprocity (e.g. "tit-for-tat") and indirect reciprocity (helping strangers in order to increase one's reputation). This leads to a strategic equilibrium in the frequencies with which strategies are adopted in the population as a whole, but intermittent cycling over different strategies at the level of individual agents, which in turn gives rise to social networks that are dynamic at the individual level but stable at the network level.
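    To make the bottom-up setup concrete, here is a minimal, hypothetical sketch of donation-game dynamics of the kind described; the `Agent` class, strategy names, and payoff values are illustrative choices, not the authors' model:

```python
import random

COST, BENEFIT = 1.0, 3.0  # donation game payoffs (illustrative values)

class Agent:
    def __init__(self, strategy):
        self.strategy = strategy      # 'defect', 'tit_for_tat', or 'image_scorer'
        self.score = 0.0              # accumulated payoff
        self.reputation = 0           # public image score (indirect reciprocity)
        self.last_move = {}           # per-partner memory (direct reciprocity)

    def donates_to(self, partner):
        if self.strategy == 'defect':
            return False
        if self.strategy == 'tit_for_tat':
            # cooperate unless this partner defected on us last time
            return self.last_move.get(id(partner), True)
        # image scorer: help strangers with a non-negative reputation
        return partner.reputation >= 0

def play_round(agents):
    donor, recipient = random.sample(agents, 2)
    if donor.donates_to(recipient):
        donor.score -= COST
        recipient.score += BENEFIT
        donor.reputation += 1                      # helping raises reputation
        recipient.last_move[id(donor)] = True
    else:
        donor.reputation -= 1                      # refusing lowers it
        recipient.last_move[id(donor)] = False

agents = [Agent(random.choice(['defect', 'tit_for_tat', 'image_scorer']))
          for _ in range(100)]
for _ in range(10_000):
    play_round(agents)
```

    A who-helped-whom graph accumulated over rounds would then serve as the emergent social network whose individual-level and network-level dynamics can be compared.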

    Stochastic dominance spanning and augmenting the human development index with institutional quality

    The well-known Human Development Index (HDI) goes beyond a single measure of well-being, as it is constructed as a composite index of achievements in the education, income, and health dimensions. However, it has been argued that these dimensions do not fully reflect overall well-being and that new indicators should be included in its construction. This paper uses stochastic dominance spanning to test the inclusion of additional institutional quality (governance) dimensions in the HDI, and we examine whether augmenting the original set of welfare dimensions by an additional component leads to distributional welfare gains, losses, or neither. We find that differently constructed indicators of the same institutional quality measure produce different distributions of well-being. Supplementary information is available online at doi:10.1007/s10479-022-04656-w.
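    The spanning test itself is beyond a snippet, but the underlying first-order dominance comparison of two well-being distributions can be sketched as follows. This is a simplified illustration with made-up data, not the paper's test statistic:

```python
import numpy as np

def fosd(sample_a, sample_b, grid_size=200):
    """Check whether sample_a first-order stochastically dominates sample_b:
    F_A(x) <= F_B(x) for all x on a common evaluation grid."""
    grid = np.linspace(min(sample_a.min(), sample_b.min()),
                       max(sample_a.max(), sample_b.max()), grid_size)
    cdf_a = np.searchsorted(np.sort(sample_a), grid, side='right') / len(sample_a)
    cdf_b = np.searchsorted(np.sort(sample_b), grid, side='right') / len(sample_b)
    return np.all(cdf_a <= cdf_b)

# Illustrative use: does an HDI augmented with a governance indicator
# dominate the original index across countries? (hypothetical scores)
rng = np.random.default_rng(0)
hdi = rng.beta(5, 2, size=180)
hdi_augmented = np.clip(hdi + rng.normal(0.02, 0.05, 180), 0, 1)
print(fosd(hdi_augmented, hdi))
```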

    Computing a Finite Size Representation of the Set of Approximate Solutions of an MOP

    Recently, a framework for approximating the entire set of $\epsilon$-efficient solutions (denoted by $E_\epsilon$) of a multi-objective optimization problem (MOP) with stochastic search algorithms has been proposed. It was proven that such an algorithm produces, under mild assumptions on the process used to generate new candidate solutions, a sequence of archives which converges to $E_\epsilon$ in the limit and in the probabilistic sense. This result, though satisfactory for most discrete MOPs, is not sufficient for continuous models from the practical viewpoint: in this case the set of approximate solutions typically forms an $n$-dimensional object, where $n$ denotes the dimension of the parameter space, and thus performance problems may arise since in practice one has to cope with a finite archive. Here we focus on obtaining finite and tight approximations of $E_\epsilon$, the latter measured by the Hausdorff distance. We propose and investigate a novel archiving strategy theoretically and empirically. For this, we analyze the convergence behavior of the algorithm, yielding bounds on the obtained approximation quality as well as on the cardinality of the resulting approximation, and present some numerical results.
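    A minimal sketch of the flavor of archiving involved, for a minimization MOP; the acceptance rule below is generic $\epsilon$-dominance filtering, not the particular update strategy analyzed in the paper:

```python
import numpy as np

def eps_dominates(f, g, eps):
    """f epsilon-dominates g (minimization): f - eps <= g componentwise,
    with strict inequality in at least one component."""
    return np.all(f - eps <= g) and np.any(f - eps < g)

def update_archive(archive, candidate, eps):
    """Maintain a finite archive of mutually non-eps-dominated vectors."""
    if any(eps_dominates(a, candidate, eps) for a in archive):
        return archive  # candidate adds nothing within tolerance eps
    # drop archive members that the candidate eps-dominates, then add it
    archive = [a for a in archive if not eps_dominates(candidate, a, eps)]
    archive.append(candidate)
    return archive

eps = np.array([0.1, 0.1])
archive = []
rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.uniform(0, 1, size=2)
    fx = np.array([x[0], 1 - x[0] + 0.2 * x[1]])   # toy bi-objective values
    archive = update_archive(archive, fx, eps)
print(len(archive), "archive entries")
```

    The tolerance vector eps caps the archive's cardinality, which is what makes a finite representation of the $n$-dimensional set $E_\epsilon$ attainable in practice.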

    Domination and Decomposition in Multiobjective Programming

    During the last few decades, multiobjective programming has received much attention both for its numerous theoretical advances and for its continued success in modeling and solving real-life decision problems in business and engineering. Extending the traditionally adopted concept of Pareto optimality, this research investigates the more general notion of domination and establishes various theoretical results that lead to new optimization methods and support decision making. After a preparatory discussion of some preliminaries and a review of the relevant literature, several new findings are presented that characterize the nondominated set of a general vector optimization problem for which the underlying domination structure is defined in terms of different cones. Using concepts from linear algebra and convex analysis, a well-known result relating nondominated points for polyhedral cones to Pareto solutions is generalized to nonpolyhedral cones that are induced by positively homogeneous functions, and to translated polyhedral cones that are used to describe a notion of approximate nondominance. Pareto-oriented scalarization methods are modified and several new solution approaches are proposed for these two classes of cones. In addition, necessary and sufficient conditions for nondominance with respect to a variable domination cone are developed, and some more specific results for the case of Bishop-Phelps cones are derived. Based on these findings, a decomposition framework is proposed for the solution of multi-scenario and large-scale multiobjective programs and analyzed in terms of the efficiency relationships between the original and the decomposed subproblems. Using the concept of approximate nondominance, an interactive decision making procedure is formulated to coordinate tradeoffs between these subproblems and is applied to selected problems from portfolio optimization and engineering design. Introductory remarks and concluding comments, together with ideas and directions for possible future work, complete this dissertation.
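    For a polyhedral ordering cone $C = \{d : A d \ge 0\}$, the basic domination check underlying such characterizations can be sketched as below (an illustration only; Pareto optimality corresponds to $A = I$, and the dissertation's nonpolyhedral and variable-cone cases need different machinery):

```python
import numpy as np

def dominates(y, x, A, tol=1e-12):
    """y dominates x w.r.t. the polyhedral cone C = {d : A d >= 0},
    i.e. x - y lies in C and x != y (minimization convention)."""
    d = np.asarray(x) - np.asarray(y)
    return np.all(A @ d >= -tol) and np.linalg.norm(d) > tol

def nondominated(points, A):
    """Return the points not dominated by any other point in the set."""
    pts = [np.asarray(p, dtype=float) for p in points]
    return [p for p in pts
            if not any(dominates(q, p, A) for q in pts if q is not p)]

A_pareto = np.eye(2)                          # componentwise (Pareto) order
A_wider = np.array([[1.0, 0.5], [0.5, 1.0]])  # larger cone: fewer nondominated points
pts = np.random.default_rng(2).uniform(0, 1, size=(50, 2))
print(len(nondominated(pts, A_pareto)), len(nondominated(pts, A_wider)))
```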

    Inequity-averse decisions in operational research

    This thesis concerns inequity-averse decisions in operational research (OR) and draws on concepts from economics and OR such as multi-criteria decision making (MCDM) and mathematical modelling. The main focus of the study is developing systematic methods and models to help decision makers (DMs) in situations where equity concerns are important. We draw on insights from the economics literature and base our methods on some of the widely accepted principles in this area. We discuss two equity-related concerns, namely equitability and balance, which are distinguished by whether anonymity holds. We review applications involving these concerns and discuss alternative ways to incorporate them into OR models. We point out some future research directions, especially in using MCDM concepts in this context; specifically, we observe that research is needed to design interactive decision support systems. Motivated by this observation, we study an MCDM approach to equitability. Our interactive approach uses holistic judgements of the DM to refine the ranking of an explicitly given (discrete) set of alternatives. The DM is assumed to have a rational preference relation satisfying two additional equity-related axioms, namely anonymity and the Pigou-Dalton principle of transfers. We provide theoretical results that help us handle the computational difficulties due to the anonymity property. We illustrate our approach by designing an interactive ranking algorithm and provide computational results to demonstrate its feasibility. We then consider balance concerns in resource allocation settings. Balance concerns arise when the DM wants to ensure justice over entities whose identities might affect the decision. We propose a bi-criteria modelling approach with efficiency-related (quantified by the total output) and balance-related (quantified by imbalance indicators) criteria. We solve the models using optimization and heuristic algorithms, and our extensive computational experiments show the satisfactory behaviour of our algorithms.
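    One standard way to operationalize anonymity together with the Pigou-Dalton principle is generalized Lorenz dominance over sorted cumulative sums. The sketch below illustrates that idea under those assumptions; it is not the thesis's ranking algorithm:

```python
import numpy as np

def generalized_lorenz(allocation):
    """Sort ascending and take cumulative sums; anonymity holds because
    sorting discards the identities of the entities."""
    return np.cumsum(np.sort(np.asarray(allocation, dtype=float)))

def equitably_preferred(a, b):
    """a is equitably at least as good as b if its generalized Lorenz
    vector is componentwise >= that of b; this is consistent with
    Pigou-Dalton transfers from a better-off to a worse-off entity."""
    return np.all(generalized_lorenz(a) >= generalized_lorenz(b))

before = [10, 2]   # unequal allocation
after = [7, 5]     # same total after a Pigou-Dalton transfer
print(equitably_preferred(after, before))  # True
print(equitably_preferred(before, after))  # False
```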

    A new adaptive algorithm for convex quadratic multicriteria optimization

    We present a new adaptive algorithm for convex quadratic multicriteria optimization. The algorithm adaptively refines the approximation to the set of efficient points by way of a warm-start interior-point scalarization approach. Numerical results show that this technique is faster than a standard method used for this problem.
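    A loose sketch of such a scalarization sweep on an unconstrained toy problem; the paper's method warm-starts an interior-point solver and refines adaptively, which this illustration only mimics by reusing the previous minimizer as the next starting point:

```python
import numpy as np
from scipy.optimize import minimize

# Two convex quadratic objectives f_i(x) = 0.5 x^T Q_i x + c_i^T x (toy data)
Q1, c1 = np.array([[2.0, 0.0], [0.0, 1.0]]), np.array([-1.0, 0.0])
Q2, c2 = np.array([[1.0, 0.0], [0.0, 2.0]]), np.array([0.0, -1.0])

def scalarized(x, lam):
    f1 = 0.5 * x @ Q1 @ x + c1 @ x
    f2 = 0.5 * x @ Q2 @ x + c2 @ x
    return lam * f1 + (1.0 - lam) * f2

efficient_points = []
x0 = np.zeros(2)
for lam in np.linspace(0.01, 0.99, 25):
    res = minimize(scalarized, x0, args=(lam,))
    efficient_points.append(res.x)
    x0 = res.x   # "warm start": reuse the previous minimizer
print(np.round(efficient_points[::6], 3))
```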

    Integrated microcantilever fluid sensor as a blood coagulometer

    The work presented concerns improving the ratio of mechanical to thermal signal of a microcantilever fluid probe for monitoring patient prothrombin time (PT) and international normalized ratio (INR) based on the physical measurement of the clotting cascade. The current device overcomes hydrodynamic damping limitations by providing an internal thermal actuation force and is realised as a disposable sensor using an integrated piezoresistive deflection measurement. Unfortunately, the piezoresistor is sensitive to thermal changes, and in the current design the signal is saturated by the thermal actuation. Overcoming this problem is critical for demonstrating a blood coagulometer and, in the wider field, a microsensor capable of simultaneously monitoring rheological and thermal properties of microlitre samples. Thermal, electrical, and mechanical testing of a new design indicates a significant reduction in thermal crosstalk and has led to a breakthrough in distinguishing the mechanical signal when the device is operated in moderately viscous fluids (2-3 cP). A clinical evaluation was conducted at The Royal London Hospital to measure the accuracy and precision of the improved microcantilever fluid probe. The correlation against the standard laboratory analyser INR, over a wide range of patient clotting times (INR 0.9-6.08), is 0.987 (n = 87), and the precision of the device, measured as the percentage coefficient of variation and excluding patient samples tested fewer than 3 times, is 4.00% (n = 64). The accuracy and precision are comparable to those of currently available point-of-care PT/INR devices. The response of the fluid probe in glycerol solutions indicates the potential for simultaneous measurement of rheological and thermal properties, though further work is required to establish the accuracy and range of the device as a MEMS-based viscometer.
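    For reference, the two reported figures of merit can be computed as follows; this is a generic sketch with made-up numbers, not the clinical dataset:

```python
import numpy as np

def percent_cv(replicates):
    """Percentage coefficient of variation of repeated INR measurements."""
    replicates = np.asarray(replicates, dtype=float)
    return 100.0 * replicates.std(ddof=1) / replicates.mean()

# Hypothetical paired measurements: device INR vs. laboratory analyser INR
device = np.array([1.0, 1.9, 2.7, 3.6, 5.8])
lab = np.array([0.9, 2.0, 2.6, 3.5, 6.0])
correlation = np.corrcoef(device, lab)[0, 1]   # study reports r = 0.987 (n = 87)
print(f"r = {correlation:.3f}")

# %CV across >= 3 replicate tests of one sample; study reports 4.00% (n = 64)
print(f"CV = {percent_cv([2.5, 2.6, 2.4, 2.55]):.2f}%")
```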