Mean-Variance and Expected Utility: The Borch Paradox
The model of rational decision-making in most of economics and statistics is
expected utility theory (EU) axiomatised by von Neumann and Morgenstern, Savage
and others. This is less the case, however, in financial economics and
mathematical finance, where investment decisions are commonly based on the
methods of mean-variance (MV) introduced in the 1950s by Markowitz. Under the
MV framework, each available investment opportunity ("asset") or portfolio is
represented in just two dimensions by the ex ante mean and standard deviation
of the financial return anticipated from that investment.
Utility adherents consider that, in general, MV methods are logically incoherent.
Most famously, Norwegian insurance theorist Borch presented a proof suggesting
that two-dimensional MV indifference curves cannot represent the preferences of
a rational investor (he claimed that MV indifference curves "do not exist").
This is known as Borch's paradox and gave rise to an important but generally
little-known philosophical literature relating MV to EU. We examine the main
early contributions to this literature, focussing on Borch's logic and the
arguments by which it has been set aside.
Comment: Published at http://dx.doi.org/10.1214/12-STS408 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
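To see the shape of Borch's argument, the sketch below takes two (mean, standard deviation) pairs that an MV investor might place on a single indifference curve and builds two-outcome gambles with exactly those moments, in which the second gamble pays at least as much as the first in every state. A nonsatiated investor must then strictly prefer the second, contradicting the supposed indifference. This is only an illustrative reconstruction in Python; the particular numbers and the state probability p are our choices, not Borch's.

```python
import math

def two_point_gamble(mu, sigma, p):
    """Two-outcome gamble with mean mu and standard deviation sigma:
    pays `hi` with probability p and `lo` with probability 1 - p."""
    hi = mu + sigma * math.sqrt((1 - p) / p)
    lo = mu - sigma * math.sqrt(p / (1 - p))
    return hi, lo

# Two prospects an MV investor might treat as indifferent
# (illustrative numbers, not from the paper).
mu1, sigma1 = 0.05, 0.10
mu2, sigma2 = 0.09, 0.12

p = 0.1  # chosen small enough for the dominance below to hold
hi1, lo1 = two_point_gamble(mu1, sigma1, p)
hi2, lo2 = two_point_gamble(mu2, sigma2, p)

# Coupled on the same binary state, gamble 2 pays at least as much as
# gamble 1 in both states, so every nonsatiated investor strictly
# prefers it -- yet MV indifference treats the two as equivalent.
print(f"state 'hi': {hi2:.4f} >= {hi1:.4f}: {hi2 >= hi1}")
print(f"state 'lo': {lo2:.4f} >= {lo1:.4f}: {lo2 >= lo1}")
```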
EASe: integrating search with learned episodes
Weak methods are insufficient to solve complex problems. Constrained weak methods, like hill-climbing, search too little of the problem space. Unconstrained weak methods, like breadth-first search, are intractable. Fortunately, by integrating multiple weak methods, more powerful problem solvers can be created. We demonstrate that augmenting a constrained weak search method with episodes provides a tractable method for solving a large class of problems. We demonstrate that these episodes can be generated using an unconstrained weak method while solving simple problems from a domain. We provide an analytical model of our approach and empirical results from the logic-synthesis domain of VLSI design as well as the classic tile-sliding domain.
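The integration the abstract describes can be pictured with a small sketch: a hill-climber that, before each greedy step, consults a library of episodes, i.e. action sequences remembered from solving simpler problems. This is our illustration of the general idea, not the authors' EASe system; the `successors`, `heuristic`, and episode-table interfaces are assumptions.

```python
def hill_climb_with_episodes(start, goal, successors, heuristic, episodes,
                             max_steps=1000):
    """Greedy hill-climbing that may finish via a learned episode.

    episodes: dict mapping a state to a state sequence known to lead
    from that state to the goal (learned, e.g., by breadth-first
    search on simpler problems from the same domain)."""
    state, path = start, [start]
    for _ in range(max_steps):
        if state == goal:
            return path
        if state in episodes:              # a remembered episode finishes the job
            return path + episodes[state]
        # otherwise take the ordinary constrained (greedy) step
        nxt = min(successors(state), key=heuristic, default=None)
        if nxt is None or heuristic(nxt) >= heuristic(state):
            return None                    # stuck on a plateau or local minimum
        state = nxt
        path.append(state)
    return None

# Toy demo on a number line: the goal is 0, and one episode remembers
# the route from 3 down to 0.
succ = lambda n: [n - 1, n + 1]
h = lambda n: abs(n)
eps = {3: [2, 1, 0]}
print(hill_climb_with_episodes(7, 0, succ, h, eps))  # [7, 6, 5, 4, 3, 2, 1, 0]
```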
Comparing instance-averaging with instance-saving learning algorithms
The goal of our research is to understand the power and appropriateness of instance-based representations and their associated acquisition methods. This paper concerns two methods for reducing storage requirements for instance-based learning algorithms. The first method, termed instance-saving, represents concept descriptions by selecting and storing a representative subset of the given training instances. We provide an analysis for instance-saving techniques and specify one general class of concepts that instance-saving algorithms are capable of learning. The second method, termed instance-averaging, represents concept descriptions by averaging together some training instances while simply saving others. We describe why analyses for instance-averaging algorithms are difficult to produce. Our empirical results indicate that storage requirements for these two methods are roughly equivalent. We outline the assumptions of instance-averaging algorithms and describe how their violation might degrade performance. To mitigate the effects of non-convex concepts, a dynamic thresholding technique is introduced and applied in both the averaging and non-averaging learning algorithms. Thresholding increases storage requirements but also improves the quality of the resulting concept descriptions.
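A minimal sketch of the two storage policies, under our own simplified reading of them: instance-saving keeps a training instance only when the current store misclassifies it, while instance-averaging additionally folds correctly classified instances into the nearest stored prototype by averaging. The exact algorithms analysed in the paper may differ.

```python
import math

def train(stream, averaging=False):
    """Single pass over (point, label) pairs, keeping a 1-NN store.

    Instance-saving: store an instance only if the current store
    misclassifies it.  Instance-averaging: when the nearest stored
    instance already has the right label, merge the new instance into
    it by averaging coordinates instead of discarding it."""
    store = []                                   # entries: [point, label, count]
    for x, y in stream:
        if not store:
            store.append([list(x), y, 1])
            continue
        s = min(store, key=lambda e: math.dist(e[0], x))
        if s[1] != y:                            # misclassified: save it
            store.append([list(x), y, 1])
        elif averaging:                          # correct: fold into prototype
            s[2] += 1
            s[0] = [(c * (s[2] - 1) + xi) / s[2] for c, xi in zip(s[0], x)]
    return store

# Both policies end up storing the same number of instances here,
# echoing the abstract's finding that storage costs are comparable.
data = [((0, 0), 'a'), ((0.2, 0.1), 'a'), ((1, 1), 'b'), ((0.9, 1.1), 'b')]
print(len(train(data)), len(train(data, averaging=True)))  # 2 2
```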
Instance-based prediction of real-valued attributes
Instance-based representations have been applied to numerous classification tasks with a fair amount of success. These tasks predict a symbolic class based on observed attributes. This paper presents a method for predicting a numeric value based on observed attributes. We prove that if the numeric values are generated by continuous functions with bounded slope, then the predicted values are accurate approximations of the actual values. We demonstrate the utility of this approach by comparing it with standard approaches to value prediction. The approach requires no background knowledge.
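A minimal sketch of this style of numeric prediction: distance-weighted k-nearest-neighbour interpolation over stored (point, value) examples. The inverse-distance weighting and the choice of k are our assumptions, not necessarily the paper's method.

```python
import math

def knn_predict(examples, query, k=3):
    """Predict a numeric value for `query` from stored (point, value)
    examples: inverse-distance-weighted mean of the k nearest ones."""
    nearest = sorted(examples, key=lambda e: math.dist(e[0], query))[:k]
    if math.dist(nearest[0][0], query) == 0:     # exact match already stored
        return nearest[0][1]
    weights = [1 / math.dist(p, query) for p, _ in nearest]
    return sum(w * v for w, (_, v) in zip(weights, nearest)) / sum(weights)

# Toy demo: samples of f(x, y) = x + 2*y, a bounded-slope function.
pts = [((x, y), x + 2 * y) for x in range(4) for y in range(4)]
print(round(knn_predict(pts, (1.5, 2.2)), 2))    # close to 1.5 + 4.4 = 5.9
```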
Detecting and removing noisy instances from concept descriptions
Several published results show that instance-based learning algorithms record high classification accuracies and low storage requirements when applied to supervised learning tasks. However, these learning algorithms are highly sensitive to training-set noise. This paper describes a simple extension of instance-based learning algorithms for detecting and removing noisy instances from concept descriptions. The extension requires saved instances to demonstrate significantly good classification performance before they may be used in subsequent classification tasks. We show that with this extension performance degrades more slowly in the presence of noise, classification accuracies improve, and storage requirements are further reduced in several artificial and real-world databases.
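In the spirit of the described extension, the sketch below keeps a saved instance only while the lower bound of a confidence interval on its classification accuracy exceeds the upper bound of the interval on its class's observed base frequency. The Wilson-style interval and this particular acceptance test are assumptions on our part, not necessarily the paper's exact criterion.

```python
import math

def ci(successes, trials, z=0.9):
    """Approximate (Wilson-style) confidence interval for a proportion."""
    if trials == 0:
        return 0.0, 1.0
    p = successes / trials
    half = z * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials))
    denom = 1 + z * z / trials
    center = p + z * z / (2 * trials)
    return (center - half) / denom, (center + half) / denom

def acceptable(record_hits, record_tries, class_freq, class_n):
    """Keep an instance only if the lower bound of its accuracy interval
    exceeds the upper bound of its class's base-frequency interval."""
    acc_lo, _ = ci(record_hits, record_tries)
    _, freq_hi = ci(round(class_freq * class_n), class_n)
    return acc_lo > freq_hi

# An instance right 9 times out of 10 beats a 50%-frequent class:
print(acceptable(9, 10, class_freq=0.5, class_n=100))  # True
```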
Data structures for retrieval on integer grids
A family of data structures is presented for retrieval of the sum of the values of points within a half-plane or polygon, given that the points lie on integer coordinates in the plane. Fredman has shown that the problem has a lower bound of Ω(N^{2/3}) for intermixed updates and retrievals. Willard has shown an upper bound of O(N^{log_6 4}) ≈ O(N^{0.774}) for the case where the points are not restricted to integer coordinates. We have developed families of related data structures for retrievals of half-planes or polygons. One of the data structures permits intermixed updates and half-plane retrievals in O(N^{2/3} log N) time, where N is the size of the grid. We use a technique we call "rotation" to permit a better match of a portion of the data structure to the particular problem. Rotations appear to be an effective method for trading off storage redundancy against retrieval time for certain classes of problems.
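The problem itself is easy to state in code. The naive baseline below keeps one prefix-sum array per grid column, so both point updates and half-plane sum queries cost O(N); it illustrates the query semantics only, not the paper's O(N^{2/3} log N) structures or the rotation technique.

```python
import math

class GridHalfPlane:
    """Naive baseline for half-plane sum retrieval on an N x N integer
    grid: one prefix-sum array per column.  Updates and queries each
    cost O(N), versus the O(N^{2/3} log N) structures in the paper."""

    def __init__(self, n):
        self.n = n
        self.w = [[0.0] * n for _ in range(n)]          # w[x][y]: weight at (x, y)
        self.pre = [[0.0] * (n + 1) for _ in range(n)]  # pre[x][k] = sum w[x][0:k]

    def update(self, x, y, delta):
        self.w[x][y] += delta
        for yy in range(y, self.n):          # rebuild column x from row y upward
            self.pre[x][yy + 1] = self.pre[x][yy] + self.w[x][yy]

    def query(self, a, b):
        """Sum of the weights of grid points (x, y) with y <= a*x + b."""
        total = 0.0
        for x in range(self.n):
            cutoff = min(self.n - 1, math.floor(a * x + b))
            if cutoff >= 0:
                total += self.pre[x][cutoff + 1]
        return total

g = GridHalfPlane(4)
g.update(1, 1, 5.0)
g.update(2, 3, 2.0)
print(g.query(1.0, 0.0))   # half-plane y <= x contains (1, 1) only: 5.0
```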
Robust Assignments via Ear Decompositions and Randomized Rounding
Many real-life planning problems require making a priori decisions before all parameters of the problem have been revealed. An important special case of such problems arises in scheduling, where a set of tasks needs to be assigned to an available set of machines or personnel (resources) in such a way that every task is assigned a resource and no two tasks share the same resource. In its nominal form, the resulting computational problem becomes the \emph{assignment problem} on general bipartite graphs.
This paper deals with a robust variant of the assignment problem modeling
situations where certain edges in the corresponding graph are \emph{vulnerable}
and may become unavailable after a solution has been chosen. The goal is to
choose a minimum-cost collection of edges such that if any vulnerable edge
becomes unavailable, the remaining part of the solution contains an assignment
of all tasks.
We present approximation results and hardness proofs for this type of problem, and establish several connections to well-known concepts from matching theory, robust optimization, and LP-based techniques.
Comment: Full version of ICALP 2016 paper.
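To pin down the problem statement, here is a brute-force sketch: it searches for a minimum-cost edge set that still admits a full assignment of tasks after the loss of any single vulnerable edge. It is exponential and purely illustrative; the paper's ear-decomposition and randomized-rounding algorithms are not reproduced here, and the tiny instance is our own.

```python
from itertools import combinations

def has_assignment(tasks, edges):
    """True if every task can get a distinct resource using `edges`
    (Kuhn's augmenting-path bipartite matching)."""
    adj = {t: [r for (tt, r) in edges if tt == t] for t in tasks}
    match = {}                                   # resource -> task

    def try_task(t, seen):
        for r in adj[t]:
            if r not in seen:
                seen.add(r)
                if r not in match or try_task(match[r], seen):
                    match[r] = t
                    return True
        return False

    return all(try_task(t, set()) for t in tasks)

def robust_assignment(tasks, edges, cost, vulnerable):
    """Brute-force minimum-cost edge set that still assigns all tasks
    after the loss of any single vulnerable edge."""
    best, best_cost = None, float("inf")
    for k in range(len(tasks), len(edges) + 1):
        for sub in combinations(edges, k):
            ok = has_assignment(tasks, sub) and all(
                has_assignment(tasks, [e for e in sub if e != v])
                for v in vulnerable if v in sub)
            c = sum(cost[e] for e in sub)
            if ok and c < best_cost:
                best, best_cost = sub, c
    return best, best_cost

tasks = ["t1", "t2"]
edges = [("t1", "r1"), ("t1", "r2"), ("t2", "r2"), ("t2", "r3")]
cost = {e: 1 for e in edges}
# Cheapest robust solution avoids relying on the vulnerable edge:
print(robust_assignment(tasks, edges, cost, vulnerable={("t2", "r2")}))
```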