Investment Model Uncertainty and Fair Pricing
Modern investment theory takes it for granted that a Security Market Line (SML) is as certain as its "corresponding" Capital Market Line (CML). However, it can easily be demonstrated that this is not the case. Knightian, non-probabilistic, information-gap uncertainty exists in the security markets, as the bivariate "Galton's Error" and its concomitant information gap prove (Journal of Banking & Finance, 23, 1999, 1793-1829). In fact, an SML graph needs (at least) two parallel horizontal beta axes, implying that a particular mean security return corresponds with a limited Knightian uncertainty range of betas, although it corresponds with only one market portfolio risk volatility. This implies that a security's risk premium is uncertain and that a Knightian uncertainty range of SMLs, and thus of fair pricing, exists. This paper both updates the empirical evidence and graphically traces the financial market consequences of this model uncertainty for modern investment theory. First, any investment knowledge about a security's risk remains uncertain. Investment valuations carry with them epistemological ("modeling") risk in addition to the Markowitz-Sharpe market risk. Second, since idiosyncratic, or firm-specific, risk is limited-uncertain, the real option value of a firm is also limited-uncertain. This explains the simultaneous coexistence of different analyst valuations of investment projects, particular firms or industries, including a category "undecided." Third, we can now distinguish between "buy", "sell" and "hold" trading orders based on an empirically determined collection of SMLs reflecting this Knightian modeling risk. The coexistence of such simultaneous value signals for the same security is necessary for the existence of a market for that security! Without epistemological investment uncertainty, no ongoing markets for securities could exist.
In the absence of transaction costs and other inefficiencies, Knightian uncertainty is the necessary energy for market trading, since it creates potential or perceived arbitrage (= trading) opportunities, but it is also necessary for investors to hold securities. Knightian uncertainty provides a possible reason why the SEC cannot obtain consensus on what constitutes "fair pricing." The paper also shows that Malkiel's recommended CML-based investments are extremely conservative and non-robust.
Keywords: capital market line, security market line, beta, investments, decision-making, Knightian uncertainty, robustness, information-gap, Galton's Error, real option value
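A hedged sketch of the abstract's core idea: under the standard CAPM security-market-line formula, carrying an interval of betas (rather than a point estimate) yields a band of fair expected returns, and "buy", "sell" and "hold" signals can coexist depending on where a forecast falls relative to that band. All numbers and the signal rule below are illustrative assumptions, not the paper's empirical estimates.

```python
# Illustrative sketch: CAPM security market line with a Knightian
# uncertainty range of betas. All parameter values are assumptions.

def sml_return(beta, rf, market_premium):
    """Expected return on the security market line:
    E[R] = rf + beta * (E[Rm] - rf)."""
    return rf + beta * market_premium

rf = 0.03              # risk-free rate (assumed)
market_premium = 0.06  # E[Rm] - rf (assumed)

# Instead of a single point estimate, carry a beta interval
# [beta_lo, beta_hi] reflecting epistemic (modeling) uncertainty.
beta_lo, beta_hi = 0.9, 1.3
fair_lo = sml_return(beta_lo, rf, market_premium)  # 0.084
fair_hi = sml_return(beta_hi, rf, market_premium)  # 0.108

def trade_signal(expected_return, lo, hi):
    """Map a forecast return against the fair-pricing band:
    above the band -> 'buy', below -> 'sell', inside -> 'hold'."""
    if expected_return > hi:
        return "buy"
    if expected_return < lo:
        return "sell"
    return "hold"

print(trade_signal(0.12, fair_lo, fair_hi))  # buy
print(trade_signal(0.09, fair_lo, fair_hi))  # hold
```

Two investors with different forecasts inside versus above the band reach different, equally defensible signals, which is the coexistence of value signals the abstract argues is necessary for trading.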
The development of a fuzzy expert system to help top decision makers in political and investment domains
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University. The world's increasing interconnectedness and the recent increase in the number of notable regional and international events pose greater and greater challenges for political decision-making, especially the decision to strengthen bilateral economic relationships between friendly nations. Typically, such critical decisions are influenced by factors and variables that are based on heterogeneous and vague information existing in different domains. A serious problem the decision-maker faces is the difficulty of building efficient political decision support systems (DSS) with heterogeneous factors. One must take many factors into account, for example, language (natural or human language), the availability, or lack thereof, of precise data (vague information), and possible consequences (rule conclusions).
The basic concept is a linguistic variable whose values are words rather than numbers and are therefore closer to human intuition. A common language is thus needed to describe such information, and interpreting it requires human knowledge. To achieve robust and efficient interpretation, we need a method that can generate high-level knowledge and integrate information. Fuzzy logic is based on natural language and is tolerant of imprecise data; this ability to handle imprecision makes it well suited to the problem.
In this thesis, we propose to use ontology to integrate the scattered information resources from the political and investment domains. The process started with understanding each concept and extracting key ideas and relationships between sets of information by constructing an object-paradigm ontology. Re-engineering according to the object paradigm (OP) improved the quality of the developed ontology, since its conceptualization yields more expressive, reusable object and temporal ontologies. Fuzzy logic was then integrated with the ontology, and a fuzzy membership value that reflects the strength of an inter-concept relationship was used consistently to represent pairs of concepts across the ontology.
Each concept is assigned a fixed numerical value representing the concept's consistency. Concept consistency is computed as a function of the strength of all the relationships associated with the concept. Fuzzy expert systems enable one to weigh the consequences (rule conclusions) of certain choices based on vague information. Rule conclusions follow from rules composed of two parts: the if antecedent (input) and the then consequent (output). With fuzzy expert systems, one uses fuzzy logic toolbox graphical user interface (GUI) tools to build a fuzzy inference system (FIS) to aid decision-making. This research comprises four main phases to develop a prototype architecture for an intelligent DSS that can help top political decision makers.
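The if-then fuzzy inference described above can be sketched in a few lines of plain Python. The linguistic variables, membership shapes and rule outputs below are hypothetical stand-ins (the thesis builds its FIS with fuzzy logic toolbox GUI tools); this only shows the mechanics of rules firing on vague inputs and producing a crisp output.

```python
# Hedged sketch of Mamdani-style fuzzy rules with a weighted-average
# defuzzification. Variable names and rule constants are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def recommend(relationship_strength, risk_level):
    """Combine two if-then rules on inputs in [0, 1] into a crisp
    recommendation score in [0, 1]."""
    # Rule 1: IF relationship is strong AND risk is low THEN invest high (0.9)
    strong = tri(relationship_strength, 0.4, 1.0, 1.6)
    low_risk = tri(risk_level, -0.6, 0.0, 0.6)
    w1 = min(strong, low_risk)          # fuzzy AND = min
    # Rule 2: IF relationship is weak OR risk is high THEN invest low (0.1)
    weak = tri(relationship_strength, -0.6, 0.0, 0.6)
    high_risk = tri(risk_level, 0.4, 1.0, 1.6)
    w2 = max(weak, high_risk)           # fuzzy OR = max
    if w1 + w2 == 0:
        return 0.5                      # no rule fires: neutral
    # Weighted average of the rule conclusions (Sugeno-like).
    return (w1 * 0.9 + w2 * 0.1) / (w1 + w2)

print(recommend(0.9, 0.2))  # high score: strengthen the relationship
print(recommend(0.1, 0.9))  # low score: do not
```

The point is that imprecise inputs ("fairly strong relationship, low risk") still yield a usable crisp output, which is exactly the tolerance for vague information the thesis relies on.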
On the value of context awareness for relay activation in beyond 5G radio access networks
This paper envisions augmenting the Radio Access Network (RAN) infrastructure in Beyond 5G (B5G) systems by exploiting the relaying capabilities of user equipment (UE) to improve coverage, capacity and robustness. Although the concept and enabling technologies have been in place for some time, their efficient realization requires the conception and development of new features in B5G systems. Among them, this paper focuses on the Relay UE (RUE) activation decision making, in charge of deciding where and when a UE is suitable to be activated to relay traffic from other UEs. Specifically, the paper analyses seven RUE activation strategies that differ in the criteria and the type of context information considered for this decision-making problem. The strategies are evaluated through system-level simulations in a realistic urban scenario with the objective of assessing the value of each type of context information. Results reveal that the most efficient strategies from the perspective of outage-probability reduction are those that account for the number of UEs that would be served by a RUE based on the experienced spectral efficiency. This paper is part of the ARTIST project (ref. PID2020-115104RB-I00) funded by MCIN/AEI/10.13039/501100011033.
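The best-performing criterion reported in the abstract can be caricatured as follows; the function, threshold and data are hypothetical illustrations of the idea (count how many UEs a candidate relay would usefully serve), not the paper's actual strategy definitions.

```python
# Hedged sketch: pick the candidate relay UE that would serve the most
# UEs with acceptable spectral efficiency. Threshold and inputs are
# illustrative assumptions.

def best_relay(candidates, se_min=1.0):
    """candidates: dict mapping a relay-UE id to the list of achievable
    spectral efficiencies (bit/s/Hz) toward nearby UEs in outage.
    Returns the relay serving the most UEs at or above se_min,
    or None if no candidate helps anyone."""
    best, best_count = None, 0
    for rue, se_list in candidates.items():
        count = sum(1 for se in se_list if se >= se_min)
        if count > best_count:
            best, best_count = rue, count
    return best

candidates = {
    "ue7": [2.1, 0.4, 1.6],   # would serve 2 UEs acceptably
    "ue12": [0.8, 0.9],       # would serve none
}
print(best_relay(candidates))  # ue7
```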
High-dimensional A-learning for optimal dynamic treatment regimes
Precision medicine is a medical paradigm that focuses on finding the most effective treatment decision based on individual patient information. For many complex diseases, such as cancer, treatment decisions need to be tailored over time according to patients' responses to previous treatments. Such an adaptive strategy is referred to as a dynamic treatment regime. A major challenge in deriving an optimal dynamic treatment regime arises when an extraordinarily large number of prognostic factors, such as a patient's genetic information, demographic characteristics, medical history and clinical measurements over time, are available, but not all of them are necessary for making the treatment decision. This makes variable selection an emerging need in precision medicine. In this paper, we propose a penalized multi-stage A-learning approach for deriving the optimal dynamic treatment regime when the number of covariates is of nonpolynomial (NP) order of the sample size. To preserve the double robustness property of the A-learning method, we adopt the Dantzig selector, which directly penalizes the A-learning estimating equations. Oracle inequalities for the proposed estimators of the parameters in the optimal dynamic treatment regime, and error bounds on the difference between the value functions of the estimated and true optimal dynamic treatment regimes, are established. The empirical performance of the proposed approach is evaluated by simulations and illustrated with an application to data from the STAR*D study.
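Whatever the estimation machinery, the decision rule such an analysis produces at each stage has a simple form: treat when the estimated treatment contrast is positive. The sketch below illustrates that final step with a hard-coded, artificially sparse coefficient vector; it does not implement the paper's penalized A-learning or the Dantzig selector.

```python
# Hedged sketch of the stage-wise decision rule d(x) = 1{ x . psi > 0 }
# produced by an A-learning-style analysis. psi_hat is a hypothetical
# sparse estimate, the kind variable selection is meant to reveal.

def recommend_treatment(x, psi):
    """Recommend treatment (1) when the estimated contrast between
    treating and not treating, x . psi, is positive; else 0."""
    contrast = sum(xi * pi for xi, pi in zip(x, psi))
    return 1 if contrast > 0 else 0

# Only 2 of many covariates carry nonzero coefficients.
psi_hat = [0.0, 1.2, 0.0, -0.8, 0.0]
patient = [1.0, 0.5, 3.2, 0.1, 7.0]  # hypothetical covariate vector
print(recommend_treatment(patient, psi_hat))  # 1
```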
Quantitative Risk-Based Analysis for Military Counterterrorism Systems
The article of record as published may be found at http://dx.doi.org/10.1002/sys
This paper presents a realistic and practical approach to quantitatively assess the risk-reduction capabilities of military counterterrorism systems in terms of damage cost and casualty figures. The comparison of alternatives is thereby based on absolute quantities rather than an aggregated utility or value provided by multicriteria decision analysis methods. The key elements of the approach are (1) the use of decision-attack event trees for modeling and analyzing scenarios, (2) a portfolio model approach for analyzing multiple threats, and (3) the quantitative probabilistic risk assessment matrix for communicating the results. Decision-attack event trees are especially appropriate for modeling and analyzing terrorist attacks where the sequence of events and outcomes is time-sensitive. The actions of the attackers and the defenders are modeled as decisions, and the outcomes are modeled as probabilistic events. The quantitative probabilistic risk assessment matrix provides information about the range of the possible outcomes while retaining the simplicity of the classic safety risk assessment matrix based on Mil-Std-882D. It therefore provides a simple and reliable tool for comparing alternatives on the basis of risk, including confidence levels rather than single point estimates. This additional valuable information requires minimal additional effort. The proposed approach is illustrated using a simplified but realistic model of a destroyer operating in inland restricted waters. The complex problem of choosing a robust counterterrorism protection system against multiple terrorist threats is analyzed by introducing a surrogate multi-threat portfolio. The associated risk profile provides a practical approach for assessing the robustness of different counterterrorism systems against plausible terrorist threats. The paper documents the analysis for a hypothetical case of three potential threats. This work was performed as part of the Naval Postgraduate School institutionally funded research.
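The evaluation of one branch of such a decision-attack event tree reduces to probability-weighted sums of absolute outcome quantities. The sketch below shows that arithmetic with invented numbers; it is not the paper's destroyer model, and the branch probabilities, costs and casualty figures are illustrative only.

```python
# Hedged sketch: evaluate one defender alternative in a decision-attack
# event tree by expected damage cost and expected casualties (absolute
# quantities, not an aggregated utility). All numbers are illustrative.

def expected_outcome(branches):
    """branches: list of (probability, damage_cost, casualties) for the
    probabilistic outcomes following one defender decision.
    Returns (expected damage cost, expected casualties)."""
    cost = sum(p * c for p, c, _ in branches)
    casualties = sum(p * k for p, _, k in branches)
    return cost, casualties

# One defender alternative with three possible attack outcomes.
alternative_a = [
    (0.70, 0.0, 0),      # attack defeated
    (0.25, 5e6, 2),      # partial hit
    (0.05, 50e6, 15),    # catastrophic hit
]
cost, casualties = expected_outcome(alternative_a)
print(cost, casualties)  # roughly 3.75e6 and 1.25
```

Repeating this for each alternative, and keeping the full outcome distribution rather than only the expectation, is what populates the quantitative probabilistic risk assessment matrix.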
Flexibility value in electric transmission expansion planning
Electric Transmission Expansion Planning (TEP) is a complex task exposed to multiple sources of uncertainty now that the electricity market has been restructured. Approaches based on scenarios and on robustness have been proposed and used by planners to deal with these uncertainties. Candidate expansion solutions are identified and economically evaluated through methodologies based on Discounted Cash Flow (DCF). In general, these approaches risk producing undersized or oversized transmission line designs because of uncertainty in demand growth rates and economies of scale. In addition, DCF supports a decision only with the information available today and does not consider managerial flexibility. In consequence, transmission expansion projects are auctioned and the winning investor is forced to execute the project under the bidding terms, without the possibility of adapting the project to unpredictable events. This research introduces flexibility into the TEP process and estimates its value as an approach to cope with uncertainty. A methodology based on Real Options is used, and the value of flexibility is estimated in terms of social welfare. In particular, an option to defer a transmission expansion is applied and its value is estimated using a binomial tree technique. Two case studies are analyzed: a two-node case and a reduced version of the Colombian transmission network. Conclusions suggest flexibility is a valid approach to introduce in TEP in order to handle uncertainty.
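The binomial-tree valuation of a deferral option mentioned above can be sketched generically: model the project's (welfare) value as a recombining up/down tree and, by backward induction, compare investing now against waiting at every node. The parameters below are illustrative assumptions, not the thesis's Colombian case-study inputs.

```python
# Hedged sketch: value an American-style option to defer an investment
# (e.g., a transmission expansion) on a recombining binomial tree.
# All parameters are illustrative assumptions.

def defer_option_value(V0, I, u, d, p, r, steps):
    """Backward induction: at each node, option value is the max of
    investing now (node value minus cost I) and the discounted
    expected continuation value."""
    disc = 1.0 / (1.0 + r)
    # Project values and payoffs at the final layer of the tree.
    values = [V0 * (u ** j) * (d ** (steps - j)) for j in range(steps + 1)]
    option = [max(v - I, 0.0) for v in values]
    for step in range(steps - 1, -1, -1):
        values = [V0 * (u ** j) * (d ** (step - j)) for j in range(step + 1)]
        cont = [disc * (p * option[j + 1] + (1 - p) * option[j])
                for j in range(step + 1)]
        option = [max(values[j] - I, cont[j]) for j in range(step + 1)]
    return option[0]

invest_now = max(100.0 - 95.0, 0.0)  # payoff of committing today = 5
with_deferral = defer_option_value(100.0, 95.0, u=1.3, d=0.8,
                                   p=0.5, r=0.05, steps=2)
print(with_deferral > invest_now)  # True: flexibility adds value
```

The gap between the two numbers is the value of flexibility, which is exactly the quantity the research estimates (there, in social-welfare terms).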
Distributionally Robust Model-Based Offline Reinforcement Learning with Near-Optimal Sample Complexity
This paper concerns the central issues of model robustness and sample
efficiency in offline reinforcement learning (RL), which aims to learn to
perform decision making from history data without active exploration. Due to
uncertainties and variabilities of the environment, it is critical to learn a
robust policy -- with as few samples as possible -- that performs well even
when the deployed environment deviates from the nominal one used to collect the
history dataset. We consider a distributionally robust formulation of offline
RL, focusing on tabular robust Markov decision processes with an uncertainty
set specified by the Kullback-Leibler divergence in both finite-horizon and
infinite-horizon settings. To combat with sample scarcity, a model-based
algorithm that combines distributionally robust value iteration with the
principle of pessimism in the face of uncertainty is proposed, by penalizing
the robust value estimates with a carefully designed data-driven penalty term.
Under a mild and tailored assumption of the history dataset that measures
distribution shift without requiring full coverage of the state-action space,
we establish the finite-sample complexity of the proposed algorithm, and
further show it is almost unimprovable in light of a nearly-matching
information-theoretic lower bound up to a polynomial factor of the (effective)
horizon length. To the best our knowledge, this provides the first provably
near-optimal robust offline RL algorithm that learns under model uncertainty
and partial coverage
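The inner step of distributionally robust value iteration with a KL uncertainty set is a worst-case expectation, which by the standard duality equals sup over lam > 0 of -lam*log E_{P0}[exp(-V/lam)] - lam*sigma. The sketch below evaluates that dual with a coarse grid search; it omits the paper's data-driven pessimism penalty and is only an illustration of the backup, not the proposed algorithm.

```python
# Hedged sketch of one KL-robust Bellman backup via its dual form:
#   inf_{KL(P||P0) <= sigma} E_P[V]
#     = sup_{lam > 0} ( -lam * log E_{P0}[exp(-V/lam)] - lam * sigma ).
# Grid search over lam is coarse but sufficient for illustration.
import math

def kl_robust_expectation(p0, v, sigma):
    """Worst-case expectation of next-state values v over the KL ball
    of radius sigma around the nominal distribution p0."""
    lams = [10 ** (k / 10.0) for k in range(-30, 31)]  # 0.001 .. 1000
    best = min(v)  # lam -> 0 limit: the worst supported value
    for lam in lams:
        mgf = sum(p * math.exp(-x / lam) for p, x in zip(p0, v))
        best = max(best, -lam * math.log(mgf) - lam * sigma)
    return best

p0 = [0.5, 0.5]          # nominal transition probabilities
v = [0.0, 10.0]          # next-state values
print(kl_robust_expectation(p0, v, 0.0))  # close to 5.0 (no robustness)
print(kl_robust_expectation(p0, v, 0.5))  # below 5.0 (adversarial shift)
```

Larger sigma lets the adversary move more probability mass onto low-value states, so the robust backup is increasingly pessimistic relative to the nominal expectation.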