
    Investment Model Uncertainty and Fair Pricing

    Modern investment theory takes it for granted that a Security Market Line (SML) is as certain as its "corresponding" Capital Market Line (CML). However, it can easily be demonstrated that this is not the case. Knightian non-probabilistic, information-gap uncertainty exists in the security markets, as the bivariate "Galton's Error" and its concomitant information gap prove (Journal of Banking & Finance, 23, 1999, 1793-1829). In fact, an SML graph needs (at least) two parallel horizontal beta axes, implying that a particular mean security return corresponds to a limited Knightian uncertainty range of betas, although it corresponds to only one market portfolio risk volatility. This implies that a security's risk premium is uncertain and that a Knightian uncertainty range of SMLs, and thus of fair pricing, exists. This paper both updates the empirical evidence and graphically traces the financial market consequences of this model uncertainty for modern investment theory. First, any investment knowledge about a security's risk remains uncertain: investment valuations carry epistemological ("modeling") risk in addition to the Markowitz-Sharpe market risk. Second, since idiosyncratic, or firm-specific, risk is limited-uncertain, the real option value of a firm is also limited-uncertain. This explains the simultaneous coexistence of different analyst valuations of investment projects, particular firms, or industries, including a category "undecided." Third, we can now distinguish between "buy", "sell" and "hold" trading orders based on an empirically determined collection of SMLs reflecting this Knightian modeling risk. The coexistence of such simultaneous value signals for the same security is necessary for the existence of a market for that security: without epistemological investment uncertainty, no ongoing markets for securities could exist.
In the absence of transaction costs and other inefficiencies, Knightian uncertainty is the necessary energy for market trading, since it creates potential or perceived arbitrage (= trading) opportunities, but it is also necessary for investors to hold securities. Knightian uncertainty provides a possible reason why the SEC cannot obtain consensus on what constitutes "fair pricing." The paper also shows that Malkiel's recommended CML-based investments are extremely conservative and non-robust.
    Keywords: capital market line, security market line, beta, investments, decision-making, Knightian uncertainty, robustness, information-gap, Galton's Error, real option value
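    The two-beta-axes point can be illustrated numerically: on one reading of the bivariate "Galton's Error", regressing the security on the market and the market on the security yield two different beta estimates, bounding a Knightian range of SMLs. A minimal sketch with simulated returns (all figures hypothetical, not from the paper's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    market = rng.normal(0.005, 0.04, 250)              # hypothetical daily market returns
    stock = 1.2 * market + rng.normal(0, 0.03, 250)    # hypothetical security returns

    cov = np.cov(stock, market)[0, 1]
    beta_direct = cov / market.var(ddof=1)     # slope of stock regressed on market
    beta_reverse = stock.var(ddof=1) / cov     # beta implied by the reverse regression

    lo, hi = sorted((beta_direct, beta_reverse))
    print(f"beta lies in the Knightian range [{lo:.2f}, {hi:.2f}]")
    ```

    The two slopes coincide only under perfect correlation; any idiosyncratic noise opens a gap between them, which is one way to visualize the paper's range of betas for a single mean return.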

    On the value of context awareness for relay activation in beyond 5G radio access networks

    This paper envisions augmenting the Radio Access Network (RAN) infrastructure in Beyond 5G (B5G) systems by exploiting the relaying capabilities of user equipment (UE) as a way to improve coverage, capacity and robustness. Although the concept and enabling technologies have been in place for some time, their efficient realization requires the conception and development of new features in B5G systems. Among them, this paper focuses on Relay UE (RUE) activation decision making, in charge of deciding where and when a UE is suitable to be activated to relay traffic from other UEs. Specifically, the paper analyses seven RUE activation strategies that differ in the criteria and the type of context information considered for this decision-making problem. The considered strategies are evaluated through system-level simulations in a realistic urban scenario with the objective of assessing the value of each type of context information. Results reveal that the most efficient strategies from the perspective of outage probability reduction are those that account for the number of UEs that would be served by a RUE based on the experienced spectral efficiency.
    This paper is part of the ARTIST project (ref. PID2020-115104RB-I00) funded by MCIN/AEI/10.13039/501100011033.
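    The best-performing criterion reported above can be sketched as follows: activate the candidate relay that would rescue the most UEs from outage, judged by spectral efficiency. Function names, the threshold, and all figures are illustrative assumptions, not the paper's simulator:

    ```python
    import numpy as np

    def served_ue_count(relay_se, direct_se, min_se=0.5):
        """Count UEs in outage on the direct link (spectral efficiency below
        threshold) that the candidate relay could serve with adequate SE."""
        return int(np.sum((direct_se < min_se) & (relay_se >= min_se)))

    def activate_best_relay(candidates, min_se=0.5):
        """candidates: dict relay_id -> (relay_se, direct_se) arrays over UEs.
        Activate the relay that rescues the most UEs; None if no relay helps."""
        best, best_count = None, 0
        for rid, (relay_se, direct_se) in candidates.items():
            count = served_ue_count(relay_se, direct_se, min_se)
            if count > best_count:
                best, best_count = rid, count
        return best

    candidates = {
        "RUE-A": (np.array([0.9, 0.8, 0.2]), np.array([0.1, 0.2, 0.3])),
        "RUE-B": (np.array([0.6, 0.3, 0.1]), np.array([0.1, 0.2, 0.3])),
    }
    print(activate_best_relay(candidates))   # RUE-A rescues two UEs, RUE-B one
    ```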

    High-dimensional A-learning for optimal dynamic treatment regimes

    Precision medicine is a medical paradigm that focuses on finding the most effective treatment decision based on individual patient information. For many complex diseases, such as cancer, treatment decisions need to be tailored over time according to patients' responses to previous treatments. Such an adaptive strategy is referred to as a dynamic treatment regime. A major challenge in deriving an optimal dynamic treatment regime arises when an extraordinarily large number of prognostic factors, such as a patient's genetic information, demographic characteristics, medical history and clinical measurements over time, are available, but not all of them are necessary for making treatment decisions. This makes variable selection an emerging need in precision medicine. In this paper, we propose a penalized multi-stage A-learning for deriving the optimal dynamic treatment regime when the number of covariates is of the nonpolynomial (NP) order of the sample size. To preserve the double robustness property of the A-learning method, we adopt the Dantzig selector, which directly penalizes the A-learning estimating equations. Oracle inequalities of the proposed estimators for the parameters in the optimal dynamic treatment regime, and error bounds on the difference between the value functions of the estimated optimal dynamic treatment regime and the true optimal dynamic treatment regime, are established. Empirical performance of the proposed approach is evaluated by simulations and illustrated with an application to data from the STAR∗D study.
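    In the single-stage, low-dimensional case (i.e., without the Dantzig selector), the A-learning idea can be sketched by solving its estimating equation with least squares: with a randomized treatment and known propensity, regressing the outcome on baseline covariates plus the centred treatment-covariate interaction recovers the decision-rule coefficients. All data and coefficients below are simulated for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 500, 3
    X = rng.normal(size=(n, p))
    A = rng.integers(0, 2, n)                # randomized treatment, propensity pi = 0.5
    psi_true = np.array([1.0, -2.0, 0.0])    # true decision-rule coefficients
    Y = X @ np.array([0.5, 0.5, 0.5]) + A * (X @ psi_true) + rng.normal(size=n)

    # A-learning style fit: regress Y on an intercept, the baseline covariates,
    # and the centred interaction (A - pi) * X; the interaction coefficients
    # estimate psi, the treatment-effect contrast.
    pi = 0.5
    Z = np.hstack([np.ones((n, 1)), X, (A - pi)[:, None] * X])
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    psi_hat = coef[1 + p:]

    rule = lambda x: int(x @ psi_hat > 0)    # estimated regime: treat iff psi^T x > 0
    ```

    The Dantzig selector in the paper replaces this least-squares solve with an L1-constrained fit of the same estimating equations so that it scales to NP-dimensional covariates.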

    Quantitative Risk-Based Analysis for Military Counterterrorism Systems

    The article of record as published may be found at http://dx.doi.org/10.1002/sys
    This paper presents a realistic and practical approach to quantitatively assess the risk-reduction capabilities of military counterterrorism systems in terms of damage cost and casualty figures. The comparison of alternatives is thereby based on absolute quantities rather than an aggregated utility or value provided by multicriteria decision analysis methods. The key elements of the approach are (1) the use of decision-attack event trees for modeling and analyzing scenarios, (2) a portfolio model approach for analyzing multiple threats, and (3) the quantitative probabilistic risk assessment matrix for communicating the results. Decision-attack event trees are especially appropriate for modeling and analyzing terrorist attacks where the sequence of events and outcomes is time-sensitive. The actions of the attackers and the defenders are modeled as decisions, and the outcomes are modeled as probabilistic events. The quantitative probabilistic risk assessment matrix provides information about the range of the possible outcomes while retaining the simplicity of the classic safety risk assessment matrix based on Mil-Std-882D. It therefore provides a simple and reliable tool for comparing alternatives on the basis of risk, including confidence levels rather than single point estimates. This additional valuable information requires minimal additional effort. The proposed approach is illustrated using a simplified but realistic model of a destroyer operating in inland restricted waters. The complex problem of choosing a robust counterterrorism protection system against multiple terrorist threats is analyzed by introducing a surrogate multi-threat portfolio. The associated risk profile provides a practical approach for assessing the robustness of different counterterrorism systems against plausible terrorist threats.
The paper documents the analysis for a hypothetical case of three potential threats. This work was performed as part of the Naval Postgraduate School institutionally funded research.
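    Evaluating a decision-attack event tree can be sketched as a fold over nested nodes, with defender decisions minimizing expected damage cost and attack outcomes as chance branches. The tree shape, probabilities and costs below are illustrative assumptions, not the destroyer case study:

    ```python
    def expected_damage(node):
        """Fold a decision-attack event tree to an expected damage cost.
        Chance nodes: ('chance', [(prob, child), ...]); defender decision
        nodes: ('decide', [child, ...]) pick the min-cost branch; leaves
        are damage costs (numbers)."""
        if isinstance(node, (int, float)):
            return node
        kind, branches = node
        if kind == 'chance':
            return sum(p * expected_damage(child) for p, child in branches)
        return min(expected_damage(child) for child in branches)

    # Hypothetical: two defensive postures against one attack type.
    tree = ('decide', [
        ('chance', [(0.7, 0.0), (0.3, 100.0)]),   # posture 1: 30% success, cost 100
        ('chance', [(0.4, 0.0), (0.6, 40.0)]),    # posture 2: 60% success, cost 40
    ])
    print(expected_damage(tree))   # posture 2 wins: 0.6 * 40 = 24
    ```

    In the paper, attacker actions are also modeled as decisions and outcomes feed a probabilistic risk assessment matrix rather than a single expected value; this sketch only shows the tree-folding mechanics.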

    Flexibility value in electric transmission expansion planning

    Electric Transmission Expansion Planning (TEP) is a complex task exposed to multiple sources of uncertainty when the electricity market has been restructured. Approaches like those based on scenarios and robustness have been proposed and used by planners to deal with uncertainties. Alternative expansion solutions are identified and economically evaluated through methodologies based on Discounted Cash Flow (DCF). In general, these approaches risk producing undersized or oversized designs of transmission lines because of uncertainties in demand growth rates and economies of scale. In addition, DCF helps to make a decision only with the information available today and does not consider managerial flexibility. In consequence, transmission expansion projects are auctioned and the winning investor is forced to execute the project under bidding terms without the possibility to adapt the project to unpredictable events. This research introduces flexibility in the TEP process and estimates its value as an approach to cope with uncertainties. A methodology based on Real Options is used and the value of flexibility is estimated in terms of social welfare. In particular, an option to defer a transmission expansion is applied and its value is estimated by using a binomial tree technique. Two case studies are analyzed: a two-node case and a reduced version of the Colombian transmission network. Conclusions suggest flexibility is a valid approach to be introduced in TEP in order to handle uncertainties.
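    The binomial-tree valuation of a deferral option can be sketched with a standard Cox-Ross-Rubinstein lattice on the project's present value. The parameter values below are placeholders, not the Colombian case study's figures, and social-welfare effects are collapsed into a single project value:

    ```python
    import math

    def defer_option_value(v0, investment, sigma, r, T, steps):
        """Value the option to defer an irreversible expansion for up to T
        years via a CRR binomial tree on project value V.
        v0: present value of expansion benefits; investment: cost to build."""
        dt = T / steps
        u = math.exp(sigma * math.sqrt(dt))
        d = 1 / u
        disc = math.exp(-r * dt)
        p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability

        # terminal payoffs: invest only if project value exceeds the cost
        values = [max(v0 * u**j * d**(steps - j) - investment, 0.0)
                  for j in range(steps + 1)]
        for step in range(steps - 1, -1, -1):  # backward induction
            values = [max(disc * (p * values[j + 1] + (1 - p) * values[j]),
                          v0 * u**j * d**(step - j) - investment)  # invest now
                      for j in range(step + 1)]
        return values[0]

    print(round(defer_option_value(100.0, 100.0, 0.3, 0.05, 3.0, 120), 2))
    ```

    The option value exceeds the static DCF net present value (here zero), which is exactly the flexibility value the paper attributes to deferral.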

    Distributionally Robust Model-Based Offline Reinforcement Learning with Near-Optimal Sample Complexity

    This paper concerns the central issues of model robustness and sample efficiency in offline reinforcement learning (RL), which aims to learn to perform decision making from history data without active exploration. Due to uncertainties and variabilities of the environment, it is critical to learn a robust policy -- with as few samples as possible -- that performs well even when the deployed environment deviates from the nominal one used to collect the history dataset. We consider a distributionally robust formulation of offline RL, focusing on tabular robust Markov decision processes with an uncertainty set specified by the Kullback-Leibler divergence in both finite-horizon and infinite-horizon settings. To combat sample scarcity, a model-based algorithm that combines distributionally robust value iteration with the principle of pessimism in the face of uncertainty is proposed, by penalizing the robust value estimates with a carefully designed data-driven penalty term. Under a mild and tailored assumption on the history dataset that measures distribution shift without requiring full coverage of the state-action space, we establish the finite-sample complexity of the proposed algorithm, and further show it is almost unimprovable in light of a nearly matching information-theoretic lower bound up to a polynomial factor of the (effective) horizon length. To the best of our knowledge, this provides the first provably near-optimal robust offline RL algorithm that learns under model uncertainty and partial coverage.
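    The robust Bellman update over a KL uncertainty set admits a one-dimensional dual, which the sketch below exploits with a simple grid search over the dual variable. This is a tabular toy of plain distributionally robust value iteration, without the paper's data-driven pessimism penalty:

    ```python
    import numpy as np

    def kl_worst_case(v, p0, sigma, lams=np.logspace(-3, 3, 200)):
        """Worst-case expectation of v over {P : KL(P || p0) <= sigma},
        via the dual  sup_{lam>0} -lam*log E_{p0}[exp(-v/lam)] - lam*sigma.
        Shifting by min(v) keeps the exponentials numerically safe."""
        vmin = v.min()
        vals = [vmin - lam * np.log(np.dot(p0, np.exp(-(v - vmin) / lam)))
                - lam * sigma for lam in lams]
        return max(vals)

    def robust_value_iteration(P, R, gamma, sigma, iters=200):
        """Tabular distributionally robust VI. P[s, a] is the nominal
        next-state distribution, R[s, a] the immediate reward."""
        S, A = R.shape
        V = np.zeros(S)
        for _ in range(iters):
            Q = np.array([[R[s, a] + gamma * kl_worst_case(V, P[s, a], sigma)
                           for a in range(A)] for s in range(S)])
            V = Q.max(axis=1)
        return V
    ```

    With sigma = 0 the update reduces to ordinary value iteration; growing sigma shrinks the value function, reflecting protection against larger deviations from the nominal model.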