
    DEA models with production trade-offs and weight restrictions

    There is a large literature on the use of weight restrictions in multiplier DEA models. In this chapter we provide an alternative view of this subject from the perspective of dual envelopment DEA models, in which weight restrictions can be interpreted as production trade-offs. The notion of production trade-offs allows us to state assumptions that certain simultaneous changes to the inputs and outputs are technologically possible in the production process. The incorporation of production trade-offs in the envelopment DEA model, or of the corresponding weight restrictions in the multiplier model, leads to a meaningful expansion of the model of production technology. The efficiency measures in DEA models with production trade-offs retain their traditional meaning as the ultimate and technologically realistic improvement factors. This overcomes one of the known drawbacks of weight restrictions assessed using other methods. In this chapter we discuss the assessment of production trade-offs, provide the corresponding theoretical developments, and suggest computational methods suitable for solving the resulting DEA models.
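    For orientation, a minimal sketch of how trade-offs enter the input-oriented envelopment model, under assumed notation (inputs x_j and outputs y_j of unit j, unit o under assessment, and K trade-offs stated as paired vectors (P_t, Q_t) of simultaneous input and output changes with intensity weights pi_t); this is an illustration of the construction the chapter describes, not a reproduction of its exact formulation:

```latex
% Input-oriented envelopment model augmented with K production trade-offs:
\begin{aligned}
\min_{\theta,\,\lambda,\,\pi}\ & \theta \\
\text{s.t.}\ & \sum_{j} \lambda_j x_{j} + \sum_{t=1}^{K} \pi_t P_t \;\le\; \theta\, x_{o}, \\
             & \sum_{j} \lambda_j y_{j} + \sum_{t=1}^{K} \pi_t Q_t \;\ge\; y_{o}, \\
             & \lambda_j \ge 0, \qquad \pi_t \ge 0 .
\end{aligned}
% Setting all \pi_t = 0 recovers the standard envelopment model; in the dual
% multiplier model each trade-off (P_t, Q_t) reappears as a weight restriction.
```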

    Empirical Business Valuation and Asset Pricing: An Analysis from an Economic Perspective

    The common basis of all empirical accounting-based asset pricing models is their attempt to explain today's asset prices or returns with accounting characteristics that are observable today. Technically, empirical accounting-based asset pricing is implemented in the literature with a wide variety of statistical methods: regression approaches, the method of multiples, and error measures. This variety gives rise to several problems.

    First problem: Given that regression approaches, the method of multiples, and error measures all deal with empirical asset pricing, the multitude of conceptually different and unconnected approaches is puzzling and raises two questions. (i) If regression approaches, the method of multiples, and error measures are applied empirically, they might lead to vastly different valuation results. Wouldn't it therefore be useful to elaborate the conceptual similarities and differences between these statistical methods and even find a superordinate category? (ii) With respect to regression approaches, the existing literature uses only a small subset of possible statistical methods for empirical asset pricing, i.e., ordinary least squares, weighted least squares, or quantile regressions. Wouldn't it be rational to enlarge this subset by using other functions of the residuals, e.g., higher (rather than first or second) orders of the absolute values of the residuals, or the maximum error? With respect to the method of multiples, wouldn't it be useful to possess a pricing formula that can integrate different methods of computing means as well as several accounting figures? With respect to error measures, wouldn't it be reasonable to have a pricing framework (the objective function) that is consistent with the error measure (the quality assessment)? Given these questions, the first objective of this thesis, in Chapter II, is to analyze which of the existing empirical asset pricing approaches are conceptually similar, i.e., can be summarized under a superordinate category, and to present statistical methods that can be considered quasi-natural extensions of existing empirical asset pricing models.

    Second problem: Based on this overview of empirical asset pricing models and the literature, it can safely be assumed that the chosen factors (the number and specific selection of explanatory variables) as well as the specific statistical method used (e.g., ordinary least squares regression, quantile regression) have an important influence on the explanatory power of an empirical analysis. Since the only concern of the majority of existing papers is this explanatory power, they can be regarded as dealing with the statistical significance of factors and statistical methods, whereas economic relevance is far less analyzed. Since price differences, not statistical significance, are the decisive aspect of valuation models in practice, analyzing economic significance is essential. Nobody will pay a higher price for a company just because a specific valuation method produces a high out-of-sample R². Moreover, business decisions should not be based solely on whether a p-value passes a specific threshold, because statistical significance (the p-value) cannot measure the size of an effect or the importance of a result. Therefore, the second objective of this thesis, in Chapter III, is to analyze the economic significance of different factors and statistical methods.

    Third problem: If different factors and statistical methods lead to economically significant differences in value, a model-selection criterion is needed that is based on economic rather than statistical criteria. While arbitrage theory provides a general guideline for the economic evaluation of theoretical asset pricing models (i.e., prices must be a linear function of future cash flows), empirical asset pricing models do not rely on present values of cash flows but on assumed relations between accounting characteristics or factor returns and company prices or returns. For that reason, no theoretical guidelines regarding the components of the model exist; in particular, there are no hints regarding the number and type of explanatory variables or the specific statistical approach. Given this pronounced need for an economic model evaluation criterion, the third objective of this thesis, in Chapter IV, is to develop such a criterion and derive an economic ranking of different empirical models.

    Fourth problem: From the perspective of asset pricing theory, such a model evaluation criterion is superfluous because the correct business valuation model is clear: the present value of future cash flows. In practice, however, forecasts of the future are difficult and, in particular, the determination of discount factors proves problematic. Therefore, it might be better to use a theoretically less convincing but more easily applicable model, e.g., one based on accounting characteristics, instead of a theoretically superior but inadequately implementable model, the present value. However, the superior practicability of existing accounting-based valuations comes at a high cost: a relatively weak foundation in asset pricing theory. (i) Multiples: Multiples essentially argue that similar accounting characteristics should result in similar prices. From the perspective of asset pricing theory, while such a valuation statement is intuitive, it is not backed up by arbitrage theory, which states that identical cash flow streams must possess identical prices. In other words, there are three differences between multiples and arbitrage theory: first, accounting characteristics are considered instead of cash flow streams; second, similar instead of identical positions are examined; third, one accounting characteristic is regarded as sufficient to characterize a company completely. (ii) Implementing discounted cash flow models with the help of accounting characteristics: In the literature, there are discounted cash flow models that use (functions of) accounting figures to express cash flows, the horizon value, and/or the discount rate. From the perspective of asset pricing theory, irrespective of how the accounting characteristics are included, such models can only serve as approximations, i.e., they contain assumptions that do not generally hold in reality. (iii) Empirical accounting-based approaches: Empirical accounting-based approaches explain stock prices with the help of accounting characteristics. From the perspective of asset pricing theory, these approaches belong to the field of value relevance studies and are thus interested only in the statistical significance of accounting characteristics, not their economic significance, i.e., they do not derive pricing statements. In principle, the regression coefficients of value relevance studies can also be used to obtain business values; however, the valuation differences between different regression approaches are huge, and these models have weak economic backing when contrasted with the economic principle.

    All these problems underline the trade-off between asset pricing rigor and the practicability of models: present value models are theoretically superior, but their practical implementation in the form of constant discount rates and horizon models is far from economically convincing; accounting-based models are characterized by less asset pricing rigor but can be implemented without sacrificing much of their theoretical basis. Obtaining better asset pricing models hence means either improving the implementation of present value models or strengthening the theoretical foundations of accounting-based models. Two reasons favor improving the asset pricing foundation of empirical accounting-based models. On the one hand, the accounting literature has not yet fully exploited the asset pricing potential of accounting-based valuation models: it can be increased visibly without sacrificing practicability. On the other hand, purely empirical models always create a justification problem: Who would pay a higher price for a company because sales multiples result in higher prices than earnings multiples? Who would pay a higher price because a lower discount rate for earnings is used? Who would pay a higher price because an empirical estimation procedure with a higher R² recommends one? Therefore, the fourth objective of this thesis, in Chapter V, is to connect the practicability of accounting-based valuation models with the theoretical rigor of asset pricing theory.
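    To make the "superordinate category" idea concrete: each of the statistical methods above can be read as minimizing some aggregation of pricing residuals over the same pricing rule. The sketch below uses simulated data and an assumed one-factor price-equals-multiple-times-earnings rule for illustration only; it is not the thesis's actual procedure.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical one-factor setting: price explained by a single accounting
# characteristic (earnings). Data are simulated, not from the thesis.
rng = np.random.default_rng(1)
earnings = rng.uniform(1.0, 10.0, 200)
prices = 8.0 * earnings + 5.0 * rng.standard_normal(200)

def fit_multiple(aggregate):
    """Fit price ~ b * earnings by minimizing an aggregation of residuals."""
    objective = lambda b: aggregate(prices - b[0] * earnings)
    return minimize(objective, x0=[1.0], method="Nelder-Mead").x[0]

# Each empirical approach is one choice of residual aggregation -- the
# superordinate category: minimize f(residuals) over the price multiple.
aggregations = {
    "OLS (squared residuals)":   lambda e: np.sum(e ** 2),
    "LAD / median regression":   lambda e: np.sum(np.abs(e)),
    "higher-order (cubed abs.)": lambda e: np.sum(np.abs(e) ** 3),
    "minimax (maximum error)":   lambda e: np.max(np.abs(e)),
}
for name, agg in aggregations.items():
    print(f"{name}: implied earnings multiple = {fit_multiple(agg):.3f}")
```

    Run on the same data, the four aggregations generally return different multiples, which is precisely the source of the "vastly different valuation results" the first problem raises.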

    Modeling Uncertainty in Large Natural Resource Allocation Problems

    The productivity of the world's natural resources depends critically on a variety of highly uncertain factors, which complicate the decisions of individual investors and governments seeking to make long-term, sometimes irreversible investments in their exploration and utilization. These dynamic considerations are poorly represented in disaggregated resource models, as incorporating uncertainty into large-dimensional problems presents a challenging computational task. This study introduces a novel numerical method to solve large-scale dynamic stochastic natural resource allocation problems that cannot be addressed by conventional methods. The method is illustrated with an application to the allocation of global land use under stochastic crop yields, reflecting adverse climate impacts and limits on further technological progress. For the same model parameters, the range of land conversion is considerably smaller in the dynamic stochastic model than in a deterministic scenario analysis. Scenario analysis can thus significantly overstate the magnitude of expected land conversion under uncertain crop yields.
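    The contrast between the two approaches can be illustrated with a deliberately stylized toy problem (all numbers hypothetical, unrelated to the study's model): scenario analysis re-optimizes an irreversible land-conversion decision separately for each yield realization, producing a spread of answers, whereas the stochastic model commits to one decision against the yield distribution.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Stylized problem: choose converted land area a in [0, 1] before the crop
# yield y is known; payoff is yield revenue minus a convex conversion cost.
yields = np.array([0.6, 1.0, 1.4])   # low / medium / high yield scenarios
probs = np.array([0.25, 0.5, 0.25])

def payoff(a, y):
    return y * a - 0.8 * a ** 2      # revenue minus conversion cost

# Deterministic scenario analysis: optimize separately per scenario.
scenario_opt = [minimize_scalar(lambda a: -payoff(a, y),
                                bounds=(0, 1), method="bounded").x
                for y in yields]

# Stochastic model: one decision maximizing expected payoff.
stoch_opt = minimize_scalar(lambda a: -probs @ payoff(a, yields),
                            bounds=(0, 1), method="bounded").x

print("scenario-by-scenario range:", min(scenario_opt), "to", max(scenario_opt))
print("single stochastic decision:", stoch_opt)
```

    In this toy case the scenario answers span roughly 0.38 to 0.88 while the stochastic decision sits at about 0.63, mirroring (in miniature) the paper's finding that scenario analysis overstates the range of land conversion.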

    Preferences and Increased Risk Aversion under a General Framework of Stochastic Dominance

    This paper analyzes increased risk aversion in the presence of two risks. Necessary and sufficient conditions for increased risk aversion across the domain of the foreground risk are found for changes in both the foreground and background risks. Preferences satisfying these conditions are characterized through a lower bound on their measure of prudence. These bounds are obtained through second-degree spreads of a transformation of the background risk, and for all second-degree spreads of this nature, absolute temperance plays a central role in the necessary and sufficient conditions for increased risk aversion. The approach also demonstrates that changes in risk aversion under the general framework of stochastically dominating spreads can be explained by a weighted average of terms involving absolute prudence and absolute temperance. Once a general set of necessary and sufficient conditions has been found, it is shown that, for preferences that are decreasing absolute risk averse in the sense of Ross, increased risk aversion due to changes in the background risk within this framework is equivalent to Ross risk vulnerability. The general conditions also yield necessary and sufficient conditions for preferences to be properly risk averse toward patent increases in risk.
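    For reference, the standard objects this literature works with (notation assumed here, not taken from the paper) are the derived utility under a background risk and the Arrow-Pratt, prudence, and temperance coefficients:

```latex
% Derived utility when a background risk \tilde{\varepsilon} is present:
v(x) \;=\; \mathbb{E}\!\left[\, u\!\left(x + \tilde{\varepsilon}\right) \right].

% "Increased risk aversion" compares Arrow-Pratt coefficients pointwise:
-\frac{v''(x)}{v'(x)} \;\ge\; -\frac{u''(x)}{u'(x)} \qquad \text{for all } x .

% Absolute prudence and absolute temperance, the quantities whose weighted
% average governs the conditions in the paper:
P(x) = -\frac{u'''(x)}{u''(x)}, \qquad T(x) = -\frac{u''''(x)}{u'''(x)} .
```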

    Convex Identification of Stable Dynamical Systems

    This thesis concerns the scalable application of convex optimization to data-driven modeling of dynamical systems, termed system identification in the control community. Two problems commonly arising in system identification are model instability (e.g., unreliability of long-term, open-loop predictions) and nonconvexity of quality-of-fit criteria such as simulation error (a.k.a. output error). To address these problems, this thesis presents convex parametrizations of stable dynamical systems, convex quality-of-fit criteria, and efficient algorithms to optimize the latter over the former. In particular, this thesis makes extensive use of Lagrangian relaxation, a technique for generating convex approximations to nonconvex optimization problems. Recently, Lagrangian relaxation has been used to approximate simulation error and guarantee nonlinear model stability via semidefinite programming (SDP); however, the resulting SDPs have large dimension, limiting their practical utility.

    The first contribution of this thesis is a custom interior-point algorithm that exploits structure in the problem to significantly reduce computational complexity. The new algorithm enables empirical comparisons to established methods, including nonlinear ARX, in which superior generalization to new data is demonstrated. Equipped with this algorithmic machinery, the second contribution of this thesis is the incorporation of model stability constraints into the maximum likelihood framework. Specifically, Lagrangian relaxation is combined with the expectation-maximization (EM) algorithm to derive tight bounds on the likelihood function that can be optimized over a convex parametrization of all stable linear dynamical systems. Two different formulations are presented, one of which gives higher-fidelity bounds when disturbances (a.k.a. process noise) dominate measurement noise, and vice versa.

    Finally, identification of positive systems is considered. Such systems enjoy substantially simpler stability and performance analysis compared to the general linear time-invariant (LTI) case, and appear frequently in applications where physical constraints imply nonnegativity of the quantities of interest. Lagrangian relaxation is used to derive new convex parametrizations of stable positive systems and quality-of-fit criteria, and substantial improvements in the accuracy of the identified models, compared to existing approaches based on weighted equation error, are demonstrated. Furthermore, the convex parametrizations of stable systems based on linear Lyapunov functions are shown to be amenable to distributed optimization, which is useful for identification of large-scale networked dynamical systems.
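    As a minimal illustration of the underlying idea of a convex parametrization of stable systems (not the thesis's custom interior-point algorithm or its nonlinear machinery), the sketch below fits a discrete-time linear model x_{t+1} = A x_t with guaranteed Schur stability using cvxpy. It uses the well-known implicit parametrization F = P A: the LMI [[P, F], [F^T, P]] > 0 is, by Schur complement, equivalent to P > 0 and P - A^T P A > 0, so A = P^{-1} F is stable by construction. The data and normalization are assumptions of this sketch.

```python
import cvxpy as cp
import numpy as np

# Simulate a trajectory from a stable system (hypothetical data).
rng = np.random.default_rng(0)
n, T = 3, 200
A_true = np.array([[0.8, 0.2, 0.0],
                   [0.0, 0.7, 0.1],
                   [0.1, 0.0, 0.6]])
X = np.zeros((n, T))
X[:, 0] = rng.standard_normal(n)
for t in range(T - 1):
    X[:, t + 1] = A_true @ X[:, t] + 0.05 * rng.standard_normal(n)

# Implicit parametrization: with F = P A, the LMI below forces P > 0 and
# P - A^T P A > 0, i.e. A = inv(P) F is Schur stable.
P = cp.Variable((n, n), symmetric=True)
F = cp.Variable((n, n))
lmi = cp.bmat([[P, F], [F.T, P]])

# Weighted equation error ||P x_{t+1} - F x_t||^2 is jointly convex in (P, F);
# trace(P) = n fixes the scale of the homogeneous constraints.
resid = P @ X[:, 1:] - F @ X[:, :-1]
prob = cp.Problem(cp.Minimize(cp.sum_squares(resid)),
                  [lmi >> 1e-4 * np.eye(2 * n), cp.trace(P) == n])
prob.solve()

A_hat = np.linalg.solve(P.value, F.value)
print("spectral radius:", max(abs(np.linalg.eigvals(A_hat))))  # < 1 guaranteed
```

    This equation-error formulation is exactly the kind of baseline the thesis improves upon: it is convex and stability-guaranteed, but biased under noise, which motivates the Lagrangian-relaxation bounds on simulation error and likelihood described above.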

    The Intertemporal Approach to the Current Account

    The intertemporal approach views the current-account balance as the outcome of forward-looking dynamic saving and investment decisions. This paper, a chapter in the forthcoming third volume of the Handbook of International Economics, surveys the theory and empirical work on the intertemporal approach as it has developed since the early 1980s. After reviewing the basic one-good, representative-consumer model, the paper considers a series of extended models incorporating relative prices, complex demographic structures, consumer durables, asset-market incompleteness, and asymmetric information. We also present a variety of empirical evidence illustrating the usefulness of the intertemporal approach, and argue that intertemporal models provide a consistent and coherent foundation for open-economy policy analysis. As such, the intertemporal approach should supplant the expanded versions of the Mundell-Fleming IS-LM model that currently furnish the dominant paradigm used by central banks, finance ministries, and international economic agencies.
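    For orientation, the benchmark relations at the core of the basic one-good, representative-consumer model, in standard textbook notation (r a constant world real interest rate, Δ the first-difference operator):

```latex
% The current account equals national saving minus investment:
CA_t \;=\; S_t - I_t .

% Present-value form: a country runs a surplus today when net output
% Y - I - G is expected to decline in the future.
CA_t \;=\; -\sum_{s=t+1}^{\infty} \left(\frac{1}{1+r}\right)^{s-t}
            \mathbb{E}_t\!\left[\, \Delta\!\left(Y_s - I_s - G_s\right) \right].
```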