
    Does specialization in security analysis and portfolio management explain deviations from the CAPM?

    Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management, 2005. Includes bibliographical references (p. 44-45).
    The Capital Asset Pricing Model (CAPM), which relates the risk of an individual security to its expected return, is frequently cited in investments textbooks and the academic literature as a centerpiece of modern finance theory. The main prediction of the CAPM is that investors are compensated in the form of expected return only for bearing systematic or market risk, which is the portion of a security's risk that cannot be diversified away. That investors demand compensation for, and only for, systematic risk is a consequence of the pivotal assumption that all investors have identical information for the entire universe of publicly traded securities. In actuality, professional active money managers rarely invest in a portfolio broad enough to be considered the market portfolio. Instead, the asset management industry has self-organized over time according to a top-down investment process, in which asset allocators provide capital to security selectors who specialize in high-yield bonds, large-cap value stocks, and the like. Any losses in diversification benefits resulting from this theoretically suboptimal two-phase investment strategy are deemed an unavoidable cost of obtaining accurate forecasts through specialization in security analysis and portfolio management. This research extends the ideas of the CAPM to formulate an equilibrium security-pricing model that attempts to account for the top-down approach followed by investors in the real world.
    by Leonid Keyser. M.B.A.
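The pricing relation at the heart of this abstract, E[R_i] = R_f + beta_i * (E[R_m] - R_f), can be sketched in a few lines. The numbers are purely illustrative and do not come from the thesis:

```python
def capm_expected_return(risk_free, beta, market_return):
    """CAPM: expected return is the risk-free rate plus beta times the
    market risk premium. Only systematic risk, captured by beta, is
    compensated; idiosyncratic risk can be diversified away."""
    return risk_free + beta * (market_return - risk_free)

# Hypothetical figures: 3% risk-free rate, beta of 1.2, 8% expected market return
print(capm_expected_return(0.03, 1.2, 0.08))  # 0.09, i.e. 9% expected return
```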

    Optimization in a Simulation Setting: Use of Function Approximation in Debt Strategy Analysis

    The stochastic simulation model suggested by Bolder (2003) for the analysis of the federal government's debt-management strategy provides a wide variety of useful information. It does not, however, assist in determining an optimal debt-management strategy for the government in its current form. Including optimization in the debt-strategy model would be useful, since it could substantially broaden the range of policy questions that can be addressed. Finding such an optimal strategy is nonetheless complicated by two challenges. First, performing optimization with traditional techniques in a simulation setting is computationally intractable. Second, it is necessary to define precisely what one means by an "optimal" debt strategy. The authors detail a possible approach for addressing these two challenges. They address the first challenge by approximating the numerically computed objective function using a function-approximation technique. They consider the use of ordinary least squares, kernel regression, multivariate adaptive regression splines, and projection-pursuit regressions as approximation algorithms. The second challenge is addressed by proposing a wide range of possible government objective functions and examining them in the context of an illustrative example. The authors' view is that the approach permits debt and fiscal managers to address a number of policy questions that could not be fully addressed with the current stochastic simulation engine.
    Keywords: Debt management; Econometric and statistical methods; Fiscal policy; Financial markets
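The surrogate idea described above can be illustrated with a toy sketch: fit a cheap ordinary-least-squares approximation (one of the four families the authors consider) to a noisy simulated objective, then optimize the surrogate instead of re-running the simulation. The one-dimensional objective and all constants here are invented for illustration, not taken from the Bank of Canada model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_objective(x):
    """Stand-in for one expensive stochastic simulation run: a noisy
    evaluation of expected debt cost/risk at strategy parameter x
    (e.g., the share of long-term issuance). Minimum is at x = 0.4."""
    return (x - 0.4) ** 2 + rng.normal(0, 0.01)

# Evaluate the "simulation" on a coarse grid of candidate strategies
xs = np.linspace(0.0, 1.0, 21)
ys = np.array([simulated_objective(x) for x in xs])

# Ordinary-least-squares surrogate: fit a quadratic to the noisy points,
# then minimize the smooth surrogate analytically (vertex of the parabola)
coefs = np.polyfit(xs, ys, deg=2)
x_opt = -coefs[1] / (2 * coefs[0])
print(round(float(x_opt), 2))  # close to the true optimum of 0.4
```

The same pattern extends to kernel regression or spline surrogates: only the fitting step changes, while the grid of simulation runs and the cheap optimization over the fitted function stay the same.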

    Echinodome response to dynamic loading


    Quantifying cognitive and mortality outcomes in older patients following acute illness using epidemiological and machine learning approaches

    Introduction: Cognitive and functional decompensation during acute illness in older people are poorly understood. It remains unclear how delirium, an acute confusional state reflective of cognitive decompensation, is contextualised by baseline premorbid cognition and relates to long-term adverse outcomes. High-dimensional machine learning offers a novel, feasible and enticing approach for stratifying acute illness in older people, improving treatment consistency while optimising future research design. Methods: Longitudinal associations were analysed from the Delirium and Population Health Informatics Cohort (DELPHIC) study, a prospective cohort of people aged ≥70 years resident in Camden, with cognitive and functional ascertainment at baseline and 2-year follow-up, and daily assessments during incident hospitalisation. Second, using routine clinical data from UCLH, I constructed an extreme gradient-boosted trees model predicting 600-day mortality for unselected acute admissions of oldest-old patients, with mechanistic inferences. Third, hierarchical agglomerative clustering was performed to demonstrate structure within DELPHIC participants, with predictive implications for survival and length of stay. Results: i. Delirium is associated with increased rates of cognitive decline and mortality risk, in a dose-dependent manner, with an interaction between baseline cognition and delirium exposure. Those with the highest delirium exposure but also the best premorbid cognition have the "most to lose". ii. High-dimensional multimodal machine learning models can predict mortality in oldest-old populations with 0.874 accuracy. The anterior cingulate and angular gyri, and extracranial soft tissue, are the highest contributory intracranial and extracranial features respectively. iii. Clinically useful acute illness subtypes in older people can be described using longitudinal clinical, functional, and biochemical features.
Conclusions: Interactions between baseline cognition and delirium exposure during acute illness in older patients result in divergent long-term adverse outcomes. Supervised machine learning can robustly predict mortality in oldest-old patients, producing a valuable prognostication tool using routinely collected data, ready for clinical deployment. Preliminary findings suggest possible discernible subtypes within acute illness in older people.
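A minimal sketch of the two modelling ingredients named in the methods (gradient-boosted trees for outcome prediction, hierarchical agglomerative clustering for subtype discovery), using synthetic data in place of the DELPHIC/UCLH clinical records, which are not reproducible here:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.cluster import AgglomerativeClustering
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for routine clinical features and a binary
# mortality outcome; the real thesis models use far richer data.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Gradient-boosted trees, analogous in spirit to the 600-day mortality model
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

# Hierarchical agglomerative clustering to look for patient "subtypes"
labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
print(round(acc, 2), len(set(labels)))
```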

    Modelling the Pultrusion Process of Offshore Wind Turbine Blades

    This thesis is devoted to the numerical modelling of the pultrusion process for industrial products such as wind turbine blades and structural profiles. The main focus is on the thermo-chemical and mechanical analyses of the process, in which the process-induced stresses and shape distortions, together with the thermal and cure developments, are addressed. A detailed survey on pultrusion is presented, including numerical and experimental studies available in the literature since the 1980s. Given the multi-physics nature of the pultrusion process and the large number of variables involved, a satisfactory experimental analysis of the production requires considerable time, which is obviously not a cost-efficient approach. Therefore, the development of suitable computational models is highly desired in order to analyse the process for different composite manufacturing aspects such as heat transfer, curing and solid mechanics.
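The coupled heat-transfer and cure analysis at the core of such thermo-chemical models can be caricatured in one dimension: conduction from heated die walls plus a first-order Arrhenius cure-rate source term. Every constant below is an illustrative placeholder, not a measured resin or die property from the thesis:

```python
import numpy as np

# Minimal explicit 1D sketch of the thermo-chemical coupling in pultrusion:
# heat conduction across the profile thickness, exothermic resin cure.
nx, dx, dt, steps = 50, 1e-3, 0.1, 3000   # grid, spacing [m], step [s], 300 s total
alpha_t = 1e-7                            # thermal diffusivity [m^2/s] (placeholder)
A, E, R = 1e4, 5e4, 8.314                 # Arrhenius pre-exponential, activation energy
H = 50.0                                  # adiabatic temperature rise from exotherm [K]

T = np.full(nx, 300.0)   # temperature field [K]
c = np.zeros(nx)         # degree of cure, 0..1
T[0] = T[-1] = 450.0     # heated die walls held at fixed temperature

for _ in range(steps):
    # First-order cure kinetics: dc/dt = A * exp(-E/RT) * (1 - c)
    dc = dt * A * np.exp(-E / (R * T)) * (1 - c)
    c = np.clip(c + dc, 0.0, 1.0)
    # Explicit finite-difference Laplacian for conduction
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt * alpha_t * lap + H * dc   # conduction plus exothermic heating
    T[0] = T[-1] = 450.0                  # re-impose wall temperatures

# Cure advances fastest near the hot walls, slowest at the centreline
print(round(float(c[1]), 2), round(float(c[nx // 2]), 2))
```

Real pultrusion models add resin-specific kinetics, pull-speed advection, and the mechanical (stress/distortion) analysis on top of this thermo-chemical core.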

    Understanding and Adapting Tree Ensembles: A Training Data Perspective

    Despite the impressive success of deep-learning models on unstructured data (e.g., images, audio, text), tree-based ensembles such as random forests and gradient-boosted trees remain hugely popular and the preferred choice for tabular or structured data, and are regularly used to win challenges on data-competition websites such as Kaggle and DrivenData. Yet despite their impressive predictive performance, tree-based ensembles lack certain characteristics which may limit their further adoption, especially in safety-critical or privacy-sensitive domains such as weather forecasting or predictive medical modeling. This dissertation investigates the shortcomings currently facing tree-based ensembles (lack of explainable predictions, limited uncertainty estimation, and inefficient adaptability to changes in the training data) and posits that numerous improvements can be made by analyzing the relationships between the training data and the resulting learned model. By studying the effects of one or many training examples on tree-based ensembles, we develop solutions which (1) increase their predictive explainability, (2) provide accurate uncertainty estimates for individual predictions, and (3) efficiently adapt learned models to accurately reflect updated training data. This dissertation includes previously published coauthored material.
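One of the themes above, per-prediction uncertainty for a tree ensemble, can be sketched by measuring how strongly the individual trees disagree on a given input. This toy version (not the dissertation's actual method) uses a random forest on synthetic data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic binary task: the label depends only on the sign of feature 0
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] > 0).astype(int)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def tree_disagreement(model, x):
    """Fraction of trees voting for the minority class: 0 means the
    ensemble is unanimous, 0.5 means a perfect split among trees."""
    votes = np.array([t.predict(x.reshape(1, -1))[0] for t in model.estimators_])
    p = votes.mean()
    return min(p, 1 - p)

easy = np.array([3.0, 0.0, 0.0, 0.0, 0.0])  # far from the decision boundary
hard = np.array([0.0, 0.0, 0.0, 0.0, 0.0])  # sits exactly on the boundary
print(tree_disagreement(forest, easy), tree_disagreement(forest, hard))
```

Points near the decision boundary draw split votes from the bootstrapped trees, while points deep inside a class region get near-unanimous votes, giving a cheap per-example uncertainty signal.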