
    Evaluating hedge fund performance: a stochastic dominance approach

    We introduce a general and flexible framework for hedge fund performance evaluation and asset allocation: stochastic dominance (SD) theory. Our approach uses statistical tests for stochastic dominance to compare the returns of hedge funds. We form hedge fund portfolios using SD criteria and examine the out-of-sample performance of these portfolios. Compared to the performance of portfolios of randomly selected hedge funds and of mean-variance efficient hedge funds, our results show that a fund selection method based on SD criteria greatly improves the performance of hedge fund portfolios.
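    The first-order SD criterion behind such fund rankings can be sketched with empirical CDFs. The function name `fosd` and the toy return samples below are illustrative assumptions; the paper's actual statistical tests, which account for sampling error, are more involved.

```python
import numpy as np

def fosd(a, b):
    """Return True if sample `a` first-order stochastically dominates `b`,
    i.e. the empirical CDF of `a` lies at or below that of `b` everywhere."""
    grid = np.sort(np.concatenate([a, b]))
    # Empirical CDFs evaluated on the pooled sample points
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return bool(np.all(Fa <= Fb))

# Toy monthly return samples (hypothetical figures, for illustration only)
fund_a = np.array([0.01, 0.02, 0.03])
fund_b = np.array([0.00, 0.01, 0.02])
print(fosd(fund_a, fund_b))  # True: every quantile of fund_a is at least as high
```

    A portfolio rule in this spirit would retain only funds that are not dominated by any other fund in the sample.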

    Damage identification in structural health monitoring: a brief review from its implementation to the Use of data-driven applications

    The damage identification process provides relevant information about the current state of a structure under inspection, and it can be approached from two different points of view. The first approach uses data-driven algorithms, which are usually associated with the collection of data using sensors. Data are subsequently processed and analyzed. The second approach uses models to analyze information about the structure. In the latter case, the overall performance of the approach is associated with the accuracy of the model and the information that is used to define it. Although both approaches are widely used, data-driven algorithms are preferred in most cases because they afford the ability to analyze data acquired from sensors and to provide a real-time solution for decision making; however, these approaches require high-performance processors due to their high computational cost. As a contribution to researchers working with data-driven algorithms and applications, this work presents a brief review of data-driven algorithms for damage identification in structural health-monitoring applications. This review covers damage detection, localization, classification, extension, and prognosis, as well as the development of smart structures. The literature is systematically reviewed according to the natural steps of a structural health-monitoring system. This review also includes information on the types of sensors used as well as on the development of data-driven algorithms for damage identification.
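    As a minimal illustration of the data-driven approach, damage detection is often cast as novelty detection on vibration features. The Mahalanobis-distance sketch below, with invented feature values, is one common baseline technique rather than any specific method from the review.

```python
import numpy as np

def fit_baseline(X):
    """Fit mean and inverse covariance of undamaged-state features."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return mu, cov_inv

def damage_index(x, mu, cov_inv):
    """Squared Mahalanobis distance of a new reading from the baseline."""
    d = x - mu
    return float(d @ cov_inv @ d)

# Hypothetical features: two natural frequencies (Hz) of a healthy structure
rng = np.random.default_rng(1)
baseline = rng.normal([10.0, 25.0], [0.1, 0.2], size=(200, 2))
mu, cov_inv = fit_baseline(baseline)

healthy = np.array([10.05, 24.9])
damaged = np.array([9.2, 23.0])  # stiffness loss shifts frequencies down
print(damage_index(healthy, mu, cov_inv), damage_index(damaged, mu, cov_inv))
```

    A reading whose index exceeds a threshold calibrated on the baseline data would be flagged for closer inspection.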

    Addressing Uncertainty in TMDLs: Short Course at Arkansas Water Resources Center 2001 Annual Conference

    Management of a critical natural resource like water requires information on the status of that resource. The US Environmental Protection Agency (EPA) reported in the 1998 National Water Quality Inventory that more than 291,000 miles of assessed rivers and streams and 5 million acres of lakes do not meet State water quality standards. This inventory represents a compilation of State assessments of 840,000 miles of rivers and 17.4 million acres of lakes; a 22 percent increase in river miles and 4 percent increase in lake acres over the 1996 reports. Siltation, bacteria, nutrients and metals were the leading pollutants of impaired waters, according to EPA. The sources of these pollutants were presumed to be runoff from agricultural lands and urban areas. EPA suggests that the majority of Americans, over 218 million, live within ten miles of a polluted waterbody. This seems to contradict the recent proclamations of the success of the Clean Water Act, the Nation's water pollution control law. EPA also claims that, while water quality is still threatened in the US, the amount of water safe for fishing and swimming has doubled since 1972, and that the number of people served by sewage treatment plants has more than doubled.

    An Empirical Analysis of the Medical Informed Consent Doctrine: Search for a Standard of Disclosure

    Informed consent and its conceptual equivalents, e.g., the right to know, are increasingly important. The author discusses the development of the informed consent doctrine in tort cases and attempts to evaluate the consistency of its application. He concludes that it is difficult to separate that which must be disclosed from that which need not be. He also argues that much remains to be done in achieving the objectives of the informed consent doctrine.

    Domestic vs. International Correlations of Interest Rate Maturities

    The association between long and short interest rates is traditionally envisaged from a purely domestic perspective, where it is believed to be an empirical regularity. Hence, the weakening of this relationship in the first half of the 2000s has represented a conundrum, calling for a reassessment of the term structure and the conduct of monetary policy. Some commentators have called for investigations into the international dimension of this puzzle. In this paper we therefore employ recent advances in panel data econometrics to investigate the co-movement of interest rate maturities at both the domestic and international levels for a sample of industrial countries. Specifically, we use the Ng (2006) spacings correlations approach to examine interest rate correlations between and within countries. Compared to alternatives, this method does not just estimate bivariate correlations, but also assesses the degree of panel correlation without being restricted by the assumption of either zero or complete panel correlation. We find very small correlations between the different maturities of domestic rates and much higher correlations of international rates. Moreover, international correlations between long rates are significantly higher than those between short rates. These findings suggest a scenario for national monetary policy in which financial globalization may have changed the transmission mechanism, advocating a search for the "missing" yield curve in its international dimension.
    Keywords: financial globalization, yield spread, interest rates, spacings correlations
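    A rough numerical illustration of the between- versus within-country comparison (this is not the Ng (2006) spacings estimator, and the country labels and factor loadings are invented): when long rates load on a common global factor, cross-country long-rate correlations dominate domestic cross-maturity correlations.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500
global_long = rng.normal(size=T)  # assumed common factor in long rates

rates = {}
for c in ["US", "DE", "JP"]:
    short = rng.normal(size=T)                       # purely domestic shocks
    long_ = 0.8 * global_long + 0.6 * rng.normal(size=T)
    rates[c] = {"short": short, "long": long_}

# Within-country: short vs long maturity, averaged over countries
within = np.mean([np.corrcoef(rates[c]["short"], rates[c]["long"])[0, 1]
                  for c in rates])
# Between-country: long rates across country pairs
pairs = [("US", "DE"), ("US", "JP"), ("DE", "JP")]
across_long = np.mean([np.corrcoef(rates[a]["long"], rates[b]["long"])[0, 1]
                       for a, b in pairs])
print(within, across_long)  # long rates co-move internationally, not domestically
```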

    Bayesian testing of many hypotheses × many genes: A study of sleep apnea

    Substantial statistical research has recently been devoted to the analysis of large-scale microarray experiments which provide a measure of the simultaneous expression of thousands of genes in a particular condition. A typical goal is the comparison of gene expression between two conditions (e.g., diseased vs. nondiseased) to detect genes which show differential expression. Classical hypothesis testing procedures have been applied to this problem and more recent work has employed sophisticated models that allow for the sharing of information across genes. However, many recent gene expression studies have an experimental design with several conditions that requires an even more involved hypothesis testing approach. In this paper, we use a hierarchical Bayesian model to address the situation where there are many hypotheses that must be simultaneously tested for each gene. In addition to having many hypotheses within each gene, our analysis also addresses the more typical multiple comparison issue of testing many genes simultaneously. We illustrate our approach with an application to a study of genes involved in obstructive sleep apnea in humans.
    Comment: Published at http://dx.doi.org/10.1214/09-AOAS241 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
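    The flavor of Bayesian simultaneous testing across many genes can be sketched with a simple two-groups model (this is not the paper's hierarchical model, and the mixture parameters are assumed known purely for illustration): each gene's z-statistic is null N(0, 1) with prior probability pi0, or non-null with inflated variance, and the posterior null probability ("local fdr") ranks genes.

```python
import numpy as np

def npdf(x, sd=1.0):
    """Normal density, used for both mixture components."""
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

pi0, tau2 = 0.9, 4.0          # assumed known mixture parameters (illustrative)
rng = np.random.default_rng(3)
n = 5000                      # number of genes

# Simulate z-statistics: mostly null, a minority with extra signal variance
is_null = rng.random(n) < pi0
z = np.where(is_null,
             rng.normal(size=n),
             rng.normal(scale=np.sqrt(1.0 + tau2), size=n))

# Posterior probability that each gene is null, given its z-statistic
f0 = npdf(z)
f1 = npdf(z, sd=np.sqrt(1.0 + tau2))
post_null = pi0 * f0 / (pi0 * f0 + (1.0 - pi0) * f1)

flagged = post_null < 0.2     # report genes with low posterior null probability
print(int(flagged.sum()), "of", n, "genes flagged")
```

    In a full hierarchical analysis the mixture parameters would themselves get priors (or be estimated), and each gene would carry several such hypotheses at once.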

    Study for identification of Beneficial Uses of Space (BUS) (phase 2). Volume 1: Executive summary

    A study was conducted to analyze the benefits to the world which can be realized from space manufacturing processes. The study envisaged the use of the space shuttle and manned space stations for the purpose. For each proposed operation a data base and rationale were established for those processes or portions of processes which would be improved by performance in the orbital environment. The four experiments which were recommended for the initial investigation are identified. The procedures for conducting the weightless manufacturing processes are outlined.

    Variable-fidelity optimization of microwave filters using co-kriging and trust regions

    In this paper, a variable-fidelity optimization methodology for simulation-driven design optimization of filters is presented. We exploit electromagnetic (EM) simulations of different accuracy. Densely sampled but cheap low-fidelity EM data is utilized to create a fast kriging interpolation model (the surrogate), subsequently used to find an optimum design of the high-fidelity EM model of the filter under consideration. The high-fidelity data accumulated during the optimization process is combined with the existing surrogate using the co-kriging technique. This allows us to improve the surrogate model accuracy while approaching the optimum. The convergence of the algorithm is ensured by embedding it into the trust region framework that adaptively adjusts the search radius based on the quality of the predictions made by the co-kriging model. Three filter design cases are given for demonstration and verification purposes.
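    The trust-region mechanism can be sketched in one dimension. The sketch below substitutes a simple additive value-and-slope correction for co-kriging, and the test functions are invented, so it only illustrates the radius-update logic (expand on trustworthy predictions, shrink on poor ones) rather than the paper's method.

```python
import numpy as np

def f_hi(x):  # stand-in for the expensive high-fidelity EM simulation
    return (x - 2.0) ** 2 + 0.3 * np.sin(5.0 * x)

def f_lo(x):  # cheap, biased low-fidelity model
    return (x - 1.7) ** 2

def grad(f, x, h=1e-6):  # central finite difference
    return (f(x + h) - f(x - h)) / (2.0 * h)

x, radius = 0.0, 1.0
fx = f_hi(x)
for _ in range(30):
    # Correct the cheap model to match f_hi's value and slope at the current x
    shift = fx - f_lo(x)
    slope = grad(f_hi, x) - grad(f_lo, x)
    def surrogate(s):
        return f_lo(s) + shift + slope * (s - x)
    # Minimize the corrected surrogate inside the trust region
    cand = np.linspace(x - radius, x + radius, 201)
    x_new = cand[np.argmin(surrogate(cand))]
    f_new = f_hi(x_new)
    pred = fx - surrogate(x_new)              # predicted improvement
    rho = (fx - f_new) / pred if pred > 1e-12 else 0.0
    if rho > 0.75:
        radius *= 2.0                         # trustworthy surrogate: expand
    elif rho < 0.25:
        radius *= 0.5                         # poor prediction: shrink
    if f_new < fx:                            # accept only improving steps
        x, fx = x_new, f_new
print(f"x ~ {x:.3f}, f_hi ~ {fx:.3f}")
```

    In the paper's setting the corrected surrogate is instead a co-kriging model rebuilt from all accumulated high-fidelity samples, but the accept/expand/shrink loop plays the same role.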