UK mutual fund performance
Using a comprehensive data set on surviving and non-surviving UK equity mutual funds (April 1975 to December 2002), this study uses a bootstrap methodology to distinguish between 'skill' and 'luck' in fund performance. This methodology allows for non-normality in the idiosyncratic risks of the funds, a major issue when considering the 'best' and 'worst' funds, which are the funds investors are most interested in. The study points to the existence of genuine stock-picking ability among a relatively small number of top-performing UK equity mutual funds (i.e. performance which is not solely due to good luck). At the negative end of the performance scale, the analysis strongly rejects the hypothesis that most poorly performing funds are merely unlucky; most of these funds demonstrate 'bad skill'. The study also examines the economic and statistical significance of performance persistence. Sorting funds into deciles based on past raw returns or on past 4-factor alphas, it finds strong evidence that past loser funds continue to perform badly in terms of their future 4-factor alphas, while there is little evidence that past winner funds provide future positive risk-adjusted performance. However, for relatively small 'fund-of-fund' portfolios of past winners, evidence of positive persistence is found. Using a cross-section bootstrap approach, the study derives the empirical distribution of final wealth at a 10-year horizon and finds that if transaction costs exceed 2.5% per fund round trip, a passive strategy seems at least as good as the active strategies examined, while at transaction costs of 5% the passive strategy is most probably superior. Finally, the study examines the market-timing performance of the funds. Using a nonparametric test procedure, it evaluates both unconditional market timing and timing conditional on publicly available information. A relatively small number of funds (around 1%) are found to time the market successfully, while market mistiming is relatively prevalent.
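A minimal sketch of this kind of skill-versus-luck bootstrap, assuming simulated monthly returns and a generic 4-factor model; the fund counts, data, and parameter values below are illustrative placeholders, not the study's actual procedure:

```python
# Hypothetical sketch of a skill-vs-luck bootstrap; fund data, factor
# returns, and all parameters are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
T, N = 240, 50                                  # months, funds
factors = rng.normal(0.0, 0.03, (T, 4))         # stand-in 4-factor returns
returns = factors @ rng.normal(0.5, 0.3, (4, N)) + rng.normal(0, 0.05, (T, N))

X = np.column_stack([np.ones(T), factors])          # intercept + 4 factors
beta, *_ = np.linalg.lstsq(X, returns, rcond=None)  # row 0 holds the alphas
alpha_actual = beta[0]
factor_part = X[:, 1:] @ beta[1:]                   # fitted returns, alpha removed
resid = returns - X @ beta

# Bootstrap under the null of zero skill: resample residual months with
# replacement, rebuild "luck only" fund returns, and re-estimate alphas.
B = 1000
best_luck = np.empty(B)
for b in range(B):
    idx = rng.integers(0, T, T)
    a_null = np.linalg.lstsq(X, factor_part + resid[idx], rcond=None)[0][0]
    best_luck[b] = a_null.max()          # best alpha achievable by luck alone

# If the actual best alpha beats most luck-only best alphas, genuine
# stock-picking skill is the more plausible explanation.
p_value = (best_luck >= alpha_actual.max()).mean()
print(f"best actual alpha: {alpha_actual.max():.4f}, bootstrap p: {p_value:.3f}")
```

Comparing the best (or worst) actual fund against the bootstrap distribution of the best (or worst) luck-only fund is what lets the approach tolerate non-normal idiosyncratic risk in the extreme performers.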
Near infrared reflectance spectroscopy for the determination of free gossypol in cottonseed meal
Gossypol is a toxic polyphenolic compound produced by the pigment glands of the cotton plant. The free gossypol content of cottonseed meal (CSM) is commonly determined by the American Oil Chemists' Society (AOCS) wet chemistry method. The AOCS method, however, is laboratory-intensive, time-consuming, and therefore not practical for quick field analyses. To determine whether the free gossypol content of CSM could be predicted by near infrared reflectance spectroscopy (NIRS), CSM samples were collected from around the world. All CSM samples were ground, and a portion of each was analyzed for free gossypol by the AOCS procedure (reference data) and by NIRS (reflectance data). Both reflectance and reference data were combined in calibration. The coefficient of determination (r²) and standard error of prediction (SEP) were used to assess calibration accuracy. The r² was 0.728 and the SEP was 0.034 for the initial calibration that included samples from around the world. However, the r² and SEP improved to 0.921 and 0.014, respectively, when the calibration was made using CSM samples from the United States only. These results indicate that a general prediction equation can be developed to predict the free gossypol content of CSM by NIRS. From a practical standpoint, NIRS technology provides a method for quickly assessing whether a particular batch of CSM has a free gossypol content low enough to be suitable for use in poultry diets.

This research was supported in part by grant 05-635GA from the Georgia Cotton Commission, Perry, G
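As an illustration of the two accuracy measures used here, the sketch below fits a calibration on synthetic spectra and computes r² and the bias-corrected SEP on a validation set; PLS regression is assumed (the abstract does not name the regression method), and every number is a placeholder:

```python
# Illustrative NIRS-style calibration assessment; spectra and gossypol
# values are synthetic stand-ins, and PLS regression is an assumption.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 120, 700
spectra = rng.normal(size=(n_samples, n_wavelengths))       # reflectance data
gossypol = (spectra[:, :5].sum(axis=1) * 0.01 + 0.05
            + rng.normal(0, 0.005, n_samples))              # AOCS reference data

X_cal, X_val, y_cal, y_val = train_test_split(
    spectra, gossypol, test_size=0.3, random_state=0)

model = PLSRegression(n_components=8).fit(X_cal, y_cal)
pred = model.predict(X_val).ravel()

r2 = np.corrcoef(pred, y_val)[0, 1] ** 2                    # coeff. of determination
bias = (pred - y_val).mean()
sep = np.sqrt(((pred - y_val - bias) ** 2).sum() / (len(y_val) - 1))
print(f"r2 = {r2:.3f}, SEP = {sep:.4f} (% free gossypol)")
```

Defining SEP as the bias-corrected standard deviation of the validation residuals is the usual chemometrics convention; a lower SEP at a given r² means tighter predictions around the AOCS reference values.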
LD Hub: a centralized database and web interface to perform LD score regression that maximizes the potential of summary level GWAS data for SNP heritability and genetic correlation analysis
Motivation: LD score regression is a reliable and efficient method of using genome-wide association study (GWAS) summary-level results data to estimate the SNP heritability of complex traits and diseases, partition this heritability into functional categories, and estimate the genetic correlation between different phenotypes. Because the method relies on summary-level results data, LD score regression is computationally tractable even for very large sample sizes. However, publicly available GWAS summary-level data are typically stored in different databases and have different formats, making it difficult to apply LD score regression to estimate genetic correlations across many different traits simultaneously. Results: In this manuscript, we describe LD Hub, a centralized database of summary-level GWAS results for 173 diseases/traits from different publicly available resources/consortia, and a web interface that automates the LD score regression analysis pipeline. To demonstrate functionality and validate our software, we replicated previously reported LD score regression analyses of 49 traits/diseases using LD Hub, and estimated SNP heritability and the genetic correlation across the different phenotypes. We also present new results obtained by uploading a recent atopic dermatitis GWAS meta-analysis to examine the genetic correlation between the condition and other potentially related traits. In response to the growing availability of publicly accessible GWAS summary-level results data, our database and the accompanying web interface will ensure maximal uptake of the LD score regression methodology, provide a useful database for the public dissemination of GWAS results, and provide a method for easily screening hundreds of traits for overlapping genetic aetiologies.
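A minimal sketch of the regression at the core of this pipeline, under the simplest LDSC model with no confounding; the LD scores and test statistics below are simulated placeholders, not output from LD Hub:

```python
# Toy LD score regression: per-SNP chi-square statistics are regressed on
# LD scores, and the slope scales with SNP heritability.  All inputs are
# simulated stand-ins, not real GWAS summary statistics.
import numpy as np

rng = np.random.default_rng(2)
M, N = 100_000, 50_000          # number of SNPs, GWAS sample size
h2_true = 0.4                   # heritability used to simulate the data
ld_scores = rng.gamma(shape=2.0, scale=50.0, size=M)

# LDSC model without confounding: E[chi2_j] = N * h2 * l_j / M + 1
chi2 = 1 + N * h2_true * ld_scores / M + rng.normal(0, 1.0, M)

slope, intercept = np.polyfit(ld_scores, chi2, 1)
h2_hat = slope * M / N          # invert the slope to recover heritability
print(f"estimated h2 = {h2_hat:.3f}, intercept = {intercept:.2f}")
```

In the full method the intercept absorbs confounding such as population stratification, which is one reason the approach is robust when applied across heterogeneous public summary data sets.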
Schottky barrier heights at polar metal/semiconductor interfaces
Using a first-principles pseudopotential approach, we have investigated the Schottky barrier heights of abrupt Al/Ge, Al/GaAs, Al/AlAs, and Al/ZnSe (100) junctions, and their dependence on the semiconductor chemical composition and surface termination. A model based on linear-response theory is developed, which provides a simple yet accurate description of the barrier-height variations with the chemical composition of the semiconductor. The larger barrier values found for the anion-terminated than for the cation-terminated surfaces are explained in terms of the screened charge of the polar semiconductor surface and its image charge at the metal surface. Atomic-scale computations show how the classical image-charge concept, valid for charges placed at large distances from the metal, extends to distances shorter than the decay length of the metal-induced gap states.
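For reference, the classical image-charge interaction the abstract invokes has the standard textbook form (background, not a formula quoted from the paper): for a point charge q at distance z from a metal surface, screened by a medium of dielectric constant ε,

\[
V_{\mathrm{im}}(z) \;=\; -\,\frac{q^{2}}{16\pi\varepsilon_{0}\,\varepsilon\,z} .
\]

The abstract's point is that this asymptotic form remains meaningful surprisingly close to the metal, down to distances comparable to the decay length of the metal-induced gap states.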
From core collapse to superluminous: The rates of massive stellar explosions from the Palomar Transient Factory
We present measurements of the local core-collapse supernova (CCSN) rate using SN discoveries from the Palomar Transient Factory (PTF). We use a Monte Carlo simulation of hundreds of millions of SN light-curve realizations, coupled with the detailed PTF survey detection efficiencies, to forward model the SN rates in PTF. Using a sample of 86 CCSNe, including 26 stripped-envelope SNe (SESNe), we show that the overall CCSN volumetric rate is r_v^{CC} = 9.10^{+1.56}_{-1.27} × 10^{-5} SNe yr^{-1} Mpc^{-3} h_{70}^{3} at ⟨z⟩ = 0.028, and the SESN volumetric rate is r_v^{SE} = 2.41^{+0.81}_{-0.64} × 10^{-5} SNe yr^{-1} Mpc^{-3} h_{70}^{3}. We further measure a volumetric rate for hydrogen-free superluminous SNe (SLSNe-I), using eight events at z ≤ 0.2, of r_v^{SLSN-I} = 35^{+25}_{-13} SNe yr^{-1} Gpc^{-3} h_{70}^{3}, which represents the most precise SLSN-I rate measurement to date. Using a simple cosmic star formation history to adjust these volumetric rate measurements to the same redshift, we measure a local ratio of SLSNe-I to SESNe of ∼1/810^{+1500}_{-94}, and of SLSNe-I to all CCSN types of ∼1/3500^{+2800}_{-720}. However, using host-galaxy stellar mass as a proxy for metallicity, we also show that this ratio is strongly metallicity dependent: in low-mass (log M⋆ < 9.5 M⊙) galaxies, which are the only environments that host SLSNe-I in our sample, we measure an SLSN-I to SESN fraction of 1/300^{+380}_{-170}, and 1/1700^{+1800}_{-720} for all CCSNe. We further investigate the SN rates as a function of host-galaxy stellar mass, and show that the specific rates of all CCSNe decrease with increasing stellar mass.
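A toy version of this forward-modelling logic, with an invented detection-efficiency curve and invented survey volume and duration (the real analysis simulates hundreds of millions of light curves against the actual PTF efficiencies):

```python
# Toy rate forward model: draw SNe from a trial volumetric rate, apply a
# detection efficiency, and find the rate that reproduces the observed
# count.  Efficiency curve and survey numbers are invented placeholders.
import numpy as np

rng = np.random.default_rng(3)
observed_ccsne = 86
volume_mpc3 = 1.0e6            # assumed effective comoving volume (Mpc^3)
years = 4.0                    # assumed survey duration

def detection_efficiency(z):
    """Placeholder efficiency falling off with redshift."""
    return np.clip(1.0 - z / 0.05, 0.0, 1.0)

def simulate_detections(rate):  # rate in SNe yr^-1 Mpc^-3
    n_true = rng.poisson(rate * volume_mpc3 * years)
    z = rng.uniform(0.0, 0.05, n_true)          # toy redshift distribution
    return (rng.random(n_true) < detection_efficiency(z)).sum()

# Scan trial rates; keep the one whose mean simulated yield matches the data.
trial_rates = np.linspace(1e-5, 2e-4, 40)
yields = [np.mean([simulate_detections(r) for _ in range(200)])
          for r in trial_rates]
best = trial_rates[np.argmin(np.abs(np.array(yields) - observed_ccsne))]
print(f"best-fit rate ~ {best:.2e} SNe yr^-1 Mpc^-3")
```

Replacing the single observed count with per-type, per-redshift counts and realistic light curves turns this sketch into the kind of likelihood-based rate inference the abstract describes.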
Towards Machine Wald
The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed by humans because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to "think" as humans are able to do when faced with uncertainty is challenging in several major ways: (1) finding optimal statistical models remains to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite-dimensional, whereas calculus on a computer is necessarily discrete and finite. With this purpose, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification, and Information-Based Complexity.
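In standard Wald-style decision-theoretic notation (background notation, not quoted from the paper), the kind of worst-case optimal estimator the abstract refers to solves

\[
\theta^{\star} \in \arg\min_{\theta \in \Theta}\; \sup_{\mu \in \mathcal{A}}\; \mathbb{E}_{X \sim \mu}\!\left[L\big(\theta(X), \mu\big)\right],
\]

where \(\mathcal{A}\) is the (typically infinite-dimensional) set of admissible scenarios consistent with the available information and L is a loss function; challenges (1) and (2) above concern, respectively, the well-posedness of this optimization problem and its reduction to finite, discrete computation.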