    Improved Methods for Detecting Acquirer Skills

    Large merger and acquisition (M&A) samples feature the pervasive presence of repeat acquirers. They offer an attractive empirical context for revealing acquirer skills (persistent superior performance). But M&A panel data are quite heterogeneous; only a few acquirers undertake many M&As. Does this feature affect statistical inference? To investigate the issue, our study relies on simulations based on real data sets. The results point to a bias, such that the existing statistical support for the presence of acquirer skills appears compromised. We introduce a new resampling method for detecting acquirer skills that has attractive statistical properties (size and power) for samples of acquirers completing at least five acquisitions. The proposed method confirms the presence of acquirer skills, but only for a marginal fraction of the acquirer population. This result is robust to endogenous attrition and to varying time periods between successive transactions. Claims that acquirer skills are a first-order factor explaining the cross-section of acquirer cumulative abnormal returns appear overstated.
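    The abstract does not spell out the resampling scheme, but the sketch below illustrates the general kind of test at stake: a permutation test asking whether the dispersion of acquirer-level mean abnormal returns exceeds what chance alone would produce. The data, the dispersion statistic, and the permutation scheme are hypothetical stand-ins, not the paper's method.

```python
# Hedged sketch of a permutation test for "acquirer skill" (persistent abnormal
# returns) on a panel of deal-level CARs. All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: an acquirer id and a cumulative abnormal return per deal.
deal_counts = rng.integers(5, 15, size=200)        # each acquirer has >= 5 deals
acquirers = np.repeat(np.arange(200), deal_counts)
cars = rng.normal(0.0, 0.08, size=acquirers.size)

def skill_statistic(ids, y):
    """Dispersion of acquirer mean CARs; large values suggest persistent skill."""
    return np.array([y[ids == a].mean() for a in np.unique(ids)]).var()

observed = skill_statistic(acquirers, cars)

# Null hypothesis: deals are exchangeable across acquirers (no persistent skill).
n_perm = 2000
null = np.array([skill_statistic(acquirers, rng.permutation(cars))
                 for _ in range(n_perm)])
p_value = (1 + (null >= observed).sum()) / (n_perm + 1)
print(f"observed dispersion {observed:.5f}, permutation p-value {p_value:.3f}")
```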

    Maximum A Posteriori Resampling of Noisy, Spatially Correlated Data

    In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution for dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application. We present here an alternative to filtering: a newly developed method for correcting noise in data by finding the “best” value given available information. The motivating rationale is that data points that are close to each other in space cannot differ by “too much,” where “too much” is governed by the field covariance. Data with large uncertainties will frequently violate this condition and therefore ought to be corrected, or “resampled.” Our solution for resampling is determined by the maximum of the a posteriori density function defined by the intersection of (1) the data error probability density function (pdf) and (2) the conditional pdf, determined by the geostatistical kriging algorithm applied to proximal data values. A maximum a posteriori solution can be computed sequentially going through all the data, but the solution depends on the order in which the data are examined. We approximate the global a posteriori solution by randomizing this order and taking the average. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum a posteriori resampling algorithm. The method is also applied to three marine geology/geophysics data examples, demonstrating the viability of the method for diverse applications: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) side-scan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise. Compared to filtering, maximum a posteriori resampling provides an objective and optimal method for reducing noise, and better preservation of the statistical properties of the sampled field. The primary disadvantage is that maximum a posteriori resampling is a computationally expensive procedure.
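    The core computation is the pointwise product of two Gaussian densities, the data error pdf and the kriging conditional pdf, whose mode is a precision-weighted average of the observed value and the kriged estimate. The sketch below shows a minimal 1D version under assumed simplifications (Gaussian errors, a known exponential covariance, simple kriging on nearest neighbours, randomized visiting order averaged over passes); the published algorithm handles 2D fields and calibrated covariances.

```python
# Hedged sketch of maximum a posteriori resampling on a 1D transect, assuming
# Gaussian data errors and a known exponential covariance.
import numpy as np

rng = np.random.default_rng(1)

def cov(h, sill=1.0, corr_len=5.0):
    """Exponential covariance model."""
    return sill * np.exp(-np.abs(h) / corr_len)

def krige(x0, x_nb, z_nb, sill=1.0, corr_len=5.0, nugget=1e-6):
    """Simple kriging estimate and variance at x0 from neighbouring values."""
    K = cov(x_nb[:, None] - x_nb[None, :], sill, corr_len) + nugget * np.eye(x_nb.size)
    k = cov(x0 - x_nb, sill, corr_len)
    w = np.linalg.solve(K, k)
    return w @ z_nb, max(sill - w @ k, 1e-9)

def map_resample(x, z, sigma_d, n_passes=20, n_nb=8):
    """Average the sequential MAP corrections over randomized visiting orders."""
    out = np.zeros_like(z)
    for _ in range(n_passes):
        zi = z.copy()
        for i in rng.permutation(z.size):
            nb = np.argsort(np.abs(x - x[i]))[1:n_nb + 1]   # proximal data
            m_k, v_k = krige(x[i], x[nb], zi[nb])
            # MAP of the product of two Gaussians: precision-weighted average
            # of the original datum and the kriged estimate.
            zi[i] = (z[i] / sigma_d[i]**2 + m_k / v_k) / (1 / sigma_d[i]**2 + 1 / v_k)
        out += zi
    return out / n_passes

# Synthetic noisy samples of a smooth field.
x = np.linspace(0, 50, 200)
truth = np.sin(x / 4.0)
sigma_d = np.full(x.size, 0.4)
z = truth + rng.normal(0, sigma_d)
z_map = map_resample(x, z, sigma_d)
print("RMS error, raw data      :", np.sqrt(np.mean((z - truth) ** 2)))
print("RMS error, MAP resampled :", np.sqrt(np.mean((z_map - truth) ** 2)))
```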

    Why has (reasonably accurate) Automatic Speech Recognition been so hard to achieve?

    Hidden Markov models (HMMs) have been successfully applied to automatic speech recognition for more than 35 years in spite of the fact that a key HMM assumption -- the statistical independence of frames -- is obviously violated by speech data. In fact, this data/model mismatch has inspired many attempts to modify or replace HMMs with alternative models that are better able to take into account the statistical dependence of frames. However, it is fair to say that in 2010 the HMM is the consensus model of choice for speech recognition and that HMMs are at the heart of both commercially available products and contemporary research systems. In this paper we present a preliminary exploration aimed at understanding how speech data depart from HMMs and what effect this departure has on the accuracy of HMM-based speech recognition. Our analysis uses standard diagnostic tools from the field of statistics -- hypothesis testing, simulation and resampling -- which are rarely used in the field of speech recognition. Our main result, obtained by novel manipulations of real and resampled data, demonstrates that real data have statistical dependency and that this dependency is responsible for significant numbers of recognition errors. We also demonstrate, using simulation and resampling, that if we 'remove' the statistical dependency from data, then the resulting recognition error rates become negligible. Taken together, these results suggest that a better understanding of the structure of the statistical dependency in speech data is a crucial first step towards improving HMM-based speech recognition.
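    One way to read the "remove the dependency" experiment is as resampling frames within the states of a given alignment, which preserves the state-conditional frame distributions an HMM models while destroying frame-to-frame dependence. The sketch below illustrates this idea on synthetic AR(1) "features" and a fixed alignment; both are placeholders, not the paper's actual diagnostic pipeline.

```python
# Hedged sketch: shuffle frames within each aligned state so that frames become
# (conditionally) independent given the state sequence, as an HMM assumes.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "speech": an AR(1) process stands in for temporally dependent
# acoustic features; a fixed 10-state alignment, 30 frames per state.
noise = rng.normal(size=(300, 13))
features = np.zeros((300, 13))
features[0] = noise[0]
for t in range(1, 300):
    features[t] = 0.9 * features[t - 1] + noise[t]
alignment = np.repeat(np.arange(10), 30)

def resample_within_states(feats, states):
    """Permute frames independently within each state, breaking frame-to-frame
    dependence while keeping the state-conditional frame distribution."""
    out = feats.copy()
    for s in np.unique(states):
        idx = np.where(states == s)[0]
        out[idx] = feats[rng.permutation(idx)]
    return out

resampled = resample_within_states(features, alignment)

def lag1(x):
    """Lag-1 autocorrelation of a 1D series."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print("lag-1 autocorrelation, original frames :", lag1(features[:, 0]))
print("lag-1 autocorrelation, resampled frames:", lag1(resampled[:, 0]))
```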

    Diverse correlation structures in gene expression data and their utility in improving statistical inference

    It is well known that correlations in microarray data represent a serious nuisance that deteriorates the performance of gene selection procedures. This paper is intended to demonstrate that the correlation structure of microarray data is also a rich source of useful information. We discuss distinct correlation substructures revealed in microarray gene expression data by an appropriate ordering of genes. These substructures include stochastic proportionality of expression signals in a large percentage of all gene pairs, negative correlations hidden in ordered gene triples, and a long sequence of weakly dependent random variables associated with ordered pairs of genes. The reported striking regularities are of general biological interest, and they also have far-reaching implications for the theory and practice of statistical methods for microarray data analysis. We illustrate the latter point with a method for testing differential expression of nonoverlapping gene pairs. While designed for testing a different null hypothesis, this method provides an order of magnitude more accurate control of the type I error rate than conventional methods of individual gene expression profiling. In addition, the method is robust to technical noise. Quantitative inference of the correlation structure has the potential to extend the analysis of microarray data far beyond currently practiced methods. Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org); http://dx.doi.org/10.1214/07-AOAS120.
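    The gene-pair test itself is not specified in the abstract; as a rough illustration of what testing "differential expression of nonoverlapping gene pairs" can look like, the sketch below compares the within-sample log-ratio of each disjoint gene pair between two conditions with a two-sample t-test. The simulated data and the choice of statistic are assumptions, not the authors' procedure.

```python
# Hedged sketch, not the authors' exact method: test each nonoverlapping gene
# pair by comparing its per-sample log-ratio between two conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

n_genes, n_a, n_b = 1000, 20, 20
expr_a = rng.lognormal(mean=5.0, sigma=1.0, size=(n_genes, n_a))  # condition A
expr_b = rng.lognormal(mean=5.0, sigma=1.0, size=(n_genes, n_b))  # condition B

# Nonoverlapping pairs: (gene 0, gene 1), (gene 2, gene 3), ...
pairs = np.arange(n_genes).reshape(-1, 2)

def pair_test(g1, g2):
    """Two-sample t-test on the per-sample log-ratio of a gene pair."""
    ratio_a = np.log(expr_a[g1]) - np.log(expr_a[g2])
    ratio_b = np.log(expr_b[g1]) - np.log(expr_b[g2])
    return stats.ttest_ind(ratio_a, ratio_b).pvalue

p_values = np.array([pair_test(g1, g2) for g1, g2 in pairs])
print("pairs with p < 0.05 under the null:", (p_values < 0.05).sum(),
      "out of", pairs.shape[0])
```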

    Futility Analysis in the Cross-Validation of Machine Learning Models

    Many machine learning models have important structural tuning parameters that cannot be directly estimated from the data. The common tactic for setting these parameters is to use resampling methods, such as cross-validation or the bootstrap, to evaluate a candidate set of values and choose the best based on some pre-defined criterion. Unfortunately, this process can be time-consuming. However, the model tuning process can be streamlined by adaptively resampling candidate values so that settings that are clearly suboptimal can be discarded. The notion of futility analysis is introduced in this context. An example illustrates how adaptive resampling can be used to reduce training time, and simulation studies are used to understand how the potential speed-up is affected by parallel processing techniques.
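    A minimal sketch of the futility idea, under assumptions not taken from the paper: each tuning candidate is evaluated resample by resample, and after a burn-in any candidate whose interim performance is significantly worse than the current leader is dropped. The simulated "accuracies" stand in for a real fit/evaluate cycle, and the one-sided t-test threshold is an illustrative filtering rule.

```python
# Hedged sketch of adaptive resampling with a futility rule.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

true_acc = np.array([0.70, 0.74, 0.80, 0.81, 0.79])   # hypothetical candidates
n_resamples, burn_in, alpha = 50, 10, 0.01
active = list(range(true_acc.size))
results = {c: [] for c in active}

for b in range(n_resamples):
    for c in list(active):
        # One resample's accuracy estimate for candidate c (placeholder for a
        # real fit/evaluate cycle on a bootstrap or cross-validation split).
        results[c].append(true_acc[c] + rng.normal(0, 0.03))
    if b + 1 >= burn_in and len(active) > 1:
        means = {c: np.mean(results[c]) for c in active}
        best = max(means, key=means.get)
        for c in list(active):
            if c == best:
                continue
            # Futility: one-sided test that candidate c is worse than the leader.
            test = stats.ttest_ind(results[c], results[best], alternative="less")
            if test.pvalue < alpha:
                active.remove(c)

print("surviving candidates:", active)
print("resamples run per candidate:", {c: len(v) for c, v in results.items()})
```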

    Gap bootstrap methods for massive data sets with an application to transportation engineering

    In this paper we describe two bootstrap methods for massive data sets. Naive applications of common resampling methodology are often impractical for massive data sets because of the computational burden and complex patterns of inhomogeneity. In contrast, the proposed methods exploit certain structural properties of a large class of massive data sets to break the original problem into a set of simpler subproblems, solve each subproblem separately where the data exhibit approximate uniformity and where computational complexity can be reduced to a manageable level, and then combine the results through certain analytical considerations. The validity of the proposed methods is proved, and their finite-sample properties are studied through a moderately large simulation study. The methodology is illustrated with a real data example from transportation engineering, which motivated the development of the proposed methods. Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org); http://dx.doi.org/10.1214/12-AOAS587.
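    A schematic version of the divide/bootstrap/recombine idea: split a large, inhomogeneous sample into roughly homogeneous blocks, bootstrap a statistic within each block, and combine the block-level estimates analytically. The blocks, the statistic (a size-weighted mean), and the combination rule below are illustrative assumptions, not the paper's gap bootstrap estimators.

```python
# Hedged sketch of a divide/bootstrap/recombine scheme for a large sample.
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical large, inhomogeneous sample: blocks with different scales.
blocks = [rng.normal(loc=mu, scale=s, size=50_000)
          for mu, s in [(1.0, 1.0), (1.2, 3.0), (0.8, 0.5)]]

def block_bootstrap(x, n_boot=200):
    """Bootstrap the mean within one block; return estimate and its variance."""
    boots = np.array([rng.choice(x, size=x.size, replace=True).mean()
                      for _ in range(n_boot)])
    return x.mean(), boots.var(ddof=1)

estimates, variances, sizes = [], [], []
for x in blocks:
    est, var = block_bootstrap(x)
    estimates.append(est)
    variances.append(var)
    sizes.append(x.size)

# Combine: size-weighted mean, with the variance propagated through the weights
# (assuming independence across blocks).
w = np.array(sizes) / sum(sizes)
combined = w @ np.array(estimates)
combined_var = (w ** 2) @ np.array(variances)
print(f"combined mean {combined:.4f} +/- {np.sqrt(combined_var):.4f}")
```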