
    How effective is European merger control?

    This paper applies an intuitive approach based on stock market data to a unique dataset of large concentrations during the period 1990-2002 to assess the effectiveness of European merger control. The basic idea is to relate announcement and decision abnormal returns. Under a set of four maintained assumptions, merger control might be interpreted to be effective if rents accruing due to the increased market power observed around the merger announcement are reversed by the antitrust decision, i.e. if there is a negative relation between announcement and decision abnormal returns. To clearly identify the events' competitive effects, we explicitly control for the market expectation about the outcome of the merger control procedure and run several robustness checks to assess the role of our maintained assumptions. We find that only outright prohibitions completely reverse the rents measured around a merger's announcement. On average, remedies seem to be only partially capable of reversing announcement abnormal returns. Yet they seem to be more effective when applied during the first rather than the second investigation phase and in subsamples where our assumptions are more likely to hold. Moreover, the European Commission appears to learn over time.
    Keywords: Merger Control, Remedies, European Commission, Event Studies
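    As a rough illustration of the event-study mechanics this abstract relies on, the Python sketch below computes market-model abnormal returns and then relates announcement and decision cumulative abnormal returns. Variable names, windows and the regression call are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch: market-model abnormal returns and the announcement-decision
# regression described in the abstract. Inputs are NumPy arrays of daily returns.
import numpy as np
import statsmodels.api as sm

def abnormal_returns(stock, market, est, evt):
    """AR_t = R_t - (alpha + beta * R_m,t), with alpha, beta fit on the estimation window."""
    X = sm.add_constant(market[est])                      # market-model regressors
    alpha, beta = sm.OLS(stock[est], X).fit().params      # estimate alpha and beta
    return stock[evt] - (alpha + beta * market[evt])      # abnormal returns in event window

# Example (made-up windows): 200 estimation days, a 5-day announcement event window.
# ar_announce = abnormal_returns(stock_ret, market_ret, slice(0, 200), slice(200, 205))
# car_announce = ar_announce.sum()
#
# Effectiveness check from the paper: regress per-merger decision CARs on announcement
# CARs; a significantly negative slope means the decision reverses the announced rents.
# slope = sm.OLS(cars_decision, sm.add_constant(cars_announce)).fit().params[1]
```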

    Robust Testing in High-Dimensional Sparse Models

    We consider the problem of robustly testing the norm of a high-dimensional sparse signal vector under two different observation models. In the first model, we are given $n$ i.i.d. samples from the distribution $\mathcal{N}(\theta, I_d)$ (with unknown $\theta$), of which a small fraction has been arbitrarily corrupted. Under the promise that $\|\theta\|_0 \le s$, we want to correctly distinguish whether $\|\theta\|_2 = 0$ or $\|\theta\|_2 > \gamma$, for some input parameter $\gamma > 0$. We show that any algorithm for this task requires $n = \Omega\left(s \log \frac{ed}{s}\right)$ samples, which is tight up to logarithmic factors. We also extend our results to other common notions of sparsity, namely, $\|\theta\|_q \le s$ for any $0 < q < 2$. In the second observation model that we consider, the data is generated according to a sparse linear regression model, where the covariates are i.i.d. Gaussian and the regression coefficient (signal) is known to be $s$-sparse. Here too we assume that an $\epsilon$-fraction of the data is arbitrarily corrupted. We show that any algorithm that reliably tests the norm of the regression coefficient requires at least $n = \Omega\left(\min(s \log d, 1/\gamma^4)\right)$ samples. Our results show that the complexity of testing in these two settings significantly increases under robustness constraints. This is in line with the recent observations made in robust mean testing and robust covariance testing.
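    The following is a small, hypothetical Python simulation of the first observation model described above ($s$-sparse mean, i.i.d. Gaussian noise, an $\epsilon$-fraction of arbitrarily corrupted rows). The paper proves information-theoretic lower bounds, so this only illustrates the data-generating setup and how a naive, non-robust statistic behaves; all parameter values are made up.

```python
# Simulated instance of the first observation model (illustration only; made-up values).
import numpy as np

rng = np.random.default_rng(0)
n, d, s, gamma, eps = 2000, 500, 10, 0.5, 0.05

theta = np.zeros(d)                                   # H0: theta = 0; H1: ||theta||_2 = gamma
theta[rng.choice(d, size=s, replace=False)] = gamma / np.sqrt(s)   # s-sparse signal

X = theta + rng.standard_normal((n, d))               # n i.i.d. N(theta, I_d) samples
k = int(eps * n)                                      # adversary corrupts an eps-fraction
X[:k] = rng.uniform(-10, 10, size=(k, d))             # one crude choice of corruption

# Naive, non-robust statistic: ||sample mean||^2 minus its expectation under H0.
stat = np.sum(X.mean(axis=0) ** 2) - d / n
print("naive statistic:", stat)                       # inflated by the corrupted rows
```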

    The Importance of Being Clustered: Uncluttering the Trends of Statistics from 1970 to 2015

    In this paper we retrace the recent history of statistics by analyzing all the papers published in five prestigious statistical journals since 1970, namely: Annals of Statistics, Biometrika, Journal of the American Statistical Association, Journal of the Royal Statistical Society, Series B, and Statistical Science. The aim is to construct a kind of "taxonomy" of the statistical papers by organizing and clustering them into main themes. In this sense, being identified in a cluster means being important enough to be uncluttered in the vast and interconnected world of statistical research. Since the main statistical research topics naturally emerge, evolve or die over time, we also develop a dynamic clustering strategy, where a group in one time period is allowed to migrate or to merge into different groups in the following one. Results show that statistics is a very dynamic and evolving science, stimulated by the rise of new research questions and types of data.
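    A minimal, hypothetical sketch of one way to implement a period-by-period clustering of abstracts (TF-IDF text vectors and k-means per time window, with clusters linked across windows by centroid similarity). This is not the authors' method; all function names and parameters are illustrative.

```python
# Hypothetical sketch: cluster abstracts within each time window, keep the centroids,
# and (separately) link clusters across consecutive windows by centroid similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_by_period(docs_by_period, k=8):
    """docs_by_period: dict mapping a period label to a list of abstract strings."""
    results = {}
    for period, docs in docs_by_period.items():
        vec = TfidfVectorizer(stop_words="english", max_features=5000)
        X = vec.fit_transform(docs)                          # TF-IDF features per period
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        results[period] = (vec.get_feature_names_out(), km.cluster_centers_, km.labels_)
    return results

# Linking idea (sketch): project centroids of consecutive periods onto a shared
# vocabulary and match the pairs with the highest cosine similarity, so that a theme
# can persist, migrate into another group, or merge with one in the following period.
```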

    Investigating Inflation Dynamics and Structural Change with an Adaptive ARFIMA Approach

    Previous models of monthly CPI inflation time series have focused on possible regime shifts, non-linearities and the feature of long memory. This paper proposes a new time series model, named Adaptive ARFIMA, which appears well suited to describe inflation and potentially other economic time series data. The Adaptive ARFIMA model includes a time-dependent intercept term which follows a Flexible Fourier Form. The model appears to be capable of successfully dealing with various forms of breaks and discontinuities in the conditional mean of a time series. Simulation evidence justifies estimation by approximate MLE and model specification through robust inference based on QMLE. The Adaptive ARFIMA model, when supplemented with conditional variance models, is found to provide a good representation of the G7 monthly CPI inflation series.
    Keywords: ARFIMA, FIGARCH, long memory, structural change, inflation, G7
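    The sketch below illustrates, under one reading of the abstract, the two building blocks named there: a Flexible Fourier Form design matrix for the time-dependent intercept and a fractional differencing filter (the "FI" in ARFIMA). It is a generic illustration, not the authors' estimator; function names and the truncation choice are assumptions.

```python
# Generic illustration of a Flexible Fourier Form design matrix and a fractional
# differencing filter; parameter names and truncation are assumptions, not the paper's.
import numpy as np

def fff_design(T, J=3):
    """Constant plus J sine/cosine pairs in scaled time t/T (Gallant-style FFF)."""
    t = np.arange(1, T + 1) / T
    cols = [np.ones(T)]
    for j in range(1, J + 1):
        cols += [np.sin(2 * np.pi * j * t), np.cos(2 * np.pi * j * t)]
    return np.column_stack(cols)

def frac_diff(x, d, trunc=1000):
    """Apply (1 - L)^d via the (truncated) binomial expansion of the filter weights."""
    x = np.asarray(x, dtype=float)
    w = [1.0]
    for k in range(1, min(len(x), trunc)):
        w.append(-w[-1] * (d - k + 1) / k)                # w_k = -w_{k-1} (d - k + 1) / k
    w = np.asarray(w)
    return np.array([w[: i + 1][::-1] @ x[max(0, i + 1 - len(w)) : i + 1]
                     for i in range(len(x))])

# Usage idea: regress inflation on fff_design(T) to capture the smoothly shifting mean,
# then model frac_diff(residuals, d) with a short-memory ARMA for a given d.
```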

    NetCoMi: network construction and comparison for microbiome data in R

    MOTIVATION Estimating microbial association networks from high-throughput sequencing data is a common exploratory data analysis approach aiming at understanding the complex interplay of microbial communities in their natural habitat. Statistical network estimation workflows comprise several analysis steps, including methods for zero handling, data normalization and computing microbial associations. Since microbial interactions are likely to change between conditions, e.g. between healthy individuals and patients, identifying network differences between groups is often an integral secondary analysis step. Thus far, however, no unifying computational tool is available that facilitates the whole analysis workflow of constructing, analysing and comparing microbial association networks from high-throughput sequencing data. RESULTS Here, we introduce NetCoMi (Network Construction and comparison for Microbiome data), an R package that integrates existing methods for each analysis step in a single reproducible computational workflow. The package offers functionality for constructing and analysing single microbial association networks as well as quantifying network differences. This enables insights into whether single taxa, groups of taxa or the overall network structure change between groups. NetCoMi also contains functionality for constructing differential networks, thus allowing users to assess whether single pairs of taxa are differentially associated between two groups. Furthermore, NetCoMi facilitates the construction and analysis of dissimilarity networks of microbiome samples, enabling a high-level graphical summary of the heterogeneity of an entire microbiome sample collection. We illustrate NetCoMi's wide applicability using data sets from the GABRIELA study to compare microbial associations in settled dust from children's rooms between samples from two study centers (Ulm and Munich). AVAILABILITY R scripts used for producing the examples shown in this manuscript are provided as supplementary data. The NetCoMi package, together with a tutorial, is available at https://github.com/stefpeschel/NetCoMi. CONTACT Tel: +49 89 3187 43258; [email protected]. SUPPLEMENTARY INFORMATION Supplementary data are available at Briefings in Bioinformatics online.
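    NetCoMi itself is an R package, so the Python fragment below is only a generic sketch of the kind of workflow the abstract describes (normalize counts, estimate pairwise associations, threshold them into a network, then compare networks between groups); it does not use or mirror NetCoMi's API, and the normalization and threshold are placeholder choices.

```python
# Generic Python sketch of an association-network workflow (NOT NetCoMi's R API).
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

def association_network(counts, thresh=0.5):
    """counts: samples x taxa count matrix; returns a thresholded association graph."""
    rel = counts / counts.sum(axis=1, keepdims=True)     # placeholder normalization
    rho, _ = spearmanr(rel)                              # taxa x taxa association matrix
    G = nx.Graph()
    G.add_nodes_from(range(rel.shape[1]))
    for i in range(rel.shape[1]):
        for j in range(i + 1, rel.shape[1]):
            if abs(rho[i, j]) >= thresh:                 # keep only strong associations
                G.add_edge(i, j, weight=float(rho[i, j]))
    return G

# Group comparison idea: build one network per group (e.g., per study center) and
# contrast properties such as edge sets, degree distributions or node centralities.
```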

    Statistical analysis of high-dimensional biomedical data: a gentle introduction to analytical goals, common approaches and challenges

    Background: In high-dimensional data (HDD) settings, the number of variables associated with each observation is very large. Prominent examples of HDD in biomedical research include omics data with a large number of variables such as many measurements across the genome, proteome, or metabolome, as well as electronic health records data that have large numbers of variables recorded for each patient. The statistical analysis of such data requires knowledge and experience, sometimes involving complex methods adapted to the respective research questions. Methods: Advances in statistical methodology and machine learning methods offer new opportunities for innovative analyses of HDD, but at the same time require a deeper understanding of some fundamental statistical concepts. Topic group TG9 “High-dimensional data” of the STRATOS (STRengthening Analytical Thinking for Observational Studies) initiative provides guidance for the analysis of observational studies, addressing particular statistical challenges and opportunities for the analysis of studies involving HDD. In this overview, we discuss key aspects of HDD analysis to provide a gentle introduction for non-statisticians and for classically trained statisticians with little experience specific to HDD. Results: The paper is organized with respect to subtopics that are most relevant for the analysis of HDD, in particular initial data analysis, exploratory data analysis, multiple testing, and prediction. For each subtopic, main analytical goals in HDD settings are outlined. For each of these goals, basic explanations for some commonly used analysis methods are provided. Situations are identified where traditional statistical methods cannot, or should not, be used in the HDD setting, or where adequate analytic tools are still lacking. Many key references are provided. Conclusions: This review aims to provide a solid statistical foundation for researchers, including statisticians and non-statisticians, who are new to research with HDD or simply want to better evaluate and understand the results of HDD analyses.
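    As a concrete example of one subtopic mentioned above (multiple testing in HDD), the sketch below runs a feature-wise two-sample test across many simulated variables and controls the false discovery rate with Benjamini-Hochberg. It is a generic illustration with simulated data, not an example taken from the STRATOS TG9 paper.

```python
# Simulated example of FDR control across many features (generic, not from the paper).
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_features, n_per_group = 5000, 30
group_a = rng.standard_normal((n_per_group, n_features))
group_b = rng.standard_normal((n_per_group, n_features))
group_b[:, :50] += 1.0                                   # 50 truly differential features

pvals = ttest_ind(group_a, group_b, axis=0).pvalue       # one two-sample test per feature
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("FDR-controlled discoveries:", int(reject.sum()))
```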

    Automatic Target Recognition Classification System Evaluation Methodology

    This dissertation research makes contributions towards the evaluation of developing Automatic Target Recognition (ATR) technologies through the application of decision analysis (DA) techniques. ATR technology development decisions should rely not only on the measures of performance (MOPs) associated with a given ATR classification system (CS), but also on the expected measures of effectiveness (MOEs). The purpose of this research is to improve decision-making in ATR technology development. A decision analysis framework is developed that allows decision-makers in the ATR community to synthesize the performance measures, costs, and characteristics of each ATR system with the preferences and values of both the evaluators and the warfighters. The inclusion of the warfighter's perspective is important in that it has been proven that basing ATR CS comparisons solely upon performance characteristics does not ensure superior operational effectiveness. The methodology also captures the relationship between MOPs and MOEs via a combat model. An example scenario demonstrates how ATR CSs may be compared. Sensitivity analysis is performed to demonstrate the robustness of the MOP-to-value-score and MOP-to-MOE translations. A multinomial selection procedure is introduced to account for the random nature of the MOP estimates.
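    A toy sketch of the additive value-model aggregation commonly used in decision analysis frameworks of this kind: each MOP is mapped to a 0-1 value score and combined with stakeholder weights into a single comparable score. The MOPs, scores and weights below are made up for illustration and are not taken from the dissertation.

```python
# Toy additive value model: weighted sum of 0-1 value scores (all numbers made up).
def overall_value(value_scores, weights):
    """Return sum_i weight_i * value_i; assumes the weights sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(w * value_scores[m] for m, w in weights.items())

candidate_atr = {"prob_id": 0.9, "false_alarm": 0.7, "timeliness": 0.6}   # value scores
stakeholder_weights = {"prob_id": 0.5, "false_alarm": 0.3, "timeliness": 0.2}
print(round(overall_value(candidate_atr, stakeholder_weights), 3))        # 0.78
```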