
    A Falsification View of Success Typing

    Dynamic languages are praised for their flexibility and expressiveness, but static analysis of such languages often yields many false positives, and verification is cumbersome for lack of structure. Hence, unit testing is the prevalent, incomplete method for validating programs in these languages. Falsification is an alternative approach that uncovers definite errors in programs: a falsifier computes a set of inputs that definitely crash a program. Success typing is a type-based approach to documenting programs in dynamic languages. We demonstrate that success typing is, in fact, an instance of falsification by mapping success (input) types into suitable logic formulae. Output types are represented by recursive types. We prove the correctness of our mapping (which establishes that success typing is falsification) and report some experiences with a prototype implementation. Comment: extended version.
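
    As a rough illustration of the falsification view (a minimal sketch, not the paper's formalism): a call site is a definite error when the set of types its argument can take is disjoint from the function's success type. The toy domain, the function name hd, and the tag sets below are all hypothetical stand-ins for the paper's logic formulae.

        # Success typing as falsification, sketched with finite sets of
        # abstract type tags in place of logic formulae.
        INT, ATOM, LIST = "int", "atom", "list"

        # Hypothetical success type: hd/1 can only succeed on lists.
        SUCCESS = {"hd": {LIST}}

        def definitely_crashes(fun, arg_types):
            # Falsified: no type the argument may take lies in the success
            # type, so every concrete input to this call site crashes.
            return SUCCESS[fun].isdisjoint(arg_types)

        print(definitely_crashes("hd", {INT}))        # True: definite error
        print(definitely_crashes("hd", {INT, LIST}))  # False: may succeed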

    Fast robust correlation for high-dimensional data

    The product moment covariance is a cornerstone of multivariate data analysis, from which one can derive correlations, principal components, Mahalanobis distances and many other results. Unfortunately, the product moment covariance and the corresponding Pearson correlation are very susceptible to outliers (anomalies) in the data. Several robust measures of covariance have been developed, but few are suitable for the ultrahigh-dimensional data that are becoming more prevalent nowadays. For that one needs methods whose computation scales well with the dimension, which are guaranteed to yield a positive semidefinite covariance matrix, and which are sufficiently robust to outliers as well as sufficiently accurate in the statistical sense of low variability. We construct such methods using data transformations. The resulting approach is simple, fast and widely applicable. We study its robustness by deriving influence functions and breakdown values, and by computing the mean squared error on contaminated data. Using these results we select a method that performs well overall. This also allows us to construct a faster version of the DetectDeviatingCells method (Rousseeuw and Van den Bossche, 2018) for detecting cellwise outliers, one that can deal with much higher dimensions. The approach is illustrated on genomic data with 12,000 variables and on color video data with 920,000 dimensions.
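
    To make the transformation idea concrete, here is a minimal sketch (an assumption-laden simplification, not the paper's exact estimator): robustly standardize each column by its median and MAD, bound outlying cells by clipping at +-c, and then take the ordinary Pearson correlation of the transformed data. Being a product-moment correlation, the result is automatically positive semidefinite, and the computation scales as O(n p^2) in the dimension p.

        import numpy as np

        def robust_corr(X, c=3.0):
            # Robust standardization: median for location, MAD for scale
            # (1.4826 makes the MAD consistent at the normal distribution).
            med = np.median(X, axis=0)
            mad = 1.4826 * np.median(np.abs(X - med), axis=0)
            # Bounded transformation: clipping caps each cell's influence.
            Z = np.clip((X - med) / mad, -c, c)
            # Ordinary product-moment correlation of the transformed data.
            return np.corrcoef(Z, rowvar=False)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))
        X[0, 0] = 1e6                       # one extreme cellwise outlier
        print(np.round(robust_corr(X), 2))  # barely affected by the outlier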

    Alkali vapor pressure modulation on the 100ms scale in a single-cell vacuum system for cold atom experiments

    We describe and characterize a device for alkali vapor pressure modulation on the 100 ms timescale in a single-cell cold atom experiment. Its mechanism is based on optimized heat conduction between a current-modulated alkali dispenser and a heat sink at room temperature. We have studied both the short-term behavior during individual pulses and the long-term pressure evolution in the cell. The device combines fast trap loading with a relatively long trap lifetime, enabling high repetition rates in a very simple setup. These features make it particularly suitable for portable atomic sensors. Comment: One reference added, one corrected.
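
    For intuition on the modulation speed (a back-of-the-envelope lumped thermal model, not figures from the paper): a dispenser with heat capacity C_th coupled to the heat sink through a thermal conductance G_th relaxes with time constant tau = C_th / G_th, so reaching the 100 ms scale requires a dispenser of small heat capacity and a deliberately strong thermal link to the sink.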

    Discussion of "The power of monitoring"

    This is an invited comment on the discussion paper "The power of monitoring: how to make the most of a contaminated multivariate sample" by A. Cerioli, M. Riani, A. Atkinson and A. Corbellini, which will appear in the journal Statistical Methods & Applications.

    How biased are maximum entropy models?

    Maximum entropy models have become popular statistical models in neuroscience and other areas of biology, and can be useful tools for obtaining estimates of mutual information in biological systems. However, maximum entropy models fit to small data sets can be subject to sampling bias; i.e., the true entropy of the data can be severely underestimated. Here we study the sampling properties of estimates of the entropy obtained from maximum entropy models. We show that if the data are generated by a distribution that lies in the model class, the bias is equal to the number of parameters divided by twice the number of observations. However, in practice, the true distribution is usually outside the model class, and we show here that this misspecification can lead to a much larger bias. We provide a perturbative approximation of the maximal expected bias when the true model is outside the model class, and we illustrate our results using numerical simulations of an Ising model, i.e. the second-order maximum entropy distribution on binary data.
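
    As a quick worked example of the in-class bias formula (the unit count and sample size below are hypothetical): a second-order maximum entropy (Ising) model on n binary units has d = n + n(n-1)/2 parameters, so the abstract's result gives an entropy bias of d/(2N) for N observations.

        # In-class entropy bias of a maximum entropy fit: d / (2N).
        n = 10                     # binary units (hypothetical)
        d = n + n * (n - 1) // 2   # fields + pairwise couplings = 55
        N = 1000                   # observations (hypothetical)
        bias = d / (2 * N)         # ~0.0275, in the entropy's natural units
        print(d, bias)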