
    ICE Second Halley radial: TDA mission support and DSN operations

    The article documents the operations surrounding the International Cometary Explorer (ICE) second Halley radial experiment, centered on March 28, 1986. Support was provided by the Deep Space Network (DSN) 64-meter subnetwork. Near-continuous coverage was provided during the last two weeks of March and the first two weeks of April to ensure the collection of adequate background data for the Halley radial experiment. During the last week of March, plasma wave measurements indicated that ICE was within the Halley heavy-ion pick-up region.

    Congruent families and invariant tensors

    Classical results of Chentsov and Campbell state that -- up to constant multiples -- the only 2-tensor field of a statistical model which is invariant under congruent Markov morphisms is the Fisher metric, and the only invariant 3-tensor field is the Amari-Chentsov tensor. We generalize this result to arbitrary degree n, showing that any family of n-tensors which is invariant under congruent Markov morphisms is algebraically generated by the canonical tensor fields defined in an earlier paper.
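
    The invariant 2-tensor named above, the Fisher metric, can be illustrated numerically. The sketch below (not from the paper; the model choice is mine for illustration) estimates the Fisher information of a Bernoulli model as the expected squared score, and compares it with the closed form I(p) = 1/(p(1-p)).

```python
import math

# Illustrative sketch: the Fisher metric (information) of a Bernoulli(p)
# model, computed as E[(d/dp log f(x; p))^2] with a finite-difference score.
def fisher_information_bernoulli(p, eps=1e-6):
    info = 0.0
    for x, prob in ((1, p), (0, 1 - p)):
        # central finite difference of the log-likelihood in the parameter p
        lp = math.log(p + eps) if x == 1 else math.log(1 - (p + eps))
        lm = math.log(p - eps) if x == 1 else math.log(1 - (p - eps))
        score = (lp - lm) / (2 * eps)
        info += prob * score ** 2
    return info

p = 0.3
print(fisher_information_bernoulli(p))   # numerical estimate
print(1 / (p * (1 - p)))                 # closed form
```

    The two printed values agree, since E[score^2] = p/p^2 + (1-p)/(1-p)^2 = 1/(p(1-p)).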

    Soliton form factors from lattice simulations

    The form factor provides a convenient way to describe properties of topological solitons in the full quantum theory, when semiclassical concepts are not applicable. It is demonstrated that the form factor can be calculated numerically using lattice Monte Carlo simulations. The approach is very general and can be applied to essentially any type of soliton. The technique is illustrated by calculating the kink form factor near the critical point in 1+1-dimensional scalar field theory. As expected from universality arguments, the result agrees with the exactly calculable scaling form factor of the two-dimensional Ising model.
    Comment: 5 pages, 3 figures; v2: discussion extended, references added, version accepted for publication in PR
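
    The Monte Carlo ingredient mentioned in the abstract can be sketched in a few lines. This is a generic Metropolis update for a 1+1-dimensional lattice scalar field with a double-well (kink-supporting) potential; the lattice size, couplings, and step size are illustrative assumptions, and the form-factor measurement itself is omitted.

```python
import math
import random

# Minimal Metropolis sketch for a 2D lattice scalar field with
# potential V = lam * (phi^2 - v^2)^2 (parameters are illustrative).
L, lam, v = 16, 0.5, 1.0
phi = [[0.0] * L for _ in range(L)]
random.seed(1)

def local_action(phi, t, x):
    p = phi[t][x]
    # nearest-neighbour kinetic term with periodic boundary conditions
    kin = sum((p - phi[(t + dt) % L][(x + dx) % L]) ** 2
              for dt, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))) / 2
    return kin + lam * (p * p - v * v) ** 2

def sweep(phi, delta=0.5):
    acc = 0
    for t in range(L):
        for x in range(L):
            old = phi[t][x]
            s_old = local_action(phi, t, x)
            phi[t][x] = old + random.uniform(-delta, delta)
            # accept with probability min(1, exp(-(S_new - S_old)))
            if random.random() >= math.exp(min(0.0, s_old - local_action(phi, t, x))):
                phi[t][x] = old   # reject: restore the old field value
            else:
                acc += 1
    return acc / (L * L)

for _ in range(20):
    rate = sweep(phi)
print("acceptance rate ~", round(rate, 2))
```

    In a real calculation, observables (here, the form factor) would be accumulated between sweeps after thermalization.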

    A comparison of block and semi-parametric bootstrap methods for variance estimation in spatial statistics

    Efron (1979) introduced the bootstrap method for independent data, but it cannot be applied directly to spatial data because of their dependence. For spatial data that are correlated through their locations in the underlying space, the moving block bootstrap is usually used to estimate the precision measures of estimators. The precision of moving block bootstrap estimators depends on the block size, which is difficult to select, and the moving block bootstrap also tends to underestimate the variance. In this paper, we first use the semi-parametric bootstrap, which resamples from an estimate of the spatial correlation structure, to estimate the precision measures of estimators in spatial data analysis. We then compare the semi-parametric bootstrap with the moving block bootstrap for variance estimation in a simulation study. Finally, we apply the semi-parametric bootstrap to the coal-ash data.
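
    The moving block bootstrap that the paper compares against is easy to sketch in one dimension. The block length, series, and replication count below are illustrative assumptions, not the paper's setup: overlapping blocks of length b are resampled with replacement and concatenated, so short-range dependence within blocks is preserved.

```python
import numpy as np

# Moving block bootstrap sketch (illustrative): estimate the variance of
# the sample mean of a dependent series by resampling overlapping blocks.
def moving_block_bootstrap_var(x, b, n_boot=500, seed=None):
    rng = np.random.default_rng(seed)
    n = len(x)
    blocks = np.array([x[i:i + b] for i in range(n - b + 1)])  # all overlapping blocks
    k = n // b                                                  # blocks per resample
    means = np.empty(n_boot)
    for j in range(n_boot):
        idx = rng.integers(0, len(blocks), size=k)
        means[j] = blocks[idx].ravel()[:n].mean()               # mean of one resample
    return means.var(ddof=1)

# AR(1)-style correlated data, purely for illustration
rng = np.random.default_rng(0)
x = np.empty(200)
x[0] = rng.normal()
for t in range(1, 200):
    x[t] = 0.6 * x[t - 1] + rng.normal()

v = moving_block_bootstrap_var(x, b=10, seed=1)
print(v)
```

    The abstract's point is visible here: the answer depends on the choice of b, which is exactly the tuning difficulty the semi-parametric alternative avoids.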

    One-dimensional infinite component vector spin glass with long-range interactions

    We investigate zero and finite temperature properties of the one-dimensional spin-glass model for vector spins in the limit of an infinite number m of spin components where the interactions decay with a power, \sigma, of the distance. A diluted version of this model is also studied, but found to deviate significantly from the fully connected model. At zero temperature, defect energies are determined from the difference in ground-state energies between systems with periodic and antiperiodic boundary conditions to determine the dependence of the defect-energy exponent \theta on \sigma. A good fit to this dependence is \theta = 3/4 - \sigma. This implies that the upper critical value of \sigma is 3/4, corresponding to the lower critical dimension in the d-dimensional short-range version of the model. For finite temperatures the large m saddle-point equations are solved self-consistently, which gives access to the correlation function, the order parameter and the spin-glass susceptibility. Special attention is paid to the different forms of finite-size scaling effects below and above the lower critical value, \sigma = 5/8, which corresponds to the upper critical dimension 8 of the hypercubic short-range model.
    Comment: 27 pages, 27 figures, 4 tables

    Optimal discrete stopping times for reliability growth tests

    Often, the duration of a reliability growth development test is specified in advance, and the decision to terminate or continue testing is made at discrete time intervals. These features are normally not captured by reliability growth models. This paper adapts a standard reliability growth model to determine the optimal planned time at which to terminate testing. The underlying stochastic process is developed from an order-statistic argument, with Bayesian inference used to estimate the number of faults within the design and classical inference procedures used to assess the rate of fault detection. Inference procedures within this framework are explored, where it is shown that the maximum likelihood estimators possess a small bias and converge to the minimum variance unbiased estimator after a few tests for designs with a moderate number of faults. It is shown that the likelihood function can be bimodal when there is conflict between the observed rate of fault detection and the prior distribution describing the number of faults in the design. An illustrative example is provided.

    Hierarchically nested factor model from multivariate data

    We show how to achieve a statistical description of the hierarchical structure of a multivariate data set. Specifically, we show that the similarity matrix resulting from a hierarchical clustering procedure is the correlation matrix of a factor model, the hierarchically nested factor model. In this model, factors are mutually independent and hierarchically organized. Finally, we use a bootstrap-based procedure to reduce the number of factors in the model with the aim of retaining only those factors significantly robust with respect to the statistical uncertainty due to the finite length of data records.
    Comment: 7 pages, 5 figures; accepted for publication in Europhys. Lett.; the Appendix corresponds to the additional material of the accepted letter
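
    The central object, the similarity matrix of a hierarchical clustering read as a correlation matrix, can be sketched as follows. This is an illustration of the idea and not the authors' code: the data, the average-linkage choice, and the distance convention d_ij = sqrt(2(1 - rho_ij)) are assumptions made here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import squareform

# Sketch: build a correlation matrix, cluster it hierarchically, and read
# off the ultrametric (cophenetic) similarity matrix of the dendrogram.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 6))
data[:, 1] += 0.8 * data[:, 0]           # make two variables correlated
corr = np.corrcoef(data, rowvar=False)

dist = np.sqrt(2.0 * (1.0 - corr))       # a common correlation-to-distance map
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
coph = squareform(cophenet(Z))           # ultrametric distances from the tree
nested_corr = 1.0 - coph ** 2 / 2.0      # invert d = sqrt(2(1 - rho))
np.fill_diagonal(nested_corr, 1.0)
print(nested_corr.round(2))
```

    Every pair of variables in `nested_corr` shares the similarity of the dendrogram node at which they first merge, which is what makes the matrix hierarchically nested.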

    Novel Methods for Predicting Photometric Redshifts from Broad Band Photometry using Virtual Sensors

    We calculate photometric redshifts from the Sloan Digital Sky Survey Main Galaxy Sample, the Galaxy Evolution Explorer All Sky Survey, and the Two Micron All Sky Survey using two new training-set methods. We utilize the broad-band photometry from the three surveys alongside Sloan Digital Sky Survey measures of photometric quality and galaxy morphology. Our first training-set method draws from the theory of ensemble learning, while the second employs Gaussian process regression; both allow for the estimation of redshift along with a measure of uncertainty in the estimation. The Gaussian process models the data very effectively with small training samples of approximately 1000 points or fewer. These two methods are compared to a well-known artificial neural network training-set method and to simple linear and quadratic regression. Our results show that robust photometric redshift errors as low as 0.02 RMS can regularly be obtained. We also demonstrate the need to provide confidence bands on the error estimation made by both classes of models. Our results indicate that variations due to the optimization procedure used for almost all neural networks, combined with variations due to the data sample, can produce models with variations in accuracy that span an order of magnitude. A key contribution of this paper is to quantify the variability in the quality of results as a function of model and training sample. We show how simply choosing the "best" model given a data set and model class can produce misleading results.
    Comment: 36 pages, 12 figures, ApJ in Press; modified to reflect published version and color figure
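
    The property the abstract relies on, that a Gaussian process returns a prediction together with its uncertainty, can be shown with a minimal numpy-only sketch. This is generic GP regression with an RBF kernel on toy data, not the paper's pipeline; the kernel, length scale, and noise level are assumptions made here.

```python
import numpy as np

# Gaussian process regression sketch: posterior mean and 1-sigma uncertainty
# for 1D inputs under an RBF kernel (all hyperparameters illustrative).
def rbf(a, b, length=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha                                     # predictive mean
    var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))    # predictive variance
    return mean, np.sqrt(np.maximum(var, 0.0))

x = np.linspace(0, 5, 30)
y = np.sin(x)                      # toy stand-in for (photometry -> redshift)
mean, sigma = gp_predict(x, y, np.array([2.5]))
print(mean, sigma)
```

    Near the training data the predicted sigma is small; far from it, sigma grows toward the prior scale, which is exactly the kind of confidence band the paper argues both model classes need.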

    Pivotal estimation in high-dimensional regression via linear programming

    We propose a new method of estimation in the high-dimensional linear regression model. It allows for very weak distributional assumptions, including heteroscedasticity, and does not require knowledge of the variance of the random errors. The method is based on linear programming only, so its numerical implementation is faster than for previously known techniques using conic programs, and it can handle higher-dimensional models. We provide upper bounds for the estimation and prediction errors of the proposed estimator, showing that it achieves the same rate as in the more restrictive situation of fixed design and i.i.d. Gaussian errors with known variance. Following Gautier and Tsybakov (2011), we obtain the results under weaker sensitivity assumptions than the restricted eigenvalue or assimilated conditions.
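
    To make the "linear programming only" theme concrete, here is the classical Dantzig selector written as an LP, minimize ||beta||_1 subject to ||X'(y - X beta)/n||_inf <= lambda. This is an illustration of LP-based sparse regression, not the authors' pivotal estimator (which additionally avoids knowing the noise variance); the data and lambda are assumptions made here.

```python
import numpy as np
from scipy.optimize import linprog

# Dantzig selector as a linear program: beta = u - v with u, v >= 0,
# minimize sum(u + v) = ||beta||_1 under the correlation constraint.
def dantzig_selector(X, y, lam):
    n, p = X.shape
    A = X.T @ X / n
    b = X.T @ y / n
    c = np.ones(2 * p)
    # |A(u - v) - b| <= lam, split into two one-sided inequalities
    A_ub = np.block([[A, -A], [-A, A]])
    b_ub = np.concatenate([lam + b, lam - b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * p))
    return res.x[:p] - res.x[p:]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
beta_true = np.zeros(10)
beta_true[0] = 2.0
y = X @ beta_true + 0.1 * rng.normal(size=100)
beta = dantzig_selector(X, y, lam=0.2)
print(beta.round(2))
```

    The recovered vector is sparse, with one large coefficient (shrunk by roughly lambda) and the rest near zero, illustrating why a pure LP solver suffices for this family of estimators.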

    A simple and robust method for connecting small-molecule drugs using gene-expression signatures

    Interaction of a drug or chemical with a biological system can result in a gene-expression profile or signature characteristic of the event. Using a suitably robust algorithm, these signatures can potentially be used to connect molecules with similar pharmacological or toxicological properties. The Connectivity Map was a novel concept and innovative tool first introduced by Lamb et al. to connect small molecules, genes, and diseases using genomic signatures [Lamb et al. (2006), Science 313, 1929-1935]. However, the Connectivity Map had some limitations; in particular, there was no effective safeguard against false connections if the observed connections were considered on an individual-by-individual basis. Further, when several connections to the same small-molecule compound were viewed as a set, the implicit null hypothesis tested was not the most relevant one for the discovery of real connections. Here we propose a simple and robust method for constructing the reference gene-expression profiles and a new connection scoring scheme, which importantly allows the evaluation of the statistical significance of all the connections observed. We tested the new method with the two example gene signatures (HDAC inhibitors and estrogens) used by Lamb et al. and also a new gene signature of immunosuppressive drugs. Our testing shows that the new method achieves a higher level of specificity and sensitivity than the original. For example, our method successfully identified raloxifene and tamoxifen as having significant anti-estrogen effects, while Lamb et al.'s Connectivity Map failed to identify these. With these properties, our new method has potential use in drug development for the recognition of pharmacological and toxicological properties in new drug candidates.
    Comment: 8 pages, 2 figures, and 2 tables; supplementary data supplied as a ZIP file
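
    The two ingredients of such a scheme, a rank-based connection score plus a permutation test for significance, can be sketched generically. This is an illustration only: the paper's actual scoring scheme differs in detail, and the gene names and signature sizes below are made up.

```python
import numpy as np

# Rank-based connection score sketch: genes are ranked by differential
# expression; a signature scores near +1 when its "up" genes sit at the top
# of the list and its "down" genes at the bottom.
def connection_score(ranked_genes, up, down):
    rank = {g: i for i, g in enumerate(ranked_genes)}
    n = len(ranked_genes)
    s_up = np.mean([1 - 2 * rank[g] / (n - 1) for g in up])    # +1 top, -1 bottom
    s_down = np.mean([1 - 2 * rank[g] / (n - 1) for g in down])
    return (s_up - s_down) / 2                                 # score in [-1, 1]

def permutation_p(ranked_genes, up, down, n_perm=2000, seed=0):
    """Significance of the observed score against random signatures."""
    rng = np.random.default_rng(seed)
    obs = connection_score(ranked_genes, up, down)
    k_up, k_down = len(up), len(down)
    null = np.empty(n_perm)
    for i in range(n_perm):
        pick = rng.choice(ranked_genes, size=k_up + k_down, replace=False)
        null[i] = connection_score(ranked_genes, pick[:k_up], pick[k_up:])
    return obs, (np.sum(null >= obs) + 1) / (n_perm + 1)

genes = [f"g{i}" for i in range(100)]   # hypothetical ranked gene list
score, p = permutation_p(genes, up=["g0", "g1", "g2"], down=["g97", "g98", "g99"])
print(score, p)
```

    Attaching a p-value to every connection is what lets individual connections be screened for false positives, the safeguard the abstract says the original Connectivity Map lacked.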