ICE Second Halley radial: TDA mission support and DSN operations
This article documents the operations surrounding the International Cometary Explorer (ICE) second Halley radial experiment, centered on March 28, 1986. Support was provided by the Deep Space Network (DSN) 64-meter subnetwork. Near-continuous coverage was provided during the last two weeks of March and the first two weeks of April to ensure the collection of adequate background data for the Halley radial experiment. During the last week of March, plasma wave measurements indicated that ICE was within the Halley heavy-ion pick-up region.
Congruent families and invariant tensors
Classical results of Chentsov and Campbell state that -- up to constant
multiples -- the only $2$-tensor field of a statistical model which is
invariant under congruent Markov morphisms is the Fisher metric, and the only
invariant $3$-tensor field is the Amari-Chentsov tensor. We generalize this
result to arbitrary degree $n$, showing that any family of $n$-tensors which
is invariant under congruent Markov morphisms is algebraically generated by the
canonical tensor fields defined in an earlier paper.
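For reference, the two classical invariant tensors have the standard coordinate expressions (well-known definitions, not quoted from the paper): on a parametric model $p(x;\theta)$,

$$ g_{ij}(\theta) = \mathbb{E}_\theta\bigl[\partial_i \log p \,\partial_j \log p\bigr], \qquad T_{ijk}(\theta) = \mathbb{E}_\theta\bigl[\partial_i \log p \,\partial_j \log p \,\partial_k \log p\bigr], $$

with $g$ the Fisher metric (the invariant $2$-tensor) and $T$ the Amari-Chentsov tensor (the invariant $3$-tensor).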
A comparison of block and semi-parametric bootstrap methods for variance estimation in spatial statistics
Efron (1979) introduced the bootstrap for independent data, but it cannot be applied directly to spatial data because of their dependence. For spatial data that are correlated through their locations in the underlying space, the moving block bootstrap is usually used to estimate the precision measures of estimators. The precision of moving block bootstrap estimators depends on the block size, which is difficult to select, and the method also tends to underestimate the variance. In this paper, we first use the semi-parametric bootstrap, which exploits an estimate of the spatial correlation structure, to estimate the precision measures of estimators in spatial data analysis. We then compare the semi-parametric bootstrap with the moving block bootstrap for variance estimation in a simulation study. Finally, we apply the semi-parametric bootstrap to the coal-ash data.
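As an illustration of the block resampling discussed above, the following minimal sketch estimates the variance of the sample mean by the moving block bootstrap. The block length, the statistic, and the AR(1)-style test series are illustrative choices, not the paper's spatial (two-dimensional) setting.

    import numpy as np

    def moving_block_bootstrap_var(data, block_len, n_boot=1000, seed=None):
        """Variance of the sample mean via the moving block bootstrap.

        Overlapping blocks of length `block_len` are resampled with
        replacement and concatenated until a series of the original
        length is rebuilt.
        """
        rng = np.random.default_rng(seed)
        n = len(data)
        # All overlapping block start positions data[s:s+block_len]
        starts = np.arange(n - block_len + 1)
        n_blocks = int(np.ceil(n / block_len))
        stats = np.empty(n_boot)
        for b in range(n_boot):
            chosen = rng.choice(starts, size=n_blocks, replace=True)
            resample = np.concatenate(
                [data[s:s + block_len] for s in chosen])[:n]
            stats[b] = resample.mean()
        return stats.var(ddof=1)

    # Illustrative use on a correlated AR(1)-type series
    rng = np.random.default_rng(0)
    x = np.zeros(500)
    for t in range(1, 500):
        x[t] = 0.6 * x[t - 1] + rng.normal()
    print(moving_block_bootstrap_var(x, block_len=20, seed=1))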
Soliton form factors from lattice simulations
The form factor provides a convenient way to describe properties of
topological solitons in the full quantum theory, when semiclassical concepts
are not applicable. It is demonstrated that the form factor can be calculated
numerically using lattice Monte Carlo simulations. The approach is very general
and can be applied to essentially any type of soliton. The technique is
illustrated by calculating the kink form factor near the critical point in
1+1-dimensional scalar field theory. As expected from universality arguments,
the result agrees with the exactly calculable scaling form factor of the
two-dimensional Ising model.
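A minimal Metropolis sketch of the setting just described: 1+1-dimensional lattice phi^4 theory, in which antiperiodic spatial boundary conditions force a kink into the system. Lattice size, couplings, and sweep count are assumed, not the paper's, and the paper's actual observable, the form factor extracted from correlation functions, is not computed here.

    import numpy as np

    L, T = 32, 32              # spatial and temporal extent (illustrative)
    kappa, lam = 0.30, 0.02    # couplings assumed to lie in the broken phase

    def delta_action(phi, x, t, new, antiperiodic):
        """Change in the standard lattice phi^4 action when phi[x, t] -> new."""
        # Sign flips across the spatial boundary implement antiperiodic
        # boundary conditions, which put the system in the one-kink sector.
        sp = -1.0 if (antiperiodic and x == L - 1) else 1.0
        sm = -1.0 if (antiperiodic and x == 0) else 1.0
        nn = (sp * phi[(x + 1) % L, t] + sm * phi[(x - 1) % L, t]
              + phi[x, (t + 1) % T] + phi[x, (t - 1) % T])
        s = lambda p: -2.0 * kappa * p * nn + p * p + lam * (p * p - 1.0) ** 2
        return s(new) - s(phi[x, t])

    def sweep(phi, rng, antiperiodic, step=0.5):
        """One Metropolis sweep over the lattice."""
        for x in range(L):
            for t in range(T):
                new = phi[x, t] + rng.uniform(-step, step)
                d = delta_action(phi, x, t, new, antiperiodic)
                if d <= 0 or rng.random() < np.exp(-d):
                    phi[x, t] = new

    rng = np.random.default_rng(1)
    phi = rng.normal(size=(L, T))
    for _ in range(200):
        sweep(phi, rng, antiperiodic=True)
    # The time-averaged profile interpolates between the two vacua: the kink.
    print(np.round(phi.mean(axis=1), 2))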
One-dimensional infinite component vector spin glass with long-range interactions
We investigate zero- and finite-temperature properties of the one-dimensional
spin-glass model for vector spins in the limit of an infinite number $m$ of
spin components, where the interactions decay with a power $\sigma$ of the
distance. A diluted version of this model is also studied, but found to
deviate significantly from the fully connected model. At zero temperature,
defect energies are determined from the difference in ground-state energies
between systems with periodic and antiperiodic boundary conditions, in order
to determine the dependence of the defect-energy exponent $\theta$ on
$\sigma$. A good fit to this dependence is $\theta = 3/4 - \sigma$. This
implies that the upper critical value of $\sigma$ is $3/4$, corresponding to
the lower critical dimension in the $d$-dimensional short-range version of the
model. For finite temperatures, the large-$m$ saddle-point equations are
solved self-consistently, which gives access to the correlation function, the
order parameter and the spin-glass susceptibility. Special attention is paid
to the different forms of finite-size scaling effects below and above the
lower critical value $\sigma = 5/8$, which corresponds to the upper critical
dimension 8 of the hypercubic short-range model.
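In symbols, the zero-temperature procedure amounts to (standard defect-energy notation; the exponent and critical values are those quoted in the abstract):

$$ \Delta E(L) = E_0^{\mathrm{AP}}(L) - E_0^{\mathrm{P}}(L), \qquad \langle |\Delta E(L)| \rangle \sim L^{\theta}, \qquad \theta \approx \tfrac{3}{4} - \sigma, $$

so that $\theta$ changes sign, and the ordered phase is lost, at the upper critical value $\sigma = 3/4$.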
Optimal discrete stopping times for reliability growth tests
Often, the duration of a reliability growth development test is specified in advance, and the decision to terminate or continue testing is made at discrete time intervals. These features are normally not captured by reliability growth models. This paper adapts a standard reliability growth model to determine the optimal time at which to plan to terminate testing. The underlying stochastic process is developed from an order-statistic argument, with Bayesian inference used to estimate the number of faults within the design and classical inference procedures used to assess the rate of fault detection. Inference procedures within this framework are explored, and it is shown that the maximum likelihood estimators possess a small bias and converge to the minimum variance unbiased estimator after a few tests for designs with a moderate number of faults. It is also shown that the likelihood function can be bimodal when there is conflict between the observed rate of fault detection and the prior distribution describing the number of faults in the design. An illustrative example is provided.
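To make the interplay of prior and detection data concrete, the sketch below uses a deliberately simplified stand-in model, not the paper's order-statistic process: each of N faults is detected independently with a fixed per-test probability p, and a Poisson prior on N is combined with the observed first-detection counts. All numbers are illustrative.

    import numpy as np
    from scipy.stats import poisson, binom

    # Hypothetical simplified model (NOT the paper's): d[k] faults are
    # first detected in test k; each undetected fault is found with
    # probability p in every test.
    d = np.array([3, 2, 2, 1, 0, 1])   # illustrative detection counts
    total = d.sum()
    prior = poisson(5)                 # prior belief about the fault count

    def log_likelihood(N, p, d):
        """Probability of the first-detection history given N faults."""
        remaining, ll = N, 0.0
        for dk in d:
            ll += binom(remaining, p).logpmf(dk)
            remaining -= dk
        return ll

    Ns = np.arange(total, 40)
    log_post = np.array(
        [log_likelihood(N, 0.25, d) + prior.logpmf(N) for N in Ns])
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    print("posterior mode for N:", Ns[post.argmax()])

A sharp prior centered far from the fault count suggested by the detection rate can, in models of this type, produce the kind of multimodal likelihood surface the paper reports.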
Hierarchically nested factor model from multivariate data
We show how to achieve a statistical description of the hierarchical
structure of a multivariate data set. Specifically we show that the similarity
matrix resulting from a hierarchical clustering procedure is the correlation
matrix of a factor model, the hierarchically nested factor model. In this
model, factors are mutually independent and hierarchically organized. Finally,
we use a bootstrap-based procedure to reduce the number of factors in the
model, with the aim of retaining only those factors that are significantly
robust with respect to the statistical uncertainty due to the finite length of
the data records.
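A minimal sketch of the construction described above, under assumed choices (average linkage, the standard correlation distance d = sqrt(2(1 - rho)), synthetic data): the cophenetic level at which two variables first join a common cluster is mapped back to a correlation, giving the correlation matrix of a hierarchically nested model.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, cophenet
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))      # illustrative data: 8 variables
    X[:, 1] += 0.8 * X[:, 0]           # plant some correlation structure
    X[:, 3] += 0.8 * X[:, 2]

    C = np.corrcoef(X, rowvar=False)
    # Metric distance for correlations: d_ij = sqrt(2 * (1 - rho_ij))
    D = np.sqrt(2.0 * (1.0 - C))
    np.fill_diagonal(D, 0.0)
    Z = linkage(squareform(D, checks=False), method='average')

    # Cophenetic distance = level at which i and j first merge; inverting
    # the distance transform yields the nested correlation matrix.
    coph = squareform(cophenet(Z))
    C_nested = 1.0 - 0.5 * coph ** 2
    np.fill_diagonal(C_nested, 1.0)
    print(np.round(C_nested, 2))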
Analyzing 2D gel images using a two-component empirical Bayes model
Background: Two-dimensional polyacrylamide gel electrophoresis (2D gel, 2D PAGE, 2-DE) is a powerful tool for analyzing the proteome of an organism. Differential analysis of 2D gel images aims at finding proteins that change under different conditions, which leads to large-scale hypothesis testing as in microarray data analysis. Two-component empirical Bayes (EB) models have been widely discussed for large-scale hypothesis testing and applied in the context of genomic data, but they have not been implemented for the differential analysis of 2D gel data. In the literature, the mixture and null densities of the test statistics are estimated separately. The estimation of the mixture density does not take into account assumptions about the null density, so there is no guarantee that the estimated null component will be no greater than the mixture density, as it should be.

Results: We present an implementation of a two-component EB model for the analysis of 2D gel images. In contrast to the published estimation method, we propose to estimate the mixture and null densities simultaneously using a constrained estimation approach, which relies on an iteratively re-weighted least-squares algorithm. The assumption about the null density is naturally taken into account in the estimation of the mixture density. This strategy is illustrated using a set of 2D gel images from a factorial experiment, and the proposed approach is validated using a set of simulated gels.

Conclusions: The two-component EB model is very useful for large-scale hypothesis testing. In proteomic analysis, the theoretical null density is often not appropriate. We demonstrate how to implement a two-component EB model for analyzing a set of 2D gel images and show that it is necessary to estimate the mixture density and empirical null component simultaneously. The proposed constrained estimation method always yields valid estimates and more stable results, and it can be applied to other contexts where large-scale hypothesis testing occurs.
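A minimal sketch of the two-component mixture logic, using a known null proportion and a plain kernel estimate rather than the authors' constrained iteratively re-weighted least-squares fit: the marginal density f(z) = pi0 f0(z) + (1 - pi0) f1(z) is estimated from the test statistics and the local false discovery rate pi0 f0(z) / f(z) is computed. All data and parameters are simulated stand-ins for 2D gel spot statistics.

    import numpy as np
    from scipy.stats import norm, gaussian_kde

    # Simulated z-scores: 90% null N(0,1), 10% shifted alternatives.
    rng = np.random.default_rng(0)
    z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])

    pi0 = 0.9                     # null proportion, assumed known here
    f = gaussian_kde(z)           # unconstrained mixture density estimate
    f0 = norm(0, 1).pdf           # theoretical null density

    # Local false discovery rate. Without a joint constrained fit the
    # ratio can exceed 1, so it must be clipped -- exactly the defect the
    # paper's simultaneous constrained estimation is designed to avoid.
    lfdr = np.clip(pi0 * f0(z) / f(z), 0.0, 1.0)
    print("spots declared interesting (lfdr < 0.2):", (lfdr < 0.2).sum())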
Pivotal estimation in high-dimensional regression via linear programming
We propose a new method of estimation in the high-dimensional linear
regression model. It allows for very weak distributional assumptions,
including heteroscedasticity, and does not require knowledge of the variance
of the random errors. The method is based on linear programming only, so its
numerical implementation is faster than for previously known techniques using
conic programs, and it allows one to deal with higher-dimensional models. We
provide upper bounds for the estimation and prediction errors of the proposed
estimator, showing that it achieves the same rate as in the more restrictive
situation of fixed design and i.i.d. Gaussian errors with known variance.
Following Gautier and Tsybakov (2011), we obtain the results under weaker
sensitivity assumptions than the restricted eigenvalue or assimilated
conditions.
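The paper's pivotal estimator itself is not reproduced here. As a minimal illustration of the general "estimation via linear programming" idea, the sketch below solves the classical Dantzig selector (Candes and Tao, 2007), a related LP-based estimator that, unlike the paper's, assumes a known tuning level, with scipy.optimize.linprog. Dimensions and the tuning constant are illustrative.

    import numpy as np
    from scipy.optimize import linprog

    # Dantzig selector:  min ||beta||_1  s.t.  ||X^T (y - X beta)||_inf <= lam
    rng = np.random.default_rng(0)
    n, p, s = 50, 100, 5
    X = rng.normal(size=(n, p))
    beta_true = np.zeros(p)
    beta_true[:s] = 2.0
    y = X @ beta_true + 0.5 * rng.normal(size=n)

    lam = 2.0 * np.sqrt(n * np.log(p))   # illustrative tuning choice
    G, xty = X.T @ X, X.T @ y

    # Split beta = u - v with u, v >= 0; minimize sum(u) + sum(v) subject
    # to the two-sided correlation constraint written as A z <= b.
    A = np.block([[G, -G], [-G, G]])
    b = np.concatenate([xty + lam, lam - xty])
    res = linprog(c=np.ones(2 * p), A_ub=A, b_ub=b,
                  bounds=(0, None), method='highs')
    beta_hat = res.x[:p] - res.x[p:]
    print("nonzeros recovered:", np.flatnonzero(np.abs(beta_hat) > 0.1))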
Site dilution of quantum spins in the honeycomb lattice
We discuss the effect of site dilution on both the magnetization and the
density of states of quantum spins in the honeycomb lattice, described by the
antiferromagnetic Heisenberg spin-S model. For this purpose a real-space
Bogoliubov-Valatin transformation is used. We show that for S>1/2 the system
can be analyzed in terms of linear spin wave theory. For spin S=1/2, however,
the linear spin wave approximation breaks down. In this case, we have studied
the effect of dilution on the staggered magnetization using the Stochastic
Series Expansion Monte Carlo method. Two main results from the Monte Carlo
method are to be stressed: (i) a better value for the staggered magnetization
of the undiluted system, m=0.2677(6); (ii) a finite value of the staggered
magnetization of the percolating cluster at the classical percolation
threshold, showing that there is no quantum critical transition driven by
dilution in the Heisenberg model. In the solution of the problem using the
linear spin wave method we pay special attention to the presence of
zero-energy modes. Using a combination of linear spin wave analysis and the
recursion method, we obtain the thermodynamic-limit behavior of the density of
states for both the square and the honeycomb lattices. We use both the
staggered magnetization and the density of states to analyze neutron
scattering experiments and Néel temperature measurements on
quasi-two-dimensional honeycomb systems. Our results are in quantitative
agreement with experimental results on Mn_pZn_{1-p}PS_3 and on
Ba(Ni_pMg_{1-p})_2V_2O_8.
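The quantum simulations above are beyond a short sketch, but the geometric side of the problem, site dilution and the classical percolating cluster on the honeycomb lattice, can be illustrated directly. Lattice size and dilution values below are assumed; the known site-percolation threshold of the honeycomb lattice is p_c ~ 0.697, where the largest-cluster fraction rises from near zero.

    import numpy as np
    from collections import deque

    L = 64   # L x L unit cells, two sites (A, B) per cell

    def neighbors(i, j, s):
        """Three neighbours of site (i, j, sublattice s), periodic boundaries."""
        if s == 0:   # A site: bonds to B in cells (i, j), (i-1, j), (i, j-1)
            return ((i, j, 1), ((i - 1) % L, j, 1), (i, (j - 1) % L, 1))
        return ((i, j, 0), ((i + 1) % L, j, 0), (i, (j + 1) % L, 0))

    def largest_cluster_fraction(p, rng):
        """Occupy sites with probability p; return largest cluster per site."""
        occ = rng.random((L, L, 2)) < p
        seen = np.zeros_like(occ)
        best = 0
        for start in map(tuple, np.argwhere(occ)):
            if seen[start]:
                continue
            seen[start] = True
            queue, size = deque([start]), 0
            while queue:           # breadth-first search over one cluster
                site = queue.popleft()
                size += 1
                for nb in neighbors(*site):
                    if occ[nb] and not seen[nb]:
                        seen[nb] = True
                        queue.append(nb)
            best = max(best, size)
        return best / (2 * L * L)

    rng = np.random.default_rng(0)
    for p in (0.60, 0.65, 0.70, 0.75, 0.80):
        print(f"p = {p:.2f}  largest cluster fraction = "
              f"{largest_cluster_fraction(p, rng):.3f}")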