
    Systematic and random errors in rotating-analyzer ellipsometry

    Errors and error sources occurring in rotating-analyzer ellipsometry are discussed. From general considerations it is shown that a rotating-analyzer ellipsometer is inaccurate when applied at P = 0° and in cases where ψ is near 0° or Δ is near 0° or 180°. Window errors, component imperfections, azimuth errors and all other errors may, to first order, be treated independently and can subsequently be added. Explicit first-order expressions for the errors δΔ and δψ caused by windows, component imperfections, and azimuth errors are derived, showing that all of them, except the window errors, are eliminated in a two-zone measurement. Higher-order errors due to azimuth errors are studied numerically, revealing that they are in general less than 0.1°. Statistical errors caused by noise and by correlated perturbations, i.e., periodic fluctuations of the light source, are also considered. Such periodic perturbations do cause random errors, especially when they have frequencies near 2ω_A and 4ω_A.
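    The sensitivity to perturbations near 2ω_A can be made concrete numerically. Below is a minimal sketch (our construction, not the paper's) assuming the standard rotating-analyzer detected intensity I(t) ∝ 1 + α cos 2ω_A t + β sin 2ω_A t, from which Δ and ψ are derived; the rotation frequency, Fourier coefficients, modulation depth, and frequency offset are illustrative assumptions.

```python
import numpy as np

# A rotating-analyzer ellipsometer derives Delta and psi from the Fourier
# coefficients a, b of I(t) ~ 1 + a*cos(2*w_A*t) + b*sin(2*w_A*t).  A source
# fluctuation with a frequency near 2*w_A leaks into those coefficients.
w_A = 2 * np.pi * 10.0                 # analyzer rotation frequency (assumed)
a_true, b_true = 0.30, 0.55            # assumed Fourier coefficients
t = np.linspace(0.0, 2 * np.pi / w_A, 4096, endpoint=False)  # one revolution

def fourier_coeffs(signal, t, w):
    """Project the signal onto cos(w*t) and sin(w*t) over the window t."""
    dt = t[1] - t[0]
    T = len(t) * dt
    a = (2.0 / T) * np.sum(signal * np.cos(w * t)) * dt
    b = (2.0 / T) * np.sum(signal * np.sin(w * t)) * dt
    return a, b

I_clean = 1.0 + a_true * np.cos(2 * w_A * t) + b_true * np.sin(2 * w_A * t)
# Correlated perturbation: a 2 % source fluctuation 1 % away from 2*w_A.
I_pert = I_clean * (1.0 + 0.02 * np.cos(2.02 * w_A * t + 0.7))

for label, sig in [("clean", I_clean), ("perturbed", I_pert)]:
    a, b = fourier_coeffs(sig, t, 2 * w_A)
    print(f"{label:9s} a = {a:+.4f}  b = {b:+.4f}")
```

    Over a single revolution, the clean signal returns the exact coefficients, while the near-2ω_A perturbation shifts them by roughly the modulation depth, i.e., an error that does not average away.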

    Efficiently decoding Reed-Muller codes from random errors

    Reed-Muller codes encode an m-variate polynomial of degree r by evaluating it on all points in {0,1}^m. We denote this code by RM(m,r). The minimal distance of RM(m,r) is 2^(m-r), so it cannot correct more than half that number of errors in the worst case. For random errors one may hope for a better result. In this work we give an efficient algorithm (in the block length n = 2^m) for decoding random errors in Reed-Muller codes far beyond the minimal distance. Specifically, for low-rate codes (of degree r = o(√m)) we can correct a random set of (1/2 − o(1))n errors with high probability. For high-rate codes (of degree m − r for r = o(√(m/log m))), we can correct roughly m^(r/2) errors. More generally, for any integer r, our algorithm can correct any error pattern in RM(m, m−(2r+2)) for which the same erasure pattern can be corrected in RM(m, m−(r+1)). The results above are obtained by applying recent results of Abbe, Shpilka and Wigderson (STOC, 2015), Kumar and Pfister (2015) and Kudekar et al. (2015) regarding the ability of Reed-Muller codes to correct random erasures. The algorithm is based on solving a carefully defined set of linear equations and is thus significantly different from other algorithms for decoding Reed-Muller codes that are based on the recursive structure of the code. It can be seen as a more explicit proof of a result of Abbe et al. that shows a reduction from correcting erasures to correcting errors, and it also bears some similarities with the famous Berlekamp-Welch algorithm for decoding Reed-Solomon codes. Comment: 18 pages, 2 figures
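    The erasure-correction step that the reduction builds on can be sketched directly: restrict the generator matrix to the unerased coordinates and solve the resulting linear system over GF(2). This is only a minimal illustration of decoding by linear equations, not the paper's error-decoding algorithm; rm_generator, gf2_solve, and the parameters are our own.

```python
import itertools
import numpy as np

def rm_generator(m, r):
    """Generator matrix of RM(m, r): one row per monomial of degree <= r,
    evaluated on all 2^m points of {0,1}^m (arithmetic over GF(2))."""
    pts = np.array(list(itertools.product([0, 1], repeat=m)), dtype=np.uint8)
    rows = []
    for d in range(r + 1):
        for mono in itertools.combinations(range(m), d):
            col = np.ones(len(pts), dtype=np.uint8)
            for i in mono:
                col &= pts[:, i]        # multiplication over GF(2) is AND
            rows.append(col)
    return np.array(rows)

def gf2_solve(A, b):
    """One solution of A x = b over GF(2) via Gaussian elimination."""
    A, b = A.copy(), b.copy()
    n_rows, n_cols = A.shape
    pivots, row = [], 0
    for col in range(n_cols):
        piv = next((i for i in range(row, n_rows) if A[i, col]), None)
        if piv is None:
            continue
        A[[row, piv]], b[[row, piv]] = A[[piv, row]], b[[piv, row]]
        for i in range(n_rows):
            if i != row and A[i, col]:
                A[i] ^= A[row]
                b[i] ^= b[row]
        pivots.append(col)
        row += 1
    x = np.zeros(n_cols, dtype=np.uint8)
    for i, col in enumerate(pivots):
        x[col] = b[i]
    return x

m, r = 4, 1
G = rm_generator(m, r)                      # k x n with n = 2^m = 16, k = 5
k, n = G.shape
rng = np.random.default_rng(1)
u = rng.integers(0, 2, size=k, dtype=np.uint8)
c = (u @ G) % 2                             # transmitted codeword

# Any d - 1 = 2^(m-r) - 1 erasures are always correctable, so the restricted
# system below has a unique solution; random patterns far beyond that remain
# correctable with high probability.
erased = rng.choice(n, size=2 ** (m - r) - 1, replace=False)
keep = np.setdiff1d(np.arange(n), erased)
u_hat = gf2_solve(G[:, keep].T.astype(np.uint8), c[keep].astype(np.uint8))
print("message recovered:", np.array_equal(u_hat, u))
```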

    Sequential bifurcation for observations with random errors

    Simulation; Bifurcation; Analysis

    Redundancy Allocation of Partitioned Linear Block Codes

    Most memories suffer from both permanent defects and intermittent random errors. Partitioned linear block codes (PLBC) were proposed by Heegard to efficiently mask stuck-at defects and correct random errors; a PLBC has two separate redundancy parts, one for defects and one for random errors. In this paper, we investigate the allocation of redundancy between these two parts. We study the optimal redundancy allocation using simulations, and the results show that PLBC can significantly reduce the probability of decoding failure in a memory with defects. In addition, we derive an upper bound on the probability of decoding failure of PLBC and use this bound to estimate the optimal redundancy allocation. The estimated redundancy allocation matches the optimal allocation well. Comment: 5 pages, 2 figures, to appear in IEEE International Symposium on Information Theory (ISIT), Jul. 201
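    The flavor of the allocation problem can be shown with a toy model. The capability rule below (l redundant bits mask up to l defects; r − l bits correct up to ⌊(r − l)/2⌋ errors) and all rates are stylized assumptions for illustration, not Heegard's construction or the paper's bound.

```python
import numpy as np
from scipy.stats import binom

# Of r redundant bits, l are spent on stuck-at defects and r - l on random
# errors.  Defect and error counts are modeled as binomial.
n, r = 1024, 64                      # block length, total redundancy (assumed)
p_defect, p_error = 0.005, 0.002     # per-cell defect / error rates (assumed)

def failure_bound(l):
    """Union bound: fail if there are more defects than we can mask or
    more random errors than we can correct."""
    t = (r - l) // 2
    return binom.sf(l, n, p_defect) + binom.sf(t, n, p_error)

bounds = [failure_bound(l) for l in range(r + 1)]
l_star = int(np.argmin(bounds))
print(f"estimated split: l = {l_star} for defects, r - l = {r - l_star} "
      f"for errors, bound = {bounds[l_star]:.3e}")
```

    Sweeping the split l and minimizing the bound mirrors, in miniature, how an analytic upper bound can stand in for full simulation when estimating the optimal allocation.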

    The consistency of estimator under fixed design regression model with NQD errors

    In this article, based on NQD samples, we investigate the fixed-design nonparametric regression model, where the errors are pairwise NQD random variables, the design points are fixed, and the regression function is unknown. A nonparametric weighted estimator is introduced and its consistency is studied. As a special case, the consistency result for weighted kernel estimators of the model is obtained. This extends earlier work on independent and dependent random errors to the NQD case.
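    For concreteness, here is a minimal sketch of a weighted estimator for the fixed-design model Y_i = f(x_i) + e_i. The normalized Gaussian kernel weights, bandwidth, and test function are our own choices, and independent errors stand in as the simplest (degenerate) NQD case; the paper's interest is precisely in relaxing that independence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 1.0, n)               # fixed design points
f = lambda t: np.sin(2 * np.pi * t)        # "unknown" regression function
y = f(x) + 0.2 * rng.standard_normal(n)    # observations with random errors

def weighted_estimate(x0, x, y, h=0.05):
    """f_hat(x0) = sum_i W_i(x0) * Y_i with kernel weights summing to one."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

grid = np.linspace(0.0, 1.0, 11)
f_hat = np.array([weighted_estimate(x0, x, y) for x0 in grid])
print("max abs error on grid:", np.max(np.abs(f_hat - f(grid))))
```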

    Two Theorems in List Decoding

    We prove the following results concerning the list decoding of error-correcting codes: (i) We show that for any code with a relative distance of δ (over a large enough alphabet), the following result holds for random errors: with high probability, for a ρ ≤ δ − ε fraction of random errors (for any ε > 0), the received word will have only the transmitted codeword in a Hamming ball of radius ρ around it. Thus, for random errors, one can correct twice the number of errors uniquely correctable from worst-case errors for any code. A variant of our result also gives a simple algorithm to decode Reed-Solomon codes from random errors that, to the best of our knowledge, runs faster than known algorithms for certain ranges of parameters. (ii) We show that concatenated codes can achieve the list decoding capacity for erasures. A similar result for worst-case errors was proven by Guruswami and Rudra (SODA 08), although their result does not directly imply our result. Our results show that a subset of the random ensemble of codes considered by Guruswami and Rudra also achieves the list decoding capacity for erasures. Our proofs employ simple counting and probabilistic arguments. Comment: 19 pages, 0 figures
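    Claim (i) can be checked empirically on a small code. The Monte Carlo sketch below uses a toy Reed-Solomon code over GF(17); the dimension, error weight, and trial count are our choices, and exhaustive enumeration of the codebook stands in for the paper's counting argument.

```python
import itertools
import numpy as np

# Unique decoding guarantees only (d - 1) // 2 = 7 errors for this code,
# yet random errors of weight 12 almost always leave the transmitted
# codeword alone in the radius-12 Hamming ball around the received word.
q, k = 17, 2                  # prime field, dimension; n = q, d = n - k + 1 = 16
n = q
xs = np.arange(q, dtype=np.int64)

def encode(msg):
    """Evaluate the degree-<k polynomial `msg` (low coeff first) on GF(q)."""
    vals = np.zeros(n, dtype=np.int64)
    for coef in reversed(msg):            # Horner's rule mod q
        vals = (vals * xs + coef) % q
    return vals

codebook = np.array([encode(m) for m in itertools.product(range(q), repeat=k)])

rng = np.random.default_rng(0)
e_weight, trials, unique = 12, 200, 0
for _ in range(trials):
    c = codebook[rng.integers(len(codebook))]
    y = c.copy()
    pos = rng.choice(n, size=e_weight, replace=False)
    y[pos] = (y[pos] + rng.integers(1, q, size=e_weight)) % q  # nonzero errors
    dists = np.sum(codebook != y, axis=1)
    unique += int(np.sum(dists <= e_weight) == 1)   # only c within the ball?
print(f"transmitted codeword unique within radius {e_weight}: "
      f"{unique}/{trials} trials")
```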

    On the effect of random errors in gridded bathymetric compilations

    We address the problem of compiling bathymetric data sets with heterogeneous coverage and a range of data measurement accuracies. To generate a regularly spaced grid, we are obliged to interpolate sparse data; our objective here is to augment this product with an estimate of confidence in the interpolated bathymetry, based on our knowledge of the random-error component of the bathymetric source data. Using a direct simulation Monte Carlo method, we use data from the International Bathymetric Chart of the Arctic Ocean database to develop a suitable methodology for assessing the standard deviations of depths in the interpolated grid. Our assessment of random errors in each data set is heuristic but realistic, based on available metadata from the data providers. We show that a confidence grid can be built using this method and that it can be used to assess the reliability of the final compilation. The methodology developed here is applied to bathymetric data but is equally applicable to other interpolated data sets, such as gravity and magnetic data.
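    The direct-simulation idea is simple to sketch: perturb each sounding by its assumed random error, re-grid every realization, and take the per-node standard deviation as the confidence grid. The synthetic tracks, seafloor model, per-source accuracies, and interpolation settings below are our illustrative assumptions, not IBCAO data.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
n_pts = 300
x, y = rng.uniform(0, 100, n_pts), rng.uniform(0, 100, n_pts)  # sparse soundings
depth = -2000 + 5 * x + 3 * y + 50 * np.sin(x / 10)            # synthetic seafloor
sigma = rng.choice([2.0, 10.0, 50.0], size=n_pts)   # per-source accuracies (m)

gx, gy = np.meshgrid(np.linspace(5, 95, 50), np.linspace(5, 95, 50))
realizations = []
for _ in range(100):                                 # Monte Carlo realizations
    z = depth + sigma * rng.standard_normal(n_pts)   # one perturbed data set
    realizations.append(griddata((x, y), z, (gx, gy), method="linear"))
confidence = np.nanstd(np.stack(realizations), axis=0)   # std. dev. per node
print("median grid standard deviation (m):",
      round(float(np.nanmedian(confidence)), 2))
```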

    Random Errors in Superconducting Dipoles

    The magnetic field in a superconducting magnet is mainly determined by the position of the conductors; hence, the main contribution to the random field errors comes from random displacement of the coil with respect to its nominal position. Using a Monte Carlo method, we analyze the measured random field errors of the main dipoles of the LHC, Tevatron, RHIC and HERA projects in order to estimate the precision of conductor positioning reached during production. The method can be used to obtain more refined estimates of the random components for future projects.
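    A sketch in the spirit of this study: displace each line current of a crude two-block dipole coil at random and look at the spread of the induced field harmonics. The geometry, reference radius, and rms tolerance below are illustrative assumptions, not an actual magnet cross-section; the multipole expansion of a line current is the only physics used.

```python
import numpy as np

rng = np.random.default_rng(0)
R, Rref = 30e-3, 17e-3            # coil and reference radius (m), assumed
d_rms = 50e-6                     # assumed rms conductor displacement (m)
ang = np.linspace(-np.pi / 3, np.pi / 3, 20)
z0 = np.concatenate([R * np.exp(1j * ang),        # +I block (right)
                     -R * np.exp(-1j * ang)])     # -I mirror block (left)
cur = np.concatenate([np.ones(20), -np.ones(20)])

def harmonics(z, cur, n_max=6):
    """C_n in B_y + i*B_x = sum_n C_n (w/Rref)^(n-1) for unit line currents
    at complex positions z (constant mu0/2pi dropped)."""
    return np.array([np.sum(cur / z * (Rref / z) ** (k - 1))
                     for k in range(1, n_max + 1)])

C_nom = harmonics(z0, cur)
deltas = []
for _ in range(2000):                             # Monte Carlo realizations
    dz = d_rms * (rng.standard_normal(40) + 1j * rng.standard_normal(40))
    deltas.append(harmonics(z0 + dz, cur) - C_nom)
# rms random multipoles in "units" of 1e-4 of the main dipole component
b_rms = 1e4 * np.std(np.real(deltas), axis=0) / abs(C_nom[0])
for order, val in enumerate(b_rms, start=1):
    print(f"sigma(b_{order}) = {val:.2f} units")
```

    Running the loop in reverse, i.e., tuning d_rms until the simulated spread matches measured random multipoles, is the kind of inference the abstract describes.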

    Distances and absolute magnitudes from trigonometric parallaxes

    We first review the current knowledge of Hipparcos systematic and random errors, in particular small-scale correlations. Then, assuming Gaussian parallax errors and using examples from the recent Hipparcos literature, we show how random errors may be misinterpreted as systematic errors, or transformed into systematic errors. Finally, we summarise how to obtain unbiased estimates of absolute magnitudes and distances, using either Bayesian or non-parametric methods. These methods may be applied to obtain either mean quantities or individual estimates. In particular, we underline the notion of astrometry-based luminosity, which avoids the truncation biases and allows a full use of Hipparcos samples. Comment: 20 pages, 8 figures, invited paper in Haguenau Colloquium "Harmonizing Cosmic Distance Scales in a Post-Hipparcos Era", 14-16/09/98, to appear in ASP Conf. Series, D. Egret and A. Heck ed
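    The bias mechanism is easy to demonstrate numerically: with Gaussian parallax errors, both 1/plx and the logarithm in M = m + 5 log10(plx) + 5 are nonlinear in the parallax, so averaging the naive estimates is biased, whereas the astrometry-based luminosity a = 10^(0.2 M) = plx · 10^(0.2 m + 1) (parallax in arcsec) is linear in the parallax and hence unbiased in the mean. The parallax, magnitude, and error below are illustrative values, not Hipparcos data.

```python
import numpy as np

rng = np.random.default_rng(0)
plx_true, sigma = 10e-3, 2e-3      # 10 mas parallax (100 pc), 20 % error
m_app = 8.0                        # apparent magnitude, taken as error-free
plx = plx_true + sigma * rng.standard_normal(200_000)
ok = plx > 0                       # the naive estimators need plx > 0

M_true = m_app + 5 * np.log10(plx_true) + 5
print("bias of <1/plx> distance (pc):", np.mean(1 / plx[ok]) - 1 / plx_true)
print("bias of naive <M> (mag):",
      np.mean(m_app + 5 * np.log10(plx[ok]) + 5) - M_true)
# The ABL needs no positivity cut, since it is linear in the parallax.
print("bias of ABL:",
      np.mean(plx * 10 ** (0.2 * m_app + 1)) - 10 ** (0.2 * M_true))
```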