Self-checking on-line testable static RAM
This is a fault-tolerant random access memory for use in fault-tolerant computers. It comprises a plurality of memory chips, each containing a plurality of on-line testable and correctable memory cells disposed in rows and columns for holding individually addressable binary bits, with error-detection circuitry incorporated into each memory cell for outputting an error signal whenever a transient error occurs therein. In one embodiment, each memory cell comprises a pair of static memory sub-cells that simultaneously receive and hold a common binary data bit written to the cell, and the error-detection circuitry comprises comparator logic that continuously senses and compares the contents of the two sub-cells, outputting the error signal whenever they do not match. In another embodiment, each memory cell comprises a static memory sub-cell and a dynamic memory sub-cell that simultaneously receive and hold a common binary data bit, with comparator logic continuously comparing the contents of the static sub-cell to those of the dynamic sub-cell and outputting the error signal whenever they do not match. Capability for correction of errors is also included.
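A minimal software analogue can illustrate the dual-sub-cell scheme. The sketch below is hypothetical (the `DualCell` class and its method names are ours, not from the patent, which describes hardware): each write stores the same bit in two sub-cells, and each read compares them, asserting an error signal on mismatch.

```python
class DualCell:
    """Software analogue of a dual static sub-cell memory cell.

    A write stores the common data bit in both sub-cells; the patent's
    comparator logic is modeled by checking the sub-cells against each
    other on every read. (Hypothetical sketch, not the patent's circuit.)
    """

    def __init__(self):
        self.sub_a = 0  # first static sub-cell
        self.sub_b = 0  # second static sub-cell

    def write(self, bit: int) -> None:
        # Both sub-cells simultaneously receive the common data bit.
        self.sub_a = bit
        self.sub_b = bit

    def read(self) -> tuple[int, bool]:
        # Compare the sub-cells; the error signal fires on any mismatch.
        error = self.sub_a != self.sub_b
        return self.sub_a, error


cell = DualCell()
cell.write(1)
cell.sub_b = 0          # simulate a transient error flipping one sub-cell
value, error = cell.read()
print(value, error)     # 1 True -- the mismatch raises the error signal
```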
Approximation of real error channels by Clifford channels and Pauli measurements
The Gottesman-Knill theorem allows for the efficient simulation of stabilizer-based quantum error-correction circuits. Errors in these circuits are commonly modeled as depolarizing channels by using Monte Carlo methods to insert Pauli gates randomly throughout the circuit. Although convenient, these channels are poor approximations of common, realistic channels like amplitude damping. Here we analyze a larger set of efficiently simulable error channels by allowing the random insertion of any one-qubit gate or measurement that can be efficiently simulated within the stabilizer formalism. Our new error channels are shown to be a viable method for accurately approximating real error channels.
Efficient Estimation of Approximate Factor Models via Regularized Maximum Likelihood
We study the estimation of a high-dimensional approximate factor model in the presence of both cross-sectional dependence and heteroskedasticity. The classical method of principal components analysis (PCA) does not efficiently estimate the factor loadings or common factors because it essentially treats the idiosyncratic errors as homoskedastic and cross-sectionally uncorrelated. For efficient estimation it is essential to estimate a large error covariance matrix. We assume the model to be conditionally sparse and propose two approaches to estimating the common factors and factor loadings; both are based on maximizing a Gaussian quasi-likelihood and involve regularizing a large sparse covariance matrix. In the first approach the factor loadings and the error covariance are estimated separately, while in the second approach they are estimated jointly. Extensive asymptotic analysis has been carried out. In particular, we develop the inferential theory for the two-step estimation. Because the proposed approaches take into account the large error covariance matrix, they produce more efficient estimators than the classical PCA method or methods based on a strict factor model.
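As a rough illustration of the two-step idea (not the paper's actual regularized-ML estimator), the sketch below first extracts factors by PCA and then hard-thresholds the sample covariance of the residuals to obtain a sparse error-covariance estimate; the threshold rule, function name, and simulated data are our assumptions.

```python
import numpy as np

def two_step_factor_sketch(X, r, tau):
    """Illustrative two-step sketch for a T x N panel X with r factors
    and hard-thresholding level tau (our choices, not the paper's
    exact estimator)."""
    T, N = X.shape
    # Step 1: PCA -- estimated factors are sqrt(T) times the top-r
    # eigenvectors of XX'/(TN); loadings follow by least squares.
    eigval, eigvec = np.linalg.eigh(X @ X.T / (T * N))
    F = np.sqrt(T) * eigvec[:, -r:]          # T x r estimated factors
    L = X.T @ F / T                          # N x r estimated loadings
    # Step 2: hard-threshold the residual sample covariance to get a
    # sparse estimate of the large error covariance matrix.
    U = X - F @ L.T                          # idiosyncratic residuals
    S = U.T @ U / T                          # N x N sample covariance
    mask = np.abs(S) >= tau
    np.fill_diagonal(mask, True)             # always keep the variances
    return F, L, S * mask

rng = np.random.default_rng(0)
F0 = rng.standard_normal((200, 2))
L0 = rng.standard_normal((50, 2))
X = F0 @ L0.T + rng.standard_normal((200, 50))
F, L, Sigma_u = two_step_factor_sketch(X, r=2, tau=0.2)
print(Sigma_u.shape, (Sigma_u != 0).mean())  # sparsity of the estimate
```

The paper's efficiency gain comes from feeding such a regularized error covariance back into a Gaussian quasi-likelihood rather than stopping at the PCA step.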
Macroeconomic Integration in Asia Pacific: Common Stochastic Trends and Business Cycle Coherence
This paper addresses the question of macroeconomic integration in the Asian Pacific region. Economically, the analysis is based on the notions of stochastic long-run convergence and business cycle coherence. The econometric procedure consists of tests for cointegration, the examination of vector error correction models, several variants of common cycle tests, and forecast error variance decompositions. Results in favour of cyclical synchrony can be partly established, and are even exceeded by the broad evidence for equilibrium relations. In these domains, several leading countries are identified.
Keywords: Real Convergence, Cointegration, Common Cycles, Asia Pacific
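A minimal version of such a cointegration/VECM exercise can be run with statsmodels. The sketch below is our illustration, with simulated series standing in for the paper's Asia-Pacific data: it applies the Johansen cointegration test and fits a vector error correction model with one cointegrating relation.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

# Simulated stand-in for two countries' (log) output series sharing a
# common stochastic trend -- illustrative only, not the paper's data.
rng = np.random.default_rng(1)
trend = np.cumsum(rng.standard_normal(300))
y1 = trend + rng.standard_normal(300)
y2 = 0.8 * trend + rng.standard_normal(300)
data = np.column_stack([y1, y2])

# Johansen cointegration test: trace statistics vs. 95% critical values.
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace stats:        ", jres.lr1)
print("95% critical values:", jres.cvt[:, 1])

# Vector error correction model with one cointegrating relation;
# alpha holds the error-correction (adjustment) coefficients.
vecm_res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print("alpha (adjustment):", vecm_res.alpha.ravel())
```

A trace statistic above its critical value indicates a long-run equilibrium relation of the kind the paper reports broad evidence for.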
Iatrogenic Specification Error: A Cautionary Tale of Cleaning Data
It is common in empirical research to use what appear to be sensible rules of thumb for cleaning data. Measurement error is often the justification for removing (trimming) or recoding (winsorizing) observations whose values lie outside a specified range. This paper considers identification in a linear model when the dependent variable is mismeasured. The results examine the common practice of trimming and winsorizing to address the identification failure. In contrast to the physical and laboratory sciences, measurement error in social science data is likely to be more complex than simply additive white noise. We consider a general measurement error process which nests many processes, including the additive white noise process and a contaminated sampling process. Analytic results are only tractable under strong distributional assumptions, but demonstrate that winsorizing and trimming are solutions only for a particular class of measurement error processes. Indeed, trimming and winsorizing may induce or exacerbate bias. We term this source of bias 'iatrogenic' (or econometrician-induced) error. The identification results for the general error process highlight other approaches which are more robust to distributional assumptions. Monte Carlo simulations demonstrate the fragility of trimming and winsorizing as solutions to measurement error in the dependent variable.
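A small Monte Carlo in the spirit of the abstract (our illustration, not the paper's design) shows how winsorizing a contaminated dependent variable can shift the OLS slope estimate; the contamination rate, quantile cutoffs, and function names are our assumptions.

```python
import numpy as np

def winsorize(y, lower_q=0.05, upper_q=0.95):
    """Recode values outside the [5%, 95%] quantile range to the
    nearest bound (the cutoffs are our illustrative choice)."""
    lo, hi = np.quantile(y, [lower_q, upper_q])
    return np.clip(y, lo, hi)

def ols_slope(x, y):
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

rng = np.random.default_rng(42)
n, reps, beta = 500, 2000, 1.0
slopes_raw, slopes_win = [], []
for _ in range(reps):
    x = rng.standard_normal(n)
    y_true = beta * x + rng.standard_normal(n)
    # Contaminated sampling: 5% of the y observations are replaced
    # by draws from an unrelated wide distribution.
    contam = rng.random(n) < 0.05
    y_obs = np.where(contam, rng.normal(0.0, 10.0, n), y_true)
    slopes_raw.append(ols_slope(x, y_obs))
    slopes_win.append(ols_slope(x, winsorize(y_obs)))

print("mean slope, raw:       ", np.mean(slopes_raw))
print("mean slope, winsorized:", np.mean(slopes_win))
```

Because this contamination is mean-zero and independent of x, winsorizing clips legitimate extreme responses along with the contaminated ones and can push the slope further from the truth, illustrating the kind of iatrogenic bias the authors describe.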
