Quantum error mitigation and error correction for practical quantum computation
We are rapidly entering the era of potentially useful quantum computation. To keep designing larger and more capable quantum computers, some form of algorithmic noise management will be necessary. In this thesis, I propose multiple practical advances in quantum error mitigation and error correction. First, I present a novel and intuitive way to mitigate errors using a strategy that assumes no, or very minimal, knowledge about the nature of errors. This strategy can deal with most complex noise profiles, including those that describe severe correlated errors. Second, I present proof that quantum computation is scalable on a defective planar array of qubits. This result is based on a two-dimensional surface code architecture for which I show that a finite rate of fabrication defects is not a fundamental obstacle to maintaining a non-zero error-rate threshold. The same conclusions are supported by extensive numerical studies. Finally, I give a new perspective on how to view and construct quantum error-correcting codes tailored for modular architectures. Following a given recipe, one can design codes that are compatible with the qubit connectivity demanded by the architecture. In addition, I present several product code constructions, some of which correspond to the latest developments in quantum LDPC code design. These and other practical advancements in quantum error mitigation and error correction will be crucial in guiding the design of emerging quantum computers.
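The abstract does not detail the thesis's particular mitigation strategy, so here is a hedged sketch of one standard, generic error-mitigation technique it is related to: zero-noise extrapolation, where an expectation value is measured at several artificially amplified noise levels and extrapolated back to zero noise. The `noisy` toy model and the specific scale factors are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def zero_noise_extrapolate(expectation_at_scale, scales=(1.0, 2.0, 3.0)):
    """Richardson-style zero-noise extrapolation: evaluate a noisy
    expectation value at several amplified noise levels, fit a polynomial
    through the results, and extrapolate to the zero-noise limit."""
    ys = [expectation_at_scale(s) for s in scales]
    coeffs = np.polyfit(list(scales), ys, deg=len(scales) - 1)
    return np.polyval(coeffs, 0.0)  # estimate at noise scale 0

# Toy noisy observable: ideal value 1.0, decaying as noise is amplified.
noisy = lambda s: 1.0 * np.exp(-0.1 * s)
print(zero_noise_extrapolate(noisy))  # close to the ideal value 1.0
```

Note that this black-box style of mitigation requires no model of the error channel, only the ability to amplify the noise in a controlled way, which echoes the abstract's "no or very minimal knowledge about the nature of errors" framing.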
Correction factors for oxygen and flow-rate effects on neonatal Fleisch and Lilly pneumotachometers
Objective: To assess the effects of different oxygen concentrations and flow rates on the measurement errors of neonatal pneumotachometers in heated and unheated situations and to develop correction factors to correct for these effects. Design: Prospective laboratory study. Setting: Outpatient clinic with equipment in a standardized setting. Subjects: Neonatal pneumotachometers. Interventions: In standardized conditions, the tested pneumotachometer was calibrated at a flow rate of 3 L/min with 60% oxygen and was set in series with a closed spirometer system being used as a reference. Different air-flow levels (1-9 L/min) and oxygen concentrations (21-100%) were infused into the closed system with the pneumotachometer and spirometer. Measurements and Main Results: The pneumotachometers were significantly affected by changing oxygen concentrations (p < .01) and increasing flow rates (p < .01), both of which increased the measured flow rate. Correction factors, developed by multiple regression analysis, significantly reduced the overall maximum errors of the pneumotachometers from -1.1 to 0.6 L/min to -0.5 to 0.4 L/min. Conclusions: The effects of changes in oxygen concentrations and flow rates on neonatal pneumotachometers could be considerably decreased by the use of correction factors such as those calculated in this study. This will preclude frequent calibration procedures with actual flow and oxygen levels during changes in experimental settings.
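The study's regression-based correction can be sketched as follows. The calibration numbers below are entirely hypothetical (the paper's actual measurements are not reproduced in the abstract); the point is the mechanics: regress the measurement error on flow, oxygen, and their interaction, then subtract the predicted error from each reading.

```python
import numpy as np

# Hypothetical calibration data (illustrative only, not the study's values):
# true flow (L/min), oxygen concentration (%), and the flow the device
# reports, which drifts with both oxygen level and flow rate.
true_flow = np.tile([1.0, 3.0, 5.0, 7.0, 9.0], 2)
oxygen = np.repeat([21.0, 100.0], 5)
measured = true_flow * (1 + 0.004 * (oxygen - 60.0)) + 0.05

# Multiple regression of the measurement error on measured flow, oxygen,
# and their interaction, mirroring the correction-factor approach above.
error = measured - true_flow
X = np.column_stack([np.ones_like(measured), measured, oxygen, measured * oxygen])
beta, *_ = np.linalg.lstsq(X, error, rcond=None)

def corrected_flow(meas, o2):
    """Subtract the regression-predicted error from a raw reading."""
    features = np.array([1.0, meas, o2, meas * o2])
    return meas - features @ beta

print(corrected_flow(5.8, 100.0))  # estimate of the true flow
```

Once the coefficients are fitted, correction is a single dot product per reading, which is what lets the authors avoid recalibrating every time the oxygen or flow setting changes.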
Just Culture: It's More Than Policy
[Description] Paradiso and Sweeney discuss the relationship between trust, just culture, and error reporting in medical care. Errors rarely occur in a vacuum; rather, they are the product of a sequence of events with multiple opportunities for correction. Clinical nurses can have a significant impact on reducing errors due to their proximity to patients. Just culture is a safe haven that supports reporting. In a just culture environment, organizations are accountable for the systems they design, and analysis focuses on the incident, not the individual. The shift to a just culture is a slow process that takes years to develop and hardwire. Hospital-wide policies that incorporate just culture principles are a first step. Studies are needed to regularly assess trust and just culture perceptions among nurse leaders and clinical nurses.
High-contrast imaging at small separation: impact of the optical configuration of two deformable mirrors on dark holes
The direct detection and characterization of exoplanets will be a major scientific driver over the next decade, involving the development of very large telescopes and requiring high-contrast imaging close to the optical axis. Complex techniques have been developed to improve the performance at small separations (coronagraphy, wavefront shaping, etc.). In this paper, we study some of the fundamental limitations of high contrast at the instrument design level, for cases that use a combination of a coronagraph and two deformable mirrors for wavefront shaping. In particular, we focus on small-separation point-source imaging (around 1 λ/D). First, we analytically or semi-analytically analyse the impact of several instrument design parameters: actuator number, deformable mirror locations, and optic aberrations (level and frequency distribution). Second, we develop an in-depth Monte Carlo simulation to compare the performance of dark hole correction, using a generic test-bed model to test the Fresnel propagation of multiple randomly generated static optical phase errors. We demonstrate that imaging at small separations requires a large setup and a small dark hole. The performance is sensitive to the amount of optic aberration and its spatial frequency distribution, but shows a weak dependence on actuator number or setup architecture when the dark hole is sufficiently small (from 1 to 5 λ/D). Comment: 13 pages, 18 figures
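The Monte Carlo ingredient described above, randomly generated static phase errors with a given spatial-frequency distribution, propagated to the focal plane, can be sketched in a few lines. This is not the paper's test-bed model (which uses Fresnel propagation between optics); it is a single far-field FFT under assumed parameters: a 128-pixel pupil, a power-law PSD of roughly f^-2, and 50 mrad rms of aberration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128

# Circular pupil
x = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(x, x)
pupil = (xx**2 + yy**2 <= 1.0).astype(float)

# Random static phase errors with a power-law spatial PSD (~ f^-2),
# a simple stand-in for polished-optic wavefront errors.
f = np.fft.fftfreq(n)
fr = np.hypot(f[:, None], f[None, :])
fr[0, 0] = fr[0, 1]                      # avoid division by zero at DC
spectrum = fr**-1.0 * np.exp(2j * np.pi * rng.random((n, n)))
phase = np.real(np.fft.ifft2(spectrum))
phase *= 0.05 / phase.std()              # assumed 50 mrad rms aberration

# Focal-plane PSF via a zero-padded FFT; with 4x padding, 1 lambda/D = 4 px.
field = pupil * np.exp(1j * phase)
psf = np.abs(np.fft.fftshift(np.fft.fft2(field, s=(4 * n, 4 * n))))**2
psf /= psf.max()

# Mean raw contrast in the 1-5 lambda/D annulus, before any DM correction.
c = np.arange(4 * n) - 2 * n
rr = np.hypot(c[:, None], c[None, :])
annulus = (rr >= 4) & (rr <= 20)
print(f"mean raw contrast, 1-5 lambda/D: {psf[annulus].mean():.2e}")
```

Repeating this over many random draws of `spectrum`, and adding a wavefront-correction step, is the essence of the Monte Carlo comparison the abstract describes.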
Edge inference for UWB ranging error correction using autoencoders
Indoor localization has many applications, such as industry 4.0, warehouses, healthcare, and drones, where high accuracy is becoming more critical than ever. Recent advances in ultra-wideband localization systems allow high accuracies for multiple active users in line-of-sight environments, while they still introduce errors above 300 mm in non-line-of-sight environments due to multi-path effects. Current work tries to improve the localization accuracy of ultra-wideband through offline error correction approaches using popular machine learning techniques. However, these techniques are still limited to simple environments with few multi-path effects and focus on offline correction. With the upcoming demand for high-accuracy, low-latency indoor localization systems, there is a need to deploy (online) efficient error correction techniques with fast response times in dynamic and complex environments. To address this, we propose (i) a novel semi-supervised autoencoder-based machine learning approach for improving the ranging accuracy of ultra-wideband localization beyond the limitations of current improvements, while aiming for performance improvements and a small memory footprint, and (ii) an edge inference architecture for online UWB ranging error correction. As such, this paper enables the design of accurate localization systems using machine learning on low-cost edge devices. Compared to a deep neural network (as state-of-the-art, with a baseline error of 75 mm), the proposed autoencoder achieves a 29% higher accuracy. The proposed approach enables robust and accurate ultra-wideband localization, reducing the errors from 214 mm without correction to 58 mm with correction. Validation of edge inference using the proposed autoencoder on an NVIDIA Jetson Nano demonstrates significant uplink bandwidth savings and supports up to 20 rapidly ranging anchors per edge GPU.
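The semi-supervised idea, train a compact autoencoder on abundant unlabeled (mostly line-of-sight) data and use its reconstruction error to flag and correct anomalous rangings, can be illustrated with a linear autoencoder (PCA). Everything here is a toy assumption: the 8-dimensional synthetic "channel features", the 0.30 m NLOS bias, and the 95th-percentile threshold are stand-ins, not the paper's architecture or numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 8-dim channel features per ranging measurement (hypothetical;
# a real system would use CIR statistics such as rise time and RSS).
def make_samples(n, nlos):
    base = rng.normal(0.0, 1.0, (n, 8))
    if nlos:
        base[:, :3] += 3.0          # NLOS shifts a few feature dimensions
    return base

train = make_samples(500, nlos=False)   # unlabeled, LOS-dominated data

# Linear autoencoder = PCA: encode to k components, decode back.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
codec = Vt[:3]                          # shared encoder/decoder weights

def recon_error(x):
    z = (x - mean) @ codec.T            # encode
    xhat = z @ codec + mean             # decode
    return np.linalg.norm(x - xhat, axis=-1)

threshold = np.percentile(recon_error(train), 95)

def correct_range(measured_m, features, nlos_bias_m=0.30):
    """Subtract an assumed NLOS bias when the autoencoder flags a sample."""
    flagged = recon_error(features) > threshold
    return measured_m - nlos_bias_m * flagged

test_nlos = make_samples(200, nlos=True)
print(f"NLOS detection rate: {(recon_error(test_nlos) > threshold).mean():.2f}")
```

A model this small (one weight matrix plus a mean vector) is what makes inference cheap enough to run online on an edge device next to the anchors.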
Automatic correction of grammatical errors in non-native English text
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 99-107). Learning a foreign language requires much practice outside of the classroom. Computer-assisted language learning systems can help fill this need, and one desirable capability of such systems is the automatic correction of grammatical errors in texts written by non-native speakers. This dissertation concerns the correction of non-native grammatical errors in English text, and the closely related task of generating test items for language learning, using a combination of statistical and linguistic methods. We show that syntactic analysis enables extraction of more salient features. We address issues concerning robustness in feature extraction from non-native texts, and also design a framework for simultaneous correction of multiple error types. Our proposed methods are applied to some of the most common usage errors, including prepositions, verb forms, and articles. The methods are evaluated on sentences with synthetic and real errors, and in both restricted and open domains. A secondary theme of this dissertation is that of user customization. We perform a detailed analysis on a non-native corpus, illustrating the utility of an error model based on the mother tongue. We study the benefits of adjusting the correction models based on the quality of the input text, and also present novel methods to generate high-quality multiple-choice items that are tailored to the interests of the user. by John Sie Yuen Lee. Ph.D.
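One of the error types named above, preposition usage, is commonly handled by selecting the candidate that a model trained on native text scores highest in the local context. The thesis's statistical models are far richer; this is a deliberately tiny count-based sketch, with a made-up six-sentence corpus, showing only the selection mechanic.

```python
from collections import Counter

# Toy "native" corpus (hypothetical); a real system trains on large text.
corpus = """he arrived at the station
she arrived at the airport
they depend on the weather
we depend on funding
interested in science
interested in music""".split("\n")

PREPOSITIONS = {"at", "on", "in", "of", "to"}

# Count (left word, preposition, right word) trigrams in the corpus.
trigrams = Counter()
for sent in corpus:
    toks = sent.split()
    for i in range(len(toks) - 2):
        trigrams[tuple(toks[i:i + 3])] += 1

def correct_preposition(left, prep, right):
    """Pick the preposition whose (left, p, right) trigram is most frequent;
    on a tie or unseen context, prefer keeping the writer's own choice."""
    return max(PREPOSITIONS,
               key=lambda p: (trigrams[(left, p, right)], p == prep))

print(correct_preposition("arrived", "in", "the"))  # suggests "at"
```

Real systems replace the raw counts with smoothed language-model or classifier scores over syntactic features, but the decision rule, rank the confusion set in context and keep the original on low confidence, is the same shape.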