Ceramic applications in turbine engines
Development testing activities on the 1900 F-configuration ceramic parts were completed, 2070 F-configuration ceramic component rig and engine testing was initiated, and the conceptual design for the 2265 F-configuration engine was identified. Fabrication of the 2070 F-configuration ceramic parts continued, along with burner rig development testing of the 2070 F-configuration metal combustor in preparation for 1132 C (2070 F) qualification test conditions. Shakedown testing of the hot engine simulator (HES) rig was also completed in preparation for testing of a spin rig-qualified ceramic-bladed rotor assembly at 1132 C (2070 F) test conditions. Concurrently, ceramics from new sources and alternate materials continued to be evaluated, and fabrication of 2070 F-configuration ceramic components from these new sources continued. Cold spin testing of the critical 2070 F-configuration blade continued in the spin test rig to qualify a set of ceramic blades at 117% engine speed for the gasifier turbine rotor. Rig testing of the ceramic-bladed gasifier turbine rotor assembly at 108% engine speed was also performed, which resulted in the failure of one blade. The new three-piece hot seal with the nickel oxide/calcium fluoride wearface composition was qualified in the regenerator rig and introduced to engine operation with marginal success.
Implications of Qudit Superselection rules for the Theory of Decoherence-free Subsystems
The use of d-state systems, or qudits, in quantum information processing is
discussed. Three-state and higher dimensional quantum systems are known to have
very different properties from two-state systems, i.e., qubits. In particular,
there exist qudit states which are not equivalent under local unitary
transformations unless a selection rule is violated. This observation is shown
to be an important factor in the theory of decoherence-free, or noiseless,
subsystems. Experimentally observable consequences and methods for
distinguishing these states are also provided, including the explicit
construction of new decoherence-free or noiseless subsystems from qutrits.
Implications for simulating quantum systems with quantum systems are also
discussed.
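The decoherence-free idea above can be illustrated with the simplest such code, the two-qubit subspace protected against collective dephasing. This is a standard textbook example, not the qutrit construction of the abstract, and all names in the numpy sketch below are illustrative: a state encoded in the zero-eigenvalue subspace of the collective generator is untouched by an unknown common phase, while an unencoded superposition is not.

```python
import numpy as np

# Collective dephasing: every qubit picks up the same unknown phase,
# generated by S_z = Z(x)I + I(x)Z.  On the basis {|00>,|01>,|10>,|11>}
# this generator is diagonal: diag(2, 0, 0, -2).
def collective_dephasing(phi):
    """Unitary exp(-i*phi*S_z/2); diagonal, so no matrix exponential needed."""
    sz_diag = np.array([2.0, 0.0, 0.0, -2.0])
    return np.diag(np.exp(-1j * phi * sz_diag / 2))

# Encode one logical qubit in span{|01>, |10>}, the zero-eigenvalue
# subspace of S_z: collective dephasing acts trivially there.
a, b = 0.6, 0.8j
logical = np.array([0.0, a, b, 0.0], dtype=complex)   # a|01> + b|10>

phi = 1.2345                                 # stand-in for an unknown noise phase
evolved = collective_dephasing(phi) @ logical
print(np.allclose(evolved, logical))         # True: the encoded state is untouched

# An unencoded superposition, by contrast, is dephased:
ghz = np.array([1.0, 0.0, 0.0, 1.0], dtype=complex) / np.sqrt(2)
print(np.allclose(collective_dephasing(phi) @ ghz, ghz))   # False
```

The encoded subspace works because both |01> and |10> sit in the same eigenspace of the collective noise generator, so the noise contributes only an unobservable global phase.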
Monte Carlo Detector Modeling and Display, Using the CERN Laboratory
Detectors for high energy nuclear physics experiments are being modeled using programs developed and maintained at CERN, the European Organization for Nuclear Research. These programs include data handling and display routines, as well as routines that use random-sampling Monte Carlo techniques to calculate energy depositions for high energy particles as they pass through the various parts of the detector system. The complete CERN library has been imported for use with our workstation computers in a multiple-user environment. The enormous CERN Monte Carlo program GEANT (French for GIANT) tracks the progress of a particle through a detector on a simulated event-by-event basis. GEANT is being used to predict energy loss in materials under several different energy-loss assumptions. The energy loss in a silicon slab is calculated for charged particles at moderately relativistic momenta. These calculations are known to yield an asymmetric energy deposition distribution in silicon. Predicted responses are scheduled for examination using test beams at two different accelerator facilities
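The asymmetric energy-deposition distribution mentioned above can be sketched with a toy Monte Carlo. The Moyal distribution is a common closed-form approximation to the Landau straggling function (GEANT's actual sampling is far more elaborate), and the most-probable value and width below are made-up illustrative numbers, not GEANT output:

```python
import numpy as np

# Toy model of energy-loss straggling in a thin silicon slab.  The Moyal
# distribution approximates the Landau straggling function; if Z ~ N(0,1),
# then X = -ln(Z^2) is Moyal-distributed, with density exp(-(x+e^-x)/2)/sqrt(2*pi).
rng = np.random.default_rng(42)

def sample_deposits(n, mpv_kev=80.0, width_kev=10.0):
    """Illustrative deposits in keV; mpv/width are made-up numbers, not GEANT values."""
    moyal = -np.log(rng.standard_normal(n) ** 2)
    return mpv_kev + width_kev * moyal

dep = sample_deposits(200_000)
# The long high-energy tail makes the distribution asymmetric:
# the mean sits well above both the median and the most probable value.
print(f"mean   = {dep.mean():.1f} keV")
print(f"median = {np.median(dep):.1f} keV")
assert dep.mean() > np.median(dep)
```

The asymmetry is the point: quoting a mean energy loss overstates the typical deposit, which is why straggling-aware simulation matters for detector design.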
Time Projection Chamber's Efficiency, Obtained Using CERN's GEANT Code
Geometrical acceptance and reconstruction of tracks have been carried out for a Time Projection Chamber (TPC) used in Experiment NA35: the 35th experiment in the North Area of the Super Proton Synchrotron (SPS), located at the European Organization for Nuclear Research (CERN). NA35 used the SPS at CERN to produce 6.4 TeV beams of 32S for central collisions with Au nuclei. The TPC modeling effort used a modified version of CERN's Monte Carlo program GEANT, which simulates the response of the NA35 TPC to output from CERN's primary event generators. GEANT was used to simulate three-dimensional pixel data in the same format as data taken by direct readout of the TPC. These simulated data were stored on magnetic tape and processed using the TPC analysis and reconstruction program TRAC. Analysis of these simulated data allowed a calculation of the efficiency of the TPC, to within about 1%, by comparing the output of TRAC with the known input from GEANT. Also, reconstructed events from GEANT were used to eliminate false tracks and to determine systematic errors in track position and momentum in data taken by NA35 in the Spring of 1992.
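The efficiency-by-known-input method lends itself to a simple sketch: count how many simulated "truth" tracks the reconstruction recovers, and attach a binomial uncertainty. The function below is schematic (the names and counts are assumptions, not TRAC's interface), but it shows why a few thousand simulated tracks suffice for the quoted ~1% precision:

```python
import math

def tracking_efficiency(n_generated, n_matched):
    """Efficiency plus binomial uncertainty from a known-input comparison.

    n_generated: tracks fed in by the simulation (the known 'truth')
    n_matched:   reconstructed tracks matched back to that truth
    """
    eff = n_matched / n_generated
    err = math.sqrt(eff * (1.0 - eff) / n_generated)
    return eff, err

# With a few thousand simulated tracks, a 90% efficiency already carries
# well under 1% absolute uncertainty (illustrative counts).
eff, err = tracking_efficiency(2500, 2250)
print(f"efficiency = {eff:.3f} +/- {err:.3f}")   # 0.900 +/- 0.006
assert abs(eff - 0.9) < 1e-12
assert err < 0.01
```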
Overview of Quantum Error Prevention and Leakage Elimination
Quantum error prevention strategies will be required to produce a scalable
quantum computing device and are of central importance in this regard. Progress
in this area has been quite rapid in the past few years. In order to provide an
overview of the achievements in this area, we discuss the three major classes
of error prevention strategies, the abilities of these methods and the
shortcomings. We then discuss the combinations of these strategies which have
recently been proposed in the literature. Finally we present recent results in
reducing errors on encoded subspaces using decoupling controls. We show how to
generally remove mixing of an encoded subspace with external states (termed
leakage errors) using decoupling controls. Such controls are known as ``leakage
elimination operations'' or ``LEOs.''
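The decoupling idea behind LEOs can be seen in its simplest instance, the spin echo: interleaved X pulses refocus an unknown static dephasing because XZX = -Z. The numpy sketch below shows only this single-qubit special case, not the encoded-subspace constructions discussed above, and the noise phase is a fixed stand-in value:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def dephase(phi):
    """Free evolution under static dephasing, exp(-i*phi*Z/2)."""
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
phi = 1.7                      # stand-in for an unknown noise phase

# No decoupling: two free-evolution segments scramble the relative phase.
bare = dephase(phi) @ dephase(phi) @ plus

# Spin echo: interleave X pulses.  Since X Z X = -Z, the second segment
# unwinds the first for *any* value of phi.
echoed = X @ dephase(phi) @ X @ dephase(phi) @ plus

print(f"{abs(np.vdot(plus, echoed)) ** 2:.6f}")   # 1.000000: dephasing removed
assert np.isclose(abs(np.vdot(plus, echoed)) ** 2, 1.0)
assert abs(np.vdot(plus, bare)) ** 2 < 0.99
```

The same sign-flipping mechanism, applied to the generators that couple a code subspace to external states, is what the leakage elimination operations generalize.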
Combined Error Correction Techniques for Quantum Computing Architectures
Proposals for quantum computing devices are many and varied. They each have
unique noise processes that make none of them fully reliable at this time.
There are several error correction/avoidance techniques which are valuable for
reducing or eliminating errors, but not one, alone, will serve as a panacea.
One must therefore take advantage of the strength of each of these techniques
so that we may extend the coherence times of the quantum systems and create
more reliable computing devices. To this end we give a general strategy for
using dynamical decoupling operations on encoded subspaces. These encodings may
be of any form; of particular importance are decoherence-free subspaces and
quantum error correction codes. We then give means for empirically determining
an appropriate set of dynamical decoupling operations for a given experiment.
Using these techniques, we then propose a comprehensive encoding solution to
many of the problems of quantum computing proposals which use exchange-type
interactions. This uses a decoherence-free subspace and an efficient set of
dynamical decoupling operations. It also addresses the problems of
controllability in solid state quantum dot devices.
Universal Leakage Elimination
``Leakage'' errors are particularly serious errors which couple states within
a code subspace to states outside of that subspace thus destroying the error
protection benefit afforded by an encoded state. We generalize an earlier
method for producing leakage elimination decoupling operations and examine the
effects of the leakage eliminating operations on decoherence-free or noiseless
subsystems which encode one logical, or protected, qubit into three or four
qubits. We find that by eliminating the large class of leakage errors, under
some circumstances, we can create the conditions for a decoherence free
evolution. In other cases we identify a combined decoherence-free subspace and
quantum error-correcting code which could eliminate errors in solid-state
qubits with anisotropic exchange interaction Hamiltonians and enable universal
quantum computing with only these interactions.
Measuring Strangeness Production from Relativistic Collisions Between Pairs of Nuclei Using a Vertex Time Projection Chamber
At collider energies of 200A GeV, tracking of charged particle pairs originating from neutrals is dominated by singly-strange neutral kaon decays. Counting the number of secondary vertex pairs is a method of measuring the strangeness production. The VTX is a four-layer micro-strip gas time projection chamber being designed for use with the STAR instrument in an experiment using the Relativistic Heavy Ion Collider under construction at Brookhaven National Laboratory. Simulated pixel data generated from CERN's Monte Carlo detector-modeling program Geant were put into tables using the TAS sorting structures available from the STAR Collaboration. The response of the VTX was mapped for charged pion pairs emerging from each secondary vertex resulting from the decay of a neutral kaon. Grouping each set of two charged pions of opposite sign which originate from a vertex distinct from the collider vertex is the method presented for measuring strangeness production. This method has three steps: (1) removing all charged particles originating directly from the collider vertex using established methods, (2) identifying which residual pixel tracks in the four micro-strip TPC planes belong to which particular individual pion, and (3) grouping these secondary pions into the appropriate pairs. The research presented concentrates on steps (2) and (3), identifying each K in order to measure strangeness production. Backgrounds were generated as part of the simulation process; to help eliminate them, rough-set analysis was used to fine-tune algorithm parameters using exemplars available from simulation data in TAS tables.
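Steps (2) and (3) can be sketched as a toy pairing loop: take opposite-sign pion tracks that share a secondary vertex, compute their two-pion invariant mass, and keep pairs near the neutral-kaon mass. The track tuple format and the mass window below are illustrative assumptions, not the TAS table layout; the particle masses are the standard PDG values:

```python
import math

PION_MASS = 0.13957   # GeV/c^2, charged pion
K0_MASS   = 0.497611  # GeV/c^2, neutral kaon

def invariant_mass(p1, p2, m=PION_MASS):
    """Invariant mass of a two-track pair, each track assumed to be a pion."""
    e1 = math.sqrt(m * m + sum(c * c for c in p1))
    e2 = math.sqrt(m * m + sum(c * c for c in p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt((e1 + e2) ** 2 - (px * px + py * py + pz * pz))

def kaon_candidates(tracks, window=0.030):
    """Steps (2)-(3) in miniature: pair opposite-sign pions sharing a
    secondary vertex and keep pairs near the neutral-kaon mass.
    Track format (charge, vertex_id, momentum) is schematic."""
    out = []
    for i, (q1, v1, p1) in enumerate(tracks):
        for q2, v2, p2 in tracks[i + 1:]:
            if v1 == v2 and q1 * q2 < 0:   # same secondary vertex, opposite sign
                mass = invariant_mass(p1, p2)
                if abs(mass - K0_MASS) < window:
                    out.append((v1, mass))
    return out

# A synthetic K0 -> pi+ pi- decay at rest at secondary vertex 7,
# plus an unrelated same-sign pair at vertex 9 that must be rejected.
p = math.sqrt((K0_MASS / 2) ** 2 - PION_MASS ** 2)   # each pion's momentum
tracks = [(+1, 7, (p, 0, 0)), (-1, 7, (-p, 0, 0)),
          (+1, 9, (0.1, 0, 0)), (+1, 9, (-0.1, 0, 0))]
cands = kaon_candidates(tracks)
print(cands)   # one candidate at vertex 7, with mass at the K0 value
assert len(cands) == 1 and abs(cands[0][1] - K0_MASS) < 1e-9
```

A real analysis would add combinatorial-background subtraction; the rough-set tuning mentioned above plays that role for the cut parameters.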