175 research outputs found
Atmospheric neutrino flux around Super-Kamiokande
The simulated atmospheric neutrino flux around the Super-Kamiokande detector is tabulated in this report. The corresponding fitted parameterization is also given
Does the threshold representation associated with the autoconversion process matter?
Different ad hoc threshold functions associated with the autoconversion process have been used arbitrarily in atmospheric models, but it is unclear how these functions affect model results. Here, systematic investigations of the sensitivities of three climatically important properties, cloud fraction (CF), liquid water path (LWP), and the aerosol indirect effect (AIE), to threshold functions were performed using a 3-D cloud-resolving model. The effect of the threshold representation is larger on instantaneous values than on daily averages, and it depends on the percentage of clouds in the transitional stage of converting cloud water to rain water. For both instantaneous values and daily averages, drizzling clouds are more sensitive to the specification of the critical radius than to the "smoothness" of the threshold representation (as embodied in the relative dispersion of the droplet size distribution). Moreover, the impact of the threshold representation on the AIE is stronger than that on CF and LWP
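The distinction between a hard threshold at a critical radius and a "smooth" threshold representation can be illustrated with a minimal sketch; all function names, parameter values, and the rate formula here are hypothetical illustrations, not taken from the study:

```python
import numpy as np

def autoconversion_rate(q_c, r_mean, r_crit=10e-6, smoothness=0.0, c=1e-3):
    """Illustrative autoconversion rate with a tunable threshold.

    smoothness = 0 gives a hard (Heaviside) cutoff at the critical
    radius r_crit; a positive value smears the onset over a range of
    mean droplet radii, mimicking a broader droplet size distribution.
    q_c is the cloud water mixing ratio (kg/kg), r_mean the mean
    droplet radius (m). Values are hypothetical, for illustration only.
    """
    if smoothness == 0.0:
        threshold = np.where(r_mean >= r_crit, 1.0, 0.0)  # hard step
    else:
        # smooth ramp centered on the critical radius
        threshold = 0.5 * (1.0 + np.tanh((r_mean - r_crit) / smoothness))
    return c * q_c * threshold

r = np.array([8e-6, 10e-6, 12e-6])   # mean droplet radii (m)
q = 1e-3                              # cloud water mixing ratio (kg/kg)
hard = autoconversion_rate(q, r)                    # abrupt onset at r_crit
soft = autoconversion_rate(q, r, smoothness=2e-6)   # gradual onset
```

Instantaneous rates differ strongly between the two representations near the critical radius, which is where the sensitivity discussed above arises.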
Comparing and combining measurement-based and driven-dissipative entanglement stabilization
We demonstrate and contrast two approaches to the stabilization of qubit
entanglement by feedback. Our demonstration is built on a feedback platform
consisting of two superconducting qubits coupled to a cavity which are measured
by a nearly-quantum-limited measurement chain and controlled by high-speed
classical logic circuits. This platform is used to stabilize entanglement by
two nominally distinct schemes: a "passive" reservoir engineering method and an
"active" correction based on conditional parity measurements. In view of the
instrumental roles that these two feedback paradigms play in quantum
error-correction and quantum control, we directly compare them on the same
experimental setup. Further, we show that a second layer of feedback can be
added to each of these schemes, which heralds the presence of a high-fidelity
entangled state in real time. This "nested" feedback brings about a marked
entanglement fidelity improvement without sacrificing success probability.
Effects of vegetation patterns on yields of the surface and subsurface waters in the Heishui Alpine Valley in west China
The relationships between different vegetation types and water yields were investigated in the Heishui Valley of the upper Yangtze River in western China. The contributions of groundwater and of surface and subsurface water in different tributaries were computed from stable isotope data, while the fractional cover of different vegetation types was obtained by remote sensing at the landscape scale. Across seven watersheds, reductions in total vegetation cover, forest cover, and subalpine coniferous forest cover were associated with increases in surface and subsurface water yields, whereas water yield increased with increasing alpine shrub and meadow cover. All of these relationships fell into distinct low-altitude and high-altitude patterns, reflecting differences in vegetation characteristics and topography. We also found that total vegetation cover played the most important role in water yield at large scales, while coniferous forest cover affected water yield at relatively small scales
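Isotope-based separation of streamflow contributions is commonly done with a two-component mixing model, solving for the fraction of one end member from measured isotope ratios. A minimal sketch (the function and the example δ values are hypothetical, not the study's data):

```python
def mixing_fraction(delta_stream, delta_end1, delta_end2):
    """Two-component isotope mixing model.

    Returns the fraction of streamflow contributed by end member 2
    (e.g., surface/subsurface water), with end member 1 being, e.g.,
    groundwater. Inputs are isotope ratios (e.g., delta-18O in permil).
    A generic hydrograph-separation formula, not the study's code.
    """
    return (delta_stream - delta_end1) / (delta_end2 - delta_end1)

# Hypothetical delta-18O values (permil): stream, groundwater, surface water
f_surface = mixing_fraction(-11.0, -12.0, -8.0)   # fraction from surface water
f_ground = 1.0 - f_surface                        # remainder from groundwater
```

With these illustrative values, a quarter of the streamflow would be attributed to the surface-water end member.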
Toward a climate downscaling for the Eastern Mediterranean at high-resolution
As a first step toward downscaling global model simulations of future climates for the eastern Mediterranean Sea and the surrounding land area, mesoscale simulations with the Pennsylvania State University-National Center for Atmospheric Research (NCAR) Mesoscale Model, version 5 (MM5), are verified with respect to precipitation amount. The simulations are driven with January NCAR-NCEP reanalysis project (NNRP) lateral-boundary conditions and assimilate surface and upper-air observations. The simulated monthly total precipitation compares reasonably well with rain gauge and satellite estimates, and the model reproduces the overall trends in interannual precipitation variability for one test region. Cyclones during the period were tracked and their properties identified
Volatile Organic Compound (VOC) measurements in the Pearl River Delta (PRD) region, China
We measured levels of ambient volatile organic compounds (VOCs) at seven sites in the Pearl River Delta (PRD) region of China during the Air Quality Monitoring Campaign spanning 4 October to 3 November 2004. Two of the sites, Guangzhou (GZ) and Xinken (XK), were intensive sites at which we collected multiple daily canister samples. The observations reported here provide a look at the VOC distribution, speciation, and photochemical implications in the PRD region. Alkanes constituted the largest fraction (>40%) of the mixing ratios of the quantified VOCs at six sites; the exception was one major industrial site that was dominated by aromatics (about 52%). Highly elevated VOC levels occurred at GZ during two pollution episodes; however, the chemical composition of the VOCs did not change noticeably during these episodes, except that the fraction of aromatics was about 10% higher. We calculated the OH loss rate to estimate the chemical reactivity of all VOCs. Of the anthropogenic VOCs, alkenes played a predominant role in VOC reactivity at GZ, whereas the contributions of reactive aromatics were more important at XK. Our preliminary analysis of the VOC correlations suggests that the ambient VOCs at GZ came directly from local sources (i.e., automobiles), while those at XK were influenced by both local emissions and the transport of air masses from upwind areas
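The OH loss rate used here as a reactivity metric is the sum over species of the OH rate constant times the VOC number concentration. A minimal sketch, assuming ppb mixing ratios converted with the air number density at roughly 298 K and 1 atm; the species list and values below are illustrative, not campaign data:

```python
def oh_loss_rate(mixing_ratios_ppb, k_oh, m_air=2.46e19):
    """Total OH loss rate (s^-1) of a VOC mixture.

    mixing_ratios_ppb: VOC mixing ratios in ppb.
    k_oh: OH rate constants in cm^3 molecule^-1 s^-1.
    m_air: air number density in molecules cm^-3 (~2.46e19 at 298 K, 1 atm).
    Each ppb is converted to molecules cm^-3 via ppb * 1e-9 * m_air.
    """
    return sum(k * ppb * 1e-9 * m_air
               for ppb, k in zip(mixing_ratios_ppb, k_oh))

# Hypothetical three-species sample: an alkane, an alkene, an aromatic
ppb = [5.0, 3.0, 2.0]                 # mixing ratios (ppb)
k_oh = [1.1e-12, 8.5e-12, 5.6e-12]    # representative rate constants
loss = oh_loss_rate(ppb, k_oh)        # total OH loss rate, s^-1
```

Even at modest mixing ratios, a reactive alkene can dominate the total OH loss rate, which is why alkenes carry the VOC reactivity at GZ despite alkanes dominating the mixing ratios.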
Demonstrating Quantum Error Correction that Extends the Lifetime of Quantum Information
The remarkable discovery of Quantum Error Correction (QEC), which can
overcome the errors experienced by a bit of quantum information (qubit), was a
critical advance that gives hope for eventually realizing practical quantum
computers. In principle, a system that implements QEC can actually pass a
"break-even" point and preserve quantum information for longer than the
lifetime of its constituent parts. Reaching the break-even point, however, has
thus far remained an outstanding and challenging goal. Several previous works
have demonstrated elements of QEC in NMR, ions, nitrogen vacancy (NV) centers,
photons, and superconducting transmons. However, these works primarily
illustrate the signatures or scaling properties of QEC codes rather than test
the capacity of the system to extend the lifetime of quantum information over
time. Here we demonstrate a QEC system that reaches the break-even point by
suppressing the natural errors due to energy loss for a qubit logically encoded
in superpositions of coherent states, or cat states of a superconducting
resonator. Moreover, the experiment implements a full QEC protocol by using
real-time feedback to encode, monitor naturally occurring errors, decode, and
correct. As measured by full process tomography, the enhanced lifetime of the
encoded information is 320 microseconds without any post-selection. This is 20
times greater than that of the system's transmon, over twice as long as an
uncorrected logical encoding, and 10% longer than the highest quality element
of the system (the resonator's 0, 1 Fock states). Our results illustrate the
power of novel, hardware efficient qubit encodings over traditional QEC
schemes. Furthermore, they advance the field of experimental error correction
from confirming the basic concepts to exploring the metrics that drive system
performance and the challenges in implementing a fault-tolerant system
International Linear Collider Accelerator Physics R&D
ILC work at Illinois has concentrated primarily on technical issues relating to the design of the accelerator. Because many of the problems to be resolved require a working knowledge of classical mechanics and electrodynamics, most of our research projects lend themselves well to the participation of undergraduate research assistants. The undergraduates in the group are scientists, not technicians, and find solutions to problems that, for example, have stumped PhD-level staff elsewhere. The ILC Reference Design Report calls for 6.7 km circumference damping rings (which prepare the beams for focusing) using "conventional" stripline kickers driven by fast HV pulsers. Our primary goal was to determine the suitability of the 16 MeV electron beam in the A0 region at Fermilab for precision kicker studies. We found that the low beam energy and the lack of redundancy in the beam position monitor system complicated the analysis of our data. In spite of these issues, we concluded that the precision we could obtain, 0.5%, was adequate to measure the performance and stability of a production module of an ILC kicker. We concluded that the kicker was stable to an accuracy of ~2.0% and that we could measure this stability to an accuracy of ~0.5%. As a result, a low-energy beam like that at A0 could be used as a rapid-turnaround facility for testing ILC production kicker modules. The ILC timing precision for the arrival of bunches at the collision point is required to be 0.1 picosecond or better. We studied the bunch-to-bunch timing accuracy of a "phase detector" installed at A0 in order to determine its suitability as an ILC bunch timing device. A phase detector is an RF structure excited by the passage of a bunch. Its signal is fed through a 1240 MHz high-Q resonant circuit and then down-mixed with the A0 1300 MHz accelerator RF.
We used an autocorrelation-style technique to compare the phase detector signal with a reference signal obtained from the phase detector's response to an event at the beginning of the run. We determined that the device installed in our beam, which was instrumented with an 8-bit 500 MHz ADC, could measure the beam timing to an accuracy of 0.4 picoseconds. Simulations of the device showed that increasing the ADC clock rate to 2 GHz would improve the measurement precision by the required factor of four. As a result, we felt that a device of this sort would work at the ILC, assuming matters concerning dynamic range and long-term stability can be addressed successfully. Cost-effective operation of the ILC will demand highly reliable, fault-tolerant, and adaptive solutions for both hardware and software. The large number of subsystems, and the large multipliers associated with the modules in those subsystems, will cause even a strong level of unit reliability to become an unacceptable level of system availability. An evaluation effort is underway to assess standards associated with high availability and to guide ILC development with standard practices and well-supported commercial solutions. One area of evaluation involves Advanced Telecom Computing Architecture (ATCA) hardware and software. We worked with an ATCA crate, processor monitors, and a small number of ATCA circuit boards in order to develop a backplane "spy" board that would let us watch the ATCA backplane communications, and to pursue development of an inexpensive processor monitor that could be used as a physics-driven component of the crate-level controls system. We made good progress and felt that we had determined a productive direction in which to extend this work. We felt that we had learned enough to begin designing a workable processor monitor chip if sufficient interest in ATCA were shown by the ILC community. Fault recognition is a challenging issue in crafting a high-reliability controls system.
With tens of thousands of independent processors running hundreds of thousands of critical processes, how can the system identify that a problem has arisen and determine the appropriate steps to take to correct, or compensate for, the failure? One possible solution might come through the use of the OpenClovis supervisory system, which runs on Linux processors and allows a select set of processors to monitor the behavior of individual processes and processors in a large, distributed controls network. We found that OpenClovis exhibited an irritating sensitivity to the exact version of the Linux kernel running on the processors, and that it was poorly equipped to help us sort through problems arising from conflicts so deep in the processors' operating systems. But once this issue was addressed, we found that it performed as expected, recognizing crashes and process (and processor) failures
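The autocorrelation-style timing comparison described above, matching a digitized pulse against a reference waveform, can be sketched as a cross-correlation with parabolic interpolation of the peak for sub-sample resolution. This is a generic illustration of the technique, assuming a 500 MHz sample rate and a synthetic ringing pulse; it is not the actual A0 analysis:

```python
import numpy as np

def timing_offset(signal, reference, dt):
    """Estimate the arrival time of `signal` relative to `reference`.

    Cross-correlates the two waveforms, locates the peak, and refines
    it with parabolic interpolation of the three samples around the
    maximum, giving sub-sample (hence sub-dt) timing resolution.
    """
    corr = np.correlate(signal, reference, mode="full")
    i = int(np.argmax(corr))
    # parabolic interpolation around the correlation peak
    if 0 < i < len(corr) - 1:
        y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
        denom = y0 - 2.0 * y1 + y2
        frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    else:
        frac = 0.0
    # zero lag sits at index len(reference) - 1 of the 'full' output
    lag = (i - (len(reference) - 1)) + frac
    return lag * dt

dt = 2e-9                                  # 500 MHz ADC sample spacing
t = np.arange(0, 200e-9, dt)
ref = np.sin(2 * np.pi * 40e6 * t) * np.exp(-t / 80e-9)  # ringing pulse
sig = np.roll(ref, 3)                      # same pulse delayed by 3 samples
offset = timing_offset(sig, ref, dt)       # recovered delay, ~6 ns
```

In practice the achievable precision is set by the ADC resolution, the sample rate, and the pulse's signal-to-noise ratio, which is consistent with the observation above that a faster ADC clock improves the measurement.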
- …