Precursors of extreme increments
We investigate precursors and predictability of extreme increments in a time
series. The events we focus on consist of large increments between
successive time steps. We are especially interested in understanding how the
quality of the predictions depends on the strategy for choosing precursors, on
the size of the event, and on the correlation strength. We study the prediction
of extreme increments analytically in an AR(1) process, and numerically in wind
speed recordings and long-range correlated ARMA data. We evaluate the success
of predictions via receiver operating characteristic (ROC) curves. Furthermore,
we observe an increase in the quality of predictions with increasing event size
and with decreasing correlation in all examples. Both effects can be understood
by using the likelihood ratio as a summary index for smooth ROC curves.
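As an illustration of this setup, the sketch below simulates an AR(1) process and scores a simple threshold precursor with ROC points. The precursor choice (raise an alarm when the current value is low, since mean reversion then favours a large upcoming increment) is our own illustrative assumption, not necessarily the strategy analysed in the paper.

```python
import random

def roc_for_ar1(a=0.75, n=50_000, d=2.0, seed=1):
    """Simulate x[t+1] = a*x[t] + noise and score threshold alarms for
    the extreme-increment event x[t+1] - x[t] >= d."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0.0, 1.0)
        xs.append(x)
    events = [xs[t + 1] - xs[t] >= d for t in range(n - 1)]
    n_ev = sum(events)
    points = []  # (false-alarm rate, hit rate), one per alarm threshold
    for thr in [q / 10 for q in range(-30, 31)]:
        hits = false_alarms = 0
        for t in range(n - 1):
            if xs[t] <= thr:  # precursor: low current value
                if events[t]:
                    hits += 1
                else:
                    false_alarms += 1
        points.append((false_alarms / (n - 1 - n_ev), hits / n_ev))
    return points
```

Sweeping the alarm threshold traces out the ROC curve; a curve above the diagonal means the precursor carries predictive information.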
Research instrumentation for tornado electromagnetic emissions detection
Instrumentation for receiving, processing, and recording HF/VHF electromagnetic emissions from severe weather activity is described. Both airborne and ground-based instrumentation units are described on system and subsystem levels. Design considerations, design decisions, and the rationale behind the decisions are given. Performance characteristics are summarized and recommendations for improvements are given. The objectives, procedures, and test results of the following are presented: (1) airborne flight test in the Midwest U.S.A. (Spring 1975) and at the Kennedy Space Center, Florida (Summer 1975); (2) ground-based data collected in North Georgia (Summer/Fall 1975); and (3) airborne flight test in the Midwest (late Spring 1976) and at the Kennedy Space Center, Florida (Summer 1976). The Midwest tests concentrated on severe weather with tornadic activity; the Florida and Georgia tests monitored air mass convective thunderstorm characteristics. Supporting ground truth data from weather radars and sferics DF nets are described.
Meta-analysis: the diagnostic accuracy of critical flicker frequency in minimal hepatic encephalopathy
BACKGROUND: Minimal hepatic encephalopathy (MHE) reduces quality of life, increases the risk of road traffic incidents and predicts progression to overt hepatic encephalopathy and death. Current psychometry-based diagnostic methods are effective but time-consuming, and a universal ‘gold standard’ test has yet to be agreed upon. Critical Flicker Frequency (CFF) is a proposed language-independent diagnostic tool for MHE, but its accuracy has yet to be confirmed. AIM: To assess the diagnostic accuracy of CFF for MHE by performing a systematic review and meta-analysis of all studies that report on the diagnostic accuracy of this test. METHODS: A systematic literature search was performed to locate all publications reporting on the diagnostic accuracy of CFF for MHE. Data were extracted from 2 × 2 tables or calculated from reported accuracy data. Collated data were meta-analysed for sensitivity, specificity, diagnostic odds ratio (DOR) and summary receiver operating characteristic (sROC) analysis. Prespecified subgroup analysis and meta-regression were also performed. RESULTS: Nine studies with data for 622 patients were included. Summary sensitivity was 61% (95% CI: 55–67), specificity 79% (95% CI: 75–83) and DOR 10.9 (95% CI: 4.2–28.3). A symmetrical sROC gave an area under the curve of 0.84 (SE = 0.06). The heterogeneity of the DOR was 74%. CONCLUSIONS: Critical Flicker Frequency has a high specificity and moderate sensitivity for diagnosing minimal hepatic encephalopathy. Given the advantages of language independence and being both simple to perform and interpret, we suggest the use of critical flicker frequency as an adjunct (but not replacement) to psychometric testing.
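For reference, the per-study quantities pooled in such a meta-analysis follow from a 2 × 2 table by standard formulas: DOR = (TP·TN)/(FP·FN), with a 95% CI built on the log scale using SE(log DOR) = sqrt(1/TP + 1/FP + 1/FN + 1/TN). The sketch below uses made-up counts for illustration, not data from the included studies.

```python
import math

def diagnostic_summary(tp, fp, fn, tn):
    """Sensitivity, specificity and diagnostic odds ratio (with a 95% CI
    on the log scale) from a single 2x2 diagnostic-accuracy table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    dor = (tp * tn) / (fp * fn)
    se_log = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)  # SE of log(DOR)
    ci_lo = math.exp(math.log(dor) - 1.96 * se_log)
    ci_hi = math.exp(math.log(dor) + 1.96 * se_log)
    return sens, spec, (dor, ci_lo, ci_hi)

# Hypothetical counts: 40 true positives, 20 false positives,
# 10 false negatives, 80 true negatives.
print(diagnostic_summary(40, 20, 10, 80))
```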
Efficient many-party controlled teleportation of multi-qubit quantum information via entanglement
We present a way to teleport multi-qubit quantum information from a sender to
a distant receiver via the control of many agents in a network. We show that
the original state of each qubit can be restored by the receiver as long as all
the agents collaborate. However, even if one agent does not cooperate, the
receiver cannot fully recover the original state of each qubit. The method
operates essentially by entangling the quantum information during
teleportation, in such a way that the required auxiliary qubit resources, local
operations, and classical communication are considerably reduced for the
present purpose.
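Such schemes build on standard single-qubit teleportation. The pure-Python sketch below simulates that underlying primitive (an EPR pair, the sender's Bell-type measurement, and the receiver's conditional X/Z corrections), not the paper's full many-agent controlled protocol.

```python
import math

def kron(a, b):
    return [x * y for x in a for y in b]

def apply_1q(gate, state, q, n):
    """Apply a 2x2 gate to qubit q (qubit 0 = leftmost) of an n-qubit state."""
    out = [0j] * len(state)
    for i, amp in enumerate(state):
        bit = (i >> (n - 1 - q)) & 1
        for new in (0, 1):
            j = i ^ ((bit ^ new) << (n - 1 - q))
            out[j] += gate[new][bit] * amp
    return out

def apply_cnot(state, c, t, n):
    """CNOT with control qubit c and target qubit t."""
    out = list(state)
    for i in range(len(state)):
        if (i >> (n - 1 - c)) & 1:
            out[i] = state[i ^ (1 << (n - 1 - t))]
    return out

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def teleport(alpha, beta):
    """Return the receiver's qubit state for every measurement outcome,
    after the standard X/Z corrections."""
    bell = [1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2)]  # (|00> + |11>)/sqrt(2)
    state = kron([alpha, beta], bell)                  # qubit 0 holds |psi>
    state = apply_cnot(state, 0, 1, 3)                 # Bell-measurement circuit
    state = apply_1q(H, state, 0, 3)
    results = []
    for m0 in (0, 1):
        for m1 in (0, 1):
            # Project qubits 0 and 1 onto outcome (m0, m1), keep qubit 2.
            sub = [state[(m0 << 2) | (m1 << 1) | b] for b in (0, 1)]
            norm = math.sqrt(abs(sub[0]) ** 2 + abs(sub[1]) ** 2)
            sub = [a / norm for a in sub]
            if m1:                       # Pauli X correction
                sub = [sub[1], sub[0]]
            if m0:                       # Pauli Z correction
                sub = [sub[0], -sub[1]]
            results.append(sub)
    return results
```

For every one of the four measurement outcomes, the receiver's corrected qubit equals the input state, which is what the corrections are for.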
Second-order coding rates for pure-loss bosonic channels
A pure-loss bosonic channel is a simple model for communication over
free-space or fiber-optic links. More generally, phase-insensitive bosonic
channels model other kinds of noise, such as thermalizing or amplifying
processes. Recent work has established the classical capacity of all of these
channels, and furthermore, it is now known that a strong converse theorem holds
for the classical capacity of these channels under a particular photon number
constraint. The goal of the present paper is to initiate the study of
second-order coding rates for these channels, by beginning with the simplest
one, the pure-loss bosonic channel. In a second-order analysis of
communication, one fixes the tolerable error probability and seeks to
understand the back-off from capacity for a sufficiently large yet finite
number of channel uses. We find a lower bound on the maximum achievable code
size for the pure-loss bosonic channel, in terms of the known expression for
its capacity and a quantity called channel dispersion. We accomplish this by
proving a general "one-shot" coding theorem for channels with classical inputs
and pure-state quantum outputs which reside in a separable Hilbert space. The
theorem leads to an optimal second-order characterization when the channel
output is finite-dimensional, and it remains an open question to determine
whether the characterization is optimal for the pure-loss bosonic channel.
Comment: 18 pages, 3 figures; v3: final version accepted for publication in
Quantum Information Processing
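The flavour of such a second-order analysis can be sketched numerically. The block below assumes the familiar normal-approximation form n*C + sqrt(n*V)*Phi^{-1}(eps), with the thermal-state entropy g(eta*N_S) as the capacity; the dispersion term v(x) used here is our own candidate expression for illustration, not necessarily the paper's exact quantity.

```python
import math
from statistics import NormalDist

def g(x):
    """Entropy of a thermal state with mean photon number x (bits)."""
    return (x + 1) * math.log2(x + 1) - x * math.log2(x) if x > 0 else 0.0

def v(x):
    """Candidate dispersion term for mean photon number x (bits^2);
    an assumption for this sketch."""
    return x * (x + 1) * math.log2((x + 1) / x) ** 2 if x > 0 else 0.0

def achievable_rate(eta, n_s, n, eps):
    """Normal-approximation rate (bits per channel use) after n uses of a
    pure-loss channel with transmissivity eta, mean photon number n_s,
    and tolerated error probability eps."""
    x = eta * n_s
    return g(x) + math.sqrt(v(x) / n) * NormalDist().inv_cdf(eps)
```

For small eps the inverse normal CDF is negative, so the rate sits below the capacity g(eta*N_S) and the back-off shrinks like 1/sqrt(n) as the number of channel uses grows.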
Canonical time-frequency, time-scale, and frequency-scale representations of time-varying channels
Mobile communication channels are often modeled as linear time-varying
filters or, equivalently, as time-frequency integral operators with finite
support in time and frequency. Such a characterization inherently assumes the
signals are narrowband and may not be appropriate for wideband signals. In this
paper, time-scale characterizations are examined that are useful in wideband
time-varying channels, for which a time-scale integral operator is physically
justifiable. A review of these time-frequency and time-scale characterizations
is presented. Both the time-frequency and time-scale integral operators have a
two-dimensional discrete characterization which motivates the design of
time-frequency or time-scale rake receivers. These receivers have taps for both
time and frequency (or time and scale) shifts of the transmitted signal. A
general theory of these characterizations which generates, as specific cases,
the discrete time-frequency and time-scale models is presented here. The
interpretation of these models, namely that they can be seen to arise from
processing assumptions on the transmit and receive waveforms, is discussed. Out
of this discussion a third model arises: a frequency-scale continuous channel
model with an associated discrete frequency-scale characterization.
Comment: To appear in Communications in Information and Systems - special
issue in honor of Thomas Kailath's seventieth birthday
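A minimal sketch of a discrete time-frequency channel and one finger of such a rake receiver follows; the tap positions and gains are illustrative choices of our own, not values from the paper.

```python
import cmath

def tf_channel(x, taps, N):
    """Discrete time-frequency channel: each tap (m, k, h) delays the
    input by m samples and shifts it by k/N in normalized frequency,
    i.e. y[n] = sum over taps of h * x[n-m] * exp(j*2*pi*k*n/N)."""
    y = [0j] * len(x)
    for m, k, h in taps:
        for n in range(len(x)):
            if 0 <= n - m < len(x):
                y[n] += h * x[n - m] * cmath.exp(2j * cmath.pi * k * n / N)
    return y

def rake_tap_estimate(x, y, m, k, N):
    """Correlate the received signal with a delayed and frequency-shifted
    copy of x: one finger of a time-frequency rake receiver."""
    ref = [0j] * len(x)
    for n in range(len(x)):
        if 0 <= n - m < len(x):
            ref[n] = x[n - m] * cmath.exp(2j * cmath.pi * k * n / N)
    num = sum(yy * rr.conjugate() for yy, rr in zip(y, ref))
    den = sum(abs(rr) ** 2 for rr in ref)
    return num / den if den else 0j
```

With a single tap the finger recovers the tap gain exactly; with several taps the estimates are only approximate, since delayed/shifted copies of the signal are not exactly orthogonal.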
Transient elastography in the evaluation of cystic fibrosis-associated liver disease : systematic review and meta-analysis
Improved Deterministic N-To-One Joint Remote Preparation of an Arbitrary Qubit via EPR Pairs
Recently, Bich et al. (Int. J. Theor. Phys. 51: 2272, 2012) proposed two
deterministic joint remote state preparation (JRSP) protocols for an arbitrary
single-qubit state: one is for two preparers to remotely prepare the state for
a receiver by using two Einstein-Podolsky-Rosen (EPR) pairs; the other is its
generalized form for an arbitrary number N>2 of preparers via N EPR pairs. In
this paper, by reviewing and analyzing Bich et al.'s second protocol with N>2
preparers, we find that its success probability is P_{suc}=1/4 < 1. To solve
this problem, we first construct two sets of projective measurement bases, a
real-coefficient basis and a complex-coefficient one, and then propose an
improved deterministic N-to-one JRSP protocol for an arbitrary single-qubit
state with unit success probability (i.e., P_{suc}=1). Moreover, our protocol
is flexible and convenient, and it can be used in a practical network.
Comment: 13 pages, 2 figures, two tables
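As a small illustration of what constructing projective measurement bases involves, the sketch below builds two orthonormal single-qubit bases, one from real amplitudes and one from a relative phase. These are illustrative constructions of our own, not the paper's actual bases.

```python
import cmath
import math

def inner(u, v):
    """Hermitian inner product <u|v> of two state vectors."""
    return sum(x.conjugate() * y for x, y in zip(u, v))

def real_coeff_basis(a, b):
    """Orthonormal basis built from real amplitudes (a, b), a^2 + b^2 = 1
    (an illustrative construction)."""
    return [[a, b], [b, -a]]

def complex_coeff_basis(phi):
    """Orthonormal basis built from a relative phase phi
    (an illustrative construction)."""
    s = 1 / math.sqrt(2)
    return [[s, s * cmath.exp(1j * phi)], [s, -s * cmath.exp(1j * phi)]]
```

A projective measurement in either basis is complete and mutually exclusive precisely because the two vectors are normalized and orthogonal, which is what the unit-success-probability argument relies on.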