Strong converse rates for classical communication over thermal and additive noise bosonic channels
We prove that several known upper bounds on the classical capacity of thermal
and additive noise bosonic channels are actually strong converse rates. Our
results strengthen the interpretation of these upper bounds, in the sense that
we now know that the probability of correctly decoding a classical message
rapidly converges to zero in the limit of many channel uses if the
communication rate exceeds these upper bounds. In order for these theorems to
hold, we need to impose a maximum photon number constraint on the states input
to the channel (the strong converse property need not hold if there is only a
mean photon number constraint). Our first theorem demonstrates that Koenig and
Smith's upper bound on the classical capacity of the thermal bosonic channel is
a strong converse rate, and we prove this result by utilizing the structural
decomposition of a thermal channel into a pure-loss channel followed by an
amplifier channel. Our second theorem demonstrates that Giovannetti et al.'s
upper bound on the classical capacity of a thermal bosonic channel corresponds
to a strong converse rate, and we prove this result by relating success
probability to rate, the effective dimension of the output space, and the
purity of the channel as measured by the Rényi collision entropy. Finally, we
use similar techniques to prove that previously known upper bounds on
the classical capacity of an additive noise bosonic channel correspond to
strong converse rates.

Comment: Accepted for publication in Physical Review A; minor changes in the text and a few references
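The strong converse property described in this abstract can be stated schematically. The following is a hedged sketch of the generic form of such a bound; the exponent E(R) is a placeholder, and the paper's actual bounds depend on the channel's thermal parameters and the Rényi collision entropy.

```latex
% Generic strong converse form: for any coding scheme at rate R, the
% success probability decays exponentially in the number of channel
% uses n whenever R exceeds the capacity upper bound C_ub.
p_{\mathrm{succ}}(n) \le 2^{-n E(R)}, \qquad E(R) > 0 \ \text{ for all } R > C_{\mathrm{ub}}.
```

A weak converse would only force p_succ away from 1; the strong converse forces it all the way to 0, which is what strengthens the interpretation of the capacity upper bounds.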
Has greater globalization made forecasting inflation more difficult?
U.S. inflation and real economic activity became more difficult to forecast during the Great Moderation. We investigate the possibility that the decline in the ability to forecast inflation may be due to greater globalization. As countries become more integrated through trade and financial flows, domestic inflation has a larger foreign component that is determined by variables typically excluded from forecasts.

Keywords: Forecasting; Inflation (Finance); Globalization; International trade
Exploiting iteration-level parallelism in dataflow programs
The term "dataflow" generally encompasses three distinct aspects of computation - a data-driven model of computation, a functional/declarative programming language, and a special-purpose multiprocessor architecture. In this paper we decouple the language and architecture issues by demonstrating that declarative programming is a suitable vehicle for the programming of conventional distributed-memory multiprocessors. This is achieved by applying several transformations to the compiled declarative program to achieve iteration-level (rather than instruction-level) parallelism. The transformations first group individual instructions into sequential light-weight processes, and then insert primitives to: (1) cause array allocation to be distributed over multiple processors, (2) cause computation to follow the data distribution by inserting an index filtering mechanism into a given loop and spawning a copy of it on all PEs; the filter causes each instance of that loop to operate on a different subrange of the index variable. The underlying model of computation is a dataflow/von Neumann hybrid in that execution within a process is control-driven while the creation, blocking, and activation of processes is data-driven. The performance of this process-oriented dataflow system (PODS) is demonstrated using the hydrodynamics simulation benchmark called SIMPLE, where a 19-fold speedup on a 32-processor architecture has been achieved.
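The index-filtering transformation described above can be sketched in a few lines. This is a minimal illustration, not PODS's actual primitives: the function names, the block-distribution rule, and the loop body are all assumptions made for the example.

```python
# Hedged sketch of index filtering: every PE runs the same loop, but a
# filter restricts each copy to the subrange of the index variable whose
# array elements that PE owns, so computation follows the data distribution.

NUM_PES = 4

def owner(i, n, num_pes):
    """Block distribution: consecutive elements are assigned to PEs equally."""
    block = (n + num_pes - 1) // num_pes
    return i // block

def filtered_loop(pe_id, a, b, n):
    """Each PE scans the full index range but only executes the
    iterations whose target element it owns."""
    for i in range(n):
        if owner(i, n, NUM_PES) == pe_id:   # the inserted index filter
            a[i] = 2 * b[i]                  # illustrative loop body

n = 10
a = [0] * n
b = list(range(n))
for pe in range(NUM_PES):                    # simulate spawning on all PEs
    filtered_loop(pe, a, b, n)
assert a == [2 * x for x in b]
```

Because each element has exactly one owner, the filtered copies never write the same location, which is what lets the loop instances run without synchronization on writes.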
Executing matrix multiply on a process oriented data flow machine
The Process-Oriented Dataflow System (PODS) is an execution model that combines the von Neumann and dataflow models of computation to gain the benefits of each. Central to PODS is the concept of array distribution and its effects on partitioning and mapping of processes. In PODS arrays are partitioned by simply assigning consecutive elements to each processing element (PE) equally. Since PODS uses single assignment, there will be only one producer of each element. This producing PE owns that element and will perform the necessary computations to assign it. Using this approach the filling loop is distributed across the PEs. This simple partitioning and mapping scheme provides excellent results for executing scientific code on MIMD machines. In this way PODS allows MIMD machines to exploit vector and data parallelism easily while still providing the flexibility of MIMD over SIMD for multi-user systems. In this paper, the classic matrix multiply algorithm, with 1024 data points, is executed on a PODS simulator and the results are presented and discussed. Matrix multiply is a good example because it has several interesting properties: there are multiple code-blocks; a new array must be dynamically allocated and distributed; there is a loop-carried dependency in the innermost loop; the two input arrays have different access patterns; and the sizes of the input arrays are not known at compile time. Matrix multiply also forms the basis for many important scientific algorithms such as: LU decomposition, convolution, and the Fast Fourier Transform. The results show that PODS is comparable to both Iannucci's Hybrid Architecture and MIT's TTDA in terms of overhead and instruction power. They also show that PODS easily distributes the work load evenly across the PEs. The key result is that PODS can scale matrix multiply in a near-linear fashion until there is little or no work to be performed for each PE. Then overhead and message passing become a major component of the execution time. With larger problems (e.g., ≥16k data points) this limit would be reached at around 256 PEs.
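The owner-computes scheme described above can be sketched as follows. This is an illustrative model, not the PODS runtime: rows of the result are block-distributed over PEs, and each PE fills exactly the rows it owns, so single assignment (one producer per element) is preserved.

```python
# Hedged sketch of PODS-style owner-computes matrix multiply with block
# row distribution; function names and the sequential "PE loop" are
# assumptions made for the example.

def block_rows(pe_id, n, num_pes):
    """Rows of C assigned to this PE under equal block distribution."""
    block = (n + num_pes - 1) // num_pes
    return range(pe_id * block, min((pe_id + 1) * block, n))

def matmul_on_pe(pe_id, A, B, C, n, num_pes):
    for i in block_rows(pe_id, n, num_pes):  # only the rows this PE owns
        for j in range(n):
            s = 0
            for k in range(n):               # loop-carried reduction
                s += A[i][k] * B[k][j]
            C[i][j] = s                      # single assignment of C[i][j]

n, num_pes = 4, 2
A = [[i + j for j in range(n)] for i in range(n)]
B = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
C = [[None] * n for _ in range(n)]
for pe in range(num_pes):                    # simulate the PEs in turn
    matmul_on_pe(pe, A, B, C, n, num_pes)
assert C == A                                # multiplying by I returns A
```

Note the asymmetric access pattern the abstract mentions: each PE reads only its own rows of A but all of B, which is why the two input arrays are distributed and communicated differently.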
Automatic data/program partitioning using the single assignment principle
Loosely-coupled MIMD architectures do not suffer from memory contention; hence large numbers of processors may be utilized. The main problem, however, is how to partition data and programs in order to exploit the available parallelism. In this paper we show that efficient schemes for automatic data/program partitioning and synchronization may be employed if single assignment is used. Using simulations of program loops common to scientific computations (the Livermore Loops), we demonstrate that only a small fraction of data accesses are remote and thus the degradation in network performance due to multiprocessing is minimal
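The claim that only a small fraction of data accesses are remote can be illustrated with a simple count. This is a hedged sketch under assumed conditions: a stencil-style loop such as a[i] = b[i] + b[i+1] (typical of scientific loops like the Livermore Loops), single assignment so each element has one known producer, and an equal block distribution.

```python
# Hedged sketch: under owner-computes with block distribution, only reads
# that cross a block boundary are remote. Counting them for a simple
# stencil shows the remote fraction shrinks as blocks grow.

def remote_fraction(n, num_pes):
    """Fraction of operand reads that are remote for a[i] = b[i] + b[i+1]."""
    block = n // num_pes                     # assume num_pes divides n
    remote = local = 0
    for i in range(n - 1):                   # producer of a[i] reads b[i], b[i+1]
        pe = i // block
        for src in (i, i + 1):
            if src // block == pe:
                local += 1
            else:
                remote += 1                  # read crosses a block boundary
    return remote / (remote + local)

frac = remote_fraction(1024, 8)              # one remote read per boundary
```

With 1024 elements on 8 PEs only the 7 internal block boundaries generate remote reads, so the fraction is well under one percent, consistent with the minimal network degradation reported above.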
The use of phenyl-Sepharose for the affinity purification of proteinases
Phenyl-Sepharose is most often used as an adsorbent for hydrophobic interaction chromatography (HIC). We report on its effective use for the affinity purification of some extracellular thermostable proteinases from bacterial sources. Proteinases belonging to the serine, aspartate and metallo mechanistic classes were effectively retained by the media. Purification factors in the range of 2.9–60 and enzyme activity yields in excess of 88% were obtained. In some cases homogeneous enzyme was obtained from culture supernatants in a single step. A number of other proteinases from mammalian sources were also retained. The specificity of the enzyme/support interaction was studied. Proteinases complexed with peptide inhibitors (pepstatin and chymostatin) showed reduced binding to phenyl-Sepharose, indicating interaction with the active-site cleft, whereas modification with low molecular weight active-site-directed inactivators such as PMSF and DAN did not, indicating that binding may not be dependent on the catalytic site. Pepsinogen and the pro-enzyme form of the serine proteinase from the thermophilic Bacillus sp. strain Ak.1 were not retained by the media and could be resolved in an efficient manner from their active counterparts.
Measurement and monitoring of atheromatous lesions of the femoral artery by duplex ultrasound.
In Western societies atheromatous stenosis and occlusion of the superficial femoral artery cause intermittent claudication in up to 5% of the population over 55 years of age, and the associated morbidity and disability are considerable. A foreknowledge of impending lesion progression might allow prevention of clinical deterioration by early intervention. However, the natural history of these lesions needs to be more fully evaluated. Critical to the monitoring of early lesions is the need for accurate, repeatable and non-invasive investigations. The role of duplex ultrasound in this area is largely unexplored. In this thesis clinical and laboratory data demonstrate the accuracy and repeatability of duplex ultrasound in the measurement of femoral stenoses. A prospective study was carried out to determine the incidence of progression from stenosis to occlusion. There has been an enormous increase in the use of percutaneous transluminal angioplasty (PTA) in the treatment of patients with claudication. However, the relative benefits of PTA over conventional treatment have not been established. A study to determine the role of duplex in screening patients with claudication prior to PTA was carried out. The results demonstrate its accuracy and the consequent clinical benefits. A randomised controlled trial of PTA for patients with intermittent claudication has been established and the early patient data at trial entry are presented.
Feeding behavior of crayfish snakes (Regina): allometry, ontogeny and adaptations to an extremely specialized diet
Dietary specialists are often predicted to have specialized and stereotyped behaviors that increase the efficiency of foraging on their preferred prey, but which limit their ability to feed on nonpreferred prey. Although there is support for various aspects of this prediction, a number of studies suggest that specialists should not be characterized in such a simplified way. The purpose of this study was to describe the prey selectivity, prey handling behavior, and chemosensory behavior of crayfish snakes (Regina, Colubridae), which are extreme dietary specialists, and determine the effects of prey type, feeding experience and ontogeny.
Museum specimens and field captured snakes, together with published data, were used to determine the effect of predator and prey size on prey selectivity in each species of Regina. Snakes were videotaped feeding on different prey to determine the effects of prey type and size on prey handling behavior, its efficiency and stereotypy. Finally, snakes born in captivity were raised on different diets to determine the effect of prey availability and prey type on the ontogeny of chemosensory behavior.
This study confirmed the dietary specializations of Regina grahamii, R. septemvittata and R. alleni, and found that R. rigida, like R. alleni, includes odonate larvae in its juvenile diet. Snake size and prey availability determine prey selection by R. alleni and R. rigida. This study also demonstrated that the relationships between dietary and behavioral specialization can be complex and depend on the characteristics of both the predator and its prey. For example, behavioral specializations in prey handling behavior were correlated with prey type rather than degree of dietary specialization. Hard crayfish required complex prey handling techniques, while soft crayfish did not. In R. alleni and R. rigida, such specialization appears to have permitted dietary expansion rather than restriction. Also, experience improved both prey handling efficiency and stereotypy irrespective of prey type consumed. As predicted, the chemosensory response of each Regina species was greatest toward species-characteristic prey. However, prey availability and type influenced these responses. In particular, R. septemvittata increased its chemosensory response toward hard crayfish (nonpreferred prey) when not permitted to eat soft crayfish.