
    Worlds of events: deduction with partial knowledge about causality

    Interactions between internet users are mediated by their devices and by the common support infrastructure in data centres. Keeping track of causality among the actions that take place in this distributed system is key to providing a seamless interaction in which effects follow causes. Tracking causality in large-scale interactions is difficult due to the cost of keeping large quantities of metadata, and it is even more challenging when dealing with resource-limited devices. In this paper, we focus on keeping partial knowledge of causality and address deduction from that knowledge. We provide the first proof-theoretic causality modelling for distributed partial knowledge. We prove computability and consistency results. We also prove that the partial knowledge gives rise to a weaker model than classical causality. We provide rules for offline deduction about causality and refute some related folklore. We define two notions of forward and backward bisimilarity between devices, using which we prove two important results: no matter the order of addition/removal, two devices deduce similarly about causality so long as (1) the same causal information is fed to both, and (2) they start bisimilar and erase the same causal information. Thanks to our establishment of forward and backward bisimilarity, respectively, proofs of the latter two results work by simple induction on length. This work was partially funded by the SyncFree project in the European Seventh Framework Programme under Grant Agreement 609551 and by the Erasmus Mundus Joint Doctorate Programme under Grant Agreement 2012-0030. Our special thanks to the SyncFree peers for their prolific comments on early versions of this work. We would also like to thank the anonymous referees for their constructive discussion over the ICE forum.
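    As a point of reference for the causal metadata the paper economises on, a minimal vector-clock sketch in Python follows. Full vector clocks are the classical complete-knowledge mechanism whose metadata cost motivates partial tracking; all names below are illustrative and are not the paper's formalism.

        # Vector clocks: dicts mapping device id -> event counter.
        def vc_join(a, b):
            """Pointwise maximum of two vector clocks."""
            return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

        def vc_leq(a, b):
            """True iff a <= b pointwise (a causally precedes or equals b)."""
            return all(v <= b.get(k, 0) for k, v in a.items())

        def happened_before(a, b):
            """Strict causal precedence: a <= b pointwise and a != b."""
            return vc_leq(a, b) and a != b

        # Example: event x on device "p" is later incorporated by device "q".
        x = {"p": 1}
        y = vc_join(x, {"q": 3})      # q merges x's history
        y["q"] += 1                   # q performs a new event
        assert happened_before(x, y)  # x causally precedes y
        assert not happened_before(y, x)

    The metadata here grows with the number of devices ever seen, which is precisely the cost that keeping only partial knowledge of causality avoids.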

    Entanglement transfer from dissociated molecules to photons

    We introduce and study the concept of a reversible transfer of the quantum state of two internally-translationally entangled fragments, formed by molecular dissociation, to a photon pair. The transfer is based on intracavity stimulated Raman adiabatic passage, and it requires a combination of processes whose principles are well established. Comment: 5 pages, 3 figures
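    As a hedged illustration of what such a state-transfer map looks like, the LaTeX sketch below writes a generic two-fragment entangled state being carried coherently onto a two-photon state; the notation is ours, not the paper's.

        \[
        |\Psi\rangle_{AB} \;=\; \sum_n c_n\, |\phi_n\rangle_A |\chi_n\rangle_B
        \;\longrightarrow\;
        |\Psi\rangle_{\gamma_1\gamma_2} \;=\; \sum_n c_n\, |1_{\omega_n}\rangle\, |1_{\nu_n}\rangle,
        \]

    where the coefficients $c_n$, and hence the entanglement, are preserved, and each fragment state is mapped by intracavity STIRAP onto a single cavity photon in a correlated mode.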

    Selection Effects on the Observed Redshift Dependence of GRB Jet Opening Angles

    An apparent redshift dependence of the jet opening angles ($\theta_{\rm j}$) of gamma-ray bursts (GRBs) is observed in the current GRB sample. We investigate whether this dependence can be explained by instrumental selection effects and observational biases using a bootstrapping method. Assuming that (1) the GRB rate follows the star formation history and the cosmic metallicity history and (2) the intrinsic distributions of the jet-corrected luminosity ($L_{\rm \gamma}$) and $\theta_{\rm j}$ are Gaussian or power-law functions, we generate a mock Swift/BAT sample by considering various instrumental selection effects, including the flux threshold and the trigger probability of BAT, the probability of a GRB jet pointing within the instrument's solid angle, and the probability of redshift measurement. Our results reproduce the observed $\theta_{\rm j}$-$z$ dependence well. We find that in the case of $L_{\gamma}\propto \theta_{\rm j}^2$ good consistency between the mock and observed samples can be obtained, indicating that $L_{\rm \gamma}$ and $\theta_{\rm j}$ are degenerate for a flux-limited sample. The parameter set $(L_{\rm \gamma}, \theta_{\rm j})=(4.9\times 10^{49}\ {\rm erg\ s}^{-1},\ 0.054\ {\rm rad})$ gives the best consistency for the current Swift GRB sample. Considering the beaming effect, the derived intrinsic local GRB rate is accordingly $2.85\times 10^2$ Gpc$^{-3}$ yr$^{-1}$, implying that $\sim 0.59\%$ of Type Ib/c SNe may be accompanied by a GRB. Comment: 25 pages, 7 figures. ApJ in press
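    A minimal Monte Carlo sketch in Python of the kind of selection-effect bootstrap described above follows. The flux threshold, distribution parameters, and luminosity-distance shortcut are illustrative assumptions, not the paper's calibrated values.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100_000

        z = rng.uniform(0.1, 8.0, N)                  # stand-in for an SFR-weighted z draw
        log_Lgamma = rng.normal(49.7, 0.5, N)         # jet-corrected luminosity, log10 erg/s
        theta_j = 10 ** rng.normal(np.log10(0.054), 0.3, N)  # opening angle, rad

        # Beaming: the jet must point within the solid angle we can see.
        p_point = 1 - np.cos(theta_j)                 # sky fraction covered by one jet
        pointed = rng.random(N) < p_point

        # Isotropic-equivalent luminosity and a crude flux for a flux cut.
        L_iso = 10 ** log_Lgamma / (1 - np.cos(theta_j))
        d_L = 1e28 * z                                # toy luminosity distance, cm
        flux = L_iso / (4 * np.pi * d_L**2)
        detected = pointed & (flux > 1e-6)            # toy BAT-like threshold, erg/cm^2/s

        print(f"detected fraction: {detected.mean():.5f}")
        for lo, hi in [(0.1, 2.0), (2.0, 4.0), (4.0, 8.0)]:
            m = detected & (z > lo) & (z <= hi)
            if m.any():
                print(f"  z in ({lo}, {hi}]: n={m.sum()}, mean theta_j = {theta_j[m].mean():.3f} rad")

    Because wide jets have lower $L_{\rm iso}$, the flux cut removes them preferentially at high redshift, so the detected sample shows a $\theta_{\rm j}$-$z$ trend even though none exists intrinsically, which is the effect being tested.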

    Economic analysis of the health impacts of housing improvement studies: a systematic review

    Background: Economic evaluation of public policies has been advocated but rarely performed. Studies from a systematic review of the health impacts of housing improvement included data on costs and some economic analysis. Examination of these data provides an opportunity to explore the difficulties and the potential for economic evaluation of housing. Methods: Data were extracted from all studies included in the systematic review of housing improvement that reported costs and economic analysis (n=29/45). The reported data were assessed for their suitability for economic evaluation. Where an economic analysis was reported, the analysis was described according to pre-set definitions of the various types of economic analysis used in the field of health economics. Results: 25 studies reported cost data on the intervention and/or benefits to the recipients. Of these, 11 studies reported data that were considered amenable to economic evaluation. A further four studies reported conducting an economic evaluation. Three of these studies presented a hybrid ‘balance sheet’ approach and indicated a net economic benefit associated with the intervention. One cost-effectiveness evaluation was identified, but the data were unclearly reported; the cost-effectiveness plane suggested that the intervention was more costly and less effective than the status quo. Conclusions: Future studies planning an economic evaluation need to (i) make best use of available data and (ii) ensure that all relevant data are collected. To facilitate this, economic evaluations should be planned alongside the intervention with input from health economists from the outset of the study. When undertaken appropriately, economic evaluation has the potential to make significant contributions to housing policy.
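    For readers unfamiliar with the cost-effectiveness plane mentioned above, here is a minimal Python sketch of how an incremental cost-effectiveness ratio (ICER) places an intervention on that plane; the figures are invented for illustration and are not drawn from the reviewed studies.

        def icer(cost_new, effect_new, cost_old, effect_old):
            """Incremental cost per incremental unit of health effect."""
            d_cost = cost_new - cost_old
            d_effect = effect_new - effect_old
            if d_cost > 0 and d_effect < 0:
                return None, "dominated: more costly and less effective"
            if d_cost < 0 and d_effect > 0:
                return None, "dominant: cheaper and more effective"
            if d_effect == 0:
                return None, "no effect difference; compare costs directly"
            return d_cost / d_effect, "compare ratio against willingness-to-pay"

        # A housing intervention that costs more and delivers less health gain,
        # i.e. the quadrant the one identified cost-effectiveness study suggested:
        print(icer(cost_new=5_000, effect_new=0.10, cost_old=3_000, effect_old=0.15))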

    Quantum advantage in postselected metrology

    In every parameter-estimation experiment, the final measurement or the postprocessing incurs a cost. Postselection can improve the rate of Fisher information (the average information learned about an unknown parameter per trial) to cost. We show that this improvement stems from the negativity of a particular quasiprobability distribution, a quantum extension of a probability distribution. In a classical theory, in which all observables commute, our quasiprobability distribution is real and nonnegative. In a quantum-mechanically noncommuting theory, nonclassicality manifests in negative or nonreal quasiprobabilities. Negative quasiprobabilities enable postselected experiments to outperform optimal postselection-free experiments: postselected quantum experiments can yield anomalously large information-cost rates. This advantage, we prove, is unrealizable in any classically commuting theory. Finally, we construct a preparation-and-postselection procedure that yields an arbitrarily large Fisher information. Our results establish the nonclassicality of a metrological advantage, leveraging our quasiprobability distribution as a mathematical tool.
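    A worked-equation sketch of the figure of merit may help. Writing $p_{\mathrm{ps}}$ for the probability that a trial survives postselection, one natural form of the information-cost rate is the following; the cost model and symbols are ours, not necessarily the paper's.

        \[
        R(\theta) \;=\; \frac{p_{\mathrm{ps}}\, F_{\mathrm{ps}}(\theta)}
                             {p_{\mathrm{ps}}\, c_{\mathrm{meas}} + c_{\mathrm{prep}}},
        \qquad
        F_{\mathrm{ps}}(\theta) \;=\; \sum_y \frac{1}{P(y \mid \theta, \mathrm{ps})}
        \left( \frac{\partial P(y \mid \theta, \mathrm{ps})}{\partial \theta} \right)^{2},
        \]

    where $F_{\mathrm{ps}}$ is the classical Fisher information of the postselected outcome distribution. Postselection can raise $F_{\mathrm{ps}}$ faster than it suppresses $p_{\mathrm{ps}}$, which is what makes anomalously large information-cost rates possible.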

    Low-Luminosity Gamma-Ray Bursts as a Distinct GRB Population:A Firmer Case from Multiple Criteria Constraints

    The intriguing observations of the Swift/BAT X-ray flash XRF 060218 and the BATSE-BeppoSAX gamma-ray burst GRB 980425, both with much lower luminosity and redshift than other observed bursts, naturally lead to the question of how these low-luminosity (LL) bursts are related to high-luminosity (HL) bursts. Incorporating the constraints from both the flux-limited samples observed with CGRO/BATSE and Swift/BAT and the redshift-known GRB sample, we investigate the luminosity function for both LL- and HL-GRBs through simulations. Our multiple criteria, including the log N - log P distributions from the flux-limited GRB sample, the redshift and luminosity distributions of the redshift-known sample, and the detection ratio of HL- to LL-GRBs with Swift/BAT, provide a set of stringent constraints on the luminosity function. Assuming that the GRB rate follows the star formation rate, our simulations show that a simple power-law or broken power-law model of the luminosity function fails to reproduce the observations, and a new component is required. This component can be modeled with a broken power law, characterized by a sharp increase of the burst number at around L < 10^47 erg s^-1. The lack of detection of moderate-luminosity GRBs at redshift ~0.3 indicates that this feature is not due to observational biases. The inferred local rate, rho_0, of LL-GRBs from our model is ~200 Gpc^-3 yr^-1 at ~10^47 erg s^-1, much larger than that of HL-GRBs. These results imply that LL-GRBs could be a separate GRB population from HL-GRBs. The recent discovery of the local X-ray transient 080109/SN 2008D would strengthen our conclusion, if the observed non-thermal emission has a similar origin to the prompt emission of most GRBs and XRFs. Comment: 22 pages, 9 figures, 3 tables; MNRAS, in press; updated analysis and figures
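    As a sketch of the functional form involved, a broken power-law luminosity function can be written as below; the slopes and break luminosity are symbolic placeholders, not the paper's fitted values.

        \[
        \phi(L) \;\propto\;
        \begin{cases}
        (L/L_b)^{-\alpha_1}, & L \le L_b,\\
        (L/L_b)^{-\alpha_2}, & L > L_b,
        \end{cases}
        \]

    with the new component described above amounting to a second such piece whose normalisation rises sharply towards L ~ 10^47 erg s^-1, producing the excess of low-luminosity bursts over any single power-law extrapolation.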

    The source counts of submillimetre galaxies detected at 1.1 mm

    The source counts of galaxies discovered at sub-millimetre and millimetre wavelengths provide important information on the evolution of infrared-bright galaxies. We combine the data from six blank-field surveys carried out at 1.1 mm with AzTEC, totalling 1.6 square degrees in area with root-mean-square depths ranging from 0.4 to 1.7 mJy, and derive the strongest constraints to date on the 1.1 mm source counts at flux densities S(1100) = 1-12 mJy. Using additional data from the AzTEC Cluster Environment Survey to extend the counts to S(1100) ~ 20 mJy, we see tentative evidence for an enhancement relative to the exponential drop in the counts at S(1100) ~ 13 mJy and a smooth connection to the bright source counts at >20 mJy measured by the South Pole Telescope; this excess may be due to strong lensing effects. We compare these counts to predictions from several semi-analytical and phenomenological models and find that for most models the agreement is quite good at flux densities >4 mJy; however, we find significant discrepancies (>3 sigma) between the models and the observed 1.1 mm counts at lower flux densities, and none of them are consistent with the observed turnover in the Euclidean-normalised counts at S(1100) < 2 mJy. Our new results may therefore require modifications to existing evolutionary models for low-luminosity galaxies. Alternatively, the discrepancy between the measured counts at the faint end and the predictions from phenomenological models could arise from limited knowledge of the spectral energy distributions of faint galaxies in the local Universe. Comment: 16 pages, 3 figures, 4 tables; accepted for publication in MNRAS
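    For reference, the Euclidean-normalised differential counts referred to above are conventionally defined as below; this is the standard convention, not anything specific to this paper.

        \[
        n_{\mathrm{Euc}}(S) \;=\; S^{2.5}\,\frac{dN}{dS},
        \]

    which is constant for a uniform, non-evolving population in Euclidean space, so a turnover in $S^{2.5}\,dN/dS$ at S(1100) < 2 mJy signals a genuine departure from that baseline at the faint end rather than a plotting artefact.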

    Observing the Evolution of the Universe

    How did the universe evolve? The fine angular scale (ℓ > 1000) temperature and polarization anisotropies in the CMB are a Rosetta stone for understanding the evolution of the universe. Through detailed measurements one may address everything from the physics of the birth of the universe to the history of star formation and the process by which galaxies formed. One may in addition track the evolution of the dark energy and discover the net neutrino mass. We are at the dawn of a new era in which hundreds of square degrees of sky can be mapped with arcminute resolution and sensitivities measured in microkelvin. Acquiring these data requires the use of special-purpose telescopes such as the Atacama Cosmology Telescope (ACT), located in Chile, and the South Pole Telescope (SPT). These new telescopes are outfitted with a new generation of custom mm-wave kilo-pixel arrays. Additional instruments are in the planning stages. Comment: Science White Paper submitted to the US Astro2010 Decadal Survey. Full list of 177 authors available at http://cmbpol.uchicago.edu
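    To put the survey scale on an arithmetic footing, the short Python sketch below counts the arcminute-resolution pixels in a few hundred square degrees of sky; the area and resolution figures are illustrative assumptions, not mission specifications.

        # Back-of-envelope survey arithmetic; numbers are illustrative.
        AREA_DEG2 = 300           # assumed survey area, square degrees
        BEAM_ARCMIN = 1.0         # assumed resolution, arcminutes

        # One square degree contains 60 x 60 = 3600 square arcminutes.
        pixels = AREA_DEG2 * (60 / BEAM_ARCMIN) ** 2
        print(f"{pixels:,.0f} independent arcminute pixels")  # 1,080,000

    Mapping on the order of a million resolution elements to microkelvin depth is what drives the kilo-pixel detector arrays mentioned above.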