1,204 research outputs found

    Process-based analysis of terrestrial carbon flux predictability

    Despite efforts to decrease the discrepancy between simulated and observed terrestrial carbon fluxes, the uncertainty in trends and patterns of the land carbon fluxes remains high. This difficulty raises the question of to what extent the terrestrial carbon cycle is predictable, and which processes explain the predictability. Here, the perfect-model approach is used to assess the potential predictability of net primary production (NPPpred) and heterotrophic respiration (Rhpred) using ensemble simulations conducted with the Max Planck Institute Earth System Model. In order to assess the role of local carbon flux predictability (CFpred) in the predictability of the global carbon cycle, we suggest a new predictability metric weighted by the amplitude of the flux anomalies. Regression analysis is used to determine the contribution of the predictability of different environmental drivers to NPPpred and Rhpred (soil moisture, air temperature and radiation for NPP; soil organic carbon, air temperature and precipitation for Rh). NPPpred is driven to 62% and 30% by the predictability of soil moisture and temperature, respectively. Rhpred is driven to 52% and 27% by the predictability of soil organic carbon and temperature, respectively. The decomposition of predictability shows that the relatively high Rhpred compared to NPPpred is due to the generally high predictability of soil organic carbon. The seasonality in the NPPpred and Rhpred patterns can be explained by the change in limiting factors over the wet and dry months. Consequently, CFpred is controlled by the predictability of the currently limiting environmental factor. Differences in CFpred between ensemble simulations can be attributed to the occurrence of wet and dry years, which influences the predictability of soil moisture and temperature. This variability of predictability is caused by the state dependency of ecosystem processes.
Our results reveal the crucial regions and ecosystem processes to be considered when initializing a carbon prediction system.
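The abstract does not give the exact form of its anomaly-weighted predictability metric, so the following is only an illustrative sketch of the general idea: compute a variance-based potential predictability per grid cell from the perfect-model ensemble (a standard definition, assumed here), then aggregate it globally with weights proportional to the local mean anomaly amplitude, so that cells with large flux anomalies dominate the global score. Function names and array layout are hypothetical.

```python
import numpy as np

def potential_predictability(ens):
    """Variance-based potential predictability per grid cell.

    ens: array of flux anomalies with shape (member, time, cell).
    Returns values in [0, 1]: 1 minus the ratio of mean intra-ensemble
    spread to total variance; high values mean the members agree.
    """
    spread = ens.var(axis=0).mean(axis=0)               # mean across-member variance per cell
    total = ens.reshape(-1, ens.shape[-1]).var(axis=0)  # pooled variance per cell
    return 1.0 - spread / total

def amplitude_weighted(pp, ens):
    """Global predictability, weighting each cell by its mean |anomaly|."""
    amp = np.abs(ens).mean(axis=(0, 1))
    return np.average(pp, weights=amp)
```

With a strongly shared signal and weak member-to-member noise, the per-cell predictability approaches one; cells with small anomalies contribute little to the weighted global value.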

    Trivial improvements of predictive skill due to direct reconstruction of global carbon cycle

    State-of-the-art carbon cycle prediction systems are initialized from reconstruction simulations in which state variables of the climate system are assimilated. While currently only the physical state variables are assimilated, the biogeochemical state variables are not assimilated themselves but instead adjust indirectly to the state acquired through this assimilation. In the absence of comprehensive biogeochemical reanalysis products, such an approach is pragmatic. Here we evaluate the potential advantage of having perfect carbon cycle observational products available for direct carbon cycle reconstruction. Within an idealized perfect-model framework, we define 50 years of a control simulation under pre-industrial CO2 levels as our target, representing observations. We nudge variables from this target onto arbitrary initial conditions 150 years later, mimicking an assimilation simulation that generates initial conditions for the hindcast experiments of prediction systems. We investigate the tracking performance, i.e. the bias, correlation and root-mean-square error between the reconstruction and the target, when nudging an increasing set of atmospheric, oceanic and terrestrial variables, with a focus on the global carbon cycle, which explains variations in atmospheric CO2. We compare indirect versus direct carbon cycle reconstruction against a resampled threshold representing internal variability. Afterwards, we use these reconstructions to initialize ensembles to assess how well the target can be predicted after reconstruction. Because we are interested in the ability to reconstruct global atmospheric CO2, we focus on global carbon cycle reconstruction and predictive skill. We find that indirect carbon cycle reconstruction through physical fields reproduces the target variations on global and regional scales much better than the resampled threshold.
While reproducing the large-scale variations, nudging introduces systematic regional biases in the physical state variables, to which the biogeochemical cycles react very sensitively. Global annual surface oceanic pCO2 initial conditions are indirectly reconstructed with an anomaly correlation coefficient (ACC) of 0.8 and a debiased root-mean-square error (RMSE) of 0.3 ppm. Direct reconstruction slightly improves the initial conditions, in ACC by +0.1 and in debiased RMSE by −0.1 ppm. Indirectly reconstructed global terrestrial carbon cycle initial conditions for the vegetation carbon pools track the target with an ACC of 0.5 and a debiased RMSE of 0.5 PgC. Direct reconstruction brings negligible improvements for the air-land CO2 flux. Global atmospheric CO2 is indirectly tracked with an ACC of 0.8 and a debiased RMSE of 0.4 ppm. Direct reconstruction of the marine and terrestrial carbon cycles improves the ACC by 0.1 and the debiased RMSE by −0.1 ppm. We find improvements in global carbon cycle predictive skill from direct reconstruction compared to indirect reconstruction. After correcting for mean bias, indirect and direct reconstruction both predict the target similarly well, and only moderately worse than perfect initialization after the first lead year. Our perfect-model study shows that indirect carbon cycle reconstruction yields satisfactory initial conditions for the global CO2 flux and atmospheric CO2. Direct carbon cycle reconstruction adds little improvement in the global carbon cycle, because imperfect reconstruction of the physical climate state impedes better biogeochemical reconstruction. These minor improvements in initial conditions yield little improvement in initialized perfect-model predictive skill. We label these minor improvements from direct carbon cycle reconstruction trivial, as mean bias reduction yields similar improvements.
As reconstruction biases in real-world prediction systems are even stronger, our results add confidence to the current practice of indirect reconstruction in carbon cycle prediction systems.
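The two tracking metrics used above have standard definitions: the anomaly correlation coefficient is the Pearson correlation of the anomalies, and the debiased RMSE is the RMSE after removing the constant mean bias between reconstruction and target. A minimal sketch for 1-D time series (array names are illustrative):

```python
import numpy as np

def acc(recon, target):
    """Anomaly correlation coefficient: Pearson correlation of anomalies."""
    ra = recon - recon.mean()
    ta = target - target.mean()
    return (ra * ta).sum() / np.sqrt((ra**2).sum() * (ta**2).sum())

def debiased_rmse(recon, target):
    """RMSE after subtracting the mean bias (recon - target)."""
    err = recon - target
    return np.sqrt(((err - err.mean()) ** 2).mean())
```

A reconstruction that tracks the target perfectly up to a constant offset scores an ACC of 1 and a debiased RMSE of 0, even though its plain RMSE equals the offset; this is why the paper reports the debiased variant alongside the ACC.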

    Quantum teleportation on a photonic chip

    Quantum teleportation is a fundamental concept in quantum physics which now finds important applications at the heart of quantum technology, including quantum relays, quantum repeaters and linear optics quantum computing (LOQC). Photonic implementations have largely focussed on achieving long-distance teleportation due to its suitability for decoherence-free communication. Teleportation also plays a vital role in the scalability of photonic quantum computing, for which large linear optical networks will likely require an integrated architecture. Here we report the first demonstration of quantum teleportation in which all key parts - entanglement preparation, Bell-state analysis and quantum state tomography - are performed on a reconfigurable integrated photonic chip. We also show that a novel element-wise characterisation method is critical to mitigate component errors, a key technique which will become increasingly important as integrated circuits reach the higher complexities necessary for quantum-enhanced operation. Comment: Originally submitted version - refer to the online journal for the accepted manuscript; Nature Photonics (2014)

    Regulation of CD44 binding to hyaluronan by glycosylation of variably spliced exons

    The hyaluronan (HA)-binding function (lectin function) of the leukocyte homing receptor, CD44, is tightly regulated. Herein we address possible mechanisms that regulate CD44 isoform-specific HA binding. Binding studies with melanoma transfectants expressing CD44H or CD44E, or with soluble immunoglobulin fusions of CD44H and CD44E (CD44H-Rg, CD44E-Rg), showed that although both CD44 isoforms can bind HA, CD44H binds HA more efficiently than CD44E. Using CD44-Rg fusion proteins we show that the variably spliced exons in CD44E, V8-V10, specifically reduce the lectin function of CD44, while replacement of V8-V10 by an ICAM-1 immunoglobulin domain restores binding to a level comparable to that of CD44H. Conversely, CD44 bound HA very weakly when exons V8-V10 were replaced with a CD34 mucin domain, which is heavily modified by O-linked glycans. Production of CD44E-Rg, or incubation of CD44E-expressing transfectants, in the presence of an O-linked glycosylation inhibitor restored HA binding to CD44H-Rg and cell surface CD44H levels, respectively. We conclude that differential splicing provides a regulatory mechanism for CD44 lectin function and that this effect is due in part to O-linked carbohydrate moieties added to the Ser/Thr-rich regions encoded by the variably spliced CD44 exons. Alternative splicing resulting in changes in protein glycosylation thus provides a novel mechanism for the regulation of lectin activity.

    Emotions and Digital Well-being. The rationalistic bias of social media design in online deliberations

    In this chapter we argue that emotions are mediated in an incomplete way in online social media because of the heavy reliance on textual messages, which fosters a rationalistic bias and an inclination towards less nuanced emotional expressions. This incompleteness can arise from obscuring emotions, showing less than the original intensity, misinterpreting emotions, or eliciting emotions without feedback and context. Online interactions and deliberations tend to contribute to, rather than overcome, stalemates and informational bubbles, partially due to the prevalence of anti-social emotions. It is tempting to see emotions as the cause of the problem of online verbal aggression and bullying. However, we argue that social media are actually designed in a predominantly rationalistic way because of the reliance on text-based communication, thereby filtering out social emotions and leaving space for easily expressed antisocial emotions. Based on research on emotions that sees them as key ingredients of moral interaction and deliberation, as well as on research on text-based versus non-verbal communication, we propose a richer understanding of emotions, requiring different designs of online deliberation platforms. We propose that such designs should move away from text-centred designs and find ways to incorporate the complete expression of the full range of human emotions, so that these can play a constructive role in online deliberations.

    Neuroanatomical and functional characterization of CRF neurons of the amygdala using a novel transgenic mouse model

    The corticotropin-releasing factor (CRF)-producing neurons of the amygdala have been implicated in behavioral and physiological responses associated with fear, anxiety, stress, food intake and reward. To overcome the difficulties in identifying CRF neurons within the amygdala, a novel transgenic mouse line, in which the humanized recombinant Renilla reniformis green fluorescent protein (hrGFP) is under the control of the CRF promoter (CRF-hrGFP mice), was developed. First, the CRF-hrGFP mouse model was validated and the localization of CRF neurons within the amygdala was systematically mapped. Amygdalar hrGFP-expressing neurons were located primarily in the interstitial nucleus of the posterior limb of the anterior commissure, but were also present in the central amygdala. Second, the marker of neuronal activation c-Fos was used to explore the response of amygdalar CRF neurons in CRF-hrGFP mice under different experimental paradigms. C-Fos induction was observed in CRF neurons of CRF-hrGFP mice exposed to an acute social defeat stress event, a fasting/refeeding paradigm or lipopolysaccharide (LPS) administration. In contrast, no c-Fos induction was detected in CRF neurons of CRF-hrGFP mice exposed to restraint stress, the forced swimming test, 48-h fasting, acute high-fat diet (HFD) consumption, intermittent HFD consumption, ad libitum HFD consumption, HFD withdrawal, conditioned HFD aversion, ghrelin administration or melanocortin 4 receptor agonist administration. Thus, this study fully characterizes the distribution of amygdala CRF neurons in mice and suggests that they are involved in some, but not all, stress- or food intake-related behaviors recruiting the amygdala.

    Metaphors considered harmful? An exploratory study of the effectiveness of functional metaphors for end-to-end encryption

    Background: Research has shown that users do not use encryption and fail to understand the security properties which encryption provides. We hypothesise that one contributing factor to failed user understanding is poor explanation of security properties, as the technical descriptions used to explain encryption focus on structural mental models. Purpose: We methodically generate metaphors for end-to-end (E2E) encryption that cue functional models, and develop and test the metaphors’ effect on users’ understanding of E2E-encryption. Data: Transcripts of 98 interviews with users of various E2E-encrypted messaging apps and 211 survey responses. Method: First, we code the user interviews and extract promising explanations. These user-provided explanations inform the creation of metaphors using a framework for generating metaphors adapted from the literature. The generated metaphors and existing industry descriptions of E2E-encryption are evaluated analytically. Finally, we design and conduct a survey to test whether exposing users to these descriptions improves their understanding of the functionality provided by E2E-encrypted messaging apps. Results: While the analytical evaluation showed promising results, none of the descriptions tested in the survey improve understanding; descriptions frequently cue users in a way that undoes their previously correct understanding. Metaphors developed from user language are better than existing industry descriptions, in that ours cause less harm. Conclusion: Creating explanatory metaphors for encryption technologies is hard. Short statements that attempt to cue mental models do not improve participants’ understanding. Better solutions should build on our methodology to test a variety of potential metaphors, to understand both the improvement and the harm that metaphors may elicit.

    Integrated Photonic Sensing

    Loss is a critical roadblock to achieving photonic quantum-enhanced technologies. We explore a modular platform for implementing integrated photonics experiments and consider the effects of loss at different stages of these experiments, including state preparation, manipulation and measurement. We frame our discussion mainly in the context of quantum sensing and focus particularly on the use of loss-tolerant Holland-Burnett states for optical phase estimation. In particular, we discuss spontaneous four-wave mixing in standard birefringent fibre as a source of pure, heralded single photons and present methods of optimising such sources. We also outline a route to programmable circuits which allow the control of photonic interactions even in the presence of fabrication imperfections, and describe a ratiometric characterisation method for beam splitters which allows the characterisation of complex circuits without the need for full process tomography. Finally, we present a framework for performing state tomography on heralded states using lossy measurement devices. This is motivated by a calculation of the effects of fabrication imperfections on precision measurement using Holland-Burnett states. Comment: 19 pages, 7 figures
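The abstract does not spell out the ratiometric characterisation method, but the core idea behind ratiometric measurements can be illustrated with a hypothetical minimal sketch: estimating a beam splitter's reflectivity from the ratio of photon counts at its two output ports, which cancels the unknown (and fluctuating) input flux. Equal detection efficiency on both ports and negligible loss are assumed here.

```python
def splitting_ratio(counts_reflected, counts_transmitted):
    """Estimate reflectivity R from output-port photon counts.

    Taking the ratio cancels the unknown input flux; the result is
    only unbiased if both detectors have equal efficiency and losses
    after the splitter are balanced (an assumption of this sketch).
    """
    total = counts_reflected + counts_transmitted
    return counts_reflected / total
```

The full method in the paper extends this idea so that splitting ratios throughout a complex circuit can be characterised element by element without full process tomography.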