
    Functional advantages offered by many-body coherences in biochemical systems

    Quantum coherence phenomena driven by electronic-vibrational (vibronic) interactions are being reported in many pulse-driven (e.g. laser-driven) chemical and biophysical systems. But what systems-level advantage(s) do such many-body coherences offer to future technologies? We address this question for pulsed systems of general size N, akin to the LHCII aggregates found in green plants. We show that external pulses generate vibronic states containing particular multipartite entanglements, and that such collective vibronic states increase the excitonic transfer efficiency. The strength of these many-body coherences and their robustness to decoherence increase with aggregate size N and do not require strong electronic-vibrational coupling. The implications for energy and information transport are discussed. Comment: arXiv admin note: text overlap with arXiv:1706.0776
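    For orientation, aggregates of this kind are commonly described by a Frenkel-exciton Hamiltonian with one local vibrational mode per site (a Holstein-type model). The sketch below gives a generic form of such a model and of a fully delocalized single-excitation (W-type) state; the model choice and symbols are illustrative assumptions, not taken from the paper.

```latex
% Generic N-site exciton-vibration (Holstein-type) Hamiltonian --
% an illustrative assumption, not the paper's specific model.
H = \sum_{n=1}^{N} \epsilon_n \, |n\rangle\langle n|
  + \sum_{n \neq m} J_{nm} \, |n\rangle\langle m|
  + \sum_{n=1}^{N} \omega \, b_n^{\dagger} b_n
  + \sum_{n=1}^{N} g \, \omega \, |n\rangle\langle n| \,
    \bigl(b_n^{\dagger} + b_n\bigr)

% Here |n> is a single electronic excitation on site n, J_{nm} are
% inter-site electronic couplings, \omega is the vibrational frequency,
% and g is the dimensionless electronic-vibrational coupling (which, per
% the abstract, need not be strong). A fully delocalized "collective"
% single-excitation state, of the kind whose multipartite entanglement
% is at issue, is the W-type state
|W\rangle = \frac{1}{\sqrt{N}} \sum_{n=1}^{N} |n\rangle
```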

    Human-Aligned Calibration for AI-Assisted Decision Making

    Whenever a binary classifier is used to provide decision support, it typically provides both a label prediction and a confidence value. The decision maker is then supposed to use the confidence value to calibrate how much to trust the prediction. In this context, it has often been argued that the confidence value should correspond to a well-calibrated estimate of the probability that the predicted label matches the ground-truth label. However, multiple lines of empirical evidence suggest that decision makers have difficulty developing a good sense of when to trust a prediction using these confidence values. In this paper, our goal is first to understand why and then to investigate how to construct more useful confidence values. We first argue that, for a broad class of utility functions, there exist data distributions for which a rational decision maker is, in general, unlikely to discover the optimal decision policy using the above confidence values: an optimal decision maker would sometimes need to place more (less) trust in predictions with lower (higher) confidence values. However, we then show that, if the confidence values satisfy a natural alignment property with respect to the decision maker's confidence in her own predictions, there always exists an optimal decision policy under which the level of trust the decision maker needs to place in a prediction is monotone in the confidence value, making that policy easier to discover. Further, we show that multicalibration with respect to the decision maker's confidence in her own predictions is a sufficient condition for alignment. Experiments on four different AI-assisted decision-making tasks, in which a classifier provides decision support to real human experts, validate our theoretical results and suggest that alignment may lead to better decisions.
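    To make the sufficient condition concrete, the toy Python sketch below checks whether a classifier's confidence remains calibrated within groups defined by the decision maker's own confidence, which is the multicalibration-style condition the abstract points to. The synthetic data, bin edges, and the perfectly calibrated classifier are assumptions for illustration only, not the paper's setup or experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic confidence values (probability that the classifier's predicted
# label is correct): one from the classifier, one from the human expert.
clf_conf = rng.uniform(0.5, 1.0, size=n)
human_conf = rng.uniform(0.5, 1.0, size=n)

# Synthetic ground truth: the prediction is correct with probability
# clf_conf, i.e. the classifier is perfectly calibrated in this toy setup.
correct = rng.random(n) < clf_conf

# Group examples by the human's confidence (coarse bins), then check whether
# the classifier's confidence is calibrated *within each group*: mean
# confidence should match empirical accuracy bin by bin.
edges = [0.6, 0.7, 0.8, 0.9]
human_bins = np.digitize(human_conf, bins=edges)
conf_bins = np.digitize(clf_conf, bins=edges)

for h in np.unique(human_bins):
    for c in np.unique(conf_bins):
        mask = (human_bins == h) & (conf_bins == c)
        if mask.sum() < 50:
            continue  # skip bins too small for a stable estimate
        mean_conf = clf_conf[mask].mean()
        accuracy = correct[mask].mean()
        print(f"human bin {h}, confidence bin {c}: "
              f"mean confidence {mean_conf:.2f}, accuracy {accuracy:.2f}")
```

    In this synthetic setup the two columns agree in every bin; a systematic gap within some human-confidence group would indicate a violation of the group-wise calibration condition.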