
    Experimental Heat-Bath Cooling of Spins

    Algorithmic cooling (AC) is a method to purify quantum systems, such as ensembles of nuclear spins, or cold atoms in an optical lattice. When applied to spins, AC produces ensembles of highly polarized spins, which enhance the signal strength in nuclear magnetic resonance (NMR). In this cooling approach, spin-half nuclei in a constant magnetic field are treated as bits, or more precisely, quantum bits, in a known probability distribution. Algorithmic steps on these bits are then translated into specially designed NMR pulse sequences using common NMR quantum computation tools. The algorithmic cooling of spins is achieved by alternately combining reversible, entropy-preserving manipulations (borrowed from data compression algorithms) with selective reset, the transfer of entropy from selected spins to the environment. In theory, applying algorithmic cooling to sufficiently large spin systems may produce polarizations far beyond the limits imposed by conservation of Shannon entropy. Here, only selective reset steps are performed, hence we prefer to call this process "heat-bath" cooling rather than algorithmic cooling. We experimentally implement two consecutive steps of selective reset that transfer entropy from two selected spins to the environment. We performed such cooling experiments with commercially available labeled molecules, on standard liquid-state NMR spectrometers. Our experiments yielded polarizations that bypass Shannon's entropy-conservation bound, so that the entire spin system was cooled. This paper was initially submitted in 2005, first to Science and then to PNAS, and includes additional results from subsequent years (e.g. for resubmission in 2007). The Postscriptum includes more details. Comment: 20 pages, 8 figures, replaces quant-ph/051115
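    The selective-reset step can be sketched numerically: swap the low polarization of a target spin with the high equilibrium polarization of a fast-relaxing reset spin, then let the reset spin re-thermalize, exporting the target's entropy to the bath. A minimal sketch (the polarization values and function names are illustrative assumptions, not the paper's measured data or pulse sequences):

```python
from math import log2

def shannon_entropy(eps):
    """Shannon entropy (bits) of a spin with polarization bias eps,
    i.e. p(up) = (1 + eps) / 2."""
    p = (1 + eps) / 2
    return -p * log2(p) - (1 - p) * log2(1 - p)

def selective_reset(target_eps, reset_eps):
    """One selective-reset step: SWAP the target's polarization with the
    reset spin's, then let the reset spin re-thermalize with the bath back
    to reset_eps.  The target keeps the higher polarization; the entropy
    it previously held is carried off to the environment."""
    return reset_eps

# Illustrative numbers: two 13C targets at equilibrium bias 1e-5, a 1H
# reset spin at 4e-5 (1H is roughly 4x more polarized than 13C at the
# same field; units are arbitrary).
targets = [1e-5, 1e-5]
cooled = [selective_reset(eps, reset_eps=4e-5) for eps in targets]

# Each target is now colder (lower entropy) than before.  The targets'
# total entropy dropped because it was exported to the bath, which is
# how the closed-system Shannon bound can be bypassed.
assert all(shannon_entropy(c) < shannon_entropy(t)
           for c, t in zip(cooled, targets))
```

    In the closed system the reversible (compression) steps alone could never reduce total entropy; it is the re-thermalization of the reset spin that makes the net cooling possible.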

    Sandbar Breaches Control of the Biogeochemistry of a Micro-Estuary

    Micro-estuaries in semi-arid areas, despite their small size (a shallow depth of a few meters, a length of a few kilometers, and a surface area of less than 1 km²), are important providers of ecosystem services. Despite their high abundance, tendency to suffer from eutrophication, and vulnerability to other anthropogenic impacts, such systems are among the least studied water bodies in the world. In regions with a low tidal amplitude, micro-estuaries often have a limited rate of sea-river water exchange, somewhat similar to fjord circulation, caused by a shallow sandbar forming at the coastline. The long-term study we report here was inspired by the idea that, due to their small size and low-discharge regime, relatively small interventions can have large effects on micro-estuaries. We used a stationary array of sensors and detailed monthly water sampling to characterize the Alexander estuary, a typical micro-estuary in the S.E. Mediterranean, and to identify the main stress factors in this aquatic ecosystem. The Alexander micro-estuary is stratified throughout the year, with a median bottom salinity of 18 PSU. Prolonged periods of hypoxia were identified as the main stress factor. These were alleviated by breaching of the sandbar at the estuary mouth by sea waves or stormwater runoff events (mostly during winter), which flush the anoxic bottom water. Analysis of naturally occurring sandbar breaches and an artificial breach experiment indicates that the current oxygen consumption rate of the Alexander micro-estuary is too high for sandbar breaches to serve as a remedy for the anoxia. Nevertheless, the study demonstrates, and provides the tools to assess, the feasibility of small-scale interventions to control micro-estuary hydrology and biogeochemistry.

    Novel homozygous variants in PRORP expand the genotypic spectrum of combined oxidative phosphorylation deficiency 54

    Biallelic hypomorphic variants in PRORP have recently been described as causing the autosomal recessive disorder combined oxidative phosphorylation deficiency type 54 (COXPD54). COXPD54 encompasses a phenotypic spectrum ranging from sensorineural hearing loss and ovarian insufficiency (Perrault syndrome) to leukodystrophy. Here, we report three additional families with homozygous missense PRORP variants with pleiotropic phenotypes. Each missense variant altered a highly conserved residue within the metallonuclease domain. In vitro mitochondrial tRNA processing assays with recombinant TRMT10C, SDR5C1 and PRORP indicated that two COXPD54-associated PRORP variants, c.1159A>G (p.Thr387Ala) and c.1241C>T (p.Ala414Val), decreased pre-tRNA-Ile cleavage, consistent with both variants impacting tRNA processing. No significant decrease in tRNA processing was observed with PRORP c.1093T>C (p.Tyr365His), which was identified in an individual with leukodystrophy. These data provide independent evidence that PRORP variants are associated with COXPD54 and that the assessment of 5' leader mitochondrial tRNA processing is a valuable assay for the functional analysis and clinical interpretation of novel PRORP variants.

    Robust and Pareto Optimality of Insurance Contract

    The optimal insurance problem is a fast-growing topic concerning the most efficient contract that an insurance player may obtain. The classical problem investigates the ideal contract under the assumption that the underlying risk distribution is known, i.e. by ignoring parameter and model risk. Taking these sources of risk into account, the decision-maker aims to identify a robust optimal contract that is not sensitive to the chosen risk distribution. We focus on Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR)-based decisions, but extensions to other risk measures are readily possible. This paper discusses the worst-case scenario and worst-case regret robust models, which have already been used in the robust optimisation literature on the portfolio investment problem. Closed-form solutions are obtained for the VaR worst-case scenario case, while Linear Programming (LP) formulations are provided for all other cases. A caveat of robust optimisation is that the optimal solution may not be unique, and therefore may not be economically acceptable, i.e. Pareto optimal. This issue is addressed numerically, and simple numerical methods are found for constructing insurance contracts that are both Pareto and robust optimal. Our numerical illustrations show weak evidence in favour of our robust solutions for VaR-based decisions, while our robust methods are clearly preferred for CVaR-based decisions.
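    For reference, the two risk measures the contracts are optimized against can be computed empirically as follows (a minimal sketch with made-up loss data; the sample-quantile convention used here is one of several in common use, not necessarily the paper's):

```python
import math

def var(losses, alpha):
    """Empirical VaR_alpha: the smallest loss l such that at least a
    fraction alpha of the sample lies at or below l."""
    s = sorted(losses)
    return s[math.ceil(alpha * len(s)) - 1]

def cvar(losses, alpha):
    """Empirical CVaR_alpha: the average of the losses at or above
    VaR_alpha, i.e. the expected loss in the bad tail."""
    v = var(losses, alpha)
    tail = [l for l in losses if l >= v]
    return sum(tail) / len(tail)

# Hypothetical loss sample with one extreme outcome.
losses = [0, 1, 2, 3, 4, 5, 6, 7, 8, 100]
assert var(losses, 0.9) == 8       # 90% of outcomes are at most 8
assert cvar(losses, 0.9) == 54.0   # mean of the losses >= VaR: {8, 100}
```

    CVaR is never smaller than VaR by construction, which is one reason CVaR-based decisions tend to be the more conservative of the two.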

    Towards Machine Wald

    The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed \emph{by humans} because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to \emph{think} as \emph{humans} do when faced with uncertainty is challenging in several major ways: (1) the search for optimal statistical models has yet to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite-dimensional, whereas calculus on a computer is necessarily discrete and finite. To this end, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification and Information-Based Complexity. Comment: 37 pages

    Practical Fully Secure Three-Party Computation via Sublinear Distributed Zero-Knowledge Proofs

    Secure multiparty computation enables a set of parties to securely carry out a joint computation on their private inputs without revealing anything but the output. A particularly well-motivated setting is that of three parties with a single corruption (hereafter denoted 3PC). The 3PC setting is particularly appealing for two main reasons: (1) it admits more efficient MPC protocols than other standard settings; (2) it allows, in principle, achieving full security (and fairness). Highly efficient protocols with security against a semi-honest adversary exist within this setting; however, a significant gap remains between these and protocols with stronger security against a malicious adversary. In this paper, we narrow this gap within concretely efficient protocols. More explicitly, we make the following contributions: * Concretely Efficient Malicious 3PC. We present an optimized 3PC protocol for arithmetic circuits over rings with (amortized) communication of 1 ring element per multiplication gate per party, matching the best semi-honest protocols. The protocol also applies to Boolean circuits, significantly improving over previous protocols even for small circuits. Our protocol builds on recent techniques of Boneh et al. (Crypto 2019) for sublinear zero-knowledge proofs on distributed data, together with an efficient semi-honest protocol based on replicated secret sharing (Araki et al., CCS 2016). We present a concrete analysis of communication and computation costs, including several optimizations. For example, for 40-bit statistical security and a Boolean circuit with a million (nonlinear) gates, the overhead on top of the semi-honest protocol can involve less than 0.5 KB of communication for the entire circuit, while the computational overhead is dominated by roughly 30 multiplications per gate in the field F_{2^{47}}. In addition, we implemented and benchmarked the protocol for varied circuit sizes. * Full Security. We augment the 3PC protocol to further provide full security (with guaranteed output delivery) while maintaining the amortized communication of 1 ring element per party per multiplication gate, and with hardly any impact on concrete efficiency. This is in contrast with the best previous 3PC protocols from the literature, which allow a corrupt party to mount a denial-of-service attack without being detected.
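    The replicated secret sharing underlying the semi-honest base protocol (Araki et al., CCS 2016) can be sketched as follows. This covers only the sharing, reconstruction, and local-addition layer, not the multiplication subprotocol or the zero-knowledge machinery; the function names and the choice of ring are ours:

```python
import random

MOD = 2 ** 32  # an example ring Z_{2^32}; the protocol works over rings

def share(x, rng=random):
    """Split x into additive shares with x1 + x2 + x3 = x (mod 2^32).
    Party i receives the PAIR (x_i, x_{i+1}): any single party's view is
    uniformly random, while any two parties can reconstruct."""
    x1 = rng.randrange(MOD)
    x2 = rng.randrange(MOD)
    x3 = (x - x1 - x2) % MOD
    return [(x1, x2), (x2, x3), (x3, x1)]

def reconstruct(pair_i, pair_next):
    """Reconstruct from two CONSECUTIVE parties (i, i+1): between them
    they hold all three additive shares exactly once."""
    return (pair_i[0] + pair_i[1] + pair_next[1]) % MOD

def add_shared(sx, sy):
    """Addition is local: each party adds its pairs component-wise with
    no communication -- communication is paid only on multiplications."""
    return [((a + c) % MOD, (b + d) % MOD)
            for (a, b), (c, d) in zip(sx, sy)]

x_shares = share(123456789)
y_shares = share(987654321)
z_shares = add_shared(x_shares, y_shares)
assert reconstruct(x_shares[0], x_shares[1]) == 123456789
assert reconstruct(z_shares[1], z_shares[2]) == (123456789 + 987654321) % MOD
```

    The replication (each additive share held by two parties) is what lets the parties cross-check each other, and it is this redundancy that the distributed zero-knowledge proofs exploit to detect a malicious party cheaply.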

    Finding Common Ground When Experts Disagree: Robust Portfolio Decision Analysis


    Loss-of-function mutations in UDP-Glucose 6-Dehydrogenase cause recessive developmental epileptic encephalopathy

    Developmental epileptic encephalopathies are devastating disorders characterized by intractable epileptic seizures and developmental delay. Here, we report an allelic series of germline recessive mutations in UGDH in 36 cases from 25 families presenting with epileptic encephalopathy with developmental delay and hypotonia. UGDH encodes an oxidoreductase that converts UDP-glucose to UDP-glucuronic acid, a key component of specific proteoglycans and glycolipids. Consistent with these being loss-of-function alleles, we show, using patients' primary fibroblasts and biochemical assays, that these mutations either impair UGDH stability, oligomerization, or enzymatic activity. In vitro, patient-derived cerebral organoids are smaller, with a reduced number of proliferating neuronal progenitors, while mutant ugdh zebrafish do not phenocopy the human disease. Our study defines UGDH as a key player in the production of extracellular matrix components that are essential for human brain development. Based on the incidence of variants observed, UGDH mutations are likely to be a frequent cause of recessive epileptic encephalopathy.

    The decision rule approach to optimization under uncertainty: methodology and applications

    Dynamic decision-making under uncertainty has a long and distinguished history in operations research. Due to the curse of dimensionality, solution schemes that naïvely partition or discretize the support of the random problem parameters are limited to small and medium-sized problems, or they require restrictive modeling assumptions (e.g., absence of recourse actions). In the last few decades, several solution techniques have been proposed that aim to alleviate the curse of dimensionality. Amongst these is the decision rule approach, which faithfully models the random process and instead approximates the feasible region of the decision problem. In this paper, we survey the major theoretical findings relating to this approach, and we investigate its potential in two application areas.
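    The core idea can be illustrated on a toy two-stage problem (the numbers and the cost function are made up for illustration, not one of the paper's applications): a linear decision rule restricts the recourse decision to an affine function y(xi) = y0 + y1 * xi of the uncertainty, so the infinite-dimensional policy search collapses to choosing the finite coefficients (y0, y1).

```python
def worst_case_cost(y0, y1, scenarios, cost):
    """Worst-case cost of the affine rule y(xi) = y0 + y1 * xi over a
    finite scenario set."""
    return max(cost(y0 + y1 * xi, xi) for xi in scenarios)

def cost(y, xi):
    """Toy recourse cost: oversupply (y > xi) costs 1 per unit,
    shortage (y < xi) costs 3 per unit."""
    return 1.0 * max(y - xi, 0.0) + 3.0 * max(xi - y, 0.0)

scenarios = [0.0, 1.0, 2.0, 3.0]

# A static rule (y1 = 0) must hedge against all scenarios at once...
static = worst_case_cost(1.5, 0.0, scenarios, cost)
# ...while this affine rule tracks the uncertainty exactly here.
adaptive = worst_case_cost(0.0, 1.0, scenarios, cost)
assert adaptive < static
```

    In general the affine restriction is an approximation: it buys tractability (the coefficients can often be found by solving a single linear program) at the price of conservatism whenever the truly optimal recourse is nonlinear in the uncertainty.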