
    Intracellular Assembly of Interacting Enzymes Yields Highly‐Active Nanoparticles for Flow Biocatalysis

    All-enzyme hydrogel (AEH) particles with a hydrodynamic diameter of up to 120 nm were produced intracellularly with an Escherichia coli-based in vivo system. The inCell-AEH nanoparticles were generated from polycistronic vectors enabling simultaneous expression of two interacting enzymes, the Lactobacillus brevis alcohol dehydrogenase (ADH) and the Bacillus subtilis glucose-1-dehydrogenase (GDH), fused with a SpyCatcher or SpyTag, respectively. Formation of inCell-AEH was analyzed by dynamic light scattering and atomic force microscopy. Using the stereoselective two-step reduction of a prochiral diketone substrate, we show that the inCell-AEH approach can be used advantageously in whole-cell flow biocatalysis: flow reactors could be operated for >4 days under constant substrate perfusion. More importantly, the inCell-AEH concept enables the recovery of efficient catalyst materials for stable flow bioreactors in a simple and economical one-step procedure from crude bacterial lysates. We believe that our method will contribute to further optimization of sustainable biocatalytic processes.

    Schematic diagram of the experimental setup.

    (A) heated platform, (B) thermostatic water bath, (C) thermal insulating bridges, and (D) unheated platform.

    Certifiably Adversarially Robust Detection of Out-of-Distribution Data

    Deep neural networks are known to be overconfident when applied to out-of-distribution (OOD) inputs which clearly do not belong to any class. This is a problem in safety-critical applications since a reliable assessment of the uncertainty of a classifier is a key property, allowing the system to trigger human intervention or to transfer into a safe state. In this paper, we aim for certifiable worst-case guarantees for OOD detection by enforcing low confidence not only at the OOD point but also in an l∞-ball around it. For this purpose, we use interval bound propagation (IBP) to upper bound the maximal confidence in the l∞-ball and minimize this upper bound during training time. We show that non-trivial bounds on the confidence for OOD data, generalizing beyond the OOD dataset seen at training time, are possible. Moreover, in contrast to certified adversarial robustness, which typically comes with a significant loss in prediction performance, certified guarantees for worst-case OOD detection are possible without much loss in accuracy.

    Comment: Published and presented at NeurIPS 2020. Code available at https://gitlab.com/Bitterwolf/GOOD. v3: added missing acknowledgement.
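
    The training objective described in the abstract is easy to sketch: propagate the interval [x − ε, x + ε] through the network to obtain bounds on the logits, turn those into an upper bound on the softmax confidence, and minimize that bound on OOD data. The PyTorch sketch below illustrates the idea; the architecture, the weighting factor kappa, and the random data are illustrative assumptions, not the configuration from the paper (whose released code lives in the GOOD repository linked above).

```python
# Minimal IBP sketch for certified low confidence on OOD data.
# Hypothetical architecture and hyperparameters; not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IBPNet(nn.Module):
    def __init__(self, in_dim=784, hidden=512, num_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, num_classes)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x)))

    def logit_bounds(self, x, eps):
        """Propagate the box [x - eps, x + eps] through the network (IBP)."""
        lb, ub = x - eps, x + eps
        for layer, apply_relu in ((self.fc1, True), (self.fc2, False)):
            mid, rad = (ub + lb) / 2, (ub - lb) / 2
            mid = layer(mid)                        # W @ mid + b
            rad = rad @ layer.weight.abs().t()      # |W| @ rad
            lb, ub = mid - rad, mid + rad
            if apply_relu:
                lb, ub = F.relu(lb), F.relu(ub)     # ReLU is monotone
        return lb, ub

def max_conf_upper_bound(lb, ub):
    """Upper bound on max_c softmax_c(z) over all z in the logit box:
    play each class's upper logit bound against the lower bounds of the rest."""
    n = lb.size(1)
    per_class = []
    for c in range(n):
        others = torch.cat([lb[:, :c], lb[:, c + 1:]], dim=1)
        z = torch.cat([ub[:, c:c + 1], others], dim=1)
        per_class.append(F.softmax(z, dim=1)[:, 0])
    return torch.stack(per_class, dim=1).max(dim=1).values

# Training step (sketch): standard cross-entropy on in-distribution data plus
# the certified confidence bound on OOD data, weighted by a hypothetical kappa.
model, kappa, eps = IBPNet(), 1.0, 0.01
x_in, y_in = torch.randn(8, 784), torch.randint(0, 10, (8,))
x_ood = torch.randn(8, 784)
lb, ub = model.logit_bounds(x_ood, eps)
loss = F.cross_entropy(model(x_in), y_in) \
       + kappa * torch.log(max_conf_upper_bound(lb, ub)).mean()
loss.backward()
```

    Because the loss penalizes an upper bound, driving it down certifies that every point in the ε-ball around the OOD sample has low confidence, not just the sampled point itself.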

    Provably Robust Detection of Out-of-distribution Data (almost) for free

    When applying machine learning in safety-critical systems, a reliable assessment of the uncertainty of a classifier is required. However, deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data, and even if trained to be non-confident on OOD data, one can still adversarially manipulate OOD data so that the classifier again assigns high confidence to the manipulated samples. In this paper we propose a novel method where, from first principles, we combine a certifiable OOD detector with a standard classifier into an OOD-aware classifier. In this way we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data. Moreover, due to the particular construction, our classifier provably avoids the asymptotic overconfidence problem of standard neural networks.
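
    The composition itself can be sketched in a few lines: run a standard K-class classifier alongside a binary in-vs-out discriminator and scale the class probabilities by the discriminator's in-distribution probability. The combination rule below is an illustrative assumption about how such a composition can work, not the paper's exact construction.

```python
# Sketch of an OOD-aware classifier built from a standard classifier and a
# (certifiably robust) binary in-vs-out detector. The multiplicative
# combination rule is an illustrative assumption.
import torch
import torch.nn as nn

class OODAwareClassifier(nn.Module):
    def __init__(self, classifier: nn.Module, detector: nn.Module):
        super().__init__()
        self.classifier = classifier  # standard K-class network
        self.detector = detector      # emits one "in-distribution" logit

    def forward(self, x):
        p_class = torch.softmax(self.classifier(x), dim=1)  # (B, K) class probs
        p_in = torch.sigmoid(self.detector(x))              # (B, 1) P(in-dist.)
        # On OOD inputs a certified detector drives p_in toward 0, so every
        # class confidence is provably low; on in-distribution inputs
        # (p_in close to 1) the confidences are essentially unchanged.
        return p_class * p_in
```

    Scaling all class probabilities by the same factor never changes the argmax, so the base classifier's predictions, and hence its accuracy, are untouched; only the reported confidence is capped by the detector, which is one way to read the "(almost) for free" in the title.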