
    Donor to recipient age matching in lung transplantation: A European experience

    © 2024 The Authors. Background: The age profiles of organ donors and of patients on lung transplantation (LT) waiting lists have changed over time. In Europe, the donor population has aged much more rapidly than the recipient population, making allocation decisions on lungs from older donors common. In this study we assessed the impact of donor and recipient age discrepancy on LT outcomes in the UK and France. Methods: A retrospective analysis of all adult single or bilateral LT in France and the UK between 2010 and 2021. Recipients were stratified into 3 age groups: young (≤30 years), middle-aged (30–60 years) and older (≥60 years). Their donors were also stratified into 2 groups: <60 and ≥60 years. Primary graft dysfunction (PGD) rates and recipient survival were compared between matched and mismatched donor and recipient age groups. Propensity matching was employed to minimize covariate imbalances and to improve the internal validity of our results. Results: Our study cohort comprised 4,696 lung transplant recipients (LTRs). In young and older LTRs, there was no significant difference in 1- and 5-year post-transplant survival dependent on the age category of the donor. Young LTRs who received older donor grafts had a higher risk of severe grade 3 PGD. Conclusion: Our findings show that clinically usable organs from older donors can be utilized safely in LT, even for younger recipients. Further research is needed to assess whether the higher rate of PGD3 associated with the use of older donors affects long-term outcomes.
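
    To make the matching step concrete, the sketch below shows 1:1 propensity-score matching of the kind described above, written with scikit-learn. It is an illustrative reconstruction, not the study's actual pipeline, and the treatment flag and covariate column names are hypothetical placeholders.

    ```python
    # Minimal 1:1 propensity-score matching sketch (hypothetical column names).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    def propensity_match(df, treatment, covariates):
        # Estimate the propensity score P(treated | covariates) with logistic regression.
        model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treatment])
        score = model.predict_proba(df[covariates])[:, 1]

        treated_idx = np.where(df[treatment] == 1)[0]
        control_idx = np.where(df[treatment] == 0)[0]

        # Greedy 1:1 nearest-neighbour matching on the score (with replacement, for brevity).
        nn = NearestNeighbors(n_neighbors=1).fit(score[control_idx].reshape(-1, 1))
        _, match = nn.kneighbors(score[treated_idx].reshape(-1, 1))
        matched_controls = control_idx[match.ravel()]

        # Matched cohort: treated rows plus their matched controls.
        return df.iloc[np.concatenate([treated_idx, matched_controls])]

    # Hypothetical usage: build a matched cohort before comparing PGD rates and survival.
    # matched = propensity_match(cohort, "older_donor", ["recipient_age", "bmi", "ischaemic_time"])
    ```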

    Complexes of Iron(II) with silylated pentalene ligands; building blocks for homo- and heterobimetallics

    A range of iron(II) complexes incorporating the silylated pentalene ligands (Pn†H = 1,4-{SiiPr3}2C8H5 and Pn† = 1,4-{SiiPr3}2C8H4) have been investigated as model molecules/building blocks for metallocene-based polymers. Six complexes have been synthesised and extensively characterised by a range of techniques, including cyclic voltammetry and X-ray diffraction studies. Amongst these compounds are the homobimetallic [Cp∗Fe]2(μ-Pn†), which is a fused analogue of biferrocene, and the 3d/4s heterobimetallic [Cp∗Fe(η5-Pn†)][K], which forms an organometallic polymer in the solid state. DFT calculations on model mono-Fe(η5-Pn) compounds reveal the charge densities on the uncoordinated carbon atoms of the pentalene ligand, and hence the potential for incorporating these units into heteronuclear bimetallic complexes is assessed.

    A base-free synthetic route to anti-bimetallic lanthanide pentalene complexes

    We report the synthesis and structural characterisation of three homobimetallic complexes featuring divalent lanthanide metals (Ln = Yb, Eu and Sm) bridged by the silylated pentalene ligand [1,4-{SiiPr3}2C8H4]2− (= Pn†). Magnetic measurements and cyclic voltammetry have been used to investigate the extent of intermetallic communication in these systems, in the context of molecular models for organolanthanide-based conducting materials.

    Formal verification of neural networks

    Machine learning models, and in particular Deep Neural Networks, are being deployed in an ever-increasing number of applications, making it crucial to develop methods capable of verifying their behaviour. To go beyond treating them as black boxes, we study the feasibility of verifying that certain properties of the mapping that the models encode always hold, and we propose algorithms to perform this verification. We advocate developing these methods through the lens of optimisation. By reformulating the verification of properties as optimisation problems over Neural Networks, we introduce a unified framework to reason about algorithms and identify the necessary components of verification systems. An attempt at verifying a property either results in a formal proof that the property holds or in concrete examples of cases where it is violated. The current state of the art in verification of machine learning artifacts is constrained both by the restricted applicability of existing techniques and by their limited scalability. With the aim of making it more feasible to prove statements over widely used models, we present general methods that increase the diversity of verifiable properties and of architectures amenable to verification. In order to get closer to the scale of models that power industrial applications, we refine the heuristics used during the proving process and introduce algorithmic improvements to the optimisation algorithms underlying the methods, leading to order-of-magnitude speedups. Benchmarks empirically validate our runtime improvement claims, showing our contribution to addressing the limited scalability of formal methods in the context of Neural Networks.
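
    As a toy illustration of casting verification as a bounding problem over a network, the sketch below uses interval arithmetic to bound the output of a small ReLU network over an input box. The weights and the property are arbitrary illustrative choices, not an algorithm taken from the thesis.

    ```python
    # Toy interval bound propagation through a small ReLU network (illustrative weights).
    import numpy as np

    def interval_bounds(layers, lb, ub):
        """Propagate elementwise input bounds [lb, ub] through affine + ReLU layers."""
        for i, (W, b) in enumerate(layers):
            W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
            new_lb = W_pos @ lb + W_neg @ ub + b     # worst case picked per weight sign
            new_ub = W_pos @ ub + W_neg @ lb + b
            if i < len(layers) - 1:                  # ReLU on hidden layers only
                new_lb, new_ub = np.maximum(new_lb, 0.0), np.maximum(new_ub, 0.0)
            lb, ub = new_lb, new_ub
        return lb, ub

    # f(x) = W2 @ ReLU(W1 @ x) + b2 on the input box [0, 1]^2.
    layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)),
              (np.array([[1.0, 1.0]]), np.array([-3.0]))]
    lb, ub = interval_bounds(layers, np.zeros(2), np.ones(2))

    # The property "f(x) <= 1 for every x in the box" is proved if the upper bound is <= 1;
    # a looser bound would be inconclusive and require a tighter relaxation or a search.
    print(ub, ub <= 1.0)
    ```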

    Efficient continuous relaxations for dense CRF

    Dense conditional random fields (CRFs) with Gaussian pairwise potentials have emerged as a popular framework for several computer vision applications such as stereo correspondence and semantic segmentation. By modeling long-range interactions, dense CRFs provide a more detailed labelling compared to their sparse counterparts. Variational inference in these dense models is performed using a filtering-based mean-field algorithm in order to obtain a fully-factorized distribution minimising the Kullback-Leibler divergence to the true distribution. In contrast to the continuous relaxation-based energy minimisation algorithms used for sparse CRFs, the mean-field algorithm fails to provide strong theoretical guarantees on the quality of its solutions. To address this deficiency, we show that it is possible to use the same filtering approach to speed up the optimisation of several continuous relaxations. Specifically, we solve a convex quadratic programming (QP) relaxation using the efficient Frank-Wolfe algorithm. This also allows us to solve difference-of-convex relaxations via the iterative concave-convex procedure, where each iteration requires solving a convex QP. Finally, we develop a novel divide-and-conquer method to compute the subgradients of a linear programming relaxation that provides the best theoretical bounds for energy minimisation. We demonstrate the advantage of continuous relaxations over the widely used mean-field algorithm on publicly available datasets.
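
    The toy sketch below shows why Frank-Wolfe is attractive for such relaxations: over the product of per-pixel simplices, the linear minimisation oracle reduces to a per-pixel argmin. The quadratic objective here is a small placeholder; the paper's specific convex QP and its filtering-based gradient computation are not reproduced.

    ```python
    # Generic Frank-Wolfe over a product of per-pixel simplices (toy quadratic objective).
    import numpy as np

    def frank_wolfe(c, Q, n_pixels, n_labels, iters=200):
        """Minimise f(x) = c.x + 0.5 * x.Q.x with each pixel's label vector on the simplex."""
        x = np.full(n_pixels * n_labels, 1.0 / n_labels)     # uniform initial labelling
        for t in range(iters):
            grad = c + Q @ x
            # Linear minimisation oracle: per pixel, put all mass on the lowest-gradient label.
            s = np.zeros((n_pixels, n_labels))
            s[np.arange(n_pixels), grad.reshape(n_pixels, n_labels).argmin(axis=1)] = 1.0
            s = s.ravel()
            gamma = 2.0 / (t + 2.0)                           # standard diminishing step size
            x = (1.0 - gamma) * x + gamma * s
        return x.reshape(n_pixels, n_labels)

    # Toy problem: 3 pixels, 2 labels, unary costs favouring labels (0, 0, 1), tiny quadratic term.
    c = np.array([0.0, 1.0, 0.2, 0.8, 1.0, 0.0])
    Q = 0.1 * np.eye(6)                                       # stand-in for a PSD pairwise term
    print(frank_wolfe(c, Q, n_pixels=3, n_labels=2).round(2))
    ```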

    Adaptive neural compilation

    This paper proposes an adaptive neural-compilation framework to address the problem of efficient program learning. Traditional code optimisation strategies used in compilers are based on applying a pre-specified set of transformations that make the code faster to execute without changing its semantics. In contrast, our work involves adapting programs to make them more efficient while considering correctness only on a target input distribution. Our approach is inspired by recent work on differentiable representations of programs. We show that it is possible to compile programs written in a low-level language to a differentiable representation. We also show how programs in this representation can be optimised to make them efficient on a target distribution of inputs. Experimental results demonstrate that our approach enables learning specifically-tuned algorithms for given data distributions with a high success rate.
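
    A toy sketch of the underlying idea of a differentiable program representation follows: each program step holds a soft, learnable choice over a few primitive operations, so the interpreter is differentiable end to end and can be tuned on example input-output pairs. The machine model, operations and training loop here are illustrative only, not the paper's.

    ```python
    # Toy differentiable "program": each step is a soft choice over primitive operations.
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    OPS = [lambda r: r + 1.0, lambda r: r - 1.0, lambda r: r]          # INC, DEC, NOOP

    def run(program_logits, r):
        # Each step executes a convex combination of the primitive ops' results.
        for logits in program_logits:
            w = softmax(logits)
            r = sum(wk * op(r) for wk, op in zip(w, OPS))
        return r

    inputs = np.array([0.0, 1.0, 5.0])
    target = lambda x: x + 2.0                                          # desired behaviour

    def loss(logits):
        return float(np.mean([(run(logits, x) - target(x)) ** 2 for x in inputs]))

    # Crude finite-difference gradient descent stands in for backpropagation here.
    logits, eps, lr = np.zeros((2, 3)), 1e-4, 0.5
    for _ in range(300):
        grad = np.zeros_like(logits)
        base = loss(logits)
        for idx in np.ndindex(*logits.shape):
            pert = logits.copy()
            pert[idx] += eps
            grad[idx] = (loss(pert) - base) / eps
        logits -= lr * grad

    print(run(logits, 10.0))    # approaches 12.0 as both steps converge to INC
    ```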

    A Unified view of piecewise linear neural network verification

    The success of Deep Learning and its potential use in many safety-critical applications has motivated research on formal verification of Neural Network (NN) models. Despite the reputation of learned NN models for behaving as black boxes and the theoretical hardness of proving their properties, researchers have been successful in verifying some classes of models by exploiting their piecewise linear structure and taking insights from formal methods such as Satisfiability Modulo Theories. These methods are, however, still far from scaling to realistic neural networks. To facilitate progress in this crucial area, we make two key contributions. First, we present a unified framework that encompasses previous methods. This analysis results in the identification of new methods that combine the strengths of multiple existing approaches, accomplishing a speedup of two orders of magnitude compared to the previous state of the art. Second, we propose a new data set of benchmarks, which includes a collection of previously released test cases. We use the benchmark to provide the first experimental comparison of existing algorithms and to identify the factors impacting the hardness of verification problems.
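
    The sketch below shows a generic branch-and-bound skeleton of the kind such a unifying view can be built around: compute a cheap lower bound over a region, certify or split, and report a concrete counter-example if one is found. The bounding function is a placeholder; real verifiers plug in tighter relaxations (LP and others) and can also branch on ReLU activations.

    ```python
    # Generic branch-and-bound skeleton for proving "f(x) > 0 on the box [lb, ub]".
    import numpy as np

    def branch_and_bound(lower_bound_fn, f, lb, ub, eps=1e-3, max_boxes=10_000):
        """Return True (proved), False (counter-example found) or None (undecided)."""
        stack, visited = [(lb, ub)], 0
        while stack:
            visited += 1
            if visited > max_boxes:
                return None                        # budget exhausted
            lo, hi = stack.pop()
            if lower_bound_fn(lo, hi) > 0:
                continue                           # cheap bound certifies this sub-box
            mid = 0.5 * (lo + hi)
            if f(mid) <= 0:
                return False                       # concrete counter-example
            if np.max(hi - lo) < eps:
                return None                        # too small to split further: undecided
            d = int(np.argmax(hi - lo))            # branch on the widest input dimension
            hi_left, lo_right = hi.copy(), lo.copy()
            hi_left[d] = lo_right[d] = mid[d]
            stack += [(lo, hi_left), (lo_right, hi)]
        return True                                # every sub-box was certified

    # Toy usage: a trivially sound interval lower bound for f(x) = x0 + x1 + 0.1 on [0, 1]^2.
    f = lambda x: x[0] + x[1] + 0.1
    lower_bound = lambda lo, hi: lo[0] + lo[1] + 0.1
    print(branch_and_bound(lower_bound, f, np.zeros(2), np.ones(2)))   # True
    ```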

    Learning to superoptimize programs

    Code super-optimization is the task of transforming any given program into a more efficient version while preserving its input-output behaviour. In some sense, it is similar to the paraphrase problem from natural language processing, where the intention is to change the syntax of an utterance without changing its semantics. Code optimization has been the subject of years of research that has resulted in the development of rule-based transformation strategies that are used by compilers. More recently, however, a class of stochastic search-based methods has been shown to outperform these strategies. This approach involves repeated sampling of modifications to the program from a proposal distribution, which are accepted or rejected based on whether they preserve correctness and on the improvement they achieve. These methods, however, neither learn from past behaviour nor try to leverage the semantics of the program under consideration. Motivated by this observation, we present a novel learning-based approach for code super-optimization. Intuitively, our method works by learning the proposal distribution using unbiased estimators of the gradient of the expected improvement. Experiments on benchmarks comprising automatically generated as well as existing ("Hacker's Delight") programs show that the proposed method is able to significantly outperform state-of-the-art approaches for code super-optimization.
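
    A toy sketch of the core learning idea follows: parameterise the proposal distribution over rewrite moves as a categorical and update its logits with the score-function (REINFORCE) estimator of the gradient of expected improvement. The moves and their improvements below are synthetic placeholders, not real program transformations.

    ```python
    # Toy REINFORCE update of a categorical proposal distribution over rewrite moves.
    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def improvement(move):
        # Placeholder for: apply the move, check correctness, measure the speedup.
        # Here move 2 is (noisily) the most useful rewrite.
        means = np.array([0.0, 0.1, 1.0, -0.2])
        return rng.normal(means[move], 0.1)

    logits, baseline, lr = np.zeros(4), 0.0, 0.5
    for step in range(2000):
        p = softmax(logits)
        move = rng.choice(4, p=p)                  # sample a move from the proposal
        r = improvement(move)
        baseline += 0.01 * (r - baseline)          # running baseline reduces variance
        grad_logp = -p                             # d log p(move) / d logits = onehot - p
        grad_logp[move] += 1.0
        logits += lr * (r - baseline) * grad_logp  # ascent on expected improvement

    print(softmax(logits).round(3))                # probability mass concentrates on move 2
    ```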
