
    Effect of Local Fibrinogen Administration on Postoperative Bleeding in Coronary Artery Bypass Graft Patients

    Background: There are always concerns regarding postoperative bleeding in coronary artery bypass graft (CABG) patients, and several different strategies have been used to compensate for it and to reduce the amount of blood transfusion. We designed this study to investigate the effect of local fibrinogen administration on postoperative bleeding in adult patients undergoing CABG. Materials and Methods: In a double-blind clinical trial, after applying the inclusion and exclusion criteria, 50 patients entered the study. Pre- and postoperative data, including clinical and laboratory variables, were assessed; among them, preoperative and postoperative fibrinogen levels were measured. One group received 1 g of fibrinogen as a 50 mL solution flushed over the epicardium before sternal closure; the other group received 50 mL of normal saline as a placebo administered in the same manner. Results: There was no difference between the two groups in preoperative or postoperative fibrinogen levels, and the two groups were also comparable regarding PT, PTT, INR, and platelet count. However, bleeding was lower and the hematocrit level was higher in the local fibrinogen group. Conclusion: Local fibrinogen administration could decrease postoperative bleeding in CABG patients, leading to a decreased need for blood transfusion.

    Text-to-SQL Error Correction with Language Models of Code

    Despite recent progress in text-to-SQL parsing, current semantic parsers are still not accurate enough for practical use. In this paper, we investigate how to build automatic text-to-SQL error correction models. Noticing that token-level edits are out of context and sometimes ambiguous, we propose building clause-level edit models instead. Besides, while most language models of code are not specifically pre-trained for SQL, they know common data structures and their operations in programming languages such as Python. Thus, we propose a novel representation for SQL queries and their edits that adheres more closely to the pre-training corpora of language models of code. Our error correction model improves the exact set match accuracy of different parsers by 2.4-6.5 points and obtains up to a 4.3-point absolute improvement over two strong baselines. Our code and data are available at https://github.com/OSU-NLP-Group/Auto-SQL-Correction. Comment: ACL 2023 Short Paper.
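
    To make the clause-level idea concrete, here is a minimal Python sketch of what such a representation might look like: the query is a small data structure with one field per clause, and an edit swaps out a whole clause rather than patching individual tokens. The class and helper below are hypothetical illustrations, not the paper's actual representation or edit format.

# A minimal, hypothetical sketch of clause-level SQL edits: the query is a
# Python data structure whose fields correspond to clauses, so a correction
# replaces an entire clause instead of single tokens. Not the paper's format.

from dataclasses import dataclass, field
from typing import List


@dataclass
class SQLQuery:
    """A SQL query broken into clause-level fields (hypothetical schema)."""
    select: List[str] = field(default_factory=list)
    from_: List[str] = field(default_factory=list)
    where: List[str] = field(default_factory=list)
    group_by: List[str] = field(default_factory=list)
    order_by: List[str] = field(default_factory=list)


def apply_clause_edit(query: SQLQuery, clause: str, new_value: List[str]) -> SQLQuery:
    """Replace one whole clause, keeping the other clauses unchanged (shallow copy)."""
    edited = SQLQuery(**vars(query))
    setattr(edited, clause, new_value)
    return edited


# Example: the parser predicted the wrong WHERE clause; the corrector
# rewrites that clause as a unit.
pred = SQLQuery(select=["name"], from_=["singer"], where=["age > 20"])
fixed = apply_clause_edit(pred, "where", ["country = 'France'"])
print(fixed)

    One motivation for this kind of structure, as the abstract suggests, is that it resembles the Python-style data structures code language models see during pre-training.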

    Learning Nonlinear Loop Invariants with Gated Continuous Logic Networks (Extended Version)

    Verifying real-world programs often requires inferring loop invariants with nonlinear constraints. This is especially true in programs that perform many numerical operations, such as control systems for avionics or industrial plants. Recently, data-driven methods for loop invariant inference have shown promise, especially on linear invariants. However, applying data-driven inference to nonlinear loop invariants is challenging due to the large number and magnitude of high-order terms, the potential for overfitting on a small number of samples, and the large space of possible inequality bounds. In this paper, we introduce a new neural architecture for general SMT learning, the Gated Continuous Logic Network (G-CLN), and apply it to nonlinear loop invariant learning. G-CLNs extend the Continuous Logic Network (CLN) architecture with gating units and dropout, which allow the model to robustly learn general invariants over large numbers of terms. To address the overfitting that arises from finite program sampling, we introduce fractional sampling: a sound relaxation of loop semantics to continuous functions that facilitates unbounded sampling on the real domain. We additionally design a new CLN activation function, the Piecewise Biased Quadratic Unit (PBQU), for naturally learning tight inequality bounds. We incorporate these methods into a nonlinear loop invariant inference system that can learn general nonlinear loop invariants. We evaluate our system on a benchmark of nonlinear loop invariants and show it solves 26 out of 27 problems, 3 more than prior work, with an average runtime of 53.3 seconds. We further demonstrate the generic learning ability of G-CLNs by solving all 124 problems in the linear Code2Inv benchmark. We also perform a quantitative stability evaluation and show G-CLNs have a convergence rate of 97.5% on quadratic problems, a 39.2% improvement over CLN models.
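
    The gating idea can be illustrated with a short, hypothetical PyTorch sketch: continuous truth values of candidate terms are combined with a product t-norm, and a learned gate per term (plus dropout) lets the model effectively ignore irrelevant terms. This is an assumed formulation for exposition only; it is not the paper's exact architecture and it does not include the PBQU activation.

# A minimal sketch (assumed formulation) of a gated continuous conjunction in
# the spirit of G-CLNs: gates near 0 make a term contribute ~1 to the product,
# so that term is effectively dropped from the learned conjunction.

import torch
import torch.nn as nn


class GatedConjunction(nn.Module):
    def __init__(self, n_terms: int, dropout: float = 0.3):
        super().__init__()
        self.gate_logits = nn.Parameter(torch.zeros(n_terms))
        self.dropout = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_terms) continuous truth values in [0, 1]
        g = torch.sigmoid(self.gate_logits)   # per-term gates in (0, 1)
        g = self.dropout(g)                   # randomly drop terms during training
        # Product t-norm with gating: a gated-off term contributes ~1.
        return torch.prod(1.0 - g * (1.0 - x), dim=-1)


truth_values = torch.rand(8, 5)               # 8 samples, 5 candidate terms
layer = GatedConjunction(n_terms=5)
print(layer(truth_values).shape)              # torch.Size([8])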

    Differentiable neural logic networks and their application onto inductive logic programming

    Despite the impressive performance of Deep Neural Networks (DNNs), they usually lack the explanatory power of disciplines such as logic programming. Even though they can learn to solve very difficult problems, the learning is usually implicit, and it is very difficult, if not impossible, to interpret the underlying explanations that are implicitly stored in the weights of the neural network models. On the other hand, standard logic programming is usually limited in scope and application compared to DNNs. The objective of this dissertation is to bridge the gap between these two disciplines by presenting a novel paradigm for learning algorithmic and discrete tasks via neural networks. This novel approach uses differentiable neural networks to design interpretable and explanatory models that can learn and represent Boolean functions efficiently. We investigate the application of these differentiable Neural Logic (dNL) networks in disciplines such as Inductive Logic Programming (ILP) and Relational Reinforcement Learning, as well as in discrete algorithmic tasks such as decoding LDPC codes over Binary Erasure Channels. In particular, in this dissertation we reformulate ILP as a differentiable neural network by exploiting the explanatory power of dNL networks, and we show that the proposed dNL-ILP outperforms current state-of-the-art ILP solvers in a variety of benchmark tasks. We further show that the proposed differentiable ILP solver can be effectively combined with standard deep learning techniques to formulate a relational reinforcement learning framework. Via experiments, we demonstrate that the proposed deep relational policy learning framework can incorporate human expertise to learn efficient policies directly from images and outperforms traditional RRL systems in some tasks.
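
    As a rough illustration of the dNL idea, the sketch below implements differentiable conjunction and disjunction neurons with learned membership weights and trains a small stack of them on XOR. The parameterization and the toy task are assumptions made for exposition; the dissertation's exact formulation may differ.

# A hedged sketch of dNL-style neurons: a conjunction neuron computes
# prod_i (1 - m_i * (1 - x_i)) and a disjunction neuron 1 - prod_i (1 - m_i * x_i),
# where each membership weight m_i = sigmoid(w_i) selects which inputs take part.

import torch
import torch.nn as nn


class ConjunctionNeuron(nn.Module):
    def __init__(self, n_in: int, n_out: int):
        super().__init__()
        self.w = nn.Parameter(torch.randn(n_out, n_in))

    def forward(self, x):                        # x: (batch, n_in) in [0, 1]
        m = torch.sigmoid(self.w)                # memberships in (0, 1)
        return torch.prod(1 - m * (1 - x.unsqueeze(1)), dim=-1)   # (batch, n_out)


class DisjunctionNeuron(nn.Module):
    def __init__(self, n_in: int, n_out: int):
        super().__init__()
        self.w = nn.Parameter(torch.randn(n_out, n_in))

    def forward(self, x):
        m = torch.sigmoid(self.w)
        return 1 - torch.prod(1 - m * x.unsqueeze(1), dim=-1)


# Toy usage: learn XOR from inputs augmented with their negations.
a = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
x = torch.cat([a, 1 - a], dim=-1)                # [a, b, not a, not b]
y = torch.tensor([0., 1., 1., 0.])

model = nn.Sequential(ConjunctionNeuron(4, 2), DisjunctionNeuron(2, 1))
opt = torch.optim.Adam(model.parameters(), lr=0.1)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy(model(x).squeeze(-1), y)
    loss.backward()
    opt.step()
print(model(x).squeeze(-1))                      # should approach [0, 1, 1, 0]

    Because the learned memberships stay close to 0 or 1 after training, the resulting conjunctions and disjunctions can be read back as an explicit Boolean formula, which is the interpretability argument the abstract makes.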

    Mitigating Group Bias in Federated Learning: Beyond Local Fairness

    The issue of group fairness in machine learning models, where certain sub-populations or groups are favored over others, has been recognized for some time. While many mitigation strategies have been proposed in centralized learning, many of these methods are not directly applicable in federated learning, where data is privately stored on multiple clients. To address this, many proposals try to mitigate bias at the level of clients before aggregation, which we call locally fair training. However, the effectiveness of these approaches is not well understood. In this work, we investigate the theoretical foundation of locally fair training by studying the relationship between global model fairness and local model fairness. Additionally, we prove that for a broad class of fairness metrics, the global model's fairness can be obtained using only summary statistics from local clients. Based on that, we propose a globally fair training algorithm that directly minimizes the penalized empirical loss. Real-data experiments demonstrate the promising performance of our proposed approach for enhancing fairness while retaining high accuracy, compared to locally fair training methods.
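
    The summary-statistics observation can be illustrated with a small, hypothetical Python sketch: for a metric such as demographic parity, each client reports only per-group counts under the current global model, and the server combines them to measure global fairness. The names and example numbers below are illustrative and are not taken from the paper.

# A hedged sketch: global demographic parity computed purely from per-client
# summary counts (samples per group, positive predictions per group).

from dataclasses import dataclass
from typing import List


@dataclass
class ClientStats:
    pos: dict     # group -> positive predictions made by the global model
    total: dict   # group -> number of samples in that group


def global_demographic_parity_gap(stats: List[ClientStats], groups=("a", "b")) -> float:
    """Difference in pooled positive-prediction rates between the two groups."""
    rates = []
    for g in groups:
        pos = sum(s.pos.get(g, 0) for s in stats)
        total = sum(s.total.get(g, 0) for s in stats)
        rates.append(pos / max(total, 1))
    return abs(rates[0] - rates[1])


# Example: each client's per-group rates are equal (0.8 and 0.2 locally),
# yet the pooled global gap is about 0.43.
clients = [
    ClientStats(pos={"a": 40, "b": 8},  total={"a": 50, "b": 10}),
    ClientStats(pos={"a": 2,  "b": 16}, total={"a": 10, "b": 80}),
]
print(global_demographic_parity_gap(clients))

    The toy numbers echo the abstract's point that locally fair training need not yield a globally fair model, while the gap itself is computable from client summaries alone.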