278 research outputs found

    Safety Model Checking with Complementary Approximations

    Full text link
    Formal verification techniques such as model checking are becoming popular in hardware design. SAT-based model checking techniques such as IC3/PDR have achieved significant success in the hardware industry. In this paper, we present a new framework for SAT-based safety model checking, named Complementary Approximate Reachability (CAR). CAR is based on standard reachability analysis, but instead of maintaining a single sequence of reachable-state sets, CAR maintains two sequences of over- and under-approximate reachable-state sets, checking safety and unsafety at the same time. To construct the two sequences, CAR uses standard Boolean-reasoning algorithms based on satisfiability solving: one to find a satisfying cube of a satisfiable Boolean formula, and one to provide a minimal unsatisfiable core of an unsatisfiable Boolean formula. We applied CAR to 548 hardware model-checking instances and compared its performance with IC3/PDR. Our results show that CAR is able to solve 42 instances that cannot be solved by IC3/PDR. When evaluated against a portfolio that includes IC3/PDR and other approaches, CAR is able to solve 21 instances that the other approaches cannot solve. We conclude that CAR should be considered a valuable member of any algorithmic portfolio for safety model checking.
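
    The two SAT queries the abstract refers to can be illustrated with an off-the-shelf solver. The sketch below assumes the python-sat (PySAT) package and is not taken from the CAR implementation; it only shows how a satisfying assignment (from which a cube can be derived) and an unsatisfiable core under assumptions are obtained. The solver-reported core is not guaranteed minimal, so a real implementation would shrink it further.

```python
from pysat.solvers import Minisat22

# Satisfiable query: extract a satisfying assignment, from which a cube
# (a conjunction of literals over the relevant variables) can be taken.
with Minisat22(bootstrap_with=[[1, 2], [-1, 3]]) as solver:
    if solver.solve():
        cube = solver.get_model()      # e.g. [-1, 2, -3]; a full assignment

# Unsatisfiable query under assumptions: extract an unsatisfiable core,
# i.e. a subset of the assumptions that already causes unsatisfiability.
with Minisat22(bootstrap_with=[[-1, -2]]) as solver:
    if not solver.solve(assumptions=[1, 2]):
        core = solver.get_core()       # e.g. [2, 1]; not necessarily minimal
```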

    A case crossover study on the impact of heat waves on non-accidental deaths in Jinan, China

    Get PDF
    Background: Heat waves can not only cause direct deaths from heat stroke but also lead to excess deaths from other illnesses. Identifying the factors contributing to population vulnerability to heat waves is particularly crucial because heat waves affect the most disadvantaged populations, aggravating health disparities. There has been little evidence on the risk of death from heat waves and the factors contributing to population vulnerability in Jinan. Purpose: To assess the impact of heat waves on non-accidental deaths and identify individual vulnerability factors for heat wave-related deaths in Jinan, China.

    Experimenting a New Programming Practice with LLMs

    Full text link
    Recent developments in large language models make it possible to construct small programs automatically. They thus have the potential to free software engineers from low-level coding and allow them to focus on the perhaps more interesting parts of software development, such as requirement engineering and system testing. In this project, we develop a prototype named AISD (AI-aided Software Development), which takes high-level (potentially vague) user requirements as input and generates detailed use cases, prototype system designs, and, subsequently, a system implementation. Unlike existing attempts, AISD is designed to keep the user in the loop, i.e., by repeatedly taking user feedback on use cases, high-level system designs, and prototype implementations through system testing. AISD has been evaluated on a novel benchmark of non-trivial software projects. The experimental results suggest that it might be possible to imagine a future where software engineering is reduced to requirement engineering and system testing only.
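
    As a rough illustration of the user-in-the-loop workflow described above, the sketch below is a hypothetical Python outline, not the actual AISD code; the function names, prompts, and LLM stub are placeholders. Each artifact (use cases, design, implementation) is drafted by an LLM call and revised until the user accepts it.

```python
def generate(prompt: str) -> str:
    # Stand-in for a call to a large language model; a real system would
    # query an LLM API here instead of returning a canned string.
    return f"[LLM draft for: {prompt[:60]}...]"

def refine_until_accepted(artifact: str, context: str) -> str:
    # Draft the artifact, then loop on user feedback until it is accepted.
    draft = generate(f"Produce {artifact} for:\n{context}")
    while True:
        print(f"--- proposed {artifact} ---\n{draft}")
        feedback = input(f"Feedback on the {artifact} (empty to accept): ")
        if not feedback:
            return draft
        draft = generate(f"Revise the {artifact}.\nDraft:\n{draft}\nFeedback: {feedback}")

def aisd_style_pipeline(requirements: str) -> str:
    # Requirements -> use cases -> high-level design -> implementation,
    # with the user reviewing each stage before the next one starts.
    use_cases = refine_until_accepted("use cases", requirements)
    design = refine_until_accepted("high-level system design", use_cases)
    return refine_until_accepted("implementation", design)
```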

    Towards Better Fairness-Utility Trade-off: A Comprehensive Measurement-Based Reinforcement Learning Framework

    Full text link
    Machine learning is widely used to make decisions with societal impact, such as bank loan approval, criminal sentencing, and resume filtering. How to ensure fairness while maintaining utility is a challenging but crucial issue. Fairness is a complex and context-dependent concept with over 70 different measurement metrics. Since existing regulations are often vague about which metric to use and different organizations may prefer different fairness metrics, it is important to have means of improving fairness comprehensively. Existing mitigation techniques often target one specific fairness metric and have limited ability to improve multiple notions of fairness simultaneously. In this work, we propose CFU (Comprehensive Fairness-Utility), a reinforcement learning-based framework that efficiently improves the fairness-utility trade-off in machine learning classifiers. We establish a comprehensive measurement that simultaneously considers multiple fairness notions as well as utility, and we propose new metrics based on an in-depth analysis of the relationships between different fairness metrics. The reward function of CFU is constructed from this comprehensive measurement and the new metrics. We conduct extensive experiments to evaluate CFU on 6 tasks, 3 machine learning models, and 15 fairness-utility measurements. The results demonstrate that CFU can improve a classifier on multiple fairness metrics without sacrificing its utility, outperforming all state-of-the-art techniques with a 37.5% improvement on average.
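
    A minimal sketch of the kind of reward signal such a framework could use, combining a utility term with more than one fairness gap. The specific metrics and the weight below are illustrative assumptions, not the formulation used in the CFU paper.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    # Largest difference in positive-prediction rate between groups.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    # Largest difference in true-positive rate between groups
    # (assumes every group contains at least one positive example).
    rates = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def reward(y_true, y_pred, group, lam=1.0):
    # Utility (accuracy) penalized by the average of several fairness gaps;
    # an RL agent tuning the classifier would try to maximize this value.
    utility = (y_true == y_pred).mean()
    unfairness = np.mean([demographic_parity_gap(y_pred, group),
                          equal_opportunity_gap(y_true, y_pred, group)])
    return utility - lam * unfairness
```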

    SAT-based Explicit LTLf Satisfiability Checking

    Get PDF
    We present a SAT-based framework for LTLf (Linear Temporal Logic on Finite Traces) satisfiability checking. We use propositional SAT-solving techniques to construct a transition system for the input LTLf formula; satisfiability checking is then reduced to a path-search problem over this transition system. Furthermore, we introduce CDLSC (Conflict-Driven LTLf Satisfiability Checking), a novel algorithm that leverages information produced by propositional SAT solvers from both satisfiability and unsatisfiability results. Experimental evaluations show that CDLSC outperforms all other existing approaches for LTLf satisfiability checking, with an approximately four-fold speedup over the second-best solver.
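
    The path-search step can be pictured as ordinary graph search once states and transitions are available; in CDLSC they are produced on the fly from SAT assignments rather than given explicitly. The sketch below is a generic breadth-first search over an explicit transition system, not the CDLSC algorithm itself.

```python
from collections import deque

def find_accepting_path(initial_states, successors, is_accepting):
    # successors: state -> iterable of successor states
    # is_accepting: state -> bool (final states of the transition system)
    queue = deque((s, [s]) for s in initial_states)
    visited = set(initial_states)
    while queue:
        state, path = queue.popleft()
        if is_accepting(state):
            return path            # a finite trace witnessing LTLf satisfiability
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None                    # no accepting path: the formula is unsatisfiable
```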