
    Automated and foundational verification of low-level programs

    Formal verification is a promising technique to ensure the reliability of low-level programs like operating systems and hypervisors, since it can show the absence of whole classes of bugs and prevent critical vulnerabilities. However, to realize the full potential of formal verification for real-world low-level programs, one has to overcome several challenges, including: (1) dealing with the complexities of realistic models of real-world programming languages; (2) ensuring the trustworthiness of the verification, ideally by providing foundational proofs (i.e., proofs that can be checked by a general-purpose proof assistant); and (3) minimizing the manual effort required for verification by providing a high degree of automation. This dissertation presents multiple projects that advance formal verification along these three axes: RefinedC provides the first approach for verifying C code that combines foundational proofs with a high degree of automation via a novel refinement and ownership type system. Islaris shows how to scale verification of assembly code to realistic models of modern instruction set architectures, in particular Armv8-A and RISC-V. DimSum develops a decentralized approach for reasoning about programs that consist of components written in multiple different languages (e.g., assembly and C), as is common for low-level programs. RefinedC and Islaris rest on Lithium, a novel proof engine for separation logic that combines automation with foundational proofs.
    This research was supported in part by a Google PhD Fellowship, in part by awards from Android Security's ASPIRE program and from Google Research, and in part by a European Research Council (ERC) Consolidator Grant for the project "RustBelt", funded under the European Union's Horizon 2020 Framework Programme (grant agreement no. 683289).
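    To give a flavour of the kind of specification such foundational verification tools work with, the following is a minimal, generic separation-logic Hoare triple for a C-style increment through a pointer. It is illustrative only: the notation is textbook separation logic, not RefinedC's actual annotation syntax, and the overflow side condition is an assumed example of a refinement.

```latex
% Generic separation-logic specification (illustration only, not RefinedC's
% annotation syntax). The points-to assertion expresses ownership of the
% location l; the side condition n < 2^31 - 1 is a refinement ruling out
% signed overflow of the increment.
\[
  \{\, \ell \mapsto n \;\ast\; n < 2^{31} - 1 \,\}
  \quad \texttt{*l = *l + 1;} \quad
  \{\, \ell \mapsto n + 1 \,\}
\]
```

    Ownership assertions and value refinements of this kind are the two ingredients that the abstract's "refinement and ownership type system" combines, with the automation aiming to discharge the resulting proof obligations with minimal manual effort.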

    (Im)probable stories: combining Bayesian and explanation-based accounts of rational criminal proof

    A key question in criminal trials is, ‘may we consider the facts of the case proven?’ Partially in response to miscarriages of justice, philosophers, psychologists and mathematicians have considered how we can answer this question rationally. The two most popular answers are the Bayesian and the explanation-based accounts. Bayesian models cast criminal evidence in terms of probabilities. Explanation-based approaches view the criminal justice process as a comparison between causal explanations of the evidence. Such explanations usually take the form of scenarios – stories about how a crime was committed. The two approaches are often seen as rivals. However, this thesis argues that both perspectives are necessary for a good theory of rational criminal proof. By comparing scenarios, we can, among other things, determine what the key evidence is, how the items of evidence interrelate, and what further evidence to collect. Bayesian probability theory helps us pinpoint when we can and cannot conclude that a scenario is likely to be true. This thesis considers several questions regarding criminal evidence from this combined perspective, such as: can a defendant sometimes be convicted on the basis of an implausible guilt scenario? When can we assume that we are not overlooking scenarios or evidence? Should judges always address implausible innocence scenarios of the accused? When is it necessary to look for new evidence? How do we judge whether an eyewitness is reliable? By combining the two theories, we arrive at new insights into how to rationally reason about these and other questions surrounding criminal evidence.
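    As a minimal illustration of the Bayesian side of this combined perspective, the sketch below applies Bayes' rule to a single guilt scenario and one item of evidence. All numbers are hypothetical and chosen purely for illustration; they carry no legal significance.

```python
# Hypothetical illustration of updating the probability of a guilt scenario H
# on one item of evidence E, using Bayes' rule. All numbers are invented.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) from P(H), P(E | H) and P(E | not H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.10                  # probability of the guilt scenario before the evidence
p_match_if_guilty = 0.95      # evidence is very likely if the scenario is true
p_match_if_innocent = 0.001   # rare coincidental match otherwise

print(round(posterior(prior, p_match_if_guilty, p_match_if_innocent), 3))
# -> 0.991: strong evidence can make an initially implausible scenario probable,
# which is the kind of question the combined account is meant to answer.
```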

    Synchronising Wisdom and Implementation: A Formal ODD Approach to Expressing Insights on Bullying

    Papers IV and V are excluded from the dissertation until they are published.
    The Social Simulation methodology, a mix of traditionally unassociated fields, utilises computer models to describe, understand, predict, and reflect on social phenomena. The model creation process typically requires the integration of knowledge insights from academic and non-academic knowledge holders. To ensure model quality, different processes are established to verify that the knowledge insights align with their implementation in the simulation model by the modelling team. However, due to a lack of technical skills, knowledge holders, who may not fully understand the model code, often perform these verification checks indirectly, for example by evaluating conceptual model descriptions. Initially motivated by the goal of creating quality models of social conflict, this dissertation approaches the model quality challenge with a Domain Specific Modelling (DSM) approach. The objective was to develop a DSM tool using the Design Methodology, supplemented by a case study to provide first-hand experience with the quality challenge. Based on our project requirements, we selected university bullying as the case study subject. The Design Methodology included the problem exploration, the identification of a DSM solution, the selection of a domain and programming language for the DSM tool, the agile development of the domain language aspects with test models, and a final evaluation using the case study model.

    Tri-State Circuits: A Circuit Model that Captures RAM

    We introduce tri-state circuits (TSCs). TSCs form a natural model of computation that, to our knowledge, has not been considered by theorists. The model captures a surprising combination of simplicity and power. TSCs are simple in that they allow only three wire values ($0$, $1$, and undefined, $\mathcal{Z}$) and three types of fan-in-two gates; they are powerful in that their statically placed gates fire (execute) eagerly as their inputs become defined, implying orders of execution that depend on input. This behavior is sufficient to efficiently evaluate RAM programs. We construct a TSC that emulates $T$ steps of any RAM program and that has only $O(T \cdot \log^3 T \cdot \log \log T)$ gates. Contrast this with the reduction from RAM to Boolean circuits, where the best approach scans all of memory on each access, incurring quadratic cost. We connect TSCs with cryptography by using them to improve Yao's Garbled Circuit (GC) technique. TSCs capture the power of garbling far better than Boolean circuits, offering a more expressive model of computation that leaves per-gate cost essentially unchanged. As an important application, we construct authenticated Garbled RAM (GRAM), enabling constant-round maliciously-secure 2PC of RAM programs. Let $\lambda$ denote the security parameter. We extend authenticated garbling to TSCs; by simply plugging in our TSC-based RAM, we obtain authenticated GRAM running at cost $O(T \cdot \log^3 T \cdot \log \log T \cdot \lambda)$, outperforming all prior work, including prior semi-honest GRAM. We also give semi-honest garbling of TSCs from a one-way function (OWF). This yields OWF-based GRAM at cost $O(T \cdot \log^3 T \cdot \log \log T \cdot \lambda)$, outperforming the best prior OWF-based GRAM by more than a factor of $\lambda$.
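    To make the eager firing behaviour concrete, here is a small, hypothetical simulation sketch in which gates execute as soon as their inputs become defined. The gate set used (XOR plus a control-gated buffer) is an assumption for illustration and need not match the paper's exact three gate types.

```python
# Illustrative sketch of eager gate evaluation over tri-state values
# 0, 1 and undefined (Z). The gate set here (XOR and a control-gated
# buffer) is assumed for illustration only.

Z = None  # undefined wire value

class Circuit:
    def __init__(self):
        self.values = {}   # wire name -> 0/1 (absent means Z)
        self.gates = []    # (kind, input_a, input_b, output)

    def add_gate(self, kind, a, b, out):
        self.gates.append((kind, a, b, out))

    def set_wire(self, wire, value):
        """Define a wire, then eagerly fire any gate whose inputs are now defined."""
        if wire in self.values:
            return
        self.values[wire] = value
        progress = True
        while progress:
            progress = False
            for kind, a, b, out in self.gates:
                if out in self.values:
                    continue
                va, vb = self.values.get(a), self.values.get(b)
                if kind == "xor" and va is not None and vb is not None:
                    self.values[out] = va ^ vb
                    progress = True
                elif kind == "buf" and va == 1 and vb is not None:
                    # Buffer drives its data input onto the output only once
                    # the control wire is defined and set to 1.
                    self.values[out] = vb
                    progress = True

# The order in which gates fire depends on which inputs get defined first.
c = Circuit()
c.add_gate("xor", "x", "y", "s")
c.add_gate("buf", "ctrl", "s", "o")
c.set_wire("x", 1)
c.set_wire("y", 0)      # the XOR gate fires here: s = 1 ^ 0 = 1
c.set_wire("ctrl", 1)   # the buffer fires here: o = s = 1
print(c.values["s"], c.values["o"])   # 1 1
```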

    Formal model of multi-agent architecture of a software system based on knowledge interpretation

    The use of agents across diverse domains within computer science and artificial intelligence is experiencing a notable surge in response to the imperatives of adaptability, efficiency, and scalability. The subject of this study is the application of formal methods to furnish a framework for knowledge interpretation, with a specific focus on the agent-based paradigm in software engineering. This study aims to advance a formal approach to knowledge interpretation by leveraging the agent-based paradigm. The objectives are as follows: 1) to examine the current state of the agent-based paradigm in software engineering; 2) to describe the basic concepts of the knowledge interpretation approach; 3) to study the general structure of the rule extraction task; 4) to develop the reference structure of knowledge interpretation; 5) to develop a multi-agent system architecture; and 6) to discuss the research results. This study employs formal methods, including the use of closed path rules and predicate logic. Specifically, the integration of closed path rules contributes to the extraction and explication of facts from extensive knowledge bases. The obtained results encompass the following: 1) a rule mining approach grounded in closed path rules and tailored for processing extensive datasets; 2) a formalization of relevance that facilitates the scrutiny and automated exclusion of irrelevant fragments from the explanatory framework; and 3) the realization of a multi-agent system predicated on the synergy among five distinct types of agents, dedicated to rule extraction and the interpretation of acquired knowledge. This paper provides an example of the application of the proposed formal tenets, demonstrating their practical context. The conclusion underscores that the agent-based paradigm, with its emphasis on decentralized and autonomous entities, presents an innovative framework for handling the intricacies of knowledge processing. It extends to the retrieval of facts and rules. By distributing functions across multiple agents, the framework offers a dynamic and scalable solution to effectively interpret vast knowledge repositories. This approach is particularly valuable in scenarios where traditional methods may struggle to cope with the volume and complexity of information.
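    For readers unfamiliar with the term, a closed path rule in knowledge-base rule mining generally takes the shape shown below: a chain of relations linking the two arguments of the head predicate closes a path through the knowledge base. The predicates are hypothetical and chosen only for illustration; they are not taken from the paper.

```latex
% Generic shape of a closed path rule (hypothetical predicates). The body is a
% chain of relations linking x to y; together they imply the head relation.
\[
  \forall x, y, z:\;
  \mathit{worksAt}(x, z) \;\wedge\; \mathit{locatedIn}(z, y)
  \;\rightarrow\; \mathit{livesIn}(x, y)
\]
```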

    Managing healthcare transformation towards P5 medicine (Published in Frontiers in Medicine)

    Health and social care systems around the world are facing radical organizational, methodological and technological paradigm changes to meet the requirements for improving quality and safety of care as well as efficiency and efficacy of care processes. In doing so, they are trying to manage the challenges of ongoing demographic changes towards aging, multi-diseased societies, development of human resources, health and social services consumerism, medical and biomedical progress, and exploding costs for health-related R&D as well as health services delivery. Furthermore, they intend to achieve sustainability of global health systems by transforming them towards intelligent, adaptive and proactive systems focusing on health and wellness with optimized quality and safety outcomes. The outcome is a transformed health and wellness ecosystem combining the approaches of translational medicine, 5P medicine (personalized, preventive, predictive, participative precision medicine) and digital health towards ubiquitous personalized health services realized independent of time and location. It considers individual health status, conditions, genetic and genomic dispositions in personal social, occupational, environmental and behavioural context, thus turning health and social care from reactive to proactive. This requires the advancement of communication and cooperation among the business actors from different domains (disciplines) with different methodologies, terminologies/ontologies, education, skills and experiences, from data level (data sharing) to concept/knowledge level (knowledge sharing). The challenge here is the understanding and the formal as well as consistent representation of the world of sciences and practices, i.e. of multidisciplinary and dynamic systems in variable context, for enabling mapping between the different disciplines, methodologies, perspectives, intentions, languages, etc. Based on a framework for representing multi-domain ecosystems dynamically, use-case-specifically and context-aware, including their development process, the systems, models and artefacts involved can be consistently represented, harmonized and integrated. The response to that problem is the formal representation of health and social care ecosystems through a system-oriented, architecture-centric, ontology-based and policy-driven model and framework, addressing all domains and development process views contributing to the system and context in question. Accordingly, this Research Topic would like to address this change towards 5P medicine. Specifically, areas of interest include, but are not limited to:
    • A multidisciplinary approach to the transformation of health and social systems
    • Success factors for sustainable P5 ecosystems
    • AI and robotics in transformed health ecosystems
    • Transformed health ecosystems challenges for security, privacy and trust
    • Modelling digital health systems
    • Ethical challenges of personalized digital health
    • Knowledge representation and management of transformed health ecosystems
    Table of Contents:
    04 Editorial: Managing healthcare transformation towards P5 medicine (Bernd Blobel and Dipak Kalra)
    06 Transformation of Health and Social Care Systems—An Interdisciplinary Approach Toward a Foundational Architecture (Bernd Blobel, Frank Oemig, Pekka Ruotsalainen and Diego M. López)
    26 Transformed Health Ecosystems—Challenges for Security, Privacy, and Trust (Pekka Ruotsalainen and Bernd Blobel)
    36 Success Factors for Scaling Up the Adoption of Digital Therapeutics Towards the Realization of P5 Medicine (Alexandra Prodan, Lucas Deimel, Johannes Ahlqvist, Strahil Birov, Rainer Thiel, Meeri Toivanen, Zoi Kolitsi and Dipak Kalra)
    49 EU-Funded Telemedicine Projects – Assessment of, and Lessons Learned From, in the Light of the SARS-CoV-2 Pandemic (Laura Paleari, Virginia Malini, Gabriella Paoli, Stefano Scillieri, Claudia Bighin, Bernd Blobel and Mauro Giacomini)
    60 A Review of Artificial Intelligence and Robotics in Transformed Health Ecosystems (Kerstin Denecke and Claude R. Baudoin)
    73 Modeling digital health systems to foster interoperability (Frank Oemig and Bernd Blobel)
    89 Challenges and solutions for transforming health ecosystems in low- and middle-income countries through artificial intelligence (Diego M. López, Carolina Rico-Olarte, Bernd Blobel and Carol Hullin)
    111 Linguistic and ontological challenges of multiple domains contributing to transformed health ecosystems (Markus Kreuzthaler, Mathias Brochhausen, Cilia Zayas, Bernd Blobel and Stefan Schulz)
    126 The ethical challenges of personalized digital health (Els Maeckelberghe, Kinga Zdunek, Sara Marceglia, Bobbie Farsides and Michael Rigby)

    Foundational research in accounting: professional memoirs and beyond

    It was with particular pleasure that, several years ago, I accepted the invitation of Chuo University to write a professional, biographical essay about my own experience with accounting. My relation with this university is a long-standing one. Shortly after two of my books, Accounting and Analytical Methods and Simulation of the Firm Through a Budget Computer Program, were published in the USA in 1964, Professor Kenji Aizaki (then at Chuo University) and his former student, Professor Fujio Harada, and later other scholars from Chuo University, began actively promoting my ideas in Japan. And after a two-volume Japanese translation of the first of these books was published in 1972 and 1975 (through the mediation of Professor Shinzaburo Koshimura, then President of Yokohama National University), my research found fertile ground in Japan through the continuing efforts of three generations of accounting academics from Chuo University. I suppose it is thanks to these endeavours that my efforts became so well known in Japan, and that during some three decades many Japanese accounting professors contacted me either personally or by correspondence. Then from 1988 to 1990 Prof. Yoshiaki Koguchi, again from Chuo University, came as a visiting scholar to the University of British Columbia, audited some of my classes, and became a good friend and collaborator, which further strengthened my ties to this university.

    Understanding and Mitigating Flaky Software Test Cases

    A flaky test is a test case that can pass or fail without changes to the test case code or the code under test. Flaky tests are a widespread problem with serious consequences for developers and researchers alike. For developers, flaky tests lead to time wasted debugging spurious failures, tempting them to ignore future failures. While unreliable, flaky tests can still indicate genuine issues in the code under test, so ignoring them can lead to bugs being missed. The non-deterministic behaviour of flaky tests is also a major snag to continuous integration, where a single flaky test can fail an entire build. For researchers, flaky tests challenge the assumption that a test failure implies a bug, an assumption that many fundamental techniques in software engineering research rely upon, including test acceleration, mutation testing, and fault localisation. Despite increasing research interest in the topic, open problems remain. In particular, there has been relatively little attention paid to the views and experiences of developers, despite a considerable body of empirical work. This is essential to guide the focus of research into areas that are most likely to be beneficial to the software engineering industry. Furthermore, previous automated techniques for detecting flaky tests are typically based either on exhaustively rerunning test cases or on machine learning classifiers. The prohibitive runtime of the rerunning approach and the demonstrably poor inter-project generalisability of classifiers leave practitioners with a stark choice when it comes to automatically detecting flaky tests. In response to these challenges, I set two high-level goals for this thesis: (1) to enhance the understanding of the manifestation, causes, and impacts of flaky tests; and (2) to develop and empirically evaluate efficient automated techniques for mitigating flaky tests. In pursuit of these goals, this thesis makes five contributions: (1) a comprehensive systematic literature review of 76 published papers; (2) a literature-guided survey of 170 professional software developers; (3) a new feature set for encoding test cases in machine learning-based flaky test detection; (4) a novel approach for reducing the time cost of rerunning-based techniques for detecting flaky tests by combining them with machine learning classifiers; and (5) an automated technique that detects and classifies existing flaky tests in a project and produces reusable project-specific machine learning classifiers able to provide fast and accurate predictions for future test cases in that project.
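    As a concrete, hypothetical illustration (not taken from the thesis) of the kind of test in question, the snippet below shows a common flakiness pattern: an assertion on wall-clock timing, which can pass or fail with no change to the test code or the code under test.

```python
# Hypothetical example of a flaky test: the timing assertion depends on
# machine load rather than on the correctness of the code under test.
import time
import unittest

def slow_lookup(key):
    """Toy function under test: simulates a lookup with some I/O latency."""
    time.sleep(0.01)               # stand-in for network or disk access
    return key.upper()

class FlakyTimingTest(unittest.TestCase):
    def test_lookup_is_fast(self):
        start = time.monotonic()
        self.assertEqual(slow_lookup("answer"), "ANSWER")
        elapsed = time.monotonic() - start
        # The functional assertion above is deterministic; this timing
        # assertion is not: on a loaded CI machine the same test fails.
        self.assertLess(elapsed, 0.05)

if __name__ == "__main__":
    unittest.main()
```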

    Quantitative Verification and Synthesis of Resilient Networks


    Surveillance Graphs: Vulgarity and Cloud Orthodoxy in Linked Data Infrastructures

    Information is power, and that power has been largely enclosed by a handful of information conglomerates. The logic of the surveillance-driven information economy demands systems for handling mass quantities of heterogeneous data, increasingly in the form of knowledge graphs. An archaeology of knowledge graphs and their mutation from the liberatory aspirations of the semantic web gives us an underexplored lens to understand contemporary information systems. I explore how the ideology of cloud systems steers two projects from the NIH and NSF, intended to build information infrastructures for the public good, towards inevitable corporate capture, facilitating the development of a new kind of multilayered public/private surveillance system in the process. I argue that understanding technologies like large language models as interfaces to knowledge graphs is critical to understanding their role in a larger project of informational enclosure and concentration of power. I draw from multiple histories of liberatory information technologies to develop Vulgar Linked Data as an alternative to the Cloud Orthodoxy, resisting the colonial urge for universality in favor of vernacular expression in peer-to-peer systems.