7 research outputs found

    Towards Scientific Incident Response

    A scientific incident analysis is one with a methodical, justifiable approach to the human decision-making process. Incident analysis is a good target for additional rigor because it is the most human-intensive part of incident response. Our goal is to provide the tools necessary for specifying precisely the reasoning process in incident analysis. Such tools are lacking, and they are a necessary (though not sufficient) component of a more scientific analysis process. To reach this goal, we adapt tools from program verification that can capture and test abductive reasoning. As Charles Peirce coined the term in 1900, “Abduction is the process of forming an explanatory hypothesis. It is the only logical operation which introduces any new idea.” We reference canonical examples as paradigms of decision-making during analysis. With these examples in mind, we design a logic capable of expressing decision-making during incident analysis. The result is that we can express, in machine-readable and precise language, the abductive hypotheses that an analyst makes and the results of evaluating them. This is beneficial because it opens the opportunity to genuinely compare analyst processes without revealing sensitive system details, and it opens an opportunity toward improved decision support via limited automation.
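
    To give a flavour of the kind of statement such a logic makes precise, here is a minimal LaTeX sketch of the abductive entailment question as it is usually posed in program verification; the names E, H and G are illustrative assumptions, not the paper's own notation:

        % Given observed evidence E and a conclusion of interest G,
        % abduction asks for a missing hypothesis H that would make G follow:
        \exists H.\quad E \ast H \vdash G
        % e.g. E = netflow showing periodic outbound connections,
        %      G = the host is relaying attacker traffic,
        %      H = the hypothesised compromise the analyst must then test.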

    On Malfunction, Mechanisms and Malware Classification

    Malware has been around since the 1980s and remains a large and expensive security concern today, one that has kept growing in recent years. As our social, professional and financial lives become more digitalised, they present larger and more profitable targets for malware. The problem of classifying and preventing malware is therefore urgent, and it is complicated by the existence of several specific approaches. In this paper, we use an existing malware taxonomy to formulate a general, language-independent functional description of malware as transformers between states of the host system, described by a trust relation with its components. This description is then further generalised in terms of mechanisms, thereby contributing to a general understanding of malware. The aim is to use the latter to present an improved classification method for malware.
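
    As a very rough illustration of the “transformers between states, constrained by a trust relation” reading, here is a minimal C sketch; the state type, the toy trust relation and the function names are assumptions made for illustration, not the paper's formal description:

        /* Illustrative sketch only: a program is modelled as a transformer on
         * host-system states, and malware-like behaviour is a transformation
         * that leaves the state outside the user's trust relation. */
        #include <stdbool.h>
        #include <stdio.h>

        typedef struct {
            bool files_intact;     /* user data not tampered with */
            bool secrets_private;  /* credentials not exfiltrated */
        } SystemState;

        typedef SystemState (*Transformer)(SystemState);

        /* Toy trust relation: properties the user expects every component to preserve. */
        static bool trusted(SystemState s) {
            return s.files_intact && s.secrets_private;
        }

        /* A benign transformer stays within the trust relation... */
        static SystemState update_software(SystemState s) { return s; }

        /* ...while a malicious one moves the host outside it. */
        static SystemState exfiltrate_credentials(SystemState s) {
            s.secrets_private = false;
            return s;
        }

        int main(void) {
            SystemState s = { true, true };
            Transformer steps[] = { update_software, exfiltrate_credentials };
            for (unsigned i = 0; i < sizeof steps / sizeof steps[0]; i++) {
                s = steps[i](s);
                printf("after step %u: trusted=%d\n", i, trusted(s));
            }
            return 0;
        }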

    The development and deployment of formal methods in the UK

    UK researchers have made major contributions to the technical ideas underpinning formal approaches to the specification and development of computer systems. Perhaps as a consequence of this, some of the significant attempts to deploy theoretical ideas into practical environments have taken place in the UK. The authors of this paper have been involved in formal methods for many years, and both have tracked a significant proportion of the whole story. This paper both lists key ideas and indicates where attempts were made to use the ideas in practice. Not all of these deployment stories have been a complete success, and an attempt is made to tease out lessons that influence the probability of long-term impact.

    An Analysis of How Many Undiscovered Vulnerabilities Remain in Information Systems

    Vulnerability management strategy, from both organizational and public policy perspectives, hinges on an understanding of the supply of undiscovered vulnerabilities. If the number of undiscovered vulnerabilities is small enough, then a reasonable investment strategy would be to focus on finding and removing the remaining undiscovered vulnerabilities. If the number of undiscovered vulnerabilities is, and will continue to be, large, then a better investment strategy would be to focus on quick patch dissemination and on engineering resilient systems. This paper examines one paradigm, namely that the number of undiscovered vulnerabilities is manageably small, through the lens of mathematical concepts from the theory of computing. From this perspective, we find little support for the paradigm of limited undiscovered vulnerabilities. We then briefly support the notion that these theory-based conclusions are relevant to practical computers in use today. We find no reason to believe that undiscovered vulnerabilities are anything but essentially unlimited in practice, and we examine the possible economic impacts should this be the case. Based on our analysis, we recommend that vulnerability management strategy adopt an approach favoring quick patch dissemination and engineering resilient systems, while continuing good software engineering practices to reduce (but never eliminate) vulnerabilities in information systems.

    Get rid of inline assembly through verification-oriented lifting

    Formal methods for software development have made great strides in the last two decades, to the point that their application in safety-critical embedded software is an undeniable success. Their extension to non-critical software is one of the notable forthcoming challenges. For example, C programmers regularly use inline assembly for low-level optimizations and system primitives, which usually renders state-of-the-art formal analyzers developed for C ineffective. We thus propose TInA, an automated, generic, trustable and verification-oriented lifting technique that turns inline assembly into semantically equivalent C code, in order to take advantage of existing C analyzers. Extensive experiments on real-world C code with inline assembly (including GMP and ffmpeg) show the feasibility and benefits of TInA.
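
    As a concrete, hand-written illustration of what such lifting amounts to (not TInA's actual output), the sketch below pairs a small piece of x86 inline assembly with a semantically equivalent pure-C version that existing C analyzers can handle; it assumes GCC/Clang extended-asm syntax on an x86 target:

        #include <stdint.h>
        #include <stdio.h>

        /* Original: byte swap via x86 inline assembly, opaque to most C analyzers. */
        static uint32_t bswap_asm(uint32_t x) {
            __asm__("bswap %0" : "+r"(x));
            return x;
        }

        /* Lifted: semantically equivalent plain C that a C analyzer can reason about
         * (written by hand here; the point of TInA is to derive such code automatically). */
        static uint32_t bswap_lifted(uint32_t x) {
            return (x >> 24) | ((x >> 8) & 0x0000FF00u)
                 | ((x << 8) & 0x00FF0000u) | (x << 24);
        }

        int main(void) {
            uint32_t v = 0x11223344u;
            /* Both print 44332211, demonstrating semantic equivalence on this input. */
            printf("%08x %08x\n", (unsigned) bswap_asm(v), (unsigned) bswap_lifted(v));
            return 0;
        }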

    Why Separation Logic Works

    One might poetically muse that computers have the essence both of logic and machines. Through the case of the history of Separation Logic, we explore how this assertion is more than idle poetry. Separation Logic works because it merges the software engineer’s conceptual model of a program’s manipulation of computer memory with the logical model that interprets what sentences in the logic are true, and because it has a proof theory which aids in the crucial problem of scaling the reasoning task. Scalability is a central problem, and some would even say the central problem, in applications of logic in computer science. Separation Logic is an interesting case because of its widespread success in verification tools. For these two senses of model (the engineering/conceptual and the logical) to merge in a genuine sense, each must maintain its norms of use from its home discipline. When this occurs, both the logic and the engineering benefit greatly. Seeking this intersection of two different senses of model provides a strategy for how computer scientists and logicians may be successful. Furthermore, the history of Separation Logic for analysing programs provides a novel case for philosophers of science of how software engineers and computer scientists develop models and the components of such models. We provide three contributions: an exploration of the extent of model merging that is necessary for success in computer science; an introduction to the technical details of Separation Logic, which can be used for reasoning about other exhaustible resources; and an introduction to (a subset of) the problems, processes, and results of computer scientists for those outside the field.
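
    For readers outside the field, the two ingredients the abstract gestures at can be stated in a few lines of standard Separation Logic notation (textbook forms, not anything specific to this paper): the separating conjunction, which asserts that two resources occupy disjoint memory, and the frame rule, whose reuse of local proofs in larger disjoint contexts is what makes the reasoning scale:

        % Separating conjunction: x and y point to two disjoint heap cells.
        x \mapsto 1 \;\ast\; y \mapsto 2
        % Frame rule: a local proof about the memory a command C touches extends
        % to any disjoint remainder R (provided C does not modify variables free in R).
        \frac{\{P\}\; C \;\{Q\}}{\{P \ast R\}\; C \;\{Q \ast R\}}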

    Human decision-making in computer security incident response

    Background: Cybersecurity has risen to international importance. Almost every organization will fall victim to a successful cyberattack. Yet guidance for computer security incident response analysts is inadequate. Research Questions: What heuristics should an incident analyst use to construct general knowledge and analyse attacks? Can we construct formal tools to enable automated decision support for the analyst with such heuristics and knowledge? Method: We take an interdisciplinary approach. To answer the first question, we use the research tradition of philosophy of science, specifically the study of mechanisms. To answer the question on formal tools, we use the research tradition of program verification and logic, specifically Separation Logic. Results: We identify several heuristics from the biological sciences that cybersecurity researchers have re-invented to varying degrees. We consolidate the new mechanisms literature to yield heuristics related to the fact that knowledge is of clusters of multi-field mechanism schemas on four dimensions. General knowledge structures such as the intrusion kill chain provide context and supply hypotheses for filling in details. The philosophical analysis answers this research question, and it also provides constraints on building the logic. Finally, we succeed in defining an incident analysis logic resembling Separation Logic and translating the kill chain into it as a proof of concept. Conclusion: These results benefit incident analysis, enabling it to expand from a tradecraft or art into one that also integrates science. Future research might realize our logic in automated decision support. Additionally, we have opened the field of cybersecurity to collaboration with philosophers of science and logicians.
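
    To indicate what “translating the kill chain into the logic” could look like, here is a heavily hedged sketch in Hoare-triple style borrowed from Separation Logic; the stage names and resource predicates are illustrative assumptions, not the thesis's actual syntax:

        % Each kill-chain stage is a step whose pre- and postconditions describe
        % the attacker's resources on the target (illustrative predicates only).
        \{\mathit{recon\_done}\}\;\mathsf{deliver}\;\{\mathit{payload\_present}\}
        \{\mathit{payload\_present} \ast \mathit{vuln\_exposed}\}\;\mathsf{exploit}\;\{\mathit{foothold}\}
        % Chaining such triples reconstructs the intrusion kill chain as a proof outline.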