12 research outputs found

    A Domain Specific Language for Digital Forensics and Incident Response Analysis

    One of the longstanding conceptual problems in digital forensics is the dichotomy between the need for verifiable and reproducible forensic investigations and the lack of practical mechanisms to accomplish them. After nearly four decades of professional digital forensic practice, investigator notes are still the primary source of reproducibility information, and much of that information is tied to the functions of specific, often proprietary, tools. The lack of a formal means of specifying digital forensic operations results in three major problems: there is a critical lack of a) standardized and automated means to scientifically verify the accuracy of digital forensic tools; b) methods to reliably reproduce forensic computations and their results; and c) a framework for interoperability among forensic tools. Additionally, there is no standardized means for communicating software requirements among users, researchers, and developers, resulting in a mismatch of expectations. Combined with the exponential growth in data volume and in the complexity of the applications and systems to be investigated, these concerns result in major case backlogs and inherently reduce the reliability of digital forensic analyses. This work proposes a new approach to the specification of forensic computations, such that the above concerns can be addressed on a scientific basis with a new domain specific language (DSL) called nugget. DSLs are specialized languages that address the concerns of particular domains by providing practical abstractions. Successful DSLs, such as SQL, can transform an application domain by providing a standardized way for users to communicate what they need without specifying how the computation should be performed. This is the first effort to build a DSL for (digital) forensic computations, with the following research goals: 1) provide an intuitive formal specification language that covers the core types of forensic computations and common data types; 2) provide a mechanism to extend the language to incorporate arbitrary computations; 3) provide a prototype execution environment that allows fully automatic execution of the computation; 4) provide a complete, formal, and auditable log of computations that can be used to reproduce an investigation; and 5) demonstrate cloud-ready processing that can match the growth in data volume and complexity.
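    As a rough illustration of the declarative idea (this is not nugget syntax; the file path, field names, and helper functions below are invented for the example), the following Python sketch separates the "what" of a forensic computation, given as a small specification, from the "how", while appending every executed step to an auditable log:

```python
# Illustrative sketch only: NOT the nugget language, just a generic approximation
# of a declarative forensic computation plus an auditable execution log.
import hashlib
import json
from datetime import datetime, timezone

def sha256_file(path):
    """Hash a file in fixed-size chunks and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def run_declared_computation(spec, audit_log="audit.jsonl"):
    """Execute a declarative spec and append each step to an append-only log."""
    results = {}
    with open(audit_log, "a") as log:
        for target in spec["targets"]:
            digest = sha256_file(target)            # the declared result
            results[target] = digest
            log.write(json.dumps({                  # the reproducibility trail
                "operation": spec["operation"],
                "target": target,
                "result": digest,
                "timestamp": datetime.now(timezone.utc).isoformat(),
            }) + "\n")
    return results

# Hypothetical specification; the evidence path is a placeholder.
spec = {"operation": "sha256", "targets": ["evidence/disk.img"]}
# print(run_declared_computation(spec))
```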

    Decision Support Systems: Issues and Challenges; Proceedings of an International Task Force Meeting, June 23-25, 1980

    This book reports on a three-day meeting on Decision Support Systems held at IIASA. IIASA's interest in sponsoring the meeting was spurred by several factors. First, the term DSS clearly is used in a wide range of contexts; we hoped to develop a deeper understanding of the term and the new field to which it refers. Second, we felt that ongoing work in the DSS field would be enhanced by interaction between professionals who had been working on such systems and people from fields that function as "resource disciplines" for DSS. Finally, we wished to bring professionals from several nations together, from the east as well as the west, to share experiences and to assess the viability of the DSS concept in different cultures. The broad objectives set for this meeting were realized in a number of ways. Virtually all the participants testified that they had gained a deeper understanding of DSS, the role it can play in assisting managers in organizations, and the need for further development in key areas.

    Computer and data security: a comprehensive annotated bibliography.

    Massachusetts Institute of Technology, Alfred P. Sloan School of Management. Thesis (M.S.), 1973. Microfiche copy also available in Dewey Library.

    Faculty Senate Monthly Packet November 1978

    The November 1978 Monthly Packet includes the November agenda and appendices, together with the Faculty Senate minutes and attachments from the meeting held in October 1978.

    The semantic database model as a basis for an automated database design tool

    Bibliography: p. 257-280. The automatic database design system is a design aid for network database creation. It obtains a requirements specification from a user and generates a prototype database. This database is compatible with the Data Definition Language of DMS 1100, the database system on the Univac 1108 at the University of Cape Town. The user interface has been constructed in such a way that a computer-naive user can submit a description of his organisation to the system. Thus it constitutes a powerful database design tool, which should greatly alleviate the designer's tasks of communicating with users and of creating an initial database definition. The requirements are formulated using the semantic database model, and semantic information in this model is incorporated into the database as integrity constraints. A relation scheme is also generated from the specification. As a result of this research, insight has been gained into the advantages and shortcomings of the semantic database model, and some principles for 'good' data models and database design methodologies have emerged.
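    As a loose illustration of the kind of mapping such a design aid performs (this is not the thesis's system or the DMS 1100 Data Definition Language; the entity names, attributes, and output format are invented for the example), the following Python sketch turns a tiny semantic-style description of entities into a relation scheme:

```python
# Hypothetical illustration of deriving a relation scheme from a small,
# semantic-model-style description of entities, attributes, and keys.
schema_spec = {
    "Employee": {"attributes": ["emp_no", "name", "salary"], "key": "emp_no"},
    "Department": {"attributes": ["dept_no", "dept_name"], "key": "dept_no"},
}

def relation_scheme(spec):
    """Render each entity as a relation; the key attribute is shown in caps."""
    lines = []
    for entity, info in spec.items():
        attrs = ", ".join(a.upper() if a == info["key"] else a
                          for a in info["attributes"])
        lines.append(f"{entity}({attrs})")
    return "\n".join(lines)

print(relation_scheme(schema_spec))
# Employee(EMP_NO, name, salary)
# Department(DEPT_NO, dept_name)
```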

    ICL Technical Journal 4(4): CAFS-ISP

    The special issue of the ICL Technical Journal on CAFS-ISP. This closely followed the award to ICL of the Queen's Award for Technology in April 1985. The contents include the history of the hardware and software, its status and future, perspectives from leading developers and users, and a list of related patents.

    Secure and efficient processing of outsourced data structures using trusted execution environments

    In recent years, more and more companies make use of cloud computing; in other words, they outsource data storage and data processing to a third party, the cloud provider. From cloud computing, the companies expect, for example, cost reductions, fast deployment time, and improved security. However, security also presents a significant challenge, as demonstrated by many cloud computing–related data breaches. Whether it is due to failing security measures, government interventions, or internal attackers, data leakages can have severe consequences, e.g., revenue loss, damage to brand reputation, and loss of intellectual property. A valid strategy to mitigate these consequences is data encryption during storage, transport, and processing. Nevertheless, outsourced data processing should combine the following three properties: strong security, high efficiency, and arbitrary processing capabilities. Many approaches for outsourced data processing based purely on cryptography are available, for instance encrypted storage of outsourced data, property-preserving encryption, fully homomorphic encryption, searchable encryption, and functional encryption. However, all of these approaches fail in at least one of the three mentioned properties. Besides approaches purely based on cryptography, some approaches use a trusted execution environment (TEE) to process data at a cloud provider. TEEs provide an isolated processing environment for user-defined code and data, i.e., the confidentiality and integrity of code and data processed in this environment are protected against other software and physical accesses. Additionally, TEEs promise efficient data processing. Various research papers use TEEs to protect objects at different levels of granularity. At one end of the range, TEEs can protect entire (legacy) applications. This approach reduces the development effort for protected applications, as it requires only minor changes. However, the downsides of this approach are that the attack surface is large, it is difficult to capture the exact leakage, and it might not even be possible, as the isolated environment of commercially available TEEs is limited. At the other end of the range, TEEs can protect individual, stateless operations, which are called from otherwise unchanged applications. This approach does not suffer from the problems stated before, but it leaks the (encrypted) result of each operation and the detailed control flow through the application. It is difficult to capture the leakage of this approach, because it depends on the processed operation and the operation's location in the code. In this dissertation, we propose a trade-off between both approaches: the TEE-based processing of data structures. In this approach, otherwise unchanged applications call a TEE for self-contained data structure operations and receive encrypted results. We examine three data structures: TEE-protected B+-trees, TEE-protected database dictionaries, and TEE-protected file systems. Using these data structures, we design three secure and efficient systems: an outsourced system for index searches; an outsourced, dictionary-encoding–based, column-oriented, in-memory database supporting analytic queries on large datasets; and an outsourced system for group file sharing supporting large and dynamic groups. Due to our approach, the systems have a small attack surface, a low likelihood of security-relevant bugs, and a data owner can easily perform a (formal) code verification of the sensitive code. At the same time, we prevent the low-level leakage of individual operation results. For all systems, we present a thorough security evaluation showing lower bounds of security. Additionally, we use prototype implementations to present upper bounds on performance. For our implementations, we use a widely available TEE that has a limited isolated environment: Intel Software Guard Extensions. By comparing our systems to related work, we show that they provide a favorable trade-off regarding security and efficiency.
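    As a purely conceptual sketch of this trade-off (the interface below is hypothetical and does not reflect the dissertation's code or the Intel SGX SDK; the XOR "sealing" merely stands in for authenticated encryption inside an enclave), an otherwise unchanged application might delegate a self-contained index operation to trusted code and receive only ciphertext:

```python
# Conceptual sketch only: a stand-in for a TEE-protected data structure whose
# operation results leave the trusted boundary encrypted.
import os
from dataclasses import dataclass, field

@dataclass
class TrustedIndex:
    """Represents code assumed to run inside a TEE; state stays inside it."""
    _store: dict = field(default_factory=dict)   # placeholder for a B+-tree
    _key: bytes = field(default_factory=lambda: os.urandom(16))

    def insert(self, k: bytes, v: bytes) -> None:
        self._store[k] = v                        # mutation inside the boundary

    def search(self, k: bytes) -> bytes:
        value = self._store.get(k, b"")
        return self._seal(value)                  # result leaves encrypted

    def _seal(self, data: bytes) -> bytes:
        # Toy XOR placeholder for real authenticated encryption.
        return bytes(b ^ self._key[i % len(self._key)] for i, b in enumerate(data))

# Untrusted application code: it invokes the operation but sees only ciphertext.
index = TrustedIndex()
index.insert(b"user:42", b"record-payload")
ciphertext = index.search(b"user:42")
print(ciphertext.hex())
```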

    The use of semantic analysis in the development of information systems

    This research clarifies what exactly constitutes semantic analysis using the specification language NORMA. Having clarified the essential elements of the language, this work shows how the language can be put into practice, and the technique is exemplified in three applications. Showing how the technique applies to concrete cases is necessary if we are to demonstrate its practical usability. Another important contribution is the setting out of more precise rules for the constraints and the sketching of a metaschema. Although more work is needed here to express the full range of metaphysical relationships that underlie any semantic schema in NORMA, a start has been made. This work will support the building of a computer system to aid the analyst. Given the large range of constraints and fundamental assumptions associated with NORMA, the need for a method of applying them was paramount. In this work we have attempted to set out a rational and easily accessible agenda for the work involved in performing semantic analysis. A simple set of ten stages spans the range of tasks that are required. No such straightforward introduction has existed before; rather, the tendency has been to point to the possibility of beginning the analysis in a number of ways. As a further contribution, this work examines some existing examples of semantic analysis and identifies a few 'classic' errors. The importance of this is to focus on common mistakes that spring from an inadequate grasp of the language and which, if corrected, can lead to better results quite quickly and avoid a significant part of the problems associated with the 'learning curve'.