524 research outputs found

    Measuring the Impact of Spectre and Meltdown

    The Spectre and Meltdown flaws in modern microprocessors represent a new class of attacks that have been difficult to mitigate. The mitigations that have been proposed have known performance impacts. The reported magnitude of these impacts varies depending on the industry sector and expected workload characteristics. In this paper, we measure the performance impact on several workloads relevant to HPC systems. We show that the impact can be significant on both synthetic and realistic workloads. We also show that the performance penalties are difficult to avoid even in dedicated systems where security is a lesser concern.

    Identifying common problems in the acquisition and deployment of large-scale software projects in the US and UK healthcare systems

    Public and private organizations are investing increasing amounts into the development of healthcare information technology. These applications are perceived to offer numerous benefits. Software systems can improve the exchange of information between healthcare facilities. They support standardised procedures that can help to increase consistency between different service providers. Electronic patient records ensure minimum standards across the trajectory of care when patients move between different specializations. Healthcare information systems also offer economic benefits through efficiency savings; for example, by providing the data that helps to identify potential bottlenecks in the provision and administration of care. However, a number of high-profile failures reveal the problems that arise when staff must cope with the loss of these applications. In particular, teams have to retrieve paper-based records that often lack the detail of electronic systems. Individuals who have only used electronic information systems face particular problems in learning how to apply paper-based fallbacks. The following pages compare two different failures of Healthcare Information Systems in the UK and North America. The intention is to ensure that future initiatives to extend the integration of electronic patient records will build on the ‘lessons learned’ from previous systems.

    Research in nonlinear structural and solid mechanics

    Recent and projected advances in applied mechanics, numerical analysis, computer hardware and engineering software, and their impact on modeling and solution techniques in nonlinear structural and solid mechanics are discussed. The fields covered are rapidly changing and are strongly impacted by current and projected advances in computer hardware. To foster effective development of the technology, perceptions on computing systems and nonlinear analysis software systems are presented.

    DBKnot: A Transparent and Seamless, Pluggable Tamper Evident Database

    Database integrity is crucial to organizations that rely on databases of important data, yet such databases are vulnerable to internal fraud. Database tampering by malicious internal employees with high technical authorization to the infrastructure, or by compromised externals, is an important attack vector. This thesis addresses this challenge for a class of problems where data is append-only and immutable. Examples of operations where data does not change are a) financial institutions (banks, accounting systems, stock markets, etc.), b) registries and notary systems where important data is kept but is never subject to change, and c) system logs that must be kept intact for performance and forensic inspection if needed. The target of the approach is implementation seamlessness, with little or no changes required in existing systems. Transaction tracking for tamper detection is done by utilizing a common hashtable that serially and cumulatively hashes transactions together, while using an external time-stamper and signer to sign such linkages. This allows transactions to be tracked without any of the organization’s data leaving its premises for a third party, which also reduces the performance impact of tracking. This is done by adding a tracking layer and embedding it inside the data workflow while keeping it as non-invasive as possible. DBKnot implements these features a) natively inside databases, or b) embedded inside Object Relational Mapping (ORM) frameworks, and finally c) outlines a direction for implementing it as a stand-alone microservice reverse proxy. A prototype ORM and database layer has been developed and tested for seamlessness of integration and ease of use. Additionally, different models of optimization that introduce pipelining parallelism into the hashing/signing process have been tested in order to check their impact on performance.
    Stock-market information was used for experimentation with DBKnot, and the initial results showed slightly less than a 100% increase in transaction time using the most basic, sequential, and synchronous version of DBKnot. Signing and hashing overhead does not increase significantly per record as the amount of data grows. A number of alternate optimizations to the design, validated through testing, resulted in a significant increase in performance.
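    The serial, cumulative hashing that DBKnot uses for tamper evidence can be sketched as a simple hash chain over append-only records. The class and method names below are illustrative rather than taken from the thesis, and the external time-stamping/signing step that anchors the chain is omitted.

```python
import hashlib


def chain_hash(prev_digest: bytes, record: bytes) -> bytes:
    # Cumulatively fold the new record into the running digest, so each
    # digest depends on every record appended before it.
    return hashlib.sha256(prev_digest + record).digest()


class TamperEvidentLog:
    """Hypothetical append-only log with a cumulative hash chain."""

    GENESIS = b"\x00" * 32  # fixed starting digest (an assumption)

    def __init__(self):
        self.records = []
        self.digest = self.GENESIS

    def append(self, record: bytes) -> None:
        self.digest = chain_hash(self.digest, record)
        self.records.append(record)

    def verify(self) -> bool:
        # Recompute the chain from the genesis value; any edited, removed,
        # or reordered record changes the final digest.
        d = self.GENESIS
        for r in self.records:
            d = chain_hash(d, r)
        return d == self.digest
```

    In DBKnot the final digest would additionally be sent to an external time-stamper/signer, so an attacker with full database access still cannot rewrite history without invalidating the signed chain head.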

    Locality-Adaptive Parallel Hash Joins Using Hardware Transactional Memory

    Previous work [1] has claimed that the best performing implementation of in-memory hash joins is based on (radix-)partitioning of the build-side input. Indeed, despite the overhead of partitioning, the benefits from increased cache-locality and synchronization-free parallelism in the build phase outweigh the costs when the input data is randomly ordered. However, many datasets already exhibit significant spatial locality (i.e., non-randomness) due to the way data items enter the database: through periodic ETL or trickle loaded in the form of transactions. In such cases, the first benefit of partitioning — increased locality — is largely irrelevant. In this paper, we demonstrate how hardware transactional memory (HTM) can render the other benefit, freedom from synchronization, irrelevant as well. Specifically, using careful analysis and engineering, we develop an adaptive hash join implementation that outperforms parallel radix-partitioned hash joins as well as sort-merge joins on data with high spatial locality. In addition, we show how, through lightweight (less than 1% overhead) runtime monitoring of the transaction abort rate, our implementation can detect inputs with low spatial locality and dynamically fall back to radix-partitioning of the build-side input. The result is a hash join implementation that is more than 3 times faster than the state-of-the-art on high-locality data and never more than 1% slower.
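    The adaptive fallback described above — monitor the HTM abort rate over a sample window and switch strategies when it is high — can be sketched as a decision function. The threshold, window semantics, and names are hypothetical, and the HTM transactions themselves are abstracted into a list of observed abort outcomes.

```python
def choose_build_strategy(abort_flags, threshold=0.10):
    """Pick a hash-join build strategy from a sample window of HTM
    transaction outcomes (True = the transaction aborted).

    A high abort rate suggests contended, low-locality input, where the
    paper's implementation falls back to radix-partitioning the
    build-side input; otherwise it keeps the HTM-protected shared build.
    The 10% threshold is an illustrative assumption, not the paper's.
    """
    abort_rate = sum(abort_flags) / len(abort_flags)
    return "radix-partition" if abort_rate > threshold else "htm-shared"
```

    In the real system this check runs continuously with under 1% overhead, so the join never commits to the wrong strategy for more than a small prefix of the input.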

    An evaluation of integrity control facilities in an AS/400 environment

    M.Com. (Computer Auditing). Both the auditor, faced with the task of determining an effective and efficient audit approach, and management, charged with implementing and monitoring proper security, need to evaluate integrity controls. This need to evaluate integrity controls is increasing due to the growing complexity of computer environments, the breakdown of the paper audit trail, and the replacement of application controls by integrity controls. By applying the Access Path and Path Context Models, an evaluation was performed of integrity controls and risks in an AS/400 environment. The operating system (OS/400) was delineated into functional categories to assist in the evaluation, in a manner consistent with that outlined in the Access Path Model. It was found that sufficient integrity control facilities exist in an AS/400 environment to meet the control objectives, although several risks were identified which could only be addressed by application controls.

    Formal Verification of the AAMP-FV Microcode

    This report describes the experiences of Collins Avionics & Communications and SRI International in formally specifying and verifying the microcode in a Rockwell proprietary microprocessor, the AAMP-FV, using the PVS verification system. This project built extensively on earlier experiences using PVS to verify the microcode in the AAMP5, a complex, pipelined microprocessor designed for use in avionics displays and global positioning systems. While the AAMP5 experiment demonstrated the technical feasibility of formal verification of microcode, the steep learning curve encountered left unanswered the question of whether it could be performed at reasonable cost. The AAMP-FV project was conducted to determine whether the experience gained on the AAMP5 project could be used to make formal verification of microcode cost effective for safety-critical and high-volume devices.

    Hardware Acceleration for Conditional State-Based Communication Scheduling on Real-Time Ethernet

    Distributed real-time systems implement applications with timeliness requirements and therefore require a deterministic communication medium with bounded communication delays. Ethernet is a widely used commodity network with many appliances and network components and is a natural fit for real-time applications; unfortunately, standard Ethernet provides no bounded communication delays. Conditional state-based communication schedules provide expressive means for specifying and executing communication with choice points while remaining verifiable. Such schedules implement an arbitration scheme and give the developer the means to fit the arbitration scheme to the application's demands, instead of requiring the developer to tweak the application to fit a predefined scheme. An evaluation of this approach as a software prototype showed that jitter and execution overhead may diminish the gains. This work addresses this problem with a synthesized soft processor. We present results on the development of the soft processor, the design choices made, and measurements of throughput and robustness.
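    A conditional state-based schedule of the kind described can be pictured as a table of communication slots whose successor state is selected by a guard evaluated at runtime. The states, node names, and guard condition below are a made-up illustration of the idea, not taken from the paper.

```python
# Hypothetical conditional schedule: each state names the node allowed to
# transmit in that slot, plus a guard that picks the next state based on
# an observed runtime condition (here, the message content).
SCHEDULE = {
    "S0": ("nodeA", lambda msg: "S1" if msg == "event" else "S2"),
    "S1": ("nodeB", lambda msg: "S0"),
    "S2": ("nodeC", lambda msg: "S0"),
}


def run_schedule(start, messages):
    """Execute the schedule over a sequence of observed messages and
    return the resulting transmission order (one sender per slot)."""
    state, order = start, []
    for msg in messages:
        sender, guard = SCHEDULE[state]
        order.append(sender)
        state = guard(msg)  # choice point: next state depends on runtime data
    return order
```

    Because the whole table is known ahead of time, every possible path through the choice points can still be checked offline for bounded delays, which is what keeps such schedules verifiable.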