
    MSRR: Leveraging dynamic measurement for establishing trust in remote attestation

    Measurers are critical to a remote attestation (RA) system to verify the integrity of a remote untrusted host. Runtime measurers in a dynamic RA system sample the dynamic program state of the host to form evidence used to establish trust by a remote appraisal system. However, existing runtime measurers are tightly integrated with specific software. Such measurers need to be generated anew for each piece of software, a manual process that is both challenging and tedious. In this paper we present a novel approach to decouple application-specific measurement policies from the measurers tasked with performing the actual runtime measurement. We describe the MSRR (MeaSeReR) Measurement Suite, a system of tools designed with the primary goal of reducing the high degree of manual effort required to produce measurement solutions on a per-application basis. The MSRR suite prototypes a novel general-purpose measurement system, the MSRR Measurement System, that is agnostic of the target application. Furthermore, we describe a robust high-level measurement policy language, MSRR-PL, that can be used to write per-application policies for the MSRR Measurer. Finally, we provide a tool to automatically generate MSRR-PL policies for target applications by leveraging state-of-the-art static analysis tools. In this work, we show how the MSRR suite can be used to significantly reduce the time and effort spent on designing measurers anew for each application. We describe MSRR's robust querying language, which allows the appraisal system to accurately specify what, when, and how to measure. We describe the capabilities and limitations of our measurement policy generation tool. We evaluate MSRR's overhead and demonstrate its functionality through real-world case studies. We show that MSRR imposes an acceptable overhead on a host of applications with various measurement workloads.
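    A minimal, purely illustrative sketch of the decoupling idea follows. MSRR-PL's actual syntax and the MSRR Measurer's real interfaces are not given in this abstract, so the JSON policy format, file paths, and function names below are hypothetical stand-ins for the general pattern of separating what, when, and how to measure from the engine that performs the measurement.

    import hashlib
    import json
    import time

    # Hypothetical policy (not MSRR-PL): which artifacts to measure, how, and how often.
    POLICY = json.loads("""
    {
      "application": "example-daemon",
      "rules": [
        {"what": "/usr/bin/example-daemon", "how": "sha256", "when_seconds": 60},
        {"what": "/etc/example/config.yaml", "how": "sha256", "when_seconds": 300}
      ]
    }
    """)

    def measure(path: str, how: str) -> str:
        """Produce one measurement (here simply a file hash) as attestation evidence."""
        h = hashlib.new(how)
        with open(path, "rb") as f:
            h.update(f.read())
        return h.hexdigest()

    def collect_evidence(policy: dict) -> dict:
        """Apply every policy rule once; a real runtime measurer would also honor the
        per-rule schedule ("when_seconds") and sample dynamic program state."""
        evidence = {"application": policy["application"],
                    "timestamp": time.time(),
                    "measurements": []}
        for rule in policy["rules"]:
            evidence["measurements"].append({"target": rule["what"],
                                             "method": rule["how"],
                                             "value": measure(rule["what"], rule["how"])})
        return evidence  # shipped to the remote appraisal system for verification

    The same measurement engine is reused across applications; only the policy document changes, which is the manual (or generated) per-application artifact.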

    RadBench: benchmarking image interpretation skills

    Purpose: The key aim of this research was to develop an objective, accurate assessment tool with which to provide regular measurement and monitoring of image interpretation performance. The tool was a specially developed software program (RadBench) used to objectively measure image interpretation performance en masse and identify development needs. Method: Two test banks were generated (Test 1 & Test 2), each containing twenty appendicular musculoskeletal images; half were normal and half contained fractures. All images were double reported by radiologists and anonymised. A study (n = 42) was carried out within one calendar month to test the method and analysis approach. The participants included general radiographers (34), reporting radiographers (3), radiologists (2) (all from one UK NHS Trust) and medical imaging academics (3). Results: The RadBench software generated calculations of sensitivity, specificity, and accuracy in addition to a decision-making map for each respondent. Early findings highlighted a 5% mean difference between image banks, confirming that benchmarking must be related to a specific test. The benchmarking option within the software enabled users to compare their score with the highest, lowest and mean scores of others who had taken the same test. Reporting radiographers and radiologists all scored 95% or above in accuracy in both tests. The general radiographer population scored between 60% and 95%. Conclusions: The evidence from this research indicates that the RadBench tool is capable of providing benchmark measures of image interpretation accuracy, with the potential for comparison across populations.
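    The sensitivity, specificity, and accuracy figures RadBench reports follow from the standard definitions; the sketch below is not RadBench's own code, but shows how these metrics are derived from a respondent's normal/fracture calls against the double-reported ground truth.

    def score(responses, ground_truth):
        """responses / ground_truth: lists of booleans, True = 'fracture present'."""
        pairs = list(zip(responses, ground_truth))
        tp = sum(r and g for r, g in pairs)                # fractures correctly called
        tn = sum((not r) and (not g) for r, g in pairs)    # normals correctly cleared
        fp = sum(r and (not g) for r, g in pairs)          # normals called as fractures
        fn = sum((not r) and g for r, g in pairs)          # missed fractures
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        accuracy = (tp + tn) / len(pairs)
        return sensitivity, specificity, accuracy

    # Example: a 20-image bank with 10 fractures and 10 normals, as in each test bank.
    truth = [True] * 10 + [False] * 10
    answers = [True] * 9 + [False] + [False] * 8 + [True] * 2
    print(score(answers, truth))  # (0.9, 0.8, 0.85)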

    Real-time scheduling algorithms, task visualization

    Real-time systems are computer systems that must respond to events within specified time limits or constraints. Many real-time systems are digital control systems, built entirely from binary logic or from a microprocessor dedicated to a single software application that acts as its own operating system. In recent years, the reliability of general-purpose real-time operating systems (RTOS), consisting of a scheduler and system resource management, has improved. In this project, I build a real-time simulator, a workload generator, analysis tools, and several test cases, and then run experiments and interpret the results. My experiments focus on providing evidence to support the claim that, for the Rate Monotonic scheduling algorithm (RM), workloads with harmonically non-similar periodic tasks are more difficult to schedule. The analysis tool I have developed is a measurement system and real-time simulator that analyzes real-time scheduling strategies. I have also developed a visualization system to display the scheduling decisions of a real-time scheduler. Using the measurement and visualization systems, I investigate scheduling algorithms for real-time schedulers and compare their performance. I run different workloads to test the scheduling algorithms and analyze which workload characteristics are preferred for real-time benchmarks.
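    As a rough illustration of the claim about harmonic periods (a sketch only, not the project's simulator), the following code simulates a preemptive Rate Monotonic scheduler at unit time steps over one hyperperiod: a harmonic task set remains schedulable at 100% utilization, while a non-harmonic set at the same utilization misses a deadline.

    from functools import reduce
    from math import gcd

    def rm_schedulable(tasks):
        """tasks: list of (period, wcet) pairs with implicit deadline == period.
        Returns True if no job misses its deadline over one hyperperiod under
        preemptive fixed-priority Rate Monotonic scheduling."""
        hyper = reduce(lambda a, b: a * b // gcd(a, b), (p for p, _ in tasks))
        remaining = [0] * len(tasks)                     # work left in each task's current job
        priority = sorted(range(len(tasks)), key=lambda i: tasks[i][0])  # shorter period first
        for t in range(hyper):
            for i, (period, wcet) in enumerate(tasks):
                if t % period == 0:                      # new job released
                    if remaining[i] > 0:                 # previous job overran its deadline
                        return False
                    remaining[i] = wcet
            for i in priority:                           # run highest-priority ready job for one tick
                if remaining[i] > 0:
                    remaining[i] -= 1
                    break
        return all(r == 0 for r in remaining)            # last jobs must finish by the hyperperiod

    # Harmonic periods (8 is a multiple of 4): feasible even at 100% utilization.
    print(rm_schedulable([(4, 2), (8, 4)]))   # True
    # Non-harmonic periods at the same utilization: a deadline is missed.
    print(rm_schedulable([(4, 2), (10, 5)]))  # False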

    Verification, Analytical Validation, and Clinical Validation (V3): The Foundation of Determining Fit-for-Purpose for Biometric Monitoring Technologies (BioMeTs)

    Digital medicine is an interdisciplinary field, drawing together stakeholders with expertise in engineering, manufacturing, clinical science, data science, biostatistics, regulatory science, ethics, patient advocacy, and healthcare policy, to name a few. Although this diversity is undoubtedly valuable, it can lead to confusion regarding terminology and best practices. There are many instances, as we detail in this paper, where a single term is used by different groups to mean different things, as well as cases where multiple terms are used to describe essentially the same concept. Our intent is to clarify core terminology and best practices for the evaluation of Biometric Monitoring Technologies (BioMeTs), without unnecessarily introducing new terms. We focus on the evaluation of BioMeTs as fit-for-purpose for use in clinical trials. However, our intent is for this framework to be instructional to all users of digital measurement tools, regardless of setting or intended use. We propose and describe a three-component framework intended to provide a foundational evaluation of BioMeTs, comprising (1) verification, (2) analytical validation, and (3) clinical validation. We aim for this common vocabulary to enable more effective communication and collaboration, generate a common and meaningful evidence base for BioMeTs, and improve the accessibility of the digital medicine field.

    A novel haptic model and environment for maxillofacial surgical operation planning and manipulation

    This paper presents a practical method and a new haptic model to support the manipulation of bones and their segments during the planning of a surgical operation in a virtual environment using a haptic interface. To perform effective dental surgery, it is important to have all operation-related information about the patient available beforehand in order to plan the operation and avoid complications. A haptic interface with an accurate virtual patient model to support the planning of bone cuts is therefore critical for surgeons. The proposed system uses DICOM images taken from a digital tomography scanner and creates a mesh model of the filtered skull, from which the jaw bone can be isolated for further use. A novel solution for cutting the bones has been developed: the haptic tool is used to determine and define the bone-cutting plane, and this approach creates three new meshes from the original model. With this approach, the use of computational power is optimized and real-time feedback can be achieved during all bone manipulations. During the cutting movement, a novel predefined friction profile in the haptic system simulates the force-feedback feel of different bone densities.
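    The abstract does not spell out the cutting algorithm, so the sketch below is only one plausible reading of the "three new meshes" step: triangles are classified against the haptic-tool-defined cutting plane into the part above the plane, the part below it, and the triangles the plane intersects (which a full implementation would re-triangulate along the cut). All names here are hypothetical.

    import numpy as np

    def classify_triangles(vertices, triangles, plane_point, plane_normal, eps=1e-9):
        """vertices: (N, 3) array of points; triangles: (M, 3) vertex indices.
        plane_point, plane_normal: the cutting plane defined via the haptic tool.
        Returns three lists of triangle indices: above, below, and crossing the plane."""
        n = np.asarray(plane_normal, dtype=float)
        n /= np.linalg.norm(n)
        # Signed distance of every vertex to the cutting plane.
        d = (np.asarray(vertices, dtype=float) - np.asarray(plane_point, dtype=float)) @ n
        above, below, crossing = [], [], []
        for t, tri in enumerate(triangles):
            s = d[list(tri)]
            if np.all(s >= -eps):
                above.append(t)
            elif np.all(s <= eps):
                below.append(t)
            else:
                crossing.append(t)    # straddling triangles need re-triangulation along the cut
        return above, below, crossing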