
    Verification, Analytical Validation, and Clinical Validation (V3): The Foundation of Determining Fit-for-Purpose for Biometric Monitoring Technologies (BioMeTs)

    Digital medicine is an interdisciplinary field, drawing together stakeholders with expertise in engineering, manufacturing, clinical science, data science, biostatistics, regulatory science, ethics, patient advocacy, and healthcare policy, to name a few. Although this diversity is undoubtedly valuable, it can lead to confusion regarding terminology and best practices. There are many instances, as we detail in this paper, where a single term is used by different groups to mean different things, as well as cases where multiple terms are used to describe essentially the same concept. Our intent is to clarify core terminology and best practices for the evaluation of Biometric Monitoring Technologies (BioMeTs) without unnecessarily introducing new terms. We focus on the evaluation of BioMeTs as fit-for-purpose for use in clinical trials; however, we intend this framework to be instructional to all users of digital measurement tools, regardless of setting or intended use. We propose and describe a foundational three-component evaluation framework for BioMeTs comprising (1) verification, (2) analytical validation, and (3) clinical validation. We aim for this common vocabulary to enable more effective communication and collaboration, generate a common and meaningful evidence base for BioMeTs, and improve the accessibility of the digital medicine field.

    Mapping AADL models to a repository of multiple schedulability analysis techniques

    To fill the gap between the modeling of real-time systems and scheduling analysis, we propose a framework that seamlessly supports both aspects: (1) modeling a system using a methodology, in our case study the Architecture Analysis and Design Language (AADL), and (2) easily checking temporal requirements (schedulability analysis, worst-case response time, sensitivity analysis, etc.). We introduce an intermediate framework called MoSaRT, which supports rich semantics for temporal analysis. We show with a case study how the input model is transformed into a MoSaRT model, and how our framework generates the proper models as inputs to several classic temporal analysis tools.
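
    As a concrete illustration of the kind of temporal analysis such back-end tools perform, the sketch below implements the classic fixed-priority response-time analysis for periodic tasks; it is not part of MoSaRT or AADL, and the task set shown is hypothetical.

```python
import math

def worst_case_response_time(tasks):
    """Classic fixed-priority response-time analysis.

    `tasks` is a list of (WCET, period, deadline) tuples ordered from
    highest to lowest priority. Returns the worst-case response time of
    each task, or None if its deadline is exceeded.
    """
    results = []
    for i, (c_i, t_i, d_i) in enumerate(tasks):
        r = c_i
        while True:
            # Interference from all higher-priority tasks released during r
            interference = sum(math.ceil(r / t_j) * c_j
                               for c_j, t_j, _ in tasks[:i])
            r_next = c_i + interference
            if r_next == r:      # fixed point reached: r is the WCRT
                results.append(r)
                break
            if r_next > d_i:     # deadline exceeded: task is unschedulable
                results.append(None)
                break
            r = r_next
    return results

# Hypothetical task set (WCET, period, deadline) with rate-monotonic priorities
print(worst_case_response_time([(1, 4, 4), (2, 6, 6), (3, 12, 12)]))
# -> [1, 3, 10]
```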

    Test-Driven, Model-Based Systems Engineering.


    Performance Evaluation of Software using Formal Methods

    Formal Methods (FMs) can be applied in varied areas and used to solve critical and fundamental problems of Performance Evaluation (PE). Modelling and analysis techniques can be used for both system and software performance evaluation. The functional features and performance properties of modern software have become deeply intertwined. Traditional models and methods for performance evaluation have been studied widely and have culminated in modern approaches to system and software engineering evaluation such as formal methods, and these techniques have extended from functional to performance modeling and analysis. Formal models help identify faulty reasoning far earlier than traditional design does, formal specification has proved useful even for existing software and systems, and the formal approach eliminates ambiguity. The ultimate goal of performance evaluation is to determine whether the software and system are operating satisfactorily.
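
    For illustration only, the sketch below evaluates one classical formal performance model, a small discrete-time Markov chain, by computing its steady-state distribution and a long-run utilization metric; the states and transition probabilities are hypothetical and not drawn from the paper.

```python
import numpy as np

# Hypothetical three-state Markov model of a software component:
# states are idle, busy, and recovering; rows hold transition probabilities.
P = np.array([
    [0.6, 0.4, 0.0],   # idle       -> idle, busy, recovering
    [0.2, 0.7, 0.1],   # busy       -> idle, busy, recovering
    [0.5, 0.0, 0.5],   # recovering -> idle, busy, recovering
])

# The steady-state distribution pi satisfies pi P = pi with sum(pi) = 1.
# Solve the equivalent linear system (P^T - I) pi = 0 plus a normalisation row.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state probabilities:", pi.round(4))   # -> [0.3846 0.5128 0.1026]
print("long-run utilization P[busy]:", pi[1].round(4))
```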

    Payload training methodology study

    The results of the Payload Training Methodology Study (PTMS) are documented. Methods and procedures are defined for the development of payload training programs to be conducted at the Marshall Space Flight Center Payload Training Complex (PTC) for the Space Station Freedom program. The study outlines the overall training program concept as well as the six methodologies associated with program implementation. The program concept covers the entire payload training program, from the initial identification of training requirements to the development of detailed design specifications for simulators and instructional material. The following six methodologies are defined: (1) the Training and Simulation Needs Assessment Methodology; (2) the Simulation Approach Methodology; (3) the Simulation Definition Analysis Methodology; (4) the Simulator Requirements Standardization Methodology; (5) the Simulator Development Verification Methodology; and (6) the Simulator Validation Methodology.