4,889 research outputs found

    Independent Verification and Validation of Complex User Interfaces: A Human Factors Approach

    The Usability Testing and Analysis Facility (UTAF) at the NASA Johnson Space Center has identified and evaluated a potential automated software interface inspection tool capable of assessing the degree to which the user interfaces of space-related critical and high-risk software systems meet objective human factors standards across each NASA program and project. Testing consisted of two distinct phases. Phase 1 compared analysis times and similarity of results for the automated tool and for human-computer interface (HCI) experts. In Phase 2, HCI experts critiqued the prototype tool's user interface. Based on this evaluation, it appears that a more fully developed version of the tool would be a promising complement to a human factors-oriented independent verification and validation (IV&V) process.

    Information systems evaluation methodologies

    Due to the prevalent use of Information Systems (IS) in modern organisations, evaluation research in this field is becoming more and more important. In light of this, a set of rigorous methodologies has been developed and used by IS researchers and practitioners to evaluate increasingly complex IS implementations. Moreover, different types of IS and different focusing perspectives of the evaluation require the selection and use of different evaluation approaches and methodologies. This paper aims to identify, explore, investigate and discuss the various key methodologies that can be used in IS evaluation from different perspectives, namely in nature (e.g. summative vs. formative evaluation) and in strategy (e.g. goal-based, goal-free and criteria-based evaluation). The paper concludes that evaluation methodologies should be selected depending on the nature of the IS and the specific goals and objectives of the evaluation. Nonetheless, it is also proposed that formative criteria-based evaluation and summative criteria-based evaluation are currently among the most widely used in IS research. The authors suggest that the combined use of one or more of these approaches can be applied at different stages of the IS life cycle in order to generate more rigorous and reliable evaluation outcomes.

    Automated Knowledge Modeling for Cancer Clinical Practice Guidelines

    Clinical Practice Guidelines (CPGs) for cancer diseases evolve rapidly due to new evidence generated by active research. Currently, CPGs are primarily published in a document format that is ill-suited for managing this developing knowledge. A knowledge model of the guidelines document suitable for programmatic interaction is required. This work proposes an automated method for extracting knowledge from National Comprehensive Cancer Network (NCCN) CPGs in Oncology and generating a structured model containing the retrieved knowledge. The proposed method was tested using two versions of the NCCN Non-Small Cell Lung Cancer (NSCLC) CPG to demonstrate its effectiveness in faithfully extracting and modeling guideline knowledge. Three enrichment strategies, using cancer staging information, Unified Medical Language System (UMLS) Metathesaurus and National Cancer Institute Thesaurus (NCIt) concepts, and node classification, are also presented to enhance the model towards enabling programmatic traversal and querying of cancer care guidelines. The node classification was performed using a Support Vector Machine (SVM) model, achieving a classification accuracy of 0.81 with 10-fold cross-validation.
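    The 10-fold cross-validation protocol cited in the abstract can be sketched as follows. This is a minimal illustration of the evaluation procedure only, not the authors' pipeline: the fold-splitting is a simple contiguous partition, and the majority-class baseline stands in for the paper's SVM classifier (both names here are hypothetical helpers, not from the source).

    ```python
    # Sketch of 10-fold cross-validation as an evaluation protocol.
    # The "model" is a majority-class stand-in, NOT the paper's SVM.
    from collections import Counter

    def k_fold_indices(n, k=10):
        """Split range(n) into k contiguous folds of near-equal size."""
        fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
        folds, start = [], 0
        for size in fold_sizes:
            folds.append(list(range(start, start + size)))
            start += size
        return folds

    def majority_class_baseline(train_labels):
        """Hypothetical stand-in model: always predict the most common label."""
        return Counter(train_labels).most_common(1)[0][0]

    def cross_validated_accuracy(labels, k=10):
        """Average accuracy over k folds: train on k-1 folds, test on the held-out one."""
        folds = k_fold_indices(len(labels), k)
        accuracies = []
        for fold in folds:
            held_out = set(fold)
            train_labels = [y for i, y in enumerate(labels) if i not in held_out]
            pred = majority_class_baseline(train_labels)
            correct = sum(1 for i in fold if labels[i] == pred)
            accuracies.append(correct / len(fold))
        return sum(accuracies) / k
    ```

    In practice one would plug a real classifier (such as an SVM) into the per-fold step and use stratified, shuffled folds so class proportions are preserved; the contiguous split above is kept deliberately simple.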