
    Are three methods better than one? A comparative assessment of usability evaluation methods in an EHR

    Objective: To comparatively evaluate the effectiveness of three methods involving end-users for detecting usability problems in an EHR: user testing, semi-structured interviews, and surveys. Materials and methods: Data were collected at two major urban dental schools from faculty, residents, and dental students to assess the usability of a dental EHR for developing a treatment plan. Methods included user testing (N=32), semi-structured interviews (N=36), and surveys (N=35). Results: The three methods together identified a total of 187 usability violations: 54% via user testing, 28% via semi-structured interviews, and 18% via surveys, with modest overlap among methods. These usability problems were classified into 24 problem themes in 3 broad categories. User testing covered the broadest range of themes (83%), followed by the interview (63%) and survey (29%) methods. Discussion: Multiple evaluation methods provide a comprehensive approach to identifying EHR usability challenges and specific problems. The three methods were found to be complementary, so each can provide unique insights for software enhancement. The interview and survey methods were not sufficient by themselves, but when used in conjunction with user testing they provided a comprehensive evaluation of the EHR. Conclusion: We recommend a multi-method approach when testing the usability of health information technology because it provides a more comprehensive picture of usability challenges.
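    As a rough illustration only: the per-method shares and theme-coverage figures reported above can be tallied from a log of detected problems. The sketch below is a minimal, hypothetical Python example; the problem log, method names, and theme labels are invented for illustration and are not the study's data.

```python
# Hypothetical sketch: tallying usability problems per evaluation method,
# in the spirit of the multi-method comparison summarized above.
# The problem log below is invented for illustration; it is NOT study data.
from collections import defaultdict

# Each detected problem records which method(s) surfaced it and its theme.
problem_log = [
    {"id": 1, "methods": {"user_testing"}, "theme": "visibility"},
    {"id": 2, "methods": {"user_testing", "interview"}, "theme": "navigation"},
    {"id": 3, "methods": {"survey"}, "theme": "terminology"},
    # ... one entry per identified usability problem
]

all_themes = {p["theme"] for p in problem_log}
counts = defaultdict(int)          # problems found per method
themes_covered = defaultdict(set)  # themes touched per method

for p in problem_log:
    for m in p["methods"]:
        counts[m] += 1
        themes_covered[m].add(p["theme"])

total = len(problem_log)
for m in sorted(counts):
    share = 100 * counts[m] / total
    coverage = 100 * len(themes_covered[m]) / len(all_themes)
    print(f"{m}: {counts[m]} problems ({share:.0f}% of all), "
          f"theme coverage {coverage:.0f}%")
```

    Note that because a problem can be surfaced by more than one method, shares computed this way can sum to more than 100%; how overlapping detections are attributed is a design choice of the analysis.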

    Detection and characterization of usability problems in structured data entry interfaces in dentistry

    BACKGROUND: Poor usability is one of the major barriers to optimal use of electronic health records (EHRs). Dentists are increasingly adopting EHRs and are using structured data entry interfaces so that data can be easily retrieved and exchanged. Until recently, dentists have lacked a standardized terminology to consistently represent oral health diagnoses. OBJECTIVES: In this study we evaluated the usability of a widely used EHR interface that allows the entry of diagnostic terms, applying multi-faceted methods to identify problems and working with the vendor to correct them through an iterative design process. METHODS: Fieldwork was undertaken at two clinical sites, where dental students participated as subjects in user testing (n=32), interviews (n=36), and observations (n=24). RESULTS: User testing revealed that only 22–41% of users were able to successfully complete a simple task of entering one diagnosis, while no user was able to complete a more complex task. We identified and characterized 24 high-level usability problems that reduced efficiency and caused user errors. Interface-related problems included unexpected approaches to displaying diagnoses, lack of visibility, and inconsistent use of UI widgets. Terminology-related issues included missing and mis-categorized concepts. Work domain issues involved both absent and superfluous functions. In collaboration with the vendor, each usability problem was prioritized and a timeline was set to resolve the concerns. DISCUSSION: The mixed-methods evaluation identified a number of critical usability issues relating to the user interface, the underlying terminology, and the work domain. These usability challenges prevented most users from successfully completing the tasks. In future work we will determine whether changes to the interface, terminology, and work domain result in improved usability.
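    For the completion-rate figures above, the underlying arithmetic is a simple pass/fail tally per session. The sketch below is a minimal, hypothetical illustration; the outcome lists are invented placeholders, not the study's data.

```python
# Hypothetical sketch: task completion rates from user-testing sessions,
# as in the simple vs. complex task comparison above.
# Outcome lists are invented placeholders, not the study's data.

def completion_rate(outcomes):
    """Fraction of sessions in which the task was completed successfully."""
    return sum(outcomes) / len(outcomes)

simple_task_outcomes = [True, False, True, False, False, True, False, True]
complex_task_outcomes = [False] * 8  # no user completed the complex task

print(f"simple task:  {completion_rate(simple_task_outcomes):.0%} completed")
print(f"complex task: {completion_rate(complex_task_outcomes):.0%} completed")
```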