Automated usability evaluation during model-based interactive system development
Abstract. In this paper we describe an approach to efficiently evaluate the usability of an interactive application realized to support various platforms and modalities. To this end we combine our Multi-Access Service Platform (MASP), a model-based runtime environment offering multimodal user interfaces, with the MeMo workbench, a tool supporting automated usability analysis. Instead of deriving a system model by reverse-engineering or by annotating screenshots for the automated usability analysis, we use the semantics of the MASP's runtime models. This allows us to reduce the evaluation effort by automating parts of the testing process for the various combinations of platforms and user groups that the application should address. Furthermore, by testing the application at runtime, the usability evaluation can also consider system dynamics and information that is unavailable at design time.
Intelligent and adaptive tutoring for active learning and training environments
Active learning facilitated through interactive and adaptive learning environments differs substantially from traditional instructor-oriented, classroom-based teaching. We present a Web-based e-learning environment that integrates knowledge learning and skills training. How such tools are used most effectively is still an open question. We propose knowledge-level interaction and adaptive feedback and guidance as central features. We discuss these features and evaluate the effectiveness of this Web-based environment, focusing on different aspects of learning behaviour and tool usage. Motivation, acceptance of the approach, learning organisation and actual tool usage are aspects of behaviour that each require different evaluation techniques.
Usability Evaluation in Virtual Environments: Classification and Comparison of Methods
Virtual environments (VEs) are a relatively new type of human-computer interface in which users perceive and act in a three-dimensional world. The designers of such systems cannot rely solely on design guidelines for traditional two-dimensional interfaces, so usability evaluation is crucial for VEs. We present an overview of VE usability evaluation. First, we discuss some of the issues that differentiate VE usability evaluation from the evaluation of traditional user interfaces such as GUIs. We then review VE evaluation methods currently in use and discuss a simple classification space for VE usability evaluation methods. This classification space provides a structured means for comparing evaluation methods according to three key characteristics: involvement of representative users, context of evaluation, and types of results produced. To illustrate these concepts, we compare two existing evaluation approaches: testbed evaluation [Bowman, Johnson, & Hodges, 1999] and sequential evaluation [Gabbard, Hix, & Swan, 1999]. We conclude by presenting novel ways to effectively link these two approaches to VE usability evaluation.
Towards a tool for the subjective assessment of speech system interfaces (SASSI)
Applications of speech recognition are now widespread, but user-centred evaluation methods are necessary to ensure their success. Objective evaluation techniques are fairly well established, but previous subjective techniques have been unstructured and unproven. This paper reports on the first stage of the development of a questionnaire measure for the Subjective Assessment of Speech System Interfaces (SASSI). The aim of the research programme is to produce a valid, reliable and sensitive measure of users' subjective experiences with speech recognition systems. Such a technique could make an important contribution to theory and practice in the design and evaluation of speech recognition systems according to best human factors practice. A prototype questionnaire was designed, based on established measures for evaluating the usability of other kinds of user interface and on a review of the research literature on speech system design. It consisted of 50 statements with which respondents rated their level of agreement. The questionnaire was given to users of four different speech applications, and an Exploratory Factor Analysis of 214 completed questionnaires was conducted. This suggested the presence of six main factors in users' perceptions of speech systems: System Response Accuracy, Likeability, Cognitive Demand, Annoyance, Habitability and Speed. The six factors have face validity and a reasonable level of statistical reliability. The findings form a useful theoretical and practical basis for the subjective evaluation of any speech recognition interface. However, further work is recommended to establish the validity and sensitivity of the approach before a final tool can be produced which warrants general use.
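As an illustrative sketch only (not the SASSI study's own analysis), an exploratory factor analysis of 50-item questionnaire responses can be run with scikit-learn. The synthetic respondent data below is a stand-in for the 214 completed questionnaires; the six-factor structure is imposed for the example:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items, n_factors = 214, 50, 6

# Synthetic stand-in for questionnaire data: latent factor scores
# mixed into item-level agreement ratings, plus response noise.
scores = rng.normal(size=(n_respondents, n_factors))
loadings = rng.normal(size=(n_factors, n_items))
responses = scores @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(responses)

# Each row of components_ holds one extracted factor's loadings on the 50 items;
# inspecting which items load highly on a factor is what motivates labels
# such as "Likeability" or "Cognitive Demand" in a real study.
print(fa.components_.shape)  # (6, 50)
```

In practice the factor count would be chosen from the data (e.g. via eigenvalues or scree inspection) rather than fixed in advance as it is here.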
Digital service analysis and design: the role of process modelling
Digital libraries are evolving from content-centric systems to person-centric systems. Emergent services are interactive and multidimensional, and the associated systems multi-tiered and distributed. A holistic perspective is essential to their effective analysis and design, for beyond technical considerations there are complex social, economic, organisational, and ergonomic requirements and relationships to consider. Such a perspective cannot be gained without direct user involvement, yet evidence suggests that development teams may be failing to effectively engage with users, relying instead on requirements derived from anecdotal evidence or prior experience. In such instances, there is a risk that services might be well designed but functionally useless. This paper highlights the role of process modelling in gaining such a perspective. Process modelling challenges, approaches, and success factors are considered and discussed with reference to a recent evaluation of the usability and usefulness of a UK National Health Service (NHS) digital library. Reflecting on lessons learnt, recommendations are made regarding appropriate process modelling approaches and their application.
Spott: on-the-spot e-commerce for television using deep learning-based video analysis techniques
Spott is an innovative second-screen mobile multimedia application which offers viewers relevant information on objects (e.g., clothing, furniture, food) they see and like on their television screens. The application enables interaction between TV audiences and brands, so producers and advertisers can offer potential consumers tailored promotions, e-shop items, and/or free samples. In line with current views on innovation management, the technological excellence of the Spott application is coupled with iterative user involvement throughout the entire development process. This article discusses both of these aspects and how they impact each other. First, we focus on the technological building blocks that facilitate the (semi-)automatic interactive tagging of objects in video streams. The majority of these building blocks make extensive use of novel, state-of-the-art deep learning concepts and methodologies. We show how these deep-learning-based video analysis techniques facilitate video summarization, semantic keyframe clustering, and (similar-)object retrieval. Second, we provide insights into the user tests that were performed to evaluate and optimize the application's user experience. The lessons learned from these open field tests have already been essential input to the technology development and will further shape future modifications to the Spott application.
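Semantic keyframe clustering of the kind mentioned above can be sketched with off-the-shelf tools. This is a hedged illustration rather than Spott's actual pipeline: the random vectors stand in for deep frame embeddings that a pretrained CNN would produce, and the cluster count of 8 is an arbitrary choice for the example:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Stand-in for embeddings of 300 keyframes (e.g. 512-d CNN features);
# in a real pipeline these would come from a pretrained network.
embeddings = rng.normal(size=(300, 512))

# Group semantically similar keyframes by clustering their embeddings.
n_clusters = 8
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)

# Pick one representative keyframe per cluster: the frame whose
# embedding lies closest to the cluster centroid.
reps = []
for c in range(n_clusters):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
    reps.append(int(members[dists.argmin()]))

print(len(reps))  # 8
```

The representative frames selected this way could then serve as the summary thumbnails a viewer browses, with object tags attached per cluster rather than per frame.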