Methodological development
Book description: Human-Computer Interaction draws on the fields of computer science, psychology, cognitive science, and organisational and social sciences in order to understand how people use and experience interactive technology. Until now, researchers have been forced to return to the individual subjects to learn about research methods and how to adapt them to the particular challenges of HCI. This is the first book to provide a single resource through which a range of commonly used research methods in HCI are introduced. Chapters are authored by internationally leading HCI researchers who use examples from their own work to illustrate how the methods apply in an HCI context. Each chapter also contains key references to help researchers find out more about each method as it has been used in HCI. Topics covered include experimental design, use of eyetracking, qualitative research methods, cognitive modelling, how to develop new methodologies, and writing up your research.
Critical Success Factors for Positive User Experience in Hotel Websites: Applying Herzberg's Two Factor Theory for User Experience Modeling
This research presents the development of a critical success factor matrix for increasing positive user experience of hotel websites, based upon user ratings. First, a number of critical success factors for web usability were identified through an initial literature review. Second, hotel websites were surveyed in terms of the critical success factors identified through that review. Third, Herzberg's motivation theory was applied to the user ratings, and the critical success factors were categorized into two areas. Finally, the critical success factor matrix was developed using the two resulting sets of data.
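The categorisation step in Herzberg's two-factor theory splits factors into "hygiene" factors (whose absence causes dissatisfaction but whose presence merely avoids it) and "motivators" (whose presence actively increases satisfaction). A minimal sketch of that split, assuming a 1-5 rating scale; the factor names, rating values, and threshold logic below are invented for illustration and are not taken from the article:

```python
def categorise(mean_with: float, mean_without: float, neutral: float = 3.0) -> str:
    """Herzberg-style split on a 1-5 rating scale:
    hygiene   - absence drags ratings below neutral, presence only restores them;
    motivator - presence lifts ratings above neutral, absence leaves them near neutral."""
    if mean_without < neutral and mean_with <= neutral:
        return "hygiene"
    if mean_with > neutral and mean_without >= neutral:
        return "motivator"
    return "mixed"

# Invented example factors: mean rating of sites that have / lack each one.
factors = {
    "working booking form": (3.0, 1.8),  # expected by users: its absence hurts
    "virtual room tours":   (4.4, 3.1),  # delights when present
}
for name, (with_f, without_f) in factors.items():
    print(name, "->", categorise(with_f, without_f))
```

The same comparison could of course be driven by real per-site survey data rather than hand-picked means; the point is only that the classification rests on how ratings shift when a factor is present versus absent.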
Structured Inspections of Search Interfaces: A Practitioner's Guide
In this paper we present a practitioner's guide on how to apply a new inspection framework that evaluates search interfaces for their support of different searcher types. Vast amounts of money are being invested in search, so it is becoming increasingly important to identify design problems early, while it is relatively cheap to rectify them. The inspection method presented here can be applied quickly to early prototypes as well as existing systems, and goes beyond other inspection methods, like Cognitive Walkthroughs, to produce rich analyses, including the support provided for different search tactics and user types. The guide is presented as a detailed example, assessing a previously unevaluated search interface, the Tabulator, and so also provides design recommendations for improving it. We conclude with a summary of the benefits of the evaluation framework and discuss our plans for future enhancements.
Towards a tool for the subjective assessment of speech system interfaces (SASSI)
Applications of speech recognition are now widespread, but user-centred evaluation methods are necessary to ensure their success. Objective evaluation techniques are fairly well established, but previous subjective techniques have been unstructured and unproven. This paper reports on the first stage of the development of a questionnaire measure for the Subjective Assessment of Speech System Interfaces (SASSI). The aim of the research programme is to produce a valid, reliable, and sensitive measure of users' subjective experiences with speech recognition systems. Such a technique could make an important contribution to theory and practice in the design and evaluation of speech recognition systems according to best human factors practice. A prototype questionnaire was designed, based on established measures for evaluating the usability of other kinds of user interface, and on a review of the research literature on speech system design. This consisted of 50 statements with which respondents rated their level of agreement. The questionnaire was given to users of four different speech applications, and Exploratory Factor Analysis of 214 completed questionnaires was conducted. This suggested the presence of six main factors in users' perceptions of speech systems: System Response Accuracy, Likeability, Cognitive Demand, Annoyance, Habitability, and Speed. The six factors have face validity and a reasonable level of statistical reliability. The findings form a useful theoretical and practical basis for the subjective evaluation of any speech recognition interface. However, further work is recommended, to establish the validity and sensitivity of the approach, before a final tool can be produced that warrants general use.
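The analysis step described above, reducing many questionnaire items to a handful of latent factors, can be sketched with a simplified stand-in for full exploratory factor analysis: a principal-component extraction via SVD. The data below is synthetic and random (214 respondents, 50 items, 6 planted latent factors, mirroring the study's dimensions only for illustration); it shows the mechanics, not the SASSI findings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items, n_factors = 214, 50, 6

# Synthetic responses: latent factor scores times loadings, plus noise.
latent = rng.normal(size=(n_respondents, n_factors))
true_loadings = rng.normal(size=(n_factors, n_items))
responses = latent @ true_loadings + rng.normal(scale=0.5,
                                                size=(n_respondents, n_items))

# Centre each item, then extract the leading components as estimated factors.
centred = responses - responses.mean(axis=0)
_, singular_values, vt = np.linalg.svd(centred, full_matrices=False)

# Share of variance captured by the first six components: with six real
# latent factors underneath, it should dominate the total.
explained = (singular_values[:n_factors] ** 2).sum() / (singular_values ** 2).sum()
estimated_loadings = vt[:n_factors]  # shape (6, 50): item loadings per factor
print(estimated_loadings.shape, round(explained, 2))
```

In practice, questionnaire studies like this one use dedicated EFA routines with rotation (e.g. varimax) and reliability statistics such as Cronbach's alpha, rather than raw SVD; this sketch only illustrates the dimensionality-reduction idea.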
A research review of quality assessment for software
Measures were recommended to assess the quality of software submitted to the AdaNet program. The quality factors that are important to software reuse are explored, and methods of evaluating those factors are discussed. Quality factors important to software reuse are: correctness, reliability, verifiability, understandability, modifiability, and certifiability. Certifiability is included because the documentation of many factors about a software component, such as its efficiency, portability, and development history, constitutes a class of factors important to some users, not important at all to others, and impossible for AdaNet to distinguish between a priori. The quality factors may be assessed in different ways. There are a few quantitative measures which have been shown to indicate software quality. However, it is believed that there exist many factors that indicate quality but have not been empirically validated, due to their subjective nature. These subjective factors are characterized by the way in which they support the software engineering principles of abstraction, information hiding, modularity, localization, confirmability, uniformity, and completeness.
Reviewing and extending the five-user assumption: A grounded procedure for interaction evaluation
© ACM, 2013. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Computer-Human Interaction (TOCHI), Vol. 20, Iss. 5 (November 2013), http://doi.acm.org/10.1145/2506210

The debate concerning how many participants represents a sufficient number for interaction testing is well established and long-running, with prominent contributions arguing that five users provide a good benchmark when seeking to discover interaction problems. We argue that adoption of five users in this context is often done with little understanding of the basis for, or implications of, the decision. We present an analysis of relevant research to clarify the meaning of the five-user assumption and to examine the way in which the original research that suggested it has been applied. This includes its blind adoption and application in some studies, and complaints about its inadequacies in others. We argue that the five-user assumption is often misunderstood, not only in the field of Human-Computer Interaction, but also in fields such as medical device design, or in business and information applications. The analysis that we present allows us to define a systematic approach for monitoring the sample discovery likelihood, in formative and summative evaluations, and for gathering information in order to make critical decisions during interaction testing, while respecting the aim of the evaluation and the allotted budget. This approach, which we call the 'Grounded Procedure', is introduced and its value argued. The MATCH programme (EPSRC Grants: EP/F063822/1, EP/G012393/1).
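The "sample discovery likelihood" at the heart of this debate is usually modelled with the binomial formula popularised by Nielsen and Landauer: the probability that a problem with per-user detection rate p is seen at least once across n independent users is 1 - (1 - p)^n. A minimal sketch, using the often-quoted illustrative average detection rate p = 0.31 (an assumption from that earlier literature, not a figure from this article):

```python
def discovery_likelihood(p: float, n: int) -> float:
    """Probability that a problem with per-user detection rate p
    is observed at least once across n independent test users."""
    return 1.0 - (1.0 - p) ** n

# With p = 0.31, five users surface roughly 84% of problems at that
# detection rate, which is the arithmetic behind the five-user benchmark.
for n in range(1, 6):
    print(n, round(discovery_likelihood(0.31, n), 3))
```

The article's critique turns on the fact that p varies widely across problems, systems, and user populations, so a single benchmark n derived from one assumed p cannot be universal; monitoring the likelihood during testing, as the Grounded Procedure proposes, avoids committing to a fixed sample size up front.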
A systematic review of user experience evaluation methods for informational websites
This thesis consists of a systematic review, presented as a scientific article, of the evaluation methods currently employed to assess user experience on informational websites. The research comprises a literature review to identify the methods, criteria, and tools used to evaluate user experience on websites, according to the definitions given for both terms in ISO 9241. The studies considered for the review were surveys, case studies, comparative studies, and experiments that include a description of the methodology applied. The article was published by Springer as part of the participation in the "HCI International 2017" event, held in Vancouver (Canada) in 2017. Research work.