456 research outputs found

    The use of analytical models in human-computer interface design

    Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.

    Scoping analytical usability evaluation methods: A case study

    Analytical usability evaluation methods (UEMs) can complement empirical evaluation of systems: for example, they can often be used earlier in design and can provide accounts of why users might experience difficulties, as well as what those difficulties are. However, their properties and value are only partially understood. One way to improve our understanding is by detailed comparisons using a single interface or system as a target for evaluation, but we need to look deeper than simple problem counts: we need to consider what kinds of accounts each UEM offers, and why. Here, we report on a detailed comparison of eight analytical UEMs. These eight methods were applied to a robotic arm interface, and the findings were systematically compared against video data of the arm in use. The usability issues that were identified could be grouped into five categories: system design, user misconceptions, conceptual fit between user and system, physical issues, and contextual ones. Other possible categories, such as user experience, did not emerge in this particular study. With the exception of Heuristic Evaluation, which supported a range of insights, each analytical method was found to focus attention on just one or two categories of issues. Two of the three "home-grown" methods (Evaluating Multimodal Usability and Concept-based Analysis of Surface and Structural Misfits) were found to occupy particular niches in the space, whereas the third (Programmable User Modeling) did not. This approach has identified commonalities and contrasts between methods and provided accounts of why a particular method yielded the insights it did. Rather than considering measures such as problem count or thoroughness, this approach has yielded insights into the scope of each method.
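
    As a rough illustration of the kind of record-keeping such a comparison implies, the sketch below tags usability issues with the five categories named in the abstract. The class, field names, and the example entry are invented placeholders to show the structure only; they are not findings or data from the study.

```python
# Illustrative structure for recording usability issues by the five categories the
# study identified. The example issue below is a hypothetical placeholder.
from dataclasses import dataclass

CATEGORIES = (
    "system design",
    "user misconceptions",
    "conceptual fit between user and system",
    "physical issues",
    "contextual issues",
)

@dataclass
class UsabilityIssue:
    method: str        # which analytical UEM surfaced the issue
    category: str      # one of CATEGORIES
    description: str

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

# Hypothetical example entry (not a result from the paper).
issue = UsabilityIssue(
    method="Heuristic Evaluation",
    category="physical issues",
    description="control layout requires an awkward reach (placeholder example)",
)
```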

    Updated Goals, Operators, Methods, and Selection Rules (GOMS) with Touch Screen Operations for Quantitative Analysis of User Interfaces

    When developing applications, the User Interface (UI) is considered an important and integral component of any application. With a badly designed UI, users are less likely to use the application, leading to low adoption rates, which is not desirable in any application development setting. The process of developing a good, intuitive, and streamlined UI for users is complex and requires many processes and experts from many fields to contribute. When evaluating potential UI designs, there are many attributes and features that could be examined from a qualitative and/or quantitative standpoint. The Goals, Operators, Methods, and Selection Rules (GOMS) model is an approach that has been used in this area. With the Keystroke-Level Model (KLM) extension, it is possible to quantitatively estimate the time requirement or efficiency of a UI for completing different tasks with minimal effort, and it has been adopted in many GUI improvement projects. Due to this usefulness, extensions to the GOMS model have been proposed over the years, including extensions to account for motion control interfaces. Although the GOMS model has been useful for quantitatively evaluating UI designs, the main input device of modern smartphones and tablets is the touch screen, which is different in nature from traditional computer inputs. These differences mean that the GOMS model with KLM is ill-suited for touch-screen-based applications. This research paper addresses the issue by proposing extensions to the GOMS model to account for UIs that are based on touch screen input devices.
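
    The KLM idea the abstract relies on is that a task's execution time is estimated by summing per-operator times. The sketch below shows that mechanism; the classic K/P/H/M constants follow Card, Moran, and Newell's published estimates, while the touch operators and their timings are assumptions added purely for illustration and are not the extensions proposed in the paper.

```python
# Minimal Keystroke-Level Model (KLM) estimator.
# K, P, H, M use the standard published KLM constants; T (tap) and S (swipe)
# are assumed placeholder values for touch gestures, not the paper's proposal.
OPERATOR_TIMES = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point with a mouse
    "H": 0.40,  # home hands between devices
    "M": 1.35,  # mental preparation
    "T": 0.30,  # tap on a touch screen (assumed)
    "S": 0.70,  # swipe gesture (assumed)
}

def estimate_task_time(operator_sequence):
    """Return the predicted execution time (seconds) for a sequence of operators."""
    return sum(OPERATOR_TIMES[op] for op in operator_sequence)

# Example: mentally prepare, tap a menu, swipe to scroll, prepare again, tap the target.
if __name__ == "__main__":
    sequence = ["M", "T", "S", "M", "T"]
    print(f"Estimated task time: {estimate_task_time(sequence):.2f} s")
```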

    GTA: Groupware task analysis - Modeling complexity

    The task analysis methods discussed in this presentation stem from Human-Computer Interaction (HCI) and Ethnography (as applied to the design of Computer Supported Cooperative Work, CSCW), different disciplines that are often considered conflicting approaches when applied to the same design problems. Both approaches have their strengths and weaknesses, and an integration of them does add value to the early stages of design of cooperation technology. In order to develop an integrated method for groupware task analysis (GTA), a conceptual framework is presented that allows a systematic perspective on complex work phenomena. The framework features a triple focus, considering (a) people, (b) work, and (c) the situation. Integrating various task-modeling approaches requires vehicles for making design information explicit, for which an object-oriented formalism will be suggested. GTA consists of a method and framework that have been developed during practical design exercises. Examples from some of these cases will illustrate our approach.
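
    To make the triple focus concrete, the sketch below models people, work, and situation as related objects. The class names, attributes, and the example task are illustrative assumptions in the spirit of an object-oriented task formalism; they are not the notation defined by the GTA authors.

```python
# Minimal object-oriented sketch of a task model with GTA's triple focus:
# people (Agent), work (Task), and situation (EnvironmentObject).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Agent:                 # "people": a person or role involved in the work
    name: str
    roles: List[str] = field(default_factory=list)

@dataclass
class EnvironmentObject:     # "situation": an artifact or resource in the work setting
    name: str

@dataclass
class Task:                  # "work": a unit of work, possibly decomposed into subtasks
    name: str
    goal: str
    performed_by: List[Agent] = field(default_factory=list)
    uses: List[EnvironmentObject] = field(default_factory=list)
    subtasks: List["Task"] = field(default_factory=list)

# Example: a cooperative "review document" task shared by two roles.
author = Agent("Alice", roles=["author"])
reviewer = Agent("Bob", roles=["reviewer"])
document = EnvironmentObject("shared document")
review = Task("review document", goal="approve the document",
              performed_by=[author, reviewer], uses=[document])
```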

    The use of analytical models in human-computer interface design

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers with rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. The paper focuses on computational, analytical models, such as the GOMS model, rather than on less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
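
    The GOMS family mentioned above describes a task as goals achieved by methods, chosen by selection rules and expanded into operators. The sketch below shows that structure on a toy example; the dictionary layout, the "delete file" goal, and its methods are assumptions for illustration, not content from the paper.

```python
# Illustrative GOMS-style task description: a goal is achieved by one of several
# methods, a selection rule picks between them, and each method expands into
# operators. Example content is invented for illustration only.

GOAL_DELETE_FILE = {
    "goal": "delete file",
    "methods": {
        "drag-to-trash": ["locate file icon", "drag icon to trash"],
        "keyboard":      ["select file", "press delete key", "confirm dialog"],
    },
    # Selection rule: a simple predicate choosing between the two methods.
    "select": lambda ctx: "keyboard" if ctx.get("hands_on_keyboard") else "drag-to-trash",
}

def expand(goal, context):
    """Return the operator sequence a simulated user would execute for this goal."""
    method = goal["select"](context)
    return goal["methods"][method]

print(expand(GOAL_DELETE_FILE, {"hands_on_keyboard": True}))
# -> ['select file', 'press delete key', 'confirm dialog']
```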

    An Overview of Personalized Recommendation System to Improve Web Navigation

    We present a new personalized recommendation system, in which each user's searches are handled according to their interests, based on a ranking or preference method. The system also maintains logs that record each user's sessions and retrieves exactly the data the user requires, by fetching data already stored in the database. Web server logs maintain a history of page results in a log file that automatically records the list of activities performed by users. To extract data according to the user's previous searches, we use a stemming algorithm, a process in which the exact, meaningful words are extracted from the URL. Because of this process, the user's search time is reduced; it also improves the quality of web navigation and overcomes the limitations of the existing system. In the proposed system we extract user behaviour from web server logs in the actual process, whereas in the anticipated system user behaviour is modelled with the help of a cognitive user model, and we compare the two usage processes. The data produced by this comparison can help users discover usability issues and take actions to improve usability. In the anticipated usage, the cognitive user model can be used to simulate or predict human behaviour and performance on tasks. Finally, the system is executed using a top-k ranking algorithm. The advantages of this system are accuracy and better processing speed. The user's convenience lies in the ease of navigation, which helps users interact with the interface.
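
    A rough sketch of the pipeline described above follows: pull terms out of URLs from a web server log, reduce them with a stemmer, build a per-user interest profile, and return the top-k candidate pages by score. The log format, the crude suffix-stripping stemmer, and the overlap-based scoring are simplifying assumptions, not the paper's actual algorithms.

```python
# Toy personalized-recommendation pipeline: stem URL terms from a user's log,
# build an interest profile, and rank candidate pages top-k by profile overlap.
import re
from collections import Counter

def simple_stem(word):
    """Crude suffix-stripping stand-in for a real stemming algorithm (e.g., Porter)."""
    for suffix in ("ing", "ers", "er", "es", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def terms_from_url(url):
    return [simple_stem(t.lower()) for t in re.split(r"[/\-_.?=&]+", url) if t.isalpha()]

def build_profile(user_log_urls):
    """Interest profile: frequency of stemmed terms over the user's past sessions."""
    profile = Counter()
    for url in user_log_urls:
        profile.update(terms_from_url(url))
    return profile

def top_k_pages(profile, candidate_urls, k=5):
    """Rank candidate pages by overlap with the user's interest profile."""
    def score(url):
        return sum(profile[t] for t in terms_from_url(url))
    return sorted(candidate_urls, key=score, reverse=True)[:k]

# Example usage with a toy log.
history = ["/sports/cricket-scores", "/sports/cricket-players", "/news/weather"]
candidates = ["/sports/cricket-schedule", "/finance/markets", "/news/weather-today"]
print(top_k_pages(build_profile(history), candidates, k=2))
```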

    On the role of domain ontologies in the design of domain-specific visual modeling languages

    Domain-Specific Visual Modeling Languages should provide notations and abstractions that suitably support problem solving in well-defined application domains. From their users' perspective, the language's modeling primitives must be intuitive and expressive enough to capture all intended aspects of domain conceptualizations. Over the years, formal and explicit representations of domain conceptualizations have been developed as domain ontologies. In this paper, we show how the design of these languages can benefit from conceptual tools developed by the ontology engineering community.
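
    One way an ontology can inform language design is by checking how well the language's constructs cover the domain's concepts. The sketch below flags concepts with no construct and constructs mapped to several concepts; the concept set, construct names, and mapping are invented for illustration and may differ from the conceptual tools the paper proposes.

```python
# Illustrative coverage check of a visual language's constructs against a domain ontology.
domain_concepts = {"Patient", "Physician", "Diagnosis", "Treatment"}

# Mapping from language constructs to the ontology concepts they represent (assumed).
construct_to_concepts = {
    "PersonNode":  {"Patient", "Physician"},   # one construct for two concepts (overload)
    "FindingNode": {"Diagnosis"},
}

covered = set().union(*construct_to_concepts.values())
missing = domain_concepts - covered                                  # concepts with no notation
overloaded = [c for c, s in construct_to_concepts.items() if len(s) > 1]

print("Concepts without a construct:", missing)      # {'Treatment'}
print("Overloaded constructs:", overloaded)          # ['PersonNode']
```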

    Design & Development of Web-based Information Systems for Port Operations

    The Effects of Design on Performance for Data-based and Task-based Sonification Designs: Evaluation of a Task-based Approach to Sonification Design for Surface Electromyography

    The goal of this work was to evaluate a task-analysis-based approach to sonification design for surface electromyography (sEMG) data. A sonification is a type of auditory display that uses sound to convey information about data to a listener. Sonifications work by mapping changes in a parameter of sound (e.g., pitch) to changes in data values, and they have been shown to be useful in biofeedback and movement analysis applications. However, research that investigates and evaluates sonifications has been difficult due to the highly interdisciplinary nature of the field. Progress has been made, but to date many sonification designs have not been empirically evaluated and have been described as annoying, confusing, or fatiguing. Sonification design decisions have also often been based on characteristics of the data being sonified, and not on the listener's data analysis task. The hypothesis for this thesis was that focusing on the listener's task when designing sonifications could result in sonifications that were more readily understood and less annoying to listen to. Task analysis methods have been developed in fields like Human Factors and Human-Computer Interaction, and their purpose is to break tasks down into their most basic elements so that products and software can be developed to meet user needs. Applying this approach to sonification design, a type of task analysis focused on Goals, Operators, Methods, and Selection rules (GOMS) was used to analyze two sEMG data evaluation tasks and to identify design criteria that a sonification would need to meet in order to allow a listener to perform these tasks; two sonification designs were then created to facilitate accomplishment of these tasks. These two Task-based sonification designs were then empirically compared to two Data-based sonification designs. The Task-based designs resulted in better listener performance for both sEMG data evaluation tasks, demonstrating the effectiveness of the Task-based approach and suggesting that sonification designers may benefit from adopting a task-based approach to sonification design.
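
    The general mechanism the abstract describes, mapping changes in data values onto a sound parameter such as pitch, is sketched below for a toy sEMG amplitude series. The frequency range, linear scaling, and sine-tone rendering are assumptions for illustration; they are not the Task-based or Data-based designs evaluated in the thesis.

```python
# Illustrative parameter-mapping sonification: map sEMG amplitude samples to pitch.
import math

def amplitude_to_frequency(value, data_min, data_max, f_low=220.0, f_high=880.0):
    """Linearly map a data value onto a frequency range (Hz)."""
    if data_max == data_min:
        return f_low
    t = (value - data_min) / (data_max - data_min)
    return f_low + t * (f_high - f_low)

def sonify(samples, sample_duration=0.1, rate=8000):
    """Render each data sample as a short sine tone; returns raw audio samples."""
    lo, hi = min(samples), max(samples)
    audio = []
    for v in samples:
        freq = amplitude_to_frequency(v, lo, hi)
        n = int(sample_duration * rate)
        audio.extend(math.sin(2 * math.pi * freq * i / rate) for i in range(n))
    return audio

# Example: a toy burst of muscle activity rendered as a rising-then-falling pitch.
emg = [0.05, 0.1, 0.4, 0.8, 0.6, 0.2, 0.05]
print(len(sonify(emg)), "audio samples generated")
```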

    Quantification of Reef Fish Assemblages: A Comparison of Several In Situ Methods

    On two coral reef biotopes off St. Croix in the U.S. Virgin Islands, a total of 41 in situ visual assessments of reef fish assemblages were conducted using six different methods. These methods included: transect, quadrat, random count, cinetransect, cineturret, and still photography. The dependent variables (numbers of species and species diversity) were examined for possible influence by the independent sample variables (time of day, amount of observation time, reef site, and census method). Cluster analyses indicated that all methods gather data which allow community separation based on the sample variables. However, methods which tend to produce more information, in terms of more species and numbers of individuals, tend to recognize these sample variables more distinctly. Census assessment methods strongly influenced the dependent variables. It is suspected that the amount of time employed for each method may be the most important feature influencing in situ reef fish assemblage assessments.
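
    The abstract treats species diversity as a dependent variable but does not state which index was used; the Shannon index shown below is a common choice and is included only as an illustration of how such a value is computed from per-species counts.

```python
# Shannon diversity index H' = -sum(p_i * ln p_i), where p_i is the proportion of
# individuals belonging to species i. Example counts are invented for illustration.
import math

def shannon_diversity(counts):
    """Compute H' from a list of per-species individual counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Example: counts of individuals per species from one visual census.
print(round(shannon_diversity([30, 12, 8, 5, 1]), 3))
```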