
    Acquisition and production of skilled behavior in dynamic decision-making tasks

    This status report consists of a thesis entitled 'Ecological Task Analysis: A Method for Display Enhancements.' Previous analysis processes used for display interface design or enhancement have risked failing to improve user performance because they produce only a sequential listing of user tasks. Adopting an ecological approach to task analysis, however, can yield the modeling of an unpredictable and variable task domain that is required to improve user performance. Kirlik has proposed an Ecological Task Analysis framework designed for this purpose. The purpose of this research is to measure the framework's effectiveness at enhancing display interfaces so as to improve user performance. Following the proposed framework, an ecological task analysis of experienced users of a complex and dynamic laboratory task, Star Cruiser, was performed. Based on this analysis, display enhancements were proposed and implemented. An experiment was then conducted to compare this new version of Star Cruiser to the original. By measuring user performance at different tasks, it was determined that during early sessions, use of the enhanced display contributed to better user performance than the original display. Furthermore, the results indicate that the enhancements proposed as a result of the ecological task analysis affected user performance differently depending on whether they aid in the selection of a possible action or in the performance of an action. Generalizations of these findings to larger, more complex systems were avoided since the analysis was performed on only this one particular system.

    User-centered visual analysis using a hybrid reasoning architecture for intensive care units

    One problem pertaining to Intensive Care Unit information systems is that, in some cases, a very dense display of data can result. To ensure the overview and readability of the increasing volumes of data, special features are required (e.g., data prioritization, clustering, and selection mechanisms) together with analytical methods (e.g., temporal data abstraction, principal component analysis, and detection of events). This paper addresses the problem of improving the integration of the visual and analytical methods applied to medical monitoring systems. We present a knowledge- and machine learning-based approach to support the knowledge discovery process with appropriate analytical and visual methods. Its potential benefit lies in the development of user interfaces for intelligent monitors that can assist with the detection and explanation of new, potentially threatening medical events. The proposed hybrid reasoning architecture provides an interactive graphical user interface for adjusting the parameters of the analytical methods based on the user's task at hand. The action sequences performed by the user on the graphical user interface are consolidated in a dynamic knowledge base with specific hybrid reasoning that integrates symbolic and connectionist approaches. These recorded sequences of expert knowledge acquisition can make knowledge emergence more efficient during similar experiences and positively impact the monitoring of critical situations. The graphical user interface, incorporating a user-centered visual analysis, is exploited to facilitate the natural and effective representation of clinical information for patient care.
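
    As a rough illustration of the consolidation step described above, the sketch below records GUI action sequences per clinical-situation signature and replays the most recent matching sequence as a parameter suggestion. All names (Action, KnowledgeBase, the signature tuple) are illustrative; the paper's architecture integrates symbolic and connectionist reasoning, which this toy dictionary lookup does not attempt.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Action:
    """One user adjustment made through the monitoring GUI."""
    widget: str   # e.g. "event-threshold" or "pca-components"
    value: float


@dataclass
class KnowledgeBase:
    """Consolidates action sequences per situation signature."""
    episodes: dict[tuple, list[list[Action]]] = field(default_factory=dict)

    def record(self, situation: tuple, actions: list[Action]) -> None:
        self.episodes.setdefault(situation, []).append(actions)

    def suggest(self, situation: tuple) -> list[Action] | None:
        """Most recent sequence seen in a matching situation, so the GUI
        can pre-set analytical parameters during a similar experience."""
        past = self.episodes.get(situation)
        return past[-1] if past else None


kb = KnowledgeBase()
icu = ("cardiac", "ecg+spo2", "high-alarm")  # coarse situation signature
kb.record(icu, [Action("event-threshold", 0.8), Action("pca-components", 3)])
print(kb.suggest(icu))  # replay expert parameter choices in a similar case
```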

    Multimedia Interface In Smart Home Monitoring

    Smart home environment monitoring systems will incorporate more and more multimedia information and technology, bringing a sense of visual reality into the control room and providing more effective communication using a richer vocabulary of media. This is a prototype system in which approximately the top 80% of the window is associated with the console-based interface and the bottom 20% with the command-based interface. The goal of this Multimedia Interface (MUI) prototype is, first, to convey as much information as possible in the main screen display, without forcing the user to burrow down through different layers of screens or menus; secondly, to facilitate user-initiated changes to the system with minimal mouse/keyboard (console) or keyboard (command-based) action on the user's part; and lastly, to facilitate rapid learning on the user's part, and to couple the visual feedback of both systems so that command-based system changes are indicated on the console-based system and vice versa. The console-based interface is activated by clicking on the appropriate widget: buttons in most cases, with check boxes and radio buttons for a few systems. The prototype command-based interface includes an edit box at the extreme bottom of the screen, where the user can type a command. The user then clicks on the "Process Command Line" button to execute the command. Immediately above the edit box is a read-only list box in which the user's command is duplicated, followed by the program's response. The results are based on analytical results, questionnaire analysis, and the console- and command-based interface results. They indicate that the prototype interface is very easy to use, and that no major changes need to be made in order to increase learnability. The analysis also showed that open standards and security are priorities in designing the multimedia interface of a smart house.
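
    A minimal sketch of the coupled-feedback idea, using Tkinter as an assumed stand-in for the prototype's toolkit: one shared device state backs both interfaces, so a typed command updates the console widget and a widget click is echoed in the command log. The command vocabulary ("light on"/"light off") is invented for illustration.

```python
import tkinter as tk

root = tk.Tk()
root.title("Smart home MUI sketch")

light_on = tk.BooleanVar(value=False)  # shared state behind both interfaces
log = tk.Listbox(root, height=6)       # read-only command/response history


def echo(line):
    log.insert(tk.END, line)


def on_toggle():
    # Console action -> reflected in the command-based log.
    echo(f"> light {'on' if light_on.get() else 'off'} (set via console)")


def process_command():
    # Command action -> reflected in the console widget.
    cmd = entry.get().strip().lower()
    echo(f"> {cmd}")
    if cmd in ("light on", "light off"):
        light_on.set(cmd.endswith("on"))  # updates the checkbox too
        echo(f"  ok: light is now {'on' if light_on.get() else 'off'}")
    else:
        echo("  error: unknown command")
    entry.delete(0, tk.END)


tk.Checkbutton(root, text="Light", variable=light_on,
               command=on_toggle).pack(anchor="w")
log.pack(fill="x")
entry = tk.Entry(root)
entry.pack(fill="x")
tk.Button(root, text="Process Command Line",
          command=process_command).pack()
root.mainloop()
```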

    A Collaborative Augmented Reality System Based On Real Time Hand Gesture Recognition

    Human-computer interaction is a major issue in research and industry. In order to offer a way for untrained users to interact with computers more easily and efficiently, gesture-based interfaces have received increasing attention. A gesture-based interface provides the most effective means for non-verbal interaction. Various devices such as head-mounted displays and hand gloves can be used, but they may be cumbersome, limit the user's actions, and cause fatigue. This problem can be solved by real-time bare-hand gesture recognition for human-computer interaction using computer vision. Computer vision is becoming very popular nowadays since it can capture a lot of information at a very low cost. With this increasing popularity of computer vision there has been rapid development in the field of virtual reality, as it provides an easy and efficient virtual interface between human and computer. At the same time, much research is going on to provide a more natural interface for human-computer interaction with the power of computer vision. The most powerful and natural interface for human-computer interaction is the hand gesture. In this project we focus our attention on vision-based recognition of hand gestures for personal authentication, where a hand gesture is used as a password. Different hand gestures are used as passwords for different individuals.
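
    A minimal sketch of one common bare-hand pipeline consistent with the description above, assuming OpenCV: skin-color segmentation, largest-contour extraction, and Hu-moment shape matching against an enrolled "password" gesture. The HSV thresholds, match tolerance, and file name are illustrative assumptions, not values from the project.

```python
import cv2
import numpy as np

SKIN_LO = np.array([0, 40, 60], dtype=np.uint8)     # rough HSV skin range
SKIN_HI = np.array([25, 180, 255], dtype=np.uint8)


def hand_contour(image):
    """Segment skin-coloured pixels and return the largest contour."""
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LO, SKIN_HI)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None


def gestures_match(candidate, template, tol=0.15):
    """Hu-moment shape comparison; smaller scores mean more similar."""
    score = cv2.matchShapes(candidate, template, cv2.CONTOURS_MATCH_I1, 0.0)
    return score < tol


# Enrolment: contour of the user's "password" gesture (illustrative file).
enrolled = cv2.imread("enrolled_gesture.png")
template = hand_contour(enrolled) if enrolled is not None else None

# Authentication: grab one camera frame and compare its hand shape.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok and template is not None:
    probe = hand_contour(frame)
    verdict = probe is not None and gestures_match(probe, template)
    print("gesture accepted" if verdict else "gesture rejected")
```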

    Investigating graphical user interface usability on task sequence and display structure dependencies

    Designing Graphical User Interfaces (GUIs) requires the consideration of the relationship between task sequence requirements (sequences of operations arising from task structures and application constraints) and display structure (layout of the elements of the interface). The basic purpose was to understand the usability differences of the interfaces through efficiency, motor performance, and search performance. Thirty-two subjects performed experiments in four groups. The experiments differed in display structure and compatibility of task sequences. Subject mouse actions, mouse coordinates, and eye positions were recorded. The derived measures, click efficiency, mouse traversal, and eye visits to different areas of interest (namely the tool, object, and goal), were analyzed in a repeated measures factorial design with compatibility and display structure as the between-subjects factors and phase of learning as the within-subject factor. A significant interaction between compatibility and phase of learning (p < .01) was observed. Mouse traversal per unit time increased significantly (p < .05) across phases of learning. The phase of learning affected the number of eye visits for all groups. Compatibility had a significant (p < .005) effect on the average processing time during search. The results establish that the compatibility of the task sequence requirements with the display structure affects the performance of subjects and hence the usability of the interface. However, through learning, subject performance showed considerable improvement, and the effects of task sequence and display structure diminished at the final stages of user learning. Based on this evidence, a systemic structural activity approach was used to develop a model of human performance on the eye movement and mouse action data. This structural model of human performance is defined as an algorithm and can be used for estimating the complexity of task performance. In this study, only the assumptions behind the development of the model and its formulation are explained, as an application of the results of the study. The study hence served a dual purpose: understanding the compatibility of the task sequence with the interface display structure, and establishing eye and mouse movements as a viable tool for studying task performance at human-computer interfaces.
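
    The derived measures named above are straightforward to compute from logged data; the sketch below shows one plausible formulation of click efficiency, mouse traversal per unit time, and eye visits to areas of interest. The AOI rectangles and log formats are assumptions for illustration, not the study's actual instrumentation.

```python
import math

# Assumed screen regions for the three areas of interest (x1, y1, x2, y2).
AOIS = {"tool": (0, 0, 200, 600), "object": (200, 0, 600, 600),
        "goal": (600, 0, 800, 600)}


def click_efficiency(useful_clicks, total_clicks):
    """Fraction of clicks that advanced the task."""
    return useful_clicks / total_clicks if total_clicks else 0.0


def mouse_traversal_per_second(path, duration_s):
    """Total mouse path length divided by trial duration."""
    dist = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    return dist / duration_s


def aoi_visits(fixations):
    """Count eye visits per AOI (entries into a region, not samples)."""
    visits = {name: 0 for name in AOIS}
    last = None
    for x, y in fixations:
        for name, (x1, y1, x2, y2) in AOIS.items():
            if x1 <= x < x2 and y1 <= y < y2:
                if name != last:
                    visits[name] += 1
                last = name
                break
        else:
            last = None  # fixation outside every AOI ends the visit
    return visits


print(click_efficiency(14, 20))
print(mouse_traversal_per_second([(0, 0), (30, 40), (60, 80)], 2.0))
print(aoi_visits([(10, 10), (250, 50), (260, 60), (700, 100)]))
```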

    Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.
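
    A toy sketch of the three-entity arrangement described above (display, factory simulator, interface expert), with an in-process publish/subscribe bus standing in for the VEOS protocol layer; the topic names and message shapes are invented for illustration.

```python
from collections import defaultdict


class Bus:
    """Toy in-process stand-in for the VEOS messaging layer."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)


bus = Bus()


# Interface expert: maps raw user input to factory control commands.
def interface_expert(user_event):
    if user_event["type"] == "reach-into-view":
        bus.publish("factory.control",
                    {"command": "move", "object": user_event["object"]})


# Factory entity: simulates (or would forward to) the real factory.
def factory(cmd):
    bus.publish("world.update", {"object": cmd["object"], "state": "moved"})


# Display entity: renders camera views and reports user events.
def display(update):
    print(f"render: {update['object']} -> {update['state']}")


bus.subscribe("user.event", interface_expert)
bus.subscribe("factory.control", factory)
bus.subscribe("world.update", display)

bus.publish("user.event", {"type": "reach-into-view", "object": "log-crane"})
```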

    Achieving User Interface Heterogeneity in a Distributed Environment

    The introduction of distribution into the field of computing has enhanced the possibilities of information processing and interchange on scales which could not previously be achieved with stand-alone machines. However, the successful distribution of a process across a distributed system requires three problems to be considered: how the functionality of a process is distributed, how the data set on which the process works is distributed, and how the interface that allows the process to communicate with the outside world is distributed. The focus of the work in this paper lies in describing a model that attempts to provide a solution to the last of these problems. The model that has been developed allows the functionality of a process to be separated from, and to exist independently of, its interface, and employs user-interface-independent display languages to provide distributed and heterogeneous user interfaces to processes. This separation also facilitates access to a service from diverse platforms and can support user interface mobility and third-party application integration. The goals and advantages of this model are partially realised in a prototype that has been designed around the WWW and its associated protocols, and it is predicted how the model could be fully realised by adopting a modular and object-oriented approach, as advocated by the Java programming environment.
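
    A minimal sketch of the separation the model advocates: functionality lives in the process, while the interface is published as a toolkit-independent description that each platform renders with its own widgets. The JSON dialect below is an invented stand-in for the paper's display languages.

```python
import json


# Functionality lives in the process, independent of any interface.
def convert(amount, rate):
    return amount * rate


# The interface is published as data, not as toolkit code.
UI_DESCRIPTION = json.dumps({
    "title": "Currency converter",
    "inputs": [{"name": "amount", "type": "number"},
               {"name": "rate", "type": "number"}],
    "actions": [{"name": "convert", "label": "Convert"}],
})


def render_text_ui(description):
    """One possible client: render the description for a terminal.
    A GUI client would map the same description to native widgets."""
    ui = json.loads(description)
    lines = [ui["title"], "-" * len(ui["title"])]
    lines += [f"[{i['name']}: ____]" for i in ui["inputs"]]
    lines += [f"({a['label']})" for a in ui["actions"]]
    return "\n".join(lines)


print(render_text_ui(UI_DESCRIPTION))
print("result:", convert(10.0, 1.1))
```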

    SWiM: A Simple Window Mover

    As computers become more ubiquitous, traditional two-dimensional interfaces must be replaced with interfaces based on a three-dimensional metaphor. However, these interfaces must still be as simple and functional as their two-dimensional predecessors. This paper introduces SWiM, a new interface for moving application windows between various screens, such as wall displays, laptop monitors, and desktop displays, in a three-dimensional physical environment. SWiM was designed based on the results of initial "paper and pencil" user tests of three possible interfaces. The results of these tests led to a map-like interface where users select the destination display for their application from various icons. If the destination is a mobile display, it is not displayed on the map; instead, users can select the screen's name from a list of all possible destination displays. User testing of SWiM was conducted to discover whether it is easy to learn and use. Users who were asked to use SWiM without any instructions found the interface as intuitive to use as users who were given a demonstration. The results show that SWiM combines simplicity and functionality to create an interface that is easy to learn and easy to use.
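
    SWiM's destination-selection rule is easy to capture as data; the sketch below, with invented display names and fields, shows fixed displays exposed as map icons and mobile displays offered by name instead.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Display:
    name: str
    mobile: bool
    map_position: tuple | None  # room-map coordinates; None if mobile


DISPLAYS = [
    Display("wall-north", mobile=False, map_position=(120, 10)),
    Display("desktop-3", mobile=False, map_position=(60, 80)),
    Display("alice-laptop", mobile=True, map_position=None),
]


def map_icons():
    """Fixed displays, drawn as selectable icons on the room map."""
    return [(d.name, d.map_position) for d in DISPLAYS if not d.mobile]


def name_list():
    """Mobile displays, offered by name since they have no fixed spot."""
    return [d.name for d in DISPLAYS if d.mobile]


def move_window(window, destination):
    if destination not in {d.name for d in DISPLAYS}:
        raise ValueError(f"unknown display: {destination}")
    return f"moved {window!r} to {destination}"


print(map_icons())
print(name_list())
print(move_window("browser", "alice-laptop"))
```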

    ISML: an interface specification meta-language

    In this paper we present an abstract metaphor model situated within a model-based user interface framework. The inclusion of metaphors in graphical user interfaces is a well-established but mostly craft-based design strategy. A substantial body of notations and tools can be found within the model-based user interface design literature; however, an explicit treatment of metaphor and its mappings to other design views has yet to be addressed. We introduce the Interface Specification Meta-Language (ISML) framework and demonstrate its use in comparing the semantic and syntactic features of an interactive system. Challenges facing this research are outlined and further work is proposed.
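
    To make the idea of explicit metaphor mappings concrete, the sketch below models a desktop metaphor and checks that every metaphor affordance has an interface mapping, one of the consistency questions such a specification lets you ask. The structure is an illustrative guess, not ISML's actual notation.

```python
# An invented metaphor model: entities drawn from a source domain, and
# mappings from metaphor-level actions to widgets and system operations.
METAPHOR_MODEL = {
    "metaphor": "desktop",
    "entities": {
        "document": {"source": "paper sheet",
                     "affords": ["open", "move", "discard"]},
        "trash":    {"source": "waste basket",
                     "affords": ["receive", "empty"]},
    },
    "mappings": [
        {"action": "discard document",
         "widget": "drag icon onto trash icon",
         "system": "fs.delete(path)"},
        {"action": "open document",
         "widget": "double-click icon",
         "system": "app.open(path)"},
    ],
}


def check_coverage(model):
    """Flag metaphor affordances that no interface mapping realizes."""
    mapped = {m["action"].split()[0] for m in model["mappings"]}
    for entity, spec in model["entities"].items():
        for act in spec["affords"]:
            if act not in mapped:
                print(f"unmapped affordance: {entity}.{act}")


check_coverage(METAPHOR_MODEL)
```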