
    The Marital and Physician Privileges—A Reprint of a Letter to a Congressman

    The design of computer systems that effectively support the user is a major goal within human-computer interaction. To achieve this, we must understand and master several tasks, concerning firstly what to develop and secondly how to develop the system. The design and implementation of effective and efficient user interfaces is a prerequisite for the successful introduction of computer support in the medical domain. We base our work on a fundamental understanding of cognitive aspects of human-computer interaction, as well as on detailed analysis of the specific needs and requirements of the end users, i.e., the medical professionals. This thesis presents several approaches for the development of systems for computer-supported work in health care. The solutions described concern vital problem areas: (1) the focus on the work tasks to be performed, and (2) the cost of software and the way competition works in a networked world. Solutions to these problems can lead to more usable systems from a user's perspective but may also change the nature of computer applications.

    Human Factors Considerations in System Design

    Human factors considerations in systems design were examined. Human factors in automated command and control, in the efficiency of the human-computer interface, and in system effectiveness are outlined. The following topics are discussed: human factors aspects of control room design; design of interactive systems; human-computer dialogue, interaction tasks and techniques; guidelines on ergonomic aspects of control rooms and highly automated environments; system engineering for control by humans; conceptual models of information processing; and information display and interaction in real-time environments.

    Human/computer control of undersea teleoperators

    The potential of supervisory controlled teleoperators for accomplishment of manipulation and sensory tasks in deep ocean environments is discussed. Teleoperators and supervisory control are defined, the current problems of human divers are reviewed, and some assertions are made about why supervisory control has potential use to replace and extend human diver capabilities. The relative roles of man and computer and the variables involved in man-computer interaction are next discussed. Finally, a detailed description of a supervisory controlled teleoperator system, SUPERMAN, is presented.

    On human motion prediction using recurrent neural networks

    Human motion modelling is a classical problem at the intersection of graphics and computer vision, with applications spanning human-computer interaction, motion synthesis, and motion prediction for virtual and augmented reality. Following the success of deep learning methods in several computer vision tasks, recent work has focused on using deep recurrent neural networks (RNNs) to model human motion, with the goal of learning time-dependent representations that perform tasks such as short-term motion prediction and long-term human motion synthesis. We examine recent work, with a focus on the evaluation methodologies commonly used in the literature, and show that, surprisingly, state-of-the-art performance can be achieved by a simple baseline that does not attempt to model motion at all. We investigate this result, and analyze recent RNN methods by looking at the architectures, loss functions, and training procedures used in state-of-the-art approaches. We propose three changes to the standard RNN models typically used for human motion, which result in a simple and scalable RNN architecture that obtains state-of-the-art performance on human motion prediction.
    Comment: Accepted at CVPR 2017
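    The "simple baseline that does not attempt to model motion at all" can be sketched as a zero-velocity predictor: repeat the last observed pose for every future frame. The sketch below is an illustration of that idea, not the paper's code; all names and the toy data are hypothetical.

```python
import math

def zero_velocity_baseline(observed, horizon):
    """Predict future motion by repeating the last observed pose.

    observed: list of pose vectors (each a list of floats), oldest first.
    horizon: number of future frames to predict.
    Returns a list of `horizon` copies of the final observed pose.
    """
    last_pose = observed[-1]
    return [list(last_pose) for _ in range(horizon)]

def mean_joint_error(pred, truth):
    """Mean per-frame Euclidean distance between predicted and true poses."""
    per_frame = [
        math.sqrt(sum((p - t) ** 2 for p, t in zip(pf, tf)))
        for pf, tf in zip(pred, truth)
    ]
    return sum(per_frame) / len(per_frame)

# Toy usage: a slowly drifting 3-D "pose" observed over 10 frames.
past = [[0.01 * (i + 1)] * 3 for i in range(10)]
pred = zero_velocity_baseline(past, horizon=5)
```

    For slow or smooth motions the ground-truth poses stay close to the last observed frame, which is why such a degenerate predictor can score competitively under short-horizon error metrics.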

    Physical limitations of an Android smart phone when used as a platform for mobile canine computer interaction

    There has been a great deal of research recently into human computer interaction, but we have largely ignored the rest of the animal kingdom. There is no simple and effective way for any animal, other than humans, to do even simple computing tasks. The first step in changing this is to find devices that non-humans can safely and effectively interact with. In this paper we look at the feasibility of using a G1 Android phone for mobile canine computer interaction. Specifically, we'll explore the durability limitations of the phone during use by a canine.

    Modeling Three-Dimensional Interaction Tasks for Desktop Virtual Reality

    A virtual environment is an interactive, head-referenced computer display that gives a user the illusion of presence in real or imaginary worlds. The two most significant differences between a virtual environment and a more traditional interactive 3D computer graphics system are the extent of the user's sense of presence and the level of user participation that can be obtained in the virtual environment. Over the years, advances in computer display hardware and software have substantially progressed the realism of computer-generated images, dramatically enhancing the user's sense of presence in virtual environments. Unfortunately, comparable progress in the user's interaction with the virtual environment has not been observed. The scope of the thesis lies in the study of human-computer interaction that occurs in a desktop virtual environment. The objective is to develop and verify 3D interaction models that can be used to quantitatively describe users' performance for 3D pointing, steering and object pursuit tasks, and, through the analysis of the interaction models and experimental results, to gain a better understanding of users' movements in the virtual environment. The approach applied throughout the thesis is a modeling methodology composed of three procedures: identifying the variables involved for modeling a 3D interaction task; formulating and verifying the interaction model through user studies and statistical analysis; and applying the model to the evaluation of interaction techniques and input devices and gaining an insight into users' movements in the virtual environment. In the study of 3D pointing tasks, a two-component model is used to break the tasks into a ballistic phase and a correction phase, and comparison is made between the real-world and virtual-world tasks in each phase. The results indicate that temporal differences arise in both phases, but the difference is significantly greater in the correction phase.
This finding inspires us to design a methodology combining the two-component model with Fitts' law, which decomposes a pointing task into the ballistic and correction phases and decreases the index of difficulty of the task during the correction phase. The methodology allows for the development and evaluation of interaction techniques for 3D pointing tasks. For 3D steering tasks, the steering law, which was proposed to model 2D steering tasks, is adapted to 3D tasks by introducing three additional variables, i.e., path curvature, orientation and haptic feedback. The new model suggests that a 3D ball-and-tunnel steering movement consists of a series of small and jerky sub-movements that are similar to the ballistic/correction movements observed in the pointing movements. An interaction model is proposed for the first time and empirically verified for 3D object pursuit tasks, making use of Stevens' power law. The results indicate that the power law can be used to model all three common interaction tasks, which may serve as a general law for modeling interaction tasks, and also provides a way to quantitatively compare the tasks.
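    The Fitts' law relation underlying the two-component methodology can be made concrete with a short numeric sketch. This is a generic illustration of the law, not the thesis's fitted model; the intercept and slope values are hypothetical placeholders that would be fitted per device and task in practice.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits.

    distance: movement amplitude to the target.
    width: target width along the movement axis.
    """
    return math.log2(distance / width + 1.0)

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time MT = a + b * ID.

    a (intercept, seconds) and b (seconds per bit) are empirically
    fitted constants; the defaults here are illustrative only.
    """
    return a + b * index_of_difficulty(distance, width)

# Widening the effective target (as in an eased correction phase)
# lowers the index of difficulty and thus the predicted time.
id_narrow = index_of_difficulty(distance=240, width=15)
id_wide = index_of_difficulty(distance=240, width=30)
```

    The decomposition in the text exploits exactly this lever: reducing the index of difficulty during the correction phase (e.g., by enlarging the effective target) shortens the slowest part of the pointing movement.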

    Pilot interaction with automated airborne decision making systems

    An investigation was made of the interaction between a human pilot and automated on-board decision-making systems. Research was initiated on pilot problem solving in automated and semi-automated flight management systems, and attempts were made to develop a model of human decision making in a multi-task situation. A study was made of the allocation of responsibility between human and computer, and various pilot performance parameters under varying degrees of automation were discussed. Optimal allocation of responsibility between human and computer was considered, and some theoretical results found in the literature were presented. The pilot as a problem solver was discussed. Finally, the design of displays, controls, procedures, and computer aids for problem-solving tasks in automated and semi-automated systems was considered.

    Generating human-computer micro-task workflows from domain ontologies

    With the growing popularity of micro-task crowdsourcing platforms, a renewed interest in the resolution of complex tasks that require the cooperation of human and machine participants has emerged. This interest has led to workflow approaches that present new challenges at different dimensions of the human-machine computation process, namely in micro-task specification and human-computer interaction, due to the unstructured nature of micro-tasks in terms of domain representation. In this sense, a semi-automatic generation environment for human-computer micro-task workflows from domain ontologies is proposed. The structure and semantics of the domain ontology provide a common ground for understanding and enhance human-computer cooperation. This work is partially funded by FEDER Funds and by the ERDF (European Regional Development Fund) through the COMPETE Programme (operational programme for competitiveness) and by National Funds through the FCT (Portuguese Foundation for Science and Technology) under the projects AAL4ALL (QREN13852) and FCOMP-01-0124-FEDER-028980 (PTDC/EEI-SII/1386/2012).

    Acquisition and production of skilled behavior in dynamic decision-making tasks: Modeling strategic behavior in human-automation interaction: Why an aid can (and should) go unused

    Advances in computer and control technology offer the opportunity for task-offload aiding in human-machine systems. A task-offload aid (e.g., an autopilot, an intelligent assistant) can be selectively engaged by the human operator to dynamically delegate tasks to an automated system. Successful design and performance prediction in such systems requires knowledge of the factors influencing the strategy the operator develops and uses for managing interaction with the task-offload aid. A model is presented that shows how such strategies can be predicted as a function of three task context properties (frequency and duration of secondary tasks and costs of delaying secondary tasks) and three aid design properties (aid engagement and disengagement times, and aid performance relative to human performance). Sensitivity analysis indicates how each of these contextual and design factors affects the optimal aid usage strategy and attainable system performance. The model is applied to understanding human-automation interaction in laboratory experiments on human supervisory control behavior. The laboratory task allowed subjects freedom to determine strategies for using an autopilot in a dynamic, multi-task environment. Modeling results suggested that many subjects may indeed have been acting appropriately by not using the autopilot in the way its designers intended. Although autopilot function was technically sound, this aid was not designed with due regard to the overall task context in which it was placed. These results demonstrate the need for additional research on how people may strategically manage their own resources, as well as those provided by automation, in an effort to keep workload and performance at acceptable levels.
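    The trade-off this model captures, whether delegating a task to an aid pays for its engagement and disengagement overhead, can be illustrated with a toy comparison. All names and numbers below are hypothetical and far simpler than the paper's actual model, which also accounts for task frequency, delay costs, and relative aid performance.

```python
def engage_aid_pays_off(task_duration, engage_time, disengage_time):
    """Return True if delegating a secondary task to the aid costs the
    operator less time than performing the task manually.

    task_duration: time for the human to do the task alone.
    engage_time / disengage_time: overhead of switching the aid on/off.
    In this toy model the human is free while the aid works, so only
    the switching overhead counts against delegation.
    """
    delegation_cost = engage_time + disengage_time
    return delegation_cost < task_duration

# Short tasks don't justify the switching overhead; long ones do.
short_ok = engage_aid_pays_off(task_duration=5, engage_time=4, disengage_time=3)
long_ok = engage_aid_pays_off(task_duration=30, engage_time=4, disengage_time=3)
```

    Even this crude sketch reproduces the abstract's central point: when switching overhead exceeds the time saved, leaving a technically sound aid unused is the rational strategy.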

    Multisensory experiences in HCI

    The use of vision and audition for interaction dominated the field of human-computer interaction (HCI) for decades, despite the fact that nature has provided us with many more senses for perceiving and interacting with the world around us. Recently, HCI researchers have started trying to capitalize on touch, taste, and smell when designing interactive tasks, especially in gaming, multimedia, and art environments. Here we provide a snapshot of our research into touch, taste, and smell, which we're carrying out at the Sussex Computer Human Interaction (SCHI—pronounced "sky") Lab at the University of Sussex in Brighton, UK.