
    A reflective characterisation of occasional user

    This work revisits established user classifications and characterises a historically unspecified user category, the Occasional User (OU). Three user categories, novice, intermediate and expert, have dominated the work of user interface (UI) designers, researchers and educators for decades. These categories were created around the 1980s to conceptualise users' needs, strategies and goals. Since then, UI paradigm shifts such as direct manipulation and touch, along with other advances in technology, have given access to people with little computer knowledge, diversifying the user population in ways not reflected in the literature on traditional user classifications. The findings of this work include a new characterisation of the occasional user, distinguished by uncertainty about whether an interface will be used repeatedly and by little knowledge of how it works. In addition, the specification of the OU, together with a set of principles and recommendations, will help the UI community design for users without assuming prospective use or prior knowledge of the UI. The OU is an essential user type for applying a user-centred design approach in which interaction with technology is universal, accessible and transparent, independent of accumulated experience and of the technological era in which users live.

    ImageSpirit: Verbal Guided Image Parsing

    Humans describe images in terms of nouns and adjectives, while algorithms operate on images represented as sets of pixels. Bridging this gap between how humans would like to access images and their typical representation is the goal of image parsing, which involves assigning object and attribute labels to each pixel. In this paper we propose treating nouns as object labels and adjectives as visual attribute labels. This allows us to formulate the image parsing problem as one of jointly estimating per-pixel object and attribute labels from a set of training images. We propose an efficient (interactive-time) solution. Using the extracted labels as handles, our system empowers a user to verbally refine the results. This enables hands-free parsing of an image into pixel-wise object/attribute labels that correspond to human semantics. Verbally selecting objects of interest enables a novel and natural interaction modality that could be used to interact with new-generation devices (e.g. smartphones, Google Glass, living room devices). We demonstrate our system on a large number of real-world images of varying complexity. To help understand the trade-offs compared to traditional mouse-based interaction, results are reported for both a large-scale quantitative evaluation and a user study. (Project page: http://mmcheng.net/imagespirit)
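    The joint per-pixel formulation can be illustrated with a minimal sketch: given per-pixel score maps for object labels and for (multi-label) attributes, assign each pixel its highest-scoring object and every attribute above a threshold, and use the resulting label maps as handles for verbal selection. This is only an illustration under assumed array shapes and names; the ImageSpirit system itself optimises a joint model over objects and attributes rather than thresholding independent score maps.

```python
import numpy as np

# Illustrative only: shapes and names are assumptions, not the paper's API.
# obj_scores: (H, W, num_objects), attr_scores: (H, W, num_attrs).

def parse_image(obj_scores, attr_scores, attr_threshold=0.5):
    """Assign one object label and a set of attribute labels to every pixel."""
    # Object labels are mutually exclusive: take the per-pixel arg-max.
    object_map = np.argmax(obj_scores, axis=-1)        # (H, W) integer labels
    # Attributes are multi-label: keep every attribute whose score passes the threshold.
    attribute_map = attr_scores > attr_threshold       # (H, W, num_attrs) booleans
    return object_map, attribute_map

def select_by_speech(object_map, object_names, spoken_noun):
    """Return a mask of pixels whose object label matches a spoken noun."""
    label = object_names.index(spoken_noun)            # e.g. "chair" -> 3
    return object_map == label

# Example usage with random scores standing in for a trained model's output.
H, W = 240, 320
object_names = ["background", "wall", "floor", "chair"]
obj_scores = np.random.rand(H, W, len(object_names))
attr_scores = np.random.rand(H, W, 5)                  # e.g. 5 attribute classes
objects, attributes = parse_image(obj_scores, attr_scores)
chair_mask = select_by_speech(objects, object_names, "chair")
```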

    Autonomy Infused Teleoperation with Application to BCI Manipulation

    Robot teleoperation systems face a common set of challenges, including latency, low-dimensional user commands, and asymmetric control inputs. User control with Brain-Computer Interfaces (BCIs) exacerbates these problems through especially noisy and erratic low-dimensional motion commands, a consequence of the difficulty of decoding neural activity. We introduce a general framework to address these challenges through a combination of computer vision, user intent inference, and arbitration between the human input and autonomous control schemes. Adjustable levels of assistance allow the system to balance the operator's capabilities and feelings of comfort and control while compensating for a task's difficulty. We present experimental results demonstrating significant performance improvement with the shared-control assistance framework on adapted rehabilitation benchmarks, with two subjects implanted with intracortical brain-computer interfaces controlling a seven-degree-of-freedom robotic manipulator as a prosthetic. Our results further indicate that shared assistance mitigates perceived user difficulty and even enables successful performance on previously infeasible tasks. We showcase the extensibility of our architecture with applications to quality-of-life tasks such as opening a door, pouring liquid from containers, and manipulating novel objects in densely cluttered environments.
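    The arbitration between human input and autonomous control described above can be illustrated with a simple linear blend governed by an adjustable assistance level; the function below is a generic sketch under assumed command dimensions, not the paper's actual arbitration policy.

```python
import numpy as np

def arbitrate(user_cmd, auto_cmd, assistance):
    """Blend a noisy user command with an autonomous controller's command.

    user_cmd, auto_cmd: velocity commands for the manipulator; assistance is in
    [0, 1], where 0 is pure teleoperation and 1 is fully autonomous. This is a
    generic linear blend used only to illustrate adjustable assistance.
    """
    assistance = float(np.clip(assistance, 0.0, 1.0))
    return (1.0 - assistance) * np.asarray(user_cmd) + assistance * np.asarray(auto_cmd)

# Example: a noisy, low-dimensional user command pulled toward the goal by an
# autonomous controller that has inferred the intended target (values assumed).
user_cmd = np.array([0.10, -0.02, 0.00])   # decoded BCI velocity (3-DoF for illustration)
auto_cmd = np.array([0.08,  0.05, 0.03])   # controller velocity toward the inferred goal
blended = arbitrate(user_cmd, auto_cmd, assistance=0.6)
```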

    Supporting Device Discovery and Spontaneous Interaction with Spatial References

    The RELATE interaction model is designed to support spontaneous interaction of mobile users with devices and services in their environment. The model is based on spatial references that capture the spatial relationship of a user’s device with other co-located devices. Spatial references are obtained by relative position sensing and integrated in the mobile user interface to spatially visualize the arrangement of discovered devices and to provide direct access for interaction across devices. In this paper we discuss two prototype systems demonstrating the utility of the model in collaborative and mobile settings, and present a study on the usability of spatial list and map representations for device selection.
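    As a rough illustration of how spatial references obtained from relative position sensing could drive the spatial list and map representations, the sketch below orders discovered devices by bearing and projects them onto 2-D map coordinates; the data structure and field names are assumptions for illustration, not the RELATE implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class SpatialReference:
    """Relative position of a discovered device, as seen from the user's device."""
    name: str
    bearing_deg: float   # 0 = straight ahead, positive = clockwise
    distance_m: float

def spatial_list(devices):
    """Order devices left-to-right by bearing, nearest first on ties."""
    return sorted(devices, key=lambda d: (d.bearing_deg, d.distance_m))

def map_position(device):
    """Project a spatial reference onto 2-D map coordinates for visualisation."""
    theta = math.radians(device.bearing_deg)
    return (device.distance_m * math.sin(theta),   # x: left/right of the user
            device.distance_m * math.cos(theta))   # y: distance ahead

# Example: three co-located devices discovered around the user (values assumed).
devices = [
    SpatialReference("projector", bearing_deg=-40.0, distance_m=3.2),
    SpatialReference("printer",   bearing_deg=15.0,  distance_m=1.1),
    SpatialReference("display",   bearing_deg=70.0,  distance_m=2.5),
]
ordered = spatial_list(devices)
positions = {d.name: map_position(d) for d in devices}
```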

    Tangible user interfaces: past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from the cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    A study of the influences of computer interfaces and training approaches on end user training outcomes

    Effective and efficient training is a key factor in determining the success of end user computing (EUC) in organisations. This study examines the influence of two application interfaces, namely icons and menus, on training outcomes. The training outcomes are measured in terms of effectiveness, efficiency and perceived ease of use. Effectiveness covers the keystrokes used to accomplish tasks, the accuracy of those keystrokes, and the backtracks and errors committed. Efficiency is the time taken to accomplish the given tasks. Perceived ease of use rates the ease of the training environment, including training materials, operating system, application software and associated resources provided to users. To facilitate measurement, users were asked to nominate one of two training approaches, instruction training or exploration training, and the study focused on two categories of users, basic and advanced. User category was determined using two questionnaires that tested participants' level of knowledge and experience, and learning style preference was also included in the study. To overcome criticisms of prior studies, users were allowed to nominate their preferred interfaces and training approaches soon after the training and prior to the experiment. To measure training outcomes, an experiment was conducted with 159 users. Training materials were produced and five questionnaires developed to meet the requirements of the training design; all materials were peer reviewed and pilot tested to eliminate subjective bias, and all questionnaires were tested for statistical validity to ensure the applicability of the instruments. For measurement purposes, keystrokes and timing information, such as the start and end times of tasks, were extracted using automated tools, and outliers were eliminated prior to data analysis to ensure that the data were of good quality. The study found that icon interfaces were effective for end user training on trivial tasks and that menu interfaces were easy to use in the given training environment. In terms of training approaches, exploration training was found to be effective. User categorisation alone did not have a significant influence on training outcomes; however, the combination of basic users and the instruction training approach was found to be efficient, and the combination of basic users and the exploration training approach was found to be effective. Learning style preference was significant in terms of effectiveness but not efficiency. The results indicate that interfaces play a significant role in determining training outcomes, and hence training designers need to treat application interfaces differently when addressing training accuracy and time constraints. The study also supports previous findings that learning style preferences influence training outcomes, so training designers should consider users' learning style preferences in order to provide effective training. While categories of user did not show a significant influence on their own, the interaction between training approaches and categories of users was significant, indicating that different categories of users respond to different training approaches. Training designers should therefore consider treating those with and without experience in EUC applications differently, for example by holding separate training sessions. In summary, this study found that interfaces, learning styles and the combination of training approaches and categories of users have varying but significant impacts on training outcomes. These results should help training designers to design training programs that are effective, efficient and easy to use.
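    As a minimal illustration of the effectiveness and efficiency measures described above, the sketch below derives keystroke counts, accuracy, backtracks, errors and time on task from a logged keystroke trace; the log format and field names are assumptions, since the study extracted this information with its own automated tools.

```python
from dataclasses import dataclass

@dataclass
class Keystroke:
    """One logged keystroke (format assumed for illustration)."""
    timestamp: float       # seconds since the start of the task
    correct: bool          # keystroke contributed to task completion
    backtrack: bool        # keystroke undid or reversed a previous action

def training_outcomes(keystrokes):
    """Compute effectiveness measures (keystrokes, accuracy, backtracks, errors)
    and an efficiency measure (time on task) from a keystroke trace."""
    total = len(keystrokes)
    correct = sum(k.correct for k in keystrokes)
    backtracks = sum(k.backtrack for k in keystrokes)
    duration = keystrokes[-1].timestamp - keystrokes[0].timestamp if total > 1 else 0.0
    return {
        "keystrokes": total,
        "accuracy": correct / total if total else 0.0,
        "backtracks": backtracks,
        "errors": total - correct,
        "time_on_task_s": duration,
    }

# Example trace for one participant on one task (values assumed).
trace = [Keystroke(0.0, True, False), Keystroke(2.4, False, False),
         Keystroke(3.1, False, True), Keystroke(5.0, True, False)]
outcomes = training_outcomes(trace)
```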

    Sensing with the Motor Cortex

    The primary motor cortex is a critical node in the network of brain regions responsible for voluntary motor behavior. It has been less appreciated, however, that the motor cortex exhibits sensory responses in a variety of modalities, including vision and somatosensation. We review current work that emphasizes the heterogeneity in sensorimotor responses in the motor cortex and focus on its implications for cortical control of movement as well as for brain-machine interface development.