A reflective characterisation of occasional user
This work revisits established user classifications and aims to characterise a historically unspecified user category, the Occasional User (OU). Three user categories, novice, intermediate and expert, have dominated the work of user interface (UI) designers, researchers and educators for decades. These categories were created around the 1980s to conceptualise users' needs, strategies and goals. Since then, UI paradigm shifts, such as direct manipulation and touch, along with other advances in technology, have given access to people with little computer knowledge. This has produced a diversification of the existing user categories that is not reflected in the literature on traditional user classification. The findings of this work include a new characterisation of the occasional user, distinguished by the user's uncertainty about repeated use of an interface and little knowledge of its functioning. In addition, the specification of the OU, together with principles and recommendations, will help the UI community design in an informed way for users without assuming prospective use or prior knowledge of the UI. The OU is an essential user type for applying a user-centred design approach that treats interaction with technology as universal, accessible and transparent for the user, independently of accumulated experience and the technological era users live in.
ImageSpirit: Verbal Guided Image Parsing
Humans describe images in terms of nouns and adjectives while algorithms
operate on images represented as sets of pixels. Bridging this gap between how
humans would like to access images versus their typical representation is the
goal of image parsing, which involves assigning object and attribute labels to
pixels. In this paper we propose treating nouns as object labels and adjectives
as visual attribute labels. This allows us to formulate the image parsing
problem as one of jointly estimating per-pixel object and attribute labels from
a set of training images. We propose an efficient (interactive time) solution.
Using the extracted labels as handles, our system empowers a user to verbally
refine the results. This enables hands-free parsing of an image into pixel-wise
object/attribute labels that correspond to human semantics. Verbally selecting
objects of interest enables a novel and natural interaction modality that can
possibly be used to interact with new generation devices (e.g. smart phones,
Google Glass, living room devices). We demonstrate our system on a large number
of real-world images with varying complexity. To help understand the tradeoffs
compared to traditional mouse based interactions, results are reported for both
a large-scale quantitative evaluation and a user study.

Comment: http://mmcheng.net/imagespirit
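The joint per-pixel object/attribute estimation described above can be illustrated with a minimal sketch. All labels, scores and the compatibility prior below are hypothetical placeholders, and the maximisation is a simple per-pixel argmax rather than the paper's actual CRF inference:

```python
import numpy as np

# Hypothetical illustration of joint per-pixel object/attribute labelling.
# Scores would normally come from trained classifiers; here they are random.
rng = np.random.default_rng(0)

H, W = 4, 4
objects = ["sofa", "wall", "floor"]            # noun labels (hypothetical)
attributes = ["wooden", "glossy", "textured"]  # adjective labels (hypothetical)

obj_scores = rng.random((H, W, len(objects)))
attr_scores = rng.random((H, W, len(attributes)))

# Compatibility prior: how plausible each attribute is for each object.
compat = np.array([[0.9, 0.2, 0.8],   # sofa
                   [0.1, 0.5, 0.9],   # wall
                   [0.8, 0.7, 0.4]])  # floor

# Joint score per pixel: object unary times attribute unary, weighted
# by object-attribute compatibility; pick the best (object, attribute) pair.
joint = obj_scores[..., :, None] * (attr_scores[..., None, :] * compat)
obj_map, attr_map = np.unravel_index(
    joint.reshape(H, W, -1).argmax(-1), compat.shape)

print(obj_map.shape, attr_map.shape)  # one object and one attribute per pixel
```

The resulting label maps are the "handles" that a verbal command such as "refine the wooden sofa" could then select and update.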
Autonomy Infused Teleoperation with Application to BCI Manipulation
Robot teleoperation systems face a common set of challenges including
latency, low-dimensional user commands, and asymmetric control inputs. User
control with Brain-Computer Interfaces (BCIs) exacerbates these problems
through especially noisy and erratic low-dimensional motion commands due to the
difficulty in decoding neural activity. We introduce a general framework to
address these challenges through a combination of computer vision, user intent
inference, and arbitration between the human input and autonomous control
schemes. Adjustable levels of assistance allow the system to balance the
operator's capabilities and feelings of comfort and control while compensating
for a task's difficulty. We present experimental results demonstrating
significant performance improvement using the shared-control assistance
framework on adapted rehabilitation benchmarks with two subjects implanted with
intracortical brain-computer interfaces controlling a seven degree-of-freedom
robotic manipulator as a prosthetic. Our results further indicate that shared
assistance mitigates perceived user difficulty and even enables successful
performance on previously infeasible tasks. We showcase the extensibility of
our architecture with applications to quality-of-life tasks such as opening a
door, pouring liquids from containers, and manipulation with novel objects in
densely cluttered environments.
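A common way to realise adjustable assistance in shared control is linear blending of the decoded user command with the autonomous policy's command. The sketch below is a generic formulation, not the authors' specific arbitration scheme:

```python
import numpy as np

def arbitrate(user_cmd, auto_cmd, alpha):
    """Blend a noisy decoded user command with an autonomous command.

    alpha in [0, 1]: 0 = full user control, 1 = full autonomy.
    Linear blending is a generic shared-control sketch, not the
    paper's actual arbitration policy.
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return alpha * np.asarray(auto_cmd) + (1 - alpha) * np.asarray(user_cmd)

# Noisy low-dimensional user velocity vs. the planner's goal-directed velocity.
user = np.array([0.9, -0.1, 0.0])
auto = np.array([0.5, 0.4, 0.1])
blended = arbitrate(user, auto, alpha=0.5)
print(blended)  # halfway between the user's and the planner's commands
```

Raising alpha as inferred intent confidence grows, or as task difficulty increases, is one way such a system can trade off the operator's feeling of control against task performance.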
Supporting Device Discovery and Spontaneous Interaction with Spatial References
The RELATE interaction model is designed to support spontaneous interaction of mobile users with devices and services in their environment. The model is based on spatial references that capture the spatial relationship of a user's device with other co-located devices. Spatial references are obtained by relative position sensing and integrated in the mobile user interface to spatially visualize the arrangement of discovered devices, and to provide direct access for interaction across devices. In this paper we discuss two prototype systems demonstrating the utility of the model in collaborative and mobile settings, and present a study on the usability of spatial list and map representations for device selection.
Tangible user interfaces : past, present and future directions
In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs including perspectives from cognitive science, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.
A study of the influences of computer interfaces and training approaches on end user training outcomes
Effective and efficient training is a key factor in determining the success of end user computing (EUC) in organisations. This study examines the influences of two application interfaces, namely icons and menus, on training outcomes. The training outcomes are measured in terms of effectiveness, efficiency and perceived ease of use. Effectiveness includes the keystrokes used to accomplish tasks, the accuracy of correct keystrokes, backtracks and errors committed. Efficiency includes the time taken to accomplish the given tasks. Perceived ease of use rates the ease of the training environment, including training materials, operating system, application software and associated resources provided to users. In order to facilitate measurement, users were asked to nominate one of two approaches to training, instruction training and exploration training, that focussed on two categories of users, basic and advanced. User category was determined based on two questionnaires that tested participants' level of knowledge and experience. Learning style preference was also included in the study. Further, to overcome the criticisms of prior studies, this study allowed users to nominate their preferred interfaces and training approaches soon after the training and prior to the experiment. To measure training outcomes, an experiment was conducted with 159 users. Training materials were produced and five questionnaires developed to meet the requirements of the training design. All the materials were peer reviewed and pilot tested in order to eliminate any subjective bias. All questionnaires were tested for statistical validity to ensure the applicability of the instruments. Further, for measurement purposes, all keystrokes and time information, such as start time and end time of tasks, were extracted using automated tools. Prior to data analysis, any 'outliers' were eliminated to ensure that the data were of good quality.
This study found that icon interfaces were effective for end user training on trivial tasks. It also found that menu interfaces were easy to use in the given training environment. In terms of training approaches, exploration training was found to be effective. User categorisation alone did not have any significant influence on training outcomes in this study. However, the combination of basic users and the instruction training approach was found to be efficient, and the combination of basic users and the exploration training approach was found to be effective. This study also found that learning style preference was significant in terms of effectiveness but not efficiency. The results of the study indicate that interfaces play a significant role in determining training outcomes, and hence training designers need to treat application interfaces differently when addressing training accuracy and time constraints. Similarly, this study supports previous studies in that learning style preferences influence training outcomes. Therefore, training designers should consider users' learning style preferences in order to provide effective training. While categories of user did not show any significant influence on the outcomes of this study, the interaction between training approaches and categories of users was significant, indicating that different categories of users respond to different training approaches. Therefore, training designers should consider treating differently those with and without experience in EUC applications. For example, one possible approach to training design would be to hold separate training sessions. In summary, this study has found that interfaces, learning styles and the combination of training approaches and categories of users have varying significant impacts on training outcomes.
Thus the results reported in this study should help training designers to design training programs that are effective, efficient and easy to use.
Embodying conversational characteristics in a graphical user interface
In the history of Intelligent Tutoring Systems, SOPHIE (Brown, Burton, and Bell, 1974), now considered a classic, contained many important ideas and features. One of these was its natural language user interface. Today, the trend has moved away from natural language interfaces towards graphical ones, although the argument in favour of natural language user interfaces, from both Human Computer Interaction and natural language researchers, still persists. Is this argument correct?
This thesis explores this question by investigating how SOPHIE might be re-implemented with a graphical direct manipulation interface instead of a natural language one, with the goal of improving its standard of usability. It begins by analysing the features that seem to have been central to SOPHIE's usability. These, it argues, were not so much an ability to accept well formed complete English sentences, as an ability to accept and interpret correctly a wide range of abbreviated inputs.
Two models of interaction, Circuit I, a pilot, and Circuit II, a fairly full implementation of SOPHIE, were implemented and tested. Both employ a free-order syntax that allows users to specify the components of a full command in any order. The combination of deixis and free-order syntax allows completely general ellipsis, which achieves, in extended interaction sequences, the same economy and naturalness that SOPHIE achieved through its use of anaphora and ellipsis.
Whilst the free-order syntax technique is little used at present in user interfaces, the results of the observational studies conducted show that it saves users time and adds convenience. Thus, considering key linguistic features of a natural language user interface has shown how novel features can enhance the usability of direct manipulation interfaces. This thesis argues that user interfaces can be improved by employing structures found in natural language, or at least in conversation, which can be constructed within direct manipulation interface styles.
This approach was further expanded to support topic shifts between different circuit contexts. Circuit II, like SOPHIE, supports three different topics: normal circuit behaviour, a circuit with an unknown fault, and circuits with user-hypothesised faults. Drawing on Reichman's (1981) work, Circuit II uses natural language cue phrases of the type "by the way", re-implemented in the direct manipulation style, to facilitate shifts between topics in a smoother and more natural way than SOPHIE, which used clumsy explicit commands.
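The free-order syntax idea can be sketched as assigning each token of a command to the slot whose vocabulary it matches, so that word order no longer matters. The vocabularies and slot names below are hypothetical illustrations, not SOPHIE's or Circuit II's actual grammar:

```python
# Hypothetical free-order command parser: each token fills the slot whose
# vocabulary it belongs to, so "measure R9 voltage" and "voltage R9 measure"
# parse identically.
VERBS = {"measure", "replace", "fault"}
COMPONENTS = {"R9", "C2", "Q1"}
QUANTITIES = {"voltage", "current", "resistance"}

def parse(command):
    """Map a space-separated command onto named slots, ignoring word order."""
    slots = {}
    for token in command.split():
        for name, vocab in (("verb", VERBS),
                            ("component", COMPONENTS),
                            ("quantity", QUANTITIES)):
            if token in vocab:
                slots[name] = token
                break
        else:
            raise ValueError(f"unrecognised token: {token}")
    return slots

print(parse("measure R9 voltage") == parse("voltage R9 measure"))  # True
```

Ellipsis could then fall out naturally: a command with a missing slot (e.g. just "current") would reuse the previous command's values for the unfilled slots, giving the economy the thesis attributes to extended interaction sequences.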
Sensing with the Motor Cortex
The primary motor cortex is a critical node in the network of brain regions responsible for voluntary motor behavior. It has been less appreciated, however, that the motor cortex exhibits sensory responses in a variety of modalities including vision and somatosensation. We review current work that emphasizes the heterogeneity in sensorimotor responses in the motor cortex and focus on its implications for cortical control of movement as well as for brain-machine interface development.