10 research outputs found
Caring, sharing widgets: a toolkit of sensitive widgets
Although most of us communicate using multiple sensory modalities in our lives, and many of our computers are similarly capable of multi-modal interaction, most human-computer interaction is predominantly visual. This paper describes a toolkit of widgets that can present themselves in multiple modalities and, further, can adapt their presentation to suit the contexts and environments in which they are used. This is of increasing importance as the use of mobile devices becomes ubiquitous.
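The idea of a widget adapting its presentation to context can be illustrated with a minimal sketch. The class and context fields below are hypothetical, invented for illustration; the toolkit's actual API is not described in the abstract.

```python
from dataclasses import dataclass

# Hypothetical context descriptor; field names are illustrative assumptions.
@dataclass
class Context:
    ambient_noise_db: float   # background noise level at the device
    screen_in_use: bool       # whether the display is occupied by another task

class AdaptiveButton:
    """A widget that picks its presentation modality from the current context."""

    def present(self, label: str, ctx: Context) -> str:
        # Prefer audio when the display is occupied, but fall back to
        # visual output when background noise would mask the earcon.
        if ctx.screen_in_use and ctx.ambient_noise_db < 70:
            return f"audio: earcon for '{label}'"
        return f"visual: draw '{label}'"

button = AdaptiveButton()
print(button.present("Save", Context(ambient_noise_db=45, screen_in_use=True)))
```

The same widget instance renders differently as the environment changes, which is the "sensitivity" the toolkit's title refers to.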
Usability of the Stylus Pen in Mobile Electronic Documentation
Stylus pens are often used with mobile information devices. However, few studies have examined the stylus's basic movements, because the technical expertise to support documentation with stylus pens has not been developed. This study examined the usability of stylus pens in authentic documentation tasks, comprising three main tasks (sentence, table, and paragraph making) with two types of stylus (touchsmart stylus and mobile stylus) and a traditional pen. The statistical results showed that participants preferred the traditional pen on all criteria. Because of inconvenient hand movements, the mobile stylus was the least preferred on every task; mobility does not provide any advantage in using the stylus. The study also found inconvenient hand support when using a stylus and different feedback between a stylus and a traditional pen. This study was supported by the Dongguk University Research Fund of 2015.
Investigating the Usability of the Stylus Pen on Handheld Devices
Many handheld devices with stylus pens are available on the market; however, few studies have examined the effects of stylus pen size on user performance and subjective preference for handheld device interfaces. Two experiments were conducted to determine the most suitable dimensions (pen length, pen-tip width, and pen width) for a stylus pen. In Experiment 1, five pen lengths (7, 9, 11, 13, 15 cm) were evaluated. In Experiment 2, six combinations of three pen-tip widths (0.5, 1.0, and 1.5 mm) and two pen widths (4 and 7 mm) were compared. In both experiments, subjects performed pointing, steering, and writing tasks on a PDA. The results were assessed in terms of user performance and subjective evaluation for all three tasks. We determined that the most suitable pen dimensions were 11 cm for length, 0.5 mm for tip width, and 7 mm for pen width.
Crossmodal audio and tactile interaction with mobile touchscreens
Touchscreen mobile devices often use cut-down versions of desktop user interfaces, placing high demands on the visual sense that may prove awkward in mobile settings. The research in this thesis addresses the problems encountered by situationally impaired mobile users by using crossmodal interaction to exploit the abundant similarities between the audio and tactile modalities. By making information available to both senses, users can receive it in the most suitable way, without having to abandon their primary task to look at the device.
This thesis begins with a literature review of related work followed by a definition of crossmodal icons. Two icons may be considered to be crossmodal if and only if they provide a common representation of data, which is accessible interchangeably via different modalities. Two experiments investigated possible parameters for use in crossmodal icons with results showing that rhythm, texture and spatial location are effective.
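The defining property of a crossmodal icon, as stated above, is a common representation of data accessible interchangeably via different modalities. A minimal sketch of that idea, using the three parameters the experiments found effective (rhythm, texture, and spatial location); the class, field names, and rendering mappings are illustrative assumptions, not the thesis's actual design.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CrossmodalIcon:
    rhythm: tuple    # e.g. pulse durations in ms
    texture: str     # e.g. "smooth" or "rough"
    location: str    # e.g. "left", "centre", "right"

    def to_audio(self) -> dict:
        # Map the shared parameters onto audio dimensions (illustrative mapping).
        return {"note_durations_ms": self.rhythm,
                "timbre": self.texture,
                "pan": self.location}

    def to_tactile(self) -> dict:
        # The same underlying data rendered as a vibrotactile pattern.
        return {"pulse_durations_ms": self.rhythm,
                "waveform_roughness": self.texture,
                "actuator": self.location}

icon = CrossmodalIcon(rhythm=(120, 120, 240), texture="rough", location="left")
```

Because both renderings are derived from one representation, training in one modality can transfer to the other, which is what the third experiment below measures.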
A third experiment focused on learning multi-dimensional crossmodal icons and the extent to which this learning transfers between modalities. The results showed identification rates of 92% for three-dimensional audio crossmodal icons when trained in the tactile equivalents, and identification rates of 89% for tactile crossmodal icons when trained in the audio equivalents.
Crossmodal icons were then incorporated into a mobile touchscreen QWERTY keyboard. Experiments showed that keyboards with audio or tactile feedback produce fewer errors and greater speeds of text entry compared to standard touchscreen keyboards. The next study examined how environmental variables affect user performance with the same keyboard. The data showed that each modality performs differently with varying levels of background noise or vibration and the exact levels at which these performance decreases occur were established.
The final study involved a longitudinal evaluation of a touchscreen application, CrossTrainer, focusing on longitudinal effects on performance with audio and tactile feedback, the impact of context on performance, and personal modality preference. The results show that crossmodal audio and tactile icons are a valid method of presenting information to situationally impaired mobile touchscreen users, with recognition rates of 100% over time. This thesis concludes with a set of guidelines on the design and application of crossmodal audio and tactile feedback to enable application and interface designers to employ such feedback in all systems.
Multimodal interactive e-learning: An empirical study. An experimental study that investigates the effect of multimodal metaphors on the usability of e-learning interfaces and the production of empirically derived guidelines for the use of these metaphors in the software engineering process.
This thesis investigated the use of novel combinations of multimodal metaphors in the presentation of learning information to evaluate the effect of these combinations on the usability of e-learning interfaces and on the users' learning performance. The empirical research described in this thesis comprised three experimental phases. In the first phase, an initial experiment was carried out with 30 users to explore and compare the usability and learning performance of facially animated expressive avatars with earcons and speech, and text with graphics metaphors. The second experimental phase involved an experiment conducted with 48 users to investigate their perception of an avatar's facial expressions and body gestures when presented in both the absence and presence of interactive e-learning context. In addition, the experiment aimed at evaluating the role that an avatar could play as virtual lecturer in e-learning interfaces by comparing the usability and learning performance of three different modes of interaction: speaking facially expressive virtual lecturer, speaking facially expressive full-body animated virtual lecturer, and two speaking facially expressive virtual lecturers. In the third phase, a total of 24 users experimentally examined a novel approach for the use of earcons and auditory icons in e-learning interfaces to support an animated facially expressive avatar with body gestures during the presentation of the learning material. The obtained results demonstrated the usefulness of the tested metaphors to enhance e-learning usability and to enable users to attain better learning performance. These results provided a set of empirically derived innovative guidelines for the design and use of these metaphors to generate more usable e-learning interfaces.
For example, when designing avatars as animated virtual lecturers in e-learning interfaces, specific facial expressions and body gestures should be incorporated because of their positive influence on learners' attitudes towards the learning process.
Multimodal social media product reviews and ratings in e-commerce: an empirical approach
Since the boom of the internet and e-commerce in the 1990s, everything has changed. This development created different areas for researchers to investigate and examine, especially in the fields of human-computer interaction and social media. This technological revolution has dramatically changed the way we interact with computers, buy, communicate, and share information. This thesis investigates multimodal presentations of social media review and rating messages within an e-commerce interface. Multimodality refers to communication that goes beyond text to include images, audio, and other media. Multimodality provides a new way of communicating, as images, for example, can deliver additional information that might be difficult or impossible to convey using text alone. Social media can be defined as two-way interaction using the internet as the communication medium. The overall hypothesis is that the use of multimodal metaphors (sound and avatars) to present social media product reviews will improve the usability of the e-commerce interface, increase user understanding, and reduce the time needed to make a decision when compared to non-multimodal presentations. E-commerce usability refers to the presentation, accessibility, and clarity of information. An experimental e-commerce platform was developed to investigate the particular interactive circumstances in which multimodal metaphors may benefit the social media communication of product reviews to users. The first experiment, using three conditions (text with emojis, earcons, and facially expressive avatars), measured user comprehension, understanding of information, user satisfaction with the way in which information was communicated, and social media preference in e-commerce.
The second experiment investigated the time taken by users to understand information, understanding information correctly, user satisfaction, and user enjoyment using three conditions (emojis, a facially expressive avatar, and animation clips) in an e-commerce platform. The results of the first set of experiments showed that the text-with-emojis and facially expressive avatar conditions improved users' performance, through understanding information effectively and making decisions more quickly, compared to the earcons condition. In the second experiment, the results showed that users performed better (understanding information, and understanding it faster) using the emoji and facially expressive avatar presentations compared to the animation clip condition. A set of empirically derived guidelines for implementing these metaphors to communicate social media product reviews in e-commerce interfaces is presented.
A toolkit of resource-sensitive, multimodal widgets
This thesis describes an architecture for a toolkit of user interface components which allows the presentation of the widgets to use multiple output modalities, typically audio and visual. Previously there was no toolkit of widgets that would use the most appropriate presentational resources according to their availability and suitability. Typically, the use of different forms of presentation was limited to graphical feedback, with other forms of presentation, such as sound, added in an ad hoc fashion and with only limited scope for managing the use of the different resources. A review of existing auditory interfaces provided some requirements that the toolkit would need to fulfil for it to be effective. In addition, it was found that a strand of research in this area required further investigation to ensure that a full set of requirements was captured: no formal evaluation of audio being used to provide background information had been undertaken. A sonically-enhanced progress indicator was designed and evaluated, showing that audio feedback could be used as a replacement for visual feedback rather than simply as an enhancement. The experiment also completed the requirements capture for the design of the toolkit of multimodal widgets. A review of existing user interface architectures and systems, with particular attention paid to the way they manage multiple output modalities, provided some design guidelines for the architecture of the toolkit. Building on these guidelines, a design for the toolkit which fulfils all the previously captured requirements is presented. An implementation of this design is given, with an evaluation of the implementation showing that it fulfils all the requirements of the design.
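The core architectural idea above, widgets drawing on the most appropriate presentational resources according to availability and suitability, can be sketched as a central resource manager that widgets query. The class names, weights, and allocation rule below are illustrative assumptions, not the thesis's actual design.

```python
class ResourceManager:
    """Allocates an output modality to a widget based on what is
    currently available and how suitable each modality is."""

    def __init__(self):
        # Which presentational resources the device currently offers.
        self.available = {"visual": True, "audio": True}

    def set_available(self, modality: str, ok: bool) -> None:
        self.available[modality] = ok

    def allocate(self, suitability: dict) -> str:
        # Choose the most suitable modality that is currently available;
        # suitability maps modality name -> score in [0, 1].
        candidates = {m: s for m, s in suitability.items() if self.available.get(m)}
        if not candidates:
            raise RuntimeError("no output resource available")
        return max(candidates, key=candidates.get)

mgr = ResourceManager()
mgr.set_available("visual", False)  # e.g. screen occupied or too small
print(mgr.allocate({"visual": 0.9, "audio": 0.6}))  # falls back to audio
```

Centralising the decision is what distinguishes this approach from the ad hoc addition of sound described above: every widget shares one view of which resources are free.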
Sound in the Interface to a Mobile Computer
Mobile telephones, Personal Digital Assistants (PDAs) and handheld computers are among the fastest-growing areas of computing. One problem with these devices is that they have a limited amount of screen space: the screen cannot be large, as the device must fit into the hand or pocket to be easily carried. Because the screen is small, it can become cluttered with information as designers try to cram on as much as possible. In many cases, desktop widgets (buttons, menus, windows, etc.) have been taken straight from standard graphical interfaces (where screen space is not a problem) and applied directly to mobile devices. This has resulted in devices that are hard to use, with small text that is hard to read, cramped graphics and little contextual information. One way to solve the problem would be to substitute non-speech audio cues for visual ones. Sound could be used to present information about widgets so that their size could be reduced. This would mean that the clutter on the screen could be reduced.