Exploring user-defined gestures for alternate interaction space for smartphones and smartwatches
2016 Spring. In smartphones and smartwatches, the input space is limited due to their small form factor. Although many studies have highlighted the possibility of expanding the interaction space for these devices, limited work has explored end-user preferences for gestures in the proposed interaction spaces. In this dissertation, I present the results of two elicitation studies that explore end-user preferences for creating gestures in proposed alternate interaction spaces for smartphones and smartwatches. Using the data collected from the two elicitation studies, I present the gestures end-users prefer for common tasks performed on smartphones and smartwatches. I also present end-user mental models for interaction in the proposed spaces, and highlight common user motivations and preferences behind the suggested gestures. Based on these findings, I present design implications for incorporating the proposed alternate interaction spaces into smartphones and smartwatches.
Discoverable Free Space Gesture Sets for Walk-Up-and-Use Interactions
Advances in technology are fueling a movement toward ubiquity for beyond-the-desktop systems. Novel interaction modalities, such as free-space or full-body gestures, are becoming more common, as demonstrated by the rise of systems such as the Microsoft Kinect. However, much of the interaction design research for such systems is still focused on desktop and touch interactions. Current thinking on free-space gestures is limited in capability and imagination, and most gesture studies have not attempted to identify gestures appropriate for public walk-up-and-use applications. A walk-up-and-use display must be discoverable, so that first-time users can operate the system without any training; flexible; and not fatiguing, especially in the case of longer-term interactions. One mechanism for defining gesture sets for walk-up-and-use interactions is a participatory design method called gesture elicitation. This method has been used to identify several user-generated gesture sets and has shown that user-generated sets are preferred by users over those defined by system designers. However, for these studies to be successfully implemented in walk-up-and-use applications, there is a need to understand which components of these gestures are semantically meaningful (i.e., do users distinguish between using their left and right hand, or are those semantically the same thing?). Thus, defining a standardized gesture vocabulary for coding, characterizing, and evaluating gestures is critical. This dissertation presents three gesture elicitation studies for walk-up-and-use displays that employ a novel gesture elicitation methodology, alongside a novel coding scheme for gesture elicitation data that focuses on the features most important to users' mental models. Generalizable design principles, based on the three studies, are then derived and presented (e.g., changes in speed are meaningful for scroll actions on walk-up-and-use displays but not for paging or selection).
The major contributions of this work are: (1) an elicitation methodology that aids users in overcoming biases from existing interaction modalities; (2) a better understanding of which gestural features matter, i.e., those that capture the intent of the gestures; and (3) generalizable design principles for walk-up-and-use public displays.
Understanding the Usage and Requirements of the Photo Tagging System
The amount of personal photos is increasing massively, and managing them effectively and efficiently requires new approaches and practices. This study analyses users' needs and behaviour around photo tagging in the context of a personal photo repository. Our results come from a qualitative study with 15 users. The study methods include a questionnaire and a task-analysis approach in which we analyse and evaluate practices around semi-automated photo tagging. In the task analysis, we describe and use a photo tagging application called SmartImages to study the actual tagging experience. The results indicate that photo tagging in personal collections is rarely used, as it is considered too laborious. The task analysis with SmartImages made users consider tagging worthwhile and beneficial. The results suggest changes to the implementation of tagging functionality in photo management applications. We conclude that better visibility of the tagging feature and the introduction of social elements would improve the usage and benefit of tagging. Users would be more willing to engage with tagging activities if automated tag suggestions were comprehensible and conceptually relevant, and if the relation between the displayed tags and the photo were clear. Addressing these issues would help users manage the increasing number of personal photos.
Concurrent speech feedback for blind people on touchscreens
Master's thesis, Informatics Engineering, 2023, Universidade de Lisboa, Faculdade de Ciências. Smartphone interactions are demanding. Most smartphones come with few physical buttons, so users cannot rely on touch alone to guide them. Smartphones ship with built-in accessibility mechanisms, for example screen readers, that make interaction accessible for blind users. However, some tasks are still inefficient or cumbersome: when scanning through a document, users are limited by the single sequential audio channel provided by screen readers, and tasks are interrupted in the presence of other actions.
In this work, we explored alternatives to optimize smartphone interaction by blind people by
leveraging simultaneous audio feedback with different configurations, such as different voices and
spatialization. We investigated five scenarios: Task interruption, where we use concurrent speech to
reproduce a notification without interrupting the current task; Faster information consumption,
where we leverage concurrent speech to announce up to 4 different contents simultaneously; Text
properties, where the textual formatting is announced; The map scenario, where spatialization
provides feedback on how close or distant a user is from a particular location; And smartphone
interactions scenario, where there is a corresponding sound for each gesture, and instead of reading
the screen elements (e.g., buttons), a corresponding sound is played. We conducted a study with 10 blind participants whose smartphone usage experience ranged from novice to expert. During the study, we asked participants about their perceptions of and preferences for each scenario, what could be improved, and in which situations these extra capabilities would be valuable to them.
Our results suggest that the extra capabilities we presented are helpful for users, especially if they can be turned on and off according to the user's needs and situation. Moreover, we find that concurrent speech works best for announcing short messages to the user while they listen to longer content, and less well for having lengthy content announced simultaneously.
Understanding search behaviour on mobile devices
Web search on hand-held devices has become enormously common and popular. Although a number of studies have revealed how users interact with search engine result pages (SERPs) on desktop monitors, there are still only a few studies of user interaction in mobile web search, and search results are shown in much the same way on a mobile phone as on a desktop. It is therefore still difficult to know what happens between users and SERPs while searching on small screens, which means that the current presentation of SERPs on mobile devices may not be the best.
According to the findings from previous studies, including our
earlier work, we can confirm that search behaviour on
touch-enabled mobile devices is different from behaviour with
desktop screens, and so we need to consider a different SERP
presentation design for mobile devices. In this thesis, we
explore several user interactions during search with the aim of
improving search experience on smartphones.
First, one remarkable trend in mobile devices is the enlargement of their screen sizes over the last few years. This leads us to look for differences in search behaviour on small screens of different sizes and, if there are any, to suggest better presentation of search results for each screen size. In the first study, we investigated search performance, behaviour, and user satisfaction on three small screens (3.6 inches for early smartphones, 4.7 inches for recent smartphones, and 5.5 inches
for phablets). We found no significant differences with respect
to the efficiency of carrying out tasks. However, participants
exhibited different search behaviours on the small, medium, and
large sizes of small screens, respectively: a higher chance of
scrolling with the worst user satisfaction on the smallest
screen; fast information extraction with some hesitation before
selecting a link on the medium screen; and fewer eye movements on
the top links on the largest screen. These results suggest that the
presentation of web search results for each screen size needs to
take into account differences in search behaviour.
Second, although people are familiar with turning pages
horizontally while reading books, vertical scrolling is the
standard option that people have available while searching on
mobile devices. So following a suggestion from the first study,
in the second study we explored the effect of horizontal and
vertical viewport control types (pagination versus scrolling)
with various positions of a correct answer in mobile web search.
Our findings suggest that although users are more familiar with
scrolling, participants spent less time finding the correct
answer with pagination, especially when the relevant result was
located beyond the page fold. In addition, participants using
scrolling exhibited less interest in lower-ranked results even if
the documents were relevant. The overall result indicates that it
is worthwhile providing different viewport controls for better
search experiences in mobile web search.
Third, snippets occupy the biggest space in each search result.
Results from a previous study suggested that snippet length
affects search performance on a desktop monitor. Due to the
smaller screen, the effect seems to be much larger on
smartphones. As one possible idea for a SERP presentation design
from the first study, we investigated appropriate snippet lengths
on mobile devices in the third study. We compared search
behaviour with three different snippet lengths, that is, one
line, two to three lines, and six or more lines of snippets on
mobile SERPs. We found that with long snippets, participants
needed more search time for a particular task type, and the
extra time yielded no better search accuracy. Our findings
suggest that this search performance is related to viewport
movements and user attention.
We expect that our proposed approaches provide ways to understand
mobile web search behaviour, and that the findings can be applied
to a wide range of research areas, such as human-computer
interaction, information retrieval, and even social science,
toward a better presentation design of SERPs on mobile devices.
User Evaluation of Mobile Browser Features Related to Information Retrieval
Technological advancements in mobile technologies, improved network coverage, and cheaper data plans have led to an increase in internet browsing via mobile phone. Improvements such as bigger screen sizes, higher resolutions, and touchscreens have led to a better browsing experience compared with when mobile browsing first emerged.
Most of the research concerning mobile browsing seems to focus on website design for smaller screen displays, with very limited research done on the design and functionality of mobile web browsers themselves. Due to the physical constraints and small display screens, the user interface needs to be designed so that users can perform tasks easily and information can be accessed quickly.
This thesis evaluates different features from six of the most widely used mobile browsers (Chrome, Dolphin, Internet Explorer, Opera Mini, Safari, and the UC Browser) in order to determine which features help to improve the mobile browsing experience. Ten participants were asked to perform several tasks on two mobile browsers and to rate the browsers based on task difficulty. After all the tasks were completed, participants were asked to evaluate the overall usability of each browser.
The results showed that participants found most tasks easy to perform on all browsers. However, during the test sessions, it was observed that several participants found the tasks of adding a bookmark and locating saved bookmarks slightly difficult. This was because each browser implements a different design and uses different icons for the bookmarking functionality. Based on interviews concerning their everyday browsing behavior, participants acknowledged that their most used feature is the combined address bar/search bar. Other features, such as bookmarking or customized on-screen keyboards, are either ignored or go unnoticed in favor of faster and more immediate interaction with the browser.
How do different devices impact users' web browsing experience?
The digital world presents many interfaces, among which the desktop and mobile device platforms are dominant. Grasping the differential user experience (UX) on these devices is a critical requirement for developing user focused interfaces that can deliver enhanced satisfaction. This study specifically focuses on the user's web browsing experience while using desktop and mobile.
The thesis adopts a quantitative methodology, which yields a comprehensive understanding of the influence of device-specific variables, such as loading speed, security concerns, and interaction techniques, which are critically analyzed. Various UX facets, including usability, user interface (UI) design, accessibility, content organization, and user satisfaction on both devices, are also discussed.
Substantial differences are observed in the UX delivered by desktop and mobile devices, dictated by inherent device attributes and user behaviors. Mobile UX is often associated with personal, context sensitive use, while desktop caters more effectively to intensive, extended sessions.
A surprising revelation is the discrepancy between the increasing popularity of mobile devices and the persistent inability of many websites and applications to provide a satisfactory mobile UX. This issue primarily arises from the ineffective adaptation of desktop-focused designs to mobile, underscoring the necessity for distinct, device-specific strategies in UI development.
By furnishing pragmatic strategies for designing efficient, user-friendly, and inclusive digital interfaces for both devices, the thesis contributes significantly to the existing body of literature. An emphasis is placed on a device-neutral approach in UX design that takes into consideration the unique capabilities and constraints of each device, thereby enriching the expanding discourse on multi-device user experience. The study also contributes to digital marketing and targeted advertising perspectives.
Automatic adoption of touch as pointing modality on a touchscreen laptop: Beginners' motivators and inhibitors
Touch is a widely integrated and highly desirable modality in modern interactive devices. It is the de facto interaction modality on touch-enabled mobile devices such as smartphones and tablets. Nowadays, the list of touchable interfaces is continuously expanding and even includes previously non-touchable devices such as laptops. On laptops, however, touch is not the default modality for interacting with the device: a laptop can be operated with either of the traditional point-and-click alternatives already present, the mouse and the trackpad. User studies on pointing modalities have generated little information on the automatic use of touch, since these studies are often grounded in users' preferential intentions but rarely in the drivers that facilitate or impede the adoption of touch.
This thesis endeavours to understand how certain factors such as background in touch usage, usage mode, type of pointing task, pointing targets and starting modality motivate or inhibit beginners' automatic adoption of touch modality for activating interactive web elements on a touchscreen laptop device.
Users' pointing movements were observed in two possible laptop usage modes - at a desk and on a couch - with the aim of identifying how often touch occurred as the first-instance modality. The observation investigates the automatic adoption of touch by having participants perform pointing tasks on interactive web elements.
The data obtained show that participants are motivated to adopt touch automatically in a more relaxed use context, such as sitting on a sofa, or for a playful task such as drawing.
In conclusion, while few interactions on a touchscreen laptop necessitate the use of touch, its automatic adoption is nevertheless possible and has the potential to become widespread if user interfaces convey discoverable features of 'touchability' and if the perceived worthiness of using touch overrides existing habits of non-touch modalities.
An Ergonomics Analysis of Redundancy Effect in Touch Screen Design for the Aged Population
Touch screen technology is spreading rapidly, and at the same time the population is aging. As the percentage of the population over the age of 65 increases, adults in this age group are adopting smartphones and tablets more than ever before. Yet older adults face many challenges when interacting with these devices, such as not knowing how to navigate between pages, not knowing where to click for an action to occur, and coping with interfaces that are too sensitive or buttons that are too small. Furthermore, the sensory and cognitive decline that comes with aging affects comprehension and spatial processing, both of which are critical when navigating an interface.
The purpose of this thesis was to better understand the redundancy effect as applied to females and males between the ages of 65 and 84. There were two tasks of different lengths, and each task had two designs: the first used text-only buttons, and the second used symbol + text buttons, the latter being the redundant interface. Quantitative analysis yielded no significant differences in completion time for either task. Qualitative results included ratings for ease of navigation, general satisfaction, overall understanding, and button design preference. Preferences between button designs were statistically significant: for the tasks of online grocery shopping and booking a cruise, females preferred text-only buttons and males preferred symbol + text buttons (p = 0.0068 and p = 0.0024). Although button design had no significant effect on task completion, the significant preference results indicate likelihood to return to a given website. Furthermore, although the quantitative results were not significant, gender did influence average times per task and average ratings across categories. Further research could be conducted with larger sample sizes, other forms of redundancy, and larger tasks. However, it is evident from this experiment that gender affects how adults between 65 and 84 perceive and navigate touch screen interfaces, given the constraints of the symbols used, ages, and task designs. Therefore, concluding recommendations based on the qualitative data suggest that designers should create gender-specific interfaces based on gender-favored websites, or design for the ability to customize the interface upon entering a website.