22,791 research outputs found
Designing an Adaptive Web Navigation Interface for Users with Variable Pointing Performance
Many online services and products require users to point and interact with user interface elements. For individuals who experience variable pointing ability due to physical impairments, environmental issues or age, using an input device (e.g., a computer mouse) to select elements on a website can be difficult. Adaptive user interfaces dynamically change their functionality in response to user behavior. They can support individuals with variable pointing abilities by 1) adapting dynamically to make element selection easier when a user is experiencing pointing difficulties, and 2) informing users about these pointing errors. While adaptive interfaces are increasingly prevalent on the Web, little is known about the preferences and expectations of users with variable pointing abilities and how to design systems that dynamically support them given these preferences.
We conducted an investigation with 27 individuals who intermittently experience pointing problems to inform the design of an adaptive interface for web navigation. We used a functional high-fidelity prototype as a probe to gather information about user preferences and expectations. Our participants expected the system to recognize and integrate their preferences for how pointing tasks were carried out, preferred to receive information about system functionality, and wanted to remain in control of the interaction. We used findings from the study to inform the design of an adaptive Web navigation interface, PINATA, which tracks user pointing performance over time and provides dynamic notifications and assistance tailored to their specifications. Our work contributes to a better understanding of users' preferences and expectations for the design of an adaptive pointing system.
Research in interactive scene analysis
Cooperative (man-machine) scene analysis techniques were developed whereby humans can provide a computer with guidance when completely automated processing is infeasible. An interactive approach promises significant near-term payoffs in analyzing various types of high-volume satellite imagery, as well as vehicle-based imagery used in robot planetary exploration. This report summarizes the work accomplished over the duration of the project and describes in detail three major accomplishments: (1) the interactive design of texture classifiers; (2) a new approach for integrating the segmentation and interpretation phases of scene analysis; and (3) the application of interactive scene analysis techniques to cartography.
Why People Search for Images using Web Search Engines
What are the intents or goals behind human interactions with image search
engines? Knowing why people search for images is of major concern to Web image
search engines because user satisfaction may vary as intent varies. Previous
analyses of image search behavior have mostly been query-based, focusing on
what images people search for, rather than intent-based, that is, why people
search for images. To date, there is no thorough investigation of how different
image search intents affect users' search behavior.
In this paper, we address the following questions: (1) Why do people search
for images in text-based Web image search systems? (2) How does image search
behavior change with user intent? (3) Can we predict user intent effectively
from interactions during the early stages of a search session? To this end, we
conduct both a lab-based user study and a commercial search log analysis.
We show that user intents in image search can be grouped into three classes:
Explore/Learn, Entertain, and Locate/Acquire. Our lab-based user study reveals
different user behavior patterns under these three intents, such as first click
time, query reformulation, dwell time and mouse movement on the result page.
Based on user interaction features during the early stages of an image search
session, that is, before mouse scroll, we develop an intent classifier that is
able to achieve promising results for classifying intents into our three intent
classes. Given that all features can be obtained online and unobtrusively, the
predicted intents can provide guidance for choosing ranking methods immediately
after scrolling.
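As a toy illustration of the idea (not the paper's actual classifier), prediction of intent from early-session features can be sketched as a nearest-centroid rule over the behavioral signals the abstract names (first click time, query reformulations, dwell time); all feature values and training rows below are invented for illustration:

```python
from statistics import mean

# Invented early-session (pre-scroll) features per session:
# (first_click_time_s, n_query_reformulations, dwell_time_s)
TRAIN = {
    "Explore/Learn":  [(6.0, 2, 40.0), (7.5, 3, 55.0)],
    "Entertain":      [(9.0, 0, 120.0), (11.0, 1, 90.0)],
    "Locate/Acquire": [(2.0, 0, 8.0), (1.5, 1, 6.0)],
}

def centroid(rows):
    """Component-wise mean of the feature rows for one intent class."""
    return tuple(mean(col) for col in zip(*rows))

CENTROIDS = {intent: centroid(rows) for intent, rows in TRAIN.items()}

def predict_intent(features):
    """Assign the intent whose class centroid is closest in squared distance."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CENTROIDS, key=lambda k: dist(CENTROIDS[k], features))
```

A quick first click with a short dwell time lands near the Locate/Acquire centroid, matching the intuition that such sessions target a specific image; long, click-light browsing lands near Entertain.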
Trajectory Clustering for the Classification of Eye-Tracking Users With Motor Disorders
This paper presents a pilot study completed in the framework of the INTERAAC project. The aim of the project is to develop a new human-computer interaction (HCI) solution based on eye-gaze estimation from webcam images for people with motor disorders such as cerebral palsy, neurodegenerative diseases, and spinal cord injury who are otherwise unable to use a keyboard or mouse. In this study, we analyzed cursor trajectories recorded during the experiment and validated that users with different diseases can be automatically classified into groups based on trajectory metrics. For the clustering, Ward's method was used. The metrics are based on speed and acceleration statistics from full filtered tracks. The results show that the participants can be grouped into two main clusters. The main contribution of this work is the evaluation of clustering techniques applied to eye-gaze trajectories for the automatic classification of users' diseases, based on a real experiment carried out with the help of three clinical partners in Spain. This work has been funded by the Spanish Ministry of Economy and Competitiveness, under the call Retos-Colaboración 2015 of the National Programme for Research Aimed at the Challenges of Society 2009-2016 (RTC-2015-4327-1). https://doi.org/10.17979/spudc.978849749808
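The pipeline the abstract describes (speed/acceleration statistics per track, then Ward's hierarchical clustering) can be sketched with SciPy; the synthetic tracks and the particular four features below are assumptions for illustration, not the study's data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def trajectory_features(track):
    """Speed and acceleration statistics from one filtered cursor track.

    track: array of (x, y) positions sampled at a fixed rate.
    """
    track = np.asarray(track, dtype=float)
    speed = np.linalg.norm(np.diff(track, axis=0), axis=1)  # step lengths
    accel = np.diff(speed)                                  # speed changes
    return [speed.mean(), speed.std(), np.abs(accel).mean(), np.abs(accel).std()]

# Synthetic users: four smooth trajectories and four jerky ones.
rng = np.random.default_rng(0)
smooth = [np.cumsum(rng.normal(1.0, 0.1, size=(50, 2)), axis=0) for _ in range(4)]
jerky = [np.cumsum(rng.normal(1.0, 3.0, size=(50, 2)), axis=0) for _ in range(4)]
X = np.array([trajectory_features(t) for t in smooth + jerky])

Z = linkage(X, method="ward")                    # Ward's minimum-variance linkage
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the dendrogram at 2 clusters
```

With clearly separated movement profiles, the two-cluster cut recovers the smooth and jerky groups, mirroring the study's finding that trajectory metrics alone separate user groups.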
Personalized robot assistant for support in dressing
Robot-assisted dressing is performed in close physical interaction with users who may have a wide range of physical characteristics and abilities. Design of user-adaptive and personalized robots in this context still shows limited or no consideration of specific user-related issues. This paper describes the development of a multi-modal robotic system for a specific dressing scenario - putting on a shoe - where users' personalized inputs contribute to a much improved task success rate. We have developed: 1) user tracking, gesture recognition and posture recognition algorithms relying on images provided by a depth camera; 2) a shoe recognition algorithm from RGB and depth images; 3) speech recognition and text-to-speech algorithms implemented to allow verbal interaction between the robot and user. The interaction is further enhanced by calibrated recognition of the users' pointing gestures and an adjusted shoe delivery position for the robot. A series of shoe fitting experiments have been performed on two groups of users, with and without previous robot personalization, to assess how it affects the interaction performance. Our results show that the shoe fitting task with the personalized robot is completed in a shorter time, with a smaller number of user commands and reduced workload.
Classifying the Correctness of Generated White-Box Tests: An Exploratory Study
White-box test generator tools rely only on the code under test to select
test inputs, and capture the implementation's output as assertions. If there is
a fault in the implementation, it could get encoded in the generated tests.
Tool evaluations usually measure fault-detection capability using the number of
such fault-encoding tests. However, these faults are only detected if the
developer can recognize that the encoded behavior is faulty. We designed an
exploratory study to investigate how developers perform in classifying
generated white-box tests as faulty or correct. We carried out the study in a
laboratory setting with 54 graduate students. The tests were generated for two
open-source projects with the help of the IntelliTest tool. The performance of
the participants was analyzed using binary classification metrics and by
coding their observed activities. The results showed that participants
incorrectly classified a large number of both fault-encoding and correct tests
(with median misclassification rates of 33% and 25%, respectively). Thus the
real fault-detection capability of test generators could be much lower than
typically reported, and we suggest taking this human factor into account when
evaluating generated white-box tests.
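The per-class misclassification rates reported above reduce to simple counting over (ground truth, participant verdict) pairs; a minimal sketch with invented verdicts, not the study's data:

```python
def misclassification_rates(verdicts):
    """Per-class misclassification rates from (true_label, judged_label) pairs.

    Labels are "faulty" (the test encodes a fault) or "correct".
    Returns (rate over fault-encoding tests, rate over correct tests).
    """
    def rate(label):
        relevant = [(t, j) for t, j in verdicts if t == label]
        wrong = sum(1 for t, j in relevant if j != t)
        return wrong / len(relevant)
    return rate("faulty"), rate("correct")

# Invented example: 1 of 3 fault-encoding tests and 1 of 4 correct tests misjudged.
verdicts = [
    ("faulty", "faulty"), ("faulty", "correct"), ("faulty", "faulty"),
    ("correct", "correct"), ("correct", "faulty"),
    ("correct", "correct"), ("correct", "correct"),
]
faulty_rate, correct_rate = misclassification_rates(verdicts)
```

The first rate is the one that erodes fault-detection capability: every fault-encoding test judged "correct" is a generated test whose encoded fault goes unnoticed.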
AMPA experimental communications systems
The program was conducted to demonstrate the satellite communication advantages of Adaptive Phased Array Technology. A laboratory-based experiment was designed and implemented to demonstrate a low-Earth-orbit satellite communications system. Using a 32-element, L-band phased array augmented with 4 sets of weights (2 for reception and 2 for transmission), a high-speed digital processing system, and operating against multiple user terminals and interferers, the AMPA system demonstrated: communications with austere user terminals, frequency reuse, communications in the face of interference, and geolocation. The program and experiment objectives are described, the system hardware and software/firmware are defined, and the tests performed and the resultant test data are presented.
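As background on what the weight sets in such a system compute (a generic textbook sketch of phase-steering for a uniform linear array, not the AMPA hardware's actual adaptive weight-update algorithm), beam steering amounts to applying a per-element phase ramp:

```python
import numpy as np

def steering_weights(n_elems, d_over_lambda, theta_deg):
    """Phase-only weights steering a uniform linear array toward theta_deg."""
    n = np.arange(n_elems)
    phase = -2j * np.pi * d_over_lambda * n * np.sin(np.radians(theta_deg))
    return np.exp(phase) / n_elems  # normalized so the main-beam response is 1

def array_factor(weights, d_over_lambda, theta_deg):
    """Magnitude of the weighted array's response toward theta_deg."""
    n = np.arange(len(weights))
    v = np.exp(2j * np.pi * d_over_lambda * n * np.sin(np.radians(theta_deg)))
    return abs(weights @ v)

# A 32-element, half-wavelength-spaced array steered to 20 degrees:
# unit response on the main beam, strongly attenuated elsewhere.
w = steering_weights(32, 0.5, 20.0)
```

Adaptive operation, the focus of the AMPA experiments, further adjusts such weights at run time to place nulls on interferers while tracking users.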
- …