
    The design and evaluation of an auditory-enhanced scrollbar

    A structured method is described for analysing interactions to identify situations where hidden information may exist and where non-speech sound might be used to overcome the associated problems. Interactions are considered in terms of events, status and modes to find any hidden information, which is then categorised in terms of the feedback needed to present it. An auditory-enhanced scrollbar, designed using this method, was then experimentally tested, with timing and error rates recorded alongside subjective measures of workload. Results from the experiment show a significant reduction in the time to complete one task, a decrease in the mental effort required and an overall preference for the auditory-enhanced scrollbar.
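
    The fragment below is only a minimal, hypothetical sketch of the kind of mapping the abstract describes, turning hidden scroll-position status and page-boundary events into non-speech cues; the cue design, the AuditoryScrollbar class and the play_tone() stub are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: sonifying hidden scrollbar information (position status and
# page-boundary events) with simple non-speech tones. All names and the
# frequency mapping are assumptions made for illustration.

def play_tone(frequency_hz: float, duration_ms: int) -> None:
    """Stand-in for a real audio call (e.g. a platform beep or a MIDI note)."""
    print(f"tone {frequency_hz:.0f} Hz for {duration_ms} ms")

class AuditoryScrollbar:
    def __init__(self, document_pages: int):
        self.document_pages = document_pages
        self.position = 0.0           # 0.0 = top of document, 1.0 = bottom
        self._last_page = 0

    def scroll_to(self, position: float) -> None:
        """Move the thumb and sonify any hidden-state changes."""
        self.position = max(0.0, min(1.0, position))
        page = int(self.position * self.document_pages)
        if page != self._last_page:
            # Event feedback: short tone at each page boundary, with pitch
            # rising toward the end of the document (status information).
            play_tone(200 + 800 * self.position, 40)
            self._last_page = page
        if self.position in (0.0, 1.0):
            # Limit feedback: a longer, lower tone at the document edges.
            play_tone(150, 120)

# Example: dragging the thumb through a 10-page document.
bar = AuditoryScrollbar(document_pages=10)
for p in [0.05, 0.12, 0.31, 0.55, 0.98, 1.0]:
    bar.scroll_to(p)
```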

    Why People Search for Images using Web Search Engines

    What are the intents or goals behind human interactions with image search engines? Knowing why people search for images is of major concern to Web image search engines because user satisfaction may vary as intent varies. Previous analyses of image search behavior have mostly been query-based, focusing on what images people search for, rather than intent-based, that is, why people search for images. To date, there is no thorough investigation of how different image search intents affect users' search behavior. In this paper, we address the following questions: (1) Why do people search for images in text-based Web image search systems? (2) How does image search behavior change with user intent? (3) Can we predict user intent effectively from interactions during the early stages of a search session? To this end, we conduct both a lab-based user study and a commercial search log analysis. We show that user intents in image search can be grouped into three classes: Explore/Learn, Entertain, and Locate/Acquire. Our lab-based user study reveals different user behavior patterns under these three intents, such as first click time, query reformulation, dwell time and mouse movement on the result page. Based on user interaction features during the early stages of an image search session, that is, before mouse scroll, we develop an intent classifier that achieves promising results for classifying sessions into our three intent classes. Given that all features can be obtained online and unobtrusively, the predicted intents can provide guidance for choosing ranking methods immediately after scrolling.
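
    As a rough illustration of the early-interaction classification idea, the sketch below trains a classifier on the kinds of pre-scroll features the abstract lists (first click time, query reformulations, dwell time, mouse movement); the feature set, synthetic values and model choice are assumptions, not the study's actual features or data.

```python
# Hedged sketch, not the authors' code: a minimal intent classifier over
# early-stage, pre-scroll interaction features. Feature values here are
# synthetic placeholders made up for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [first_click_time_s, n_query_reformulations,
#            dwell_time_s, mouse_move_px] for one session (synthetic data).
X_train = np.array([
    [4.2, 2, 35.0, 900.0],
    [8.1, 0, 120.0, 2400.0],
    [1.3, 0, 10.0, 300.0],
    [5.0, 3, 40.0, 1100.0],
    [7.5, 1, 90.0, 2000.0],
    [1.8, 1, 12.0, 350.0],
])
# Labels use the paper's three intent classes.
y_train = ["Explore/Learn", "Entertain", "Locate/Acquire",
           "Explore/Learn", "Entertain", "Locate/Acquire"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Predict the intent of a new session from its early-stage features only,
# e.g. to pick a ranking method right after the first scroll.
new_session = np.array([[2.0, 0, 15.0, 400.0]])
print(clf.predict(new_session)[0])
```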

    Target Acquisition in Multiscale Electronic Worlds

    Since the advent of graphical user interfaces, electronic information has grown exponentially, whereas the size of screen displays has stayed almost the same. Multiscale interfaces were designed to address this mismatch, allowing users to adjust the scale at which they interact with information objects. Although the technology has progressed quickly, the theory has lagged behind. Multiscale interfaces pose a stimulating theoretical challenge, reformulating the classic target-acquisition problem from the physical world into an infinitely rescalable electronic world. We address this challenge by extending Fitts’ original pointing paradigm: we introduce the scale variable, thus defining a multiscale pointing paradigm. This article reports on our theoretical and empirical results. We show that target-acquisition performance in a zooming interface must obey Fitts’ law and, more specifically, that target-acquisition time must be proportional to the index of difficulty. Moreover, we complement Fitts’ law by accounting for the effect of view size on pointing performance, showing that performance bandwidth is proportional to view size, up to a ceiling effect. The first empirical study shows that Fitts’ law does apply to a zoomable interface for indices of difficulty up to and beyond 30 bits, whereas classical Fitts’ law studies have been confined to the 2-10 bit range. The second study demonstrates a strong interaction between view size and task difficulty for multiscale pointing, and shows a surprisingly low ceiling. We conclude with implications of these findings for the design of multiscale user interfaces.
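
    For context, the baseline pointing model that the multiscale paradigm extends can be stated in its standard (Shannon) form as below; this is the textbook formulation, not necessarily the exact notation used in the article.

```latex
% Fitts' law in the common Shannon formulation: movement time MT grows
% linearly with the index of difficulty ID, defined from target distance D
% and target width W. The multiscale claim above is that this proportionality
% to ID still holds when D and W are reached through zooming.
\[
  MT = a + b \cdot ID,
  \qquad
  ID = \log_2\!\left(\frac{D}{W} + 1\right)
\]
```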

    GazeDrone: Mobile Eye-Based Interaction in Public Space Without Augmenting the User

    Gaze interaction holds a lot of promise for seamless human-computer interaction. At the same time, current wearable mobile eye trackers require user augmentation that negatively impacts natural user behavior, while remote trackers require users to position themselves within a confined tracking range. We present GazeDrone, the first system that combines a camera-equipped aerial drone with a computational method to detect sidelong glances for spontaneous (calibration-free) gaze-based interaction with surrounding pervasive systems (e.g., public displays). GazeDrone does not require augmenting each user with on-body sensors and allows interaction from arbitrary positions, even while moving. We demonstrate that drone-supported gaze interaction is feasible and accurate for certain movement types. It is well perceived by users, in particular while interacting from a fixed position as well as while moving orthogonally or diagonally to a display. We present design implications and discuss opportunities and challenges for drone-supported gaze interaction in public.
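
    The fragment below is only a generic, hypothetical illustration of how a sidelong glance might be flagged once eye landmarks have been detected in a camera frame; the landmark names, normalisation and threshold are assumptions and do not reflect GazeDrone's actual method.

```python
# Heavily hedged illustration (not GazeDrone's pipeline): flag a sidelong
# glance from already-detected eye landmarks. The 0.25 threshold and the
# landmark layout are assumptions made for illustration.

def horizontal_gaze_ratio(pupil_x: float, inner_corner_x: float,
                          outer_corner_x: float) -> float:
    """Pupil position within the eye, in [-1, 1]; 0 means looking straight."""
    center = (inner_corner_x + outer_corner_x) / 2.0
    half_width = abs(outer_corner_x - inner_corner_x) / 2.0
    if half_width == 0:
        return 0.0
    return (pupil_x - center) / half_width

def is_sidelong_glance(ratio: float, display_side: str,
                       threshold: float = 0.25) -> bool:
    """True if the eyes are deflected toward the display's side of the frame."""
    return ratio > threshold if display_side == "right" else ratio < -threshold

# Example: eye landmarks (image-pixel x coordinates) from one camera frame.
ratio = horizontal_gaze_ratio(pupil_x=222.0, inner_corner_x=200.0,
                              outer_corner_x=230.0)
print(is_sidelong_glance(ratio, display_side="right"))   # -> True
```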

    Evaluation of laser range-finder mapping for agricultural spraying vehicles

    In this paper, we present a new application of laser range-finder sensing to agricultural spraying vehicles. The current generation of spraying vehicles uses automatic controllers to maintain the height of the sprayer booms above the crop. However, these control systems are typically based on ultrasonic sensors mounted on the booms, which limits the accuracy of the measurements and the response of the controller to changes in the terrain, resulting in a sub-optimal spraying process. To overcome these limitations, we propose to use a laser scanner, attached to the front of the sprayer's cabin, to scan the ground surface in front of the vehicle and to build a scrolling 3D map of the terrain. We evaluate the proposed solution in a series of field tests, demonstrating that the approach provides a more detailed and accurate representation of the environment than the current sonar-based solution and can lead to the development of more efficient boom control systems.
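
    The sketch below illustrates, under assumed names and parameters, the general idea of a scrolling terrain map: new laser scan lines are appended ahead of the vehicle and rows are discarded as the sprayer drives past them. It is not the authors' implementation, and the grid resolution and interface are assumptions.

```python
# Hedged sketch of a scrolling terrain map for boom-height control: a rolling
# grid of ground heights in front of the vehicle, fed by cross-track laser
# scan lines. All names and sizes are illustrative assumptions.

from collections import deque

class ScrollingTerrainMap:
    def __init__(self, rows_ahead: int, cells_across: int):
        self.cells_across = cells_across          # cells across the boom width
        self.rows = deque(maxlen=rows_ahead)      # oldest row = nearest terrain

    def add_scan_line(self, heights: list[float]) -> None:
        """Append one cross-track line of ground heights from the scanner."""
        assert len(heights) == self.cells_across
        self.rows.append(list(heights))

    def advance(self) -> None:
        """Vehicle moved one grid row forward: discard the row just passed."""
        if self.rows:
            self.rows.popleft()

    def height_under_boom(self) -> list[float]:
        """Terrain profile the boom controller should react to next."""
        return self.rows[0] if self.rows else [0.0] * self.cells_across

# Example: a 3-cell-wide map holding up to 20 scan lines ahead of the vehicle.
terrain = ScrollingTerrainMap(rows_ahead=20, cells_across=3)
terrain.add_scan_line([0.10, 0.12, 0.11])
terrain.add_scan_line([0.15, 0.14, 0.16])
print(terrain.height_under_boom())   # -> [0.1, 0.12, 0.11]
```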