    Mid-air haptic rendering of 2D geometric shapes with a dynamic tactile pointer

    An important challenge that affects ultrasonic mid-air haptics, in contrast to physical touch, is that we lose certain exploratory procedures such as contour following. This makes the task of perceiving geometric properties and identifying shapes more difficult. Meanwhile, the growing interest in mid-air haptics and their application to various new areas requires an improved understanding of how we perceive specific haptic stimuli, such as icons and control dials in mid-air. We address this challenge by investigating static and dynamic methods of displaying 2D geometric shapes in mid-air. We display a circle, a square, and a triangle, in either a static or a dynamic condition, using ultrasonic mid-air haptics. In the static condition, the shapes are presented as a full outline in mid-air, while in the dynamic condition, a tactile pointer is moved around the perimeter of the shapes. We measure participants’ accuracy and confidence in identifying shapes in two controlled experiments (n1 = 34, n2 = 25). Results reveal that in the dynamic condition people recognise shapes significantly more accurately, and with higher confidence. We also find that representing polygons as a set of individually drawn haptic strokes, with a short pause at the corners, drastically enhances shape recognition accuracy. Our research supports the design of mid-air haptic user interfaces in application scenarios such as in-car interactions or assistive technology in education.
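
    The dynamic condition described in this abstract amounts to time-parameterising the shape's outline: a single focal point traverses the perimeter and dwells briefly at each corner. A minimal Python sketch of that idea follows; the traversal speed, corner pause, update rate, and the way positions are handed to the device are all illustrative assumptions, not parameters reported by the paper.

```python
import math

def pointer_trajectory(vertices, speed=0.1, corner_pause=0.2, rate=200):
    """Yield (x, y) focal-point positions at `rate` Hz for a dynamic
    tactile pointer that circles a polygon's perimeter.

    vertices     -- polygon corners in metres, in drawing order
    speed        -- traversal speed along each edge in m/s (assumed)
    corner_pause -- dwell time at each corner in seconds (assumed)
    rate         -- device update rate in Hz (assumed)
    """
    dt = 1.0 / rate
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        # Dwell at the corner: repeating a position renders a short pause,
        # which the abstract reports drastically improves recognition.
        for _ in range(int(corner_pause * rate)):
            yield (x0, y0)
        # Constant-speed interpolation along the edge.
        edge_len = math.hypot(x1 - x0, y1 - y0)
        steps = max(1, round(edge_len / (speed * dt)))
        for s in range(1, steps + 1):
            t = s / steps
            yield (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

# Example: one lap of a 6 cm square centred over the array.
square = [(-0.03, -0.03), (0.03, -0.03), (0.03, 0.03), (-0.03, 0.03)]
for x, y in pointer_trajectory(square):
    pass  # hand (x, y) to the ultrasound device's focal-point update here
```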

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input will produce results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation. Depending on the mode, the movement of the hand is interpreted differently. One of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or in a mid-air interface, affects user productivity. Moreover, when touch and mid-air interfaces like VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation to characterize the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR. It concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics, and its utility when designing user interfaces more generally.
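
    As an editorial illustration of the mode concept described above, the sketch below uses hypothetical mode names and handlers (none taken from the thesis) to show how one and the same touch event is dispatched to different actions depending on the active mode; the cost of invoking switch_mode through some input technique is the mode-switching time the thesis measures.

```python
from dataclasses import dataclass

@dataclass
class Touch:
    x: float
    y: float

class ModalCanvas:
    def __init__(self):
        self.mode = "draw"  # the active mode determines interpretation

    def switch_mode(self, mode):
        # The transition itself is what a mode-switching technique
        # triggers; its time cost is the quantity under study.
        self.mode = mode

    def on_touch(self, touch: Touch):
        # One input, many meanings: dispatch on the active mode.
        if self.mode == "draw":
            print(f"draw line to ({touch.x}, {touch.y})")
        elif self.mode == "pan":
            print(f"pan canvas toward ({touch.x}, {touch.y})")
        elif self.mode == "select":
            print(f"select shape at ({touch.x}, {touch.y})")

canvas = ModalCanvas()
canvas.on_touch(Touch(10, 20))   # interpreted as drawing
canvas.switch_mode("pan")
canvas.on_touch(Touch(10, 20))   # same input, now interpreted as panning
```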

    Phrasing Bimanual Interaction for Visual Design

    Architects and other visual thinkers create external representations of their ideas to support early-stage design. They compose visual imagery with sketching to form abstract diagrams as representations. When working with digital media, they apply various visual operations to transform representations, often engaging in complex sequences. This research investigates how to build interactive capabilities to support designers in putting together, that is phrasing, sequences of operations using both hands. In particular, we examine how phrasing interactions with pen and multi-touch input can support modal switching among different visual operations that, in many commercial design tools, require using menus and tool palettes: techniques originally designed for the mouse, not pen and touch. We develop an interactive bimanual pen+touch diagramming environment and study its use in landscape architecture design studio education. We observe the interesting forms of interaction that emerge, and how our bimanual interaction techniques support visual design processes. Based on the needs of architects, we develop LayerFish, a new bimanual technique for layering overlapping content, and conduct a controlled experiment to evaluate its efficacy. We explore the use of wearables to identify which user, and distinguish which hand, is touching, to support phrasing together direct-touch interactions on large displays. From the design and development of the environment and from both field and controlled studies, we derive a set of methods, based upon human bimanual specialization theory, for phrasing modal operations through bimanual interactions without menus or tool palettes.

    Human Factors Considerations in System Design

    Human factors considerations in systems design were examined, covering human factors in automated command and control, the efficiency of the human-computer interface, and system effectiveness. The following topics are discussed: human factors aspects of control room design; design of interactive systems; human-computer dialogue, interaction tasks, and techniques; guidelines on ergonomic aspects of control rooms and highly automated environments; system engineering for control by humans; conceptual models of information processing; and information display and interaction in real-time environments.

    eStorys: A visual storyboard system supporting back-channel communication for emergencies

    In this paper we present a new web mashup system for helping people and professionals retrieve information about emergencies and disasters. Today, the use of the web during emergencies is confirmed by the employment of systems like Flickr, Twitter, and Facebook, as demonstrated in the cases of Hurricane Katrina, the July 7, 2005 London bombings, and the April 16, 2007 shootings at Virginia Polytechnic University. Many pieces of information useful for emergency purposes are currently available on the web, ranging from messages on forums and blogs to georeferenced photos. We present a system that, by mixing information available on the web, is able to help both people and emergency professionals rapidly obtain data on emergency situations through multiple web channels. We introduce a visual system providing a combination of tools that have proven effective in such emergency situations, such as spatio-temporal search features, recommendation and filtering tools, and storyboards. We demonstrate the efficacy of our system by means of an analytic evaluation (comparing it with others available on the web), a usability evaluation by expert users (adequately trained students), and an experimental evaluation with 34 participants.
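
    A minimal sketch of the spatio-temporal search idea at the system's core, assuming a simplified item format; the field names, data model, and web channels below are illustrative, not taken from the paper.

```python
from datetime import datetime

def spatio_temporal_filter(items, bbox, start, end):
    """Keep georeferenced, timestamped items inside a bounding box
    bbox = (min_lat, min_lon, max_lat, max_lon) whose timestamp
    falls within [start, end]."""
    min_lat, min_lon, max_lat, max_lon = bbox
    return [it for it in items
            if min_lat <= it["lat"] <= max_lat
            and min_lon <= it["lon"] <= max_lon
            and start <= it["time"] <= end]

# Toy items aggregated from multiple hypothetical web channels.
items = [
    {"lat": 29.95, "lon": -90.07, "time": datetime(2005, 8, 29, 10), "src": "photo"},
    {"lat": 51.51, "lon": -0.13,  "time": datetime(2005, 7, 7, 9),   "src": "forum"},
]

# Items around New Orleans during the Katrina landfall window:
hits = spatio_temporal_filter(items, (29.0, -91.0, 31.0, -89.0),
                              datetime(2005, 8, 28), datetime(2005, 9, 2))
```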

    Expressy: Using a Wrist-worn Inertial Measurement Unit to Add Expressiveness to Touch-based Interactions

    Expressiveness, which we define as the extent to which rich and complex intent can be conveyed through action, is a vital aspect of many human interactions. For instance, paint on canvas is said to be an expressive medium, because it affords the artist the ability to convey multifaceted emotional intent through intricate manipulations of a brush. To date, touch devices have failed to offer users a level of expressiveness in their interactions that rivals that experienced by the painter and those completing other skilled physical tasks. We investigate how data about hand movement, provided by a motion sensor similar to those found in many smartwatches and fitness trackers, can be used to expand the expressiveness of touch interactions. We begin by introducing a conceptual model that formalizes a design space of possible expressive touch interactions. We then describe and evaluate Expressy, an approach that uses a wrist-worn inertial measurement unit to detect and classify qualities of touch interaction that extend beyond those offered by today’s typical sensing hardware. We conclude by describing a number of sample applications, which demonstrate the enhanced, expressive interaction capabilities made possible by Expressy.
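
    A minimal sketch of the underlying idea, under assumed data formats: pair each touch event with the wrist-IMU samples around its timestamp and derive a simple feature such as peak acceleration magnitude. The window, threshold, and labels below are placeholders; Expressy itself classifies richer qualities of touch, not a single fixed cutoff.

```python
import math

def peak_accel(imu_samples, touch_time, window=0.1):
    """Peak acceleration magnitude within `window` seconds of a touch.
    imu_samples: list of (t, ax, ay, az) tuples from the wrist-worn IMU."""
    return max(
        (math.sqrt(ax * ax + ay * ay + az * az)
         for t, ax, ay, az in imu_samples
         if abs(t - touch_time) <= window),
        default=0.0,
    )

def classify_touch(imu_samples, touch_time, threshold=15.0):
    # threshold in m/s^2: an assumed placeholder standing in for a
    # learned decision boundary over richer motion features.
    if peak_accel(imu_samples, touch_time) > threshold:
        return "forceful"
    return "gentle"

# Example: a hard tap at t = 1.00 s, surrounded by quieter samples.
samples = [(0.95, 0.1, 0.2, 9.8), (1.00, 4.0, 3.0, 18.0), (1.05, 0.2, 0.1, 9.8)]
print(classify_touch(samples, touch_time=1.00))  # -> "forceful"
```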

    Prosodic boundary phenomena

    Synopsis: In spoken language comprehension, the hearer is faced with a more or less continuous stream of auditory information. Prosodic cues, such as pitch movement, pre-boundary lengthening, and pauses, incrementally help to organize the incoming stream of information into prosodic phrases, which often coincide with syntactic units. Prosody is hence central to spoken language comprehension, and some models assume that the speaker produces prosody in a consistent and hierarchical fashion. While there is manifold empirical evidence that prosodic boundary cues are reliably and robustly produced and effectively guide spoken sentence comprehension across different populations and languages, the underlying mechanisms and the nature of the prosody-syntax interface have still not been identified sufficiently. This is also reflected in the fact that most models of sentence processing completely lack prosodic information. This edited volume is grounded in a workshop held in 2021 at the annual conference of the Deutsche Gesellschaft für Sprachwissenschaft (DGfS). The five chapters cover selected topics on the production and comprehension of prosodic cues in various populations and languages, all focusing in particular on the processing of prosody at structurally relevant prosodic boundaries. Specifically, the book comprises cross-linguistic evidence as well as evidence from non-native listeners, infants, adults, and elderly speakers, highlighting the important role of prosody in both language production and comprehension.