
    Evaluating the effectiveness of physical shape-change for in-pocket mobile device notifications

    Audio and vibrotactile output are the standard mechanisms mobile devices use to attract their owner's attention. Yet in busy and noisy environments, or when the user is physically active, these channels sometimes fail. Recent work has explored the use of physical shape-change as an additional method for conveying notifications when the device is in-hand or viewable. However, we do not yet understand the effectiveness of physical shape-change as a method for communicating in-pocket notifications. This paper presents three robustly implemented, mobile-device-sized shape-changing devices, and two user studies to evaluate their effectiveness at conveying notifications. The studies reveal that (1) different types and configurations of shape-change convey different levels of urgency; and (2) fast-pulsing shape-changing notifications are missed less often and recognised more quickly than the standard, slower vibration pulse rates of a mobile device.
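    The timing contrast the studies draw can be sketched as a simple pulse-train calculation. The specific rates below (4 Hz for the fast shape-change pulse, 1 Hz for a typical vibration cadence) are illustrative assumptions, not the paper's measured parameters.

```python
def pulse_schedule(pulse_hz, duration_s):
    """Return (on_time, off_time) pairs in seconds for a square pulse train.

    A hypothetical model of a notification pulse pattern: each period is
    split evenly between actuation and rest.
    """
    period = 1.0 / pulse_hz
    half = period / 2.0
    count = int(duration_s / period)
    return [(half, half)] * count

# An assumed fast shape-change pulse (4 Hz) versus an assumed slower
# vibration cadence (1 Hz) over the same 2-second notification window:
fast = pulse_schedule(4.0, 2.0)  # 8 short pulses
slow = pulse_schedule(1.0, 2.0)  # 2 long pulses
```

    More pulses in the same window gives more chances for an in-pocket notification to be noticed, which is one way to read the studies' finding.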

    ShapeClip: towards rapid prototyping with shape-changing displays for designers

    This paper presents ShapeClip: a modular tool capable of transforming any computer screen into a z-actuating shape-changing display. This enables designers to produce dynamic physical forms by "clipping" actuators onto screens. ShapeClip displays are portable, scalable, fault-tolerant, and support runtime re-arrangement. Users are not required to have knowledge of electronics or programming, and can develop motion designs with presentation software, image editors, or web technologies. To evaluate ShapeClip we carried out a full-day workshop with expert designers. Participants were asked to generate shape-changing designs and then construct them using ShapeClip. ShapeClip enabled participants to rapidly and successfully transform their ideas into functional systems.
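    A plausible sketch of the screen-driven idea: if each clipped-on module's height is driven by the brightness of the pixels beneath it, then any tool that can animate grayscale images becomes a motion-design tool. The linear mapping and the 25 mm travel below are assumptions for illustration, not ShapeClip's documented specification.

```python
def brightness_to_height(gray, max_height_mm=25.0):
    """Map an 8-bit grayscale value (0-255) to an actuator extension in mm.

    Hypothetical linear mapping; the travel range is an assumed parameter.
    """
    if not 0 <= gray <= 255:
        raise ValueError("grayscale value must be in 0..255")
    return max_height_mm * gray / 255.0

# A motion design is then just an animated grayscale image: each frame's
# pixel values drive the actuators clipped onto the screen.
frame = [0, 128, 255]
heights = [brightness_to_height(g) for g in frame]
```

    Under this model, a designer fading a square from black to white in presentation software would raise the corresponding actuators from flat to full extension.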

    Ubiquitous Interoperable Emergency Response System

    In the United States, there is an emergency dispatch for fire department services more than once every second: 31,854,000 incidents in 2012. While large-scale disasters present enormous response complexity, even the most common emergencies require a better way to communicate information between personnel. Through real-time location and status updates using integrated sensors, this system can significantly decrease emergency response times and improve the overall effectiveness of emergency responses. Aside from face-to-face communication, radio transmissions are the most common medium for transferring information during emergency incidents. However, this type of information sharing is riddled with issues that are nearly impossible to overcome on scene. Poor sound quality, the failure to hear transmissions, the inability to reach a radio microphone, and the transient nature of radio messages illustrate just a few of the problems. Proprietary and closed systems that collect and present response data have been implemented, but they lack interoperability and do not provide a full array of necessary services. Furthermore, the software and hardware that run these systems are generally poorly designed for emergency response scenarios. Pervasive devices, which can transmit data without human interaction, and software using open communication standards designed for multiple platforms and form factors are two essential components. This thesis explores the issues, history, design, and implementation of a ubiquitous interoperable emergency response system by taking advantage of the latest in hardware and software, including Google Glass, Android-powered mobile devices, and a cloud-based architecture that can automatically scale to 7 billion requests per day. Implementing this pervasive system, which transcends physical barriers by allowing disparate devices to communicate and operate harmoniously without human interaction, is a step towards a practical solution for emergency response management.
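    A minimal sketch of the kind of platform-neutral status message such a system might exchange over an open standard like JSON. All field names here are hypothetical illustrations, not taken from the thesis.

```python
import json
import time

def make_status_update(unit_id, lat, lon, status):
    """Build an interoperable status message as JSON.

    Field names are illustrative assumptions; any device or form factor
    that can emit this structure can participate without human interaction.
    """
    return json.dumps({
        "unit": unit_id,
        "location": {"lat": lat, "lon": lon},
        "status": status,
        "timestamp": int(time.time()),
    })

# A wearable, phone, or vehicle terminal could all produce the same message:
msg = make_status_update("Engine-7", 42.28, -83.74, "on-scene")
decoded = json.loads(msg)
```

    Because the payload is plain JSON rather than a proprietary format, disparate devices and a cloud backend can consume it without vendor-specific adapters, which is the interoperability argument the abstract makes.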

    Coping with Digital Wellbeing in a Multi-Device World

    While Digital Self-Control Tools (DSCTs) mainly target smartphones, more effort should be put into evaluating multi-device ecosystems for enhancing digital wellbeing, as users typically use multiple devices at a time. In this paper, we first review more than 300 DSCTs, demonstrating that the majority of them implement a single-device conceptualization that poorly adapts to multi-device settings. Then, we report on the results from an interview and a sketching exercise (N=20) exploring how users make sense of their multi-device digital wellbeing. Findings show that digital wellbeing issues extend beyond smartphones, with the most problematic behaviors deriving from the simultaneous usage of different devices to perform uncorrelated tasks. While this suggests the need for DSCTs that can adapt to different and multiple devices, our work also highlights the importance of learning how to properly behave with technology, e.g., through educational courses, which may be more effective than any lock-out mechanism.
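    The single-device versus multi-device distinction can be made concrete with a small sketch: a hypothetical DSCT that sums usage across devices, so that a limit applies to the combined total rather than to each device separately. The data shapes and names are illustrative assumptions.

```python
from collections import defaultdict

def aggregate_screen_time(sessions):
    """Sum usage per app across devices.

    `sessions` is a list of (device, app, minutes) tuples. A single-device
    DSCT would see only one device's slice; the multi-device view totals
    them, which is the conceptualization the paper argues for.
    """
    per_app = defaultdict(int)
    for device, app, minutes in sessions:
        per_app[app] += minutes
    return dict(per_app)

sessions = [("phone", "video", 30), ("tablet", "video", 45), ("laptop", "mail", 10)]
totals = aggregate_screen_time(sessions)  # "video" is counted across devices
```

    A per-device limit of 60 minutes would never trigger on this data, even though the user watched 75 minutes of video in total; only the aggregated view catches it.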

    Principles for Designing Context-Aware Applications for Physical Activity Promotion

    Mobile devices with embedded sensors have become commonplace, carried by billions of people worldwide. Their potential to influence positive health behaviors, such as physical activity, is just starting to be realized. Two critical ingredients, an accurate understanding of human behavior and use of that knowledge for building computational models, underpin all emerging behavior change applications. Early research prototypes suggest that such applications would help people make the difficult decisions required to manage their complex behaviors. However, progress towards building real-world systems that support behavior change has been much slower than expected. The extreme diversity in real-world contextual conditions and user characteristics has prevented the conception of systems that scale and support end-users' goals. We believe that solutions to the many challenges of designing context-aware systems for behavior change exist in three areas: building behavior models amenable to computational reasoning, designing better tools to improve our understanding of human behavior, and developing new applications that scale existing ways of achieving behavior change. With physical activity as its focus, this thesis addresses some crucial challenges that can move the field forward. Specifically, this thesis provides the notion of sweet spots, a phenomenological account of how people make and execute their physical activity plans. The key contribution of this concept is in its potential to improve the predictability of computational models supporting physical activity planning. To further improve our understanding of the dynamic nature of human behavior, we designed and built Heed, a low-cost, distributed and situated self-reporting device. Heed's single-purpose, situated nature made it the preferred device for self-reporting in many contexts.
    We finally present a crowdsourcing system that leverages expert knowledge to write personalized behavior change messages for large-scale context-aware applications. (PhD thesis, University of Michigan, Horace H. Rackham School of Graduate Studies: https://deepblue.lib.umich.edu/bitstream/2027.42/144089/1/gparuthi_1.pd)

    Emergeables: Deformable Displays for Continuous Eyes-Free Mobile Interaction

    We present the concept of Emergeables: mobile surfaces that can deform or 'morph' to provide fully-actuated, tangible controls. Our goal in this work is to provide the flexibility of graphical touchscreens, coupled with the affordance and tactile benefits offered by physical widgets. In contrast to previous research in the area of deformable displays, our work focuses on continuous controls (e.g., dials or sliders), and strives for fully-dynamic positioning, providing versatile widgets that can change shape and location depending on the user's needs. We describe the design and implementation of two prototype emergeables built to demonstrate the concept, and present an in-depth evaluation that compares both with a touchscreen alternative. The results show the strong potential of emergeables for on-demand, eyes-free control of continuous parameters, particularly when comparing the accuracy and usability of a high-resolution emergeable to a standard GUI approach. We conclude with a discussion of the level of resolution that is necessary for future emergeables, and suggest how high-resolution versions might be achieved.
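    The closing question about resolution can be illustrated with a hypothetical sketch: a physical widget with N discrete positions quantizes a continuous parameter, and the achievable accuracy improves as N grows. The position counts below are assumptions, not the prototypes' actual resolutions.

```python
def quantize(value, levels):
    """Snap a continuous parameter in [0, 1] to the nearest of `levels`
    evenly spaced physical positions (a toy model of widget resolution)."""
    step = 1.0 / (levels - 1)
    return round(value / step) * step

# An assumed low-resolution emergeable (5 positions) versus an assumed
# high-resolution one (101 positions) targeting the same value:
target = 0.63
lo = quantize(target, 5)    # snaps to 0.75
hi = quantize(target, 101)  # stays at 0.63
```

    In this toy model the low-resolution widget's error for this target is 0.12, while the high-resolution one hits it almost exactly, which mirrors the accuracy trade-off the paper's evaluation discusses.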
