
    How people recognize previously seen Web pages from titles, URLs and thumbnails

    The selectable lists of pages offered by web browsers' history and bookmark facilities ostensibly make it easier for people to return to previously visited pages. These lists show the pages as abstractions, typically as truncated titles and URLs, and more rarely as small thumbnail images. Yet we have little knowledge of how recognizable these representations really are. Consequently, we carried out a study that compared the recognizability of thumbnails between various image sizes, and of titles and URLs between various string sizes. Our results quantify the tradeoff between the size of these representations and their recognizability. These findings directly contribute to how history and bookmark lists should be designed.

    Avoiding interference through translucent interface components in single display groupware


    Gsi demo: Multiuser gesture/speech interaction over digital tables by wrapping single user applications

    Most commercial software applications are designed for a single user using a keyboard/mouse over an upright monitor. Our interest is in exploiting these systems so they work over a digital table. Mirroring what people do when working over traditional tables, we want to allow multiple people to interact naturally with the tabletop application and with each other via rich speech and hand gesture interaction. We illustrate this on a digital table with applications including Google Earth, Warcraft III and The Sims. In this paper, we describe our underlying architecture: GSI Demo. First, GSI Demo creates a run-time wrapper around existing single user applications: it accepts and translates speech and gestures from multiple people into a single stream of keyboard and mouse inputs recognized by the application. Second, it lets people use multimodal demonstration, instead of programming, to quickly map their own speech and gestures to these keyboard/mouse inputs. For example, continuous gestures are trained by saying "Computer, when I do (one finger gesture), you do (mouse drag)". Similarly, discrete speech commands can be trained by saying "Computer, when I say (layer bars), you do (keyboard and mouse macro)". The end result is that end users can rapidly transform single user commercial applications into a multi-user, multimodal digital tabletop system.
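    The demonstration-trained mapping described in the abstract can be sketched as a small event-to-macro table: a training phase records demonstrated keyboard/mouse actions under a named speech or gesture event, and the wrapper later replays them as a single input stream. This is a minimal, hypothetical sketch in Python; the real GSI Demo wraps Windows applications, and all names and event encodings here are illustrative only.

```python
# Hypothetical sketch of GSI Demo's mapping idea: speech/gesture events
# from multiple users are translated into one serialized stream of
# keyboard/mouse inputs. Event and input encodings are made up.

class DemoWrapper:
    """Maps trained speech/gesture events to recorded input macros."""

    def __init__(self):
        self.bindings = {}       # event name -> list of low-level inputs
        self._recording = None   # event name currently being trained

    def begin_training(self, event_name):
        """'Computer, when I say (event_name), you do ...'"""
        self._recording = event_name
        self.bindings[event_name] = []

    def record_input(self, low_level_input):
        """Capture a demonstrated action, e.g. ('key', 'ctrl+L')."""
        if self._recording is not None:
            self.bindings[self._recording].append(low_level_input)

    def end_training(self):
        self._recording = None

    def handle_event(self, event_name):
        """Replay the trained macro as a single input stream."""
        return list(self.bindings.get(event_name, []))

wrapper = DemoWrapper()
wrapper.begin_training("layer bars")
wrapper.record_input(("key", "ctrl+L"))
wrapper.record_input(("mouse", "click", 120, 48))
wrapper.end_training()
print(wrapper.handle_event("layer bars"))
```

    Because any user's trained events feed the same binding table and output stream, multiple people can drive a single-user application concurrently, which is the crux of the wrapping approach.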

    Empirical development of a heuristic evaluation methodology for shared workspace groupware

    Good real-time groupware products are hard to develop, in part because evaluating their support for basic teamwork activities is difficult and costly. To address this problem, we are developing discount evaluation methods that look for groupware-specific usability problems. In a previous paper, we detailed a new set of usability heuristics that evaluators can use to inspect shared workspace groupware to see how well it supports teamwork. We wanted to determine whether the new heuristics could be integrated into a low-cost methodology that parallels Nielsen's traditional heuristic evaluation (HE). To this end, we examined 27 evaluations of two shared workspace groupware systems and analysed the inspectors' relative performance and variability. Similar to Nielsen's findings for traditional HE, individual inspectors discovered about a fifth of the total known teamwork problems, and there was only modest overlap in the problems they found. Groups of three to five inspectors would report about 40–60% of the total known teamwork problems. These results suggest that heuristic evaluation using our groupware heuristics can be an effective and efficient method for identifying teamwork problems in shared workspace groupware systems.
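    The reported numbers are roughly consistent with a simple aggregation model. If each inspector independently found each problem with probability p = 0.2 ("about a fifth"), a group of k inspectors would collectively find 1 - (1 - p)^k of the problems. Real inspectors are not independent (the abstract notes modest overlap), so this is only a back-of-envelope consistency check, not the paper's analysis:

```python
# Naive independent-detection model: probability that at least one of
# k inspectors finds a given problem, with per-inspector hit rate p.
p = 0.2
for k in (3, 4, 5):
    found = 1 - (1 - p) ** k
    print(f"{k} inspectors: {found:.0%} of problems expected")  # ~49%, 59%, 67%
```

    The model's 49-67% for three to five inspectors brackets the reported 40-60%, which is what one would expect given that overlap between inspectors pulls the true group yield below the independence estimate.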

    Collaboration Surrounding Beacon Use During Companion Avalanche Rescues

    When facing an avalanche, backcountry skiers need to work effectively both individually and as a group to rescue buried victims. If they don't, death is likely. One of the tools used by each person is a digital beacon that transmits an electromagnetic signal. If someone is buried, others use their beacons to locate victims by searching for their signals, and then dig them out. This study focuses on the collaborative practices of avalanche rescue and the interactions with beacons while backcountry skiing. We conducted interviews with backcountry recreationists and experts, and we observed avalanche rescue practice scenarios. Our results highlight aspects and challenges of mental representation, trust, distributed cognition, and practice. Implications include three considerations for the redesign of beacons: simplicity, visibility and practice.

    Intimacy in Long-Distance Relationships over Video Chat

    Many couples live a portion of their lives separated from each other as part of a long-distance relationship (LDR). This includes a large number of dating college students as well as couples who are geographically separated because of situational demands such as work. We conducted interviews with individuals in LDRs to understand how they make use of video chat systems to maintain their relationships. In particular, we investigated how couples use video to “hang out” together and engage in activities over extended periods of time. Our results show that regardless of the relationship situation, video affords a unique opportunity for couples to share presence over distance, which in turn provides intimacy and reduces idealization. While beneficial, couples still face contextual (e.g., location of partners, time zone differences), technical (e.g., mobility, audio and video quality, networking), and personal (e.g., a lack of true physicality needed by most to support intimate sexual acts) challenges in using video.

    Informing the Design of Proxemic Interactions


    Preschool children’s coping responses and outcomes in the vaccination context: child and caregiver transactional and longitudinal relationships

    This article, based on 2 companion studies, presents an in-depth analysis of preschoolers' coping with vaccination pain. Study 1 used an autoregressive cross-lagged path model to investigate the dynamic and reciprocal relationships between young children's coping responses (how they cope with pain and distress) and coping outcomes (pain behaviors) at the preschool vaccination. Expanding on this analysis, Study 2 then modeled preschool coping responses and outcomes using both caregiver and child variables from the child's 12-month vaccination (n = 548), preschool vaccination (n = 302), and a preschool psychological assessment (n = 172). Summarizing over the 5 path models and post hoc analyses across the 2 studies, novel transactional and longitudinal pathways predicting preschooler coping responses and outcomes were elucidated. Our research provides empirical support for the need to differentiate between coping responses and coping outcomes: 2 different, yet interrelated, components of “coping.” Among our key findings, the results suggest that a preschooler's ability to cope is a powerful tool to reduce pain-related distress but must be maintained throughout the appointment; caregiver behavior and poorer pain regulation at the 12-month vaccination appointment predicted preschool coping responses and/or outcomes; robust concurrent relationships exist between caregiver behaviors and both child coping responses and outcomes; and finally, caregiver behaviors during vaccinations are not only critical to child pain coping responses and outcomes in the short and long term but also show relationships to broader child cognitive abilities.

    Providing artifact awareness to a distributed group through screen sharing

    Despite the availability of awareness servers and casual interaction systems, distributed groups still cannot maintain artifact awareness: the easy awareness of the documents, objects, and tools that other people are using that is a natural part of co-located work environments. To address this deficiency, we designed an awareness tool that uses screen sharing to provide information about other people's artifacts. People see others' screens in miniature at the edge of their display, can selectively raise a larger view of a screen to get more detail, and can engage in remote pointing if desired. Initial experiences show that people use our tool for several purposes: to maintain awareness of what others are doing, to project a certain image of themselves, to monitor progress and coordinate joint tasks, to help determine when another person can be interrupted, and to engage in serendipitous conversation and collaboration. People have also been able to balance awareness with privacy by using the privacy protection strategies built into our system: restricting what parts of the screen others can see, specifying update frequency, hiding image detail, and getting feedback when screenshots are taken.
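    The privacy strategies the abstract lists (region restriction, update frequency, detail hiding, capture feedback) amount to a small set of per-user capture settings. The sketch below is a hypothetical Python illustration of those four controls under assumed names; it is not the system's actual implementation.

```python
# Hypothetical per-user privacy settings for a screen-sharing awareness
# tool, mirroring the four strategies in the abstract. All names and
# units are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    shared_region: tuple      # (x, y, w, h) portion of the screen others may see
    update_seconds: float     # minimum interval between published screenshots
    blur_radius: int          # 0 = full detail; larger values hide detail
    notify_on_capture: bool   # give feedback whenever a screenshot is taken

def should_publish(last_publish, now, settings):
    """Throttle screenshots to the user's chosen update frequency."""
    return (now - last_publish) >= settings.update_seconds

def visible_region(screen_size, settings):
    """Clip the shared region to the actual screen bounds."""
    sw, sh = screen_size
    x, y, w, h = settings.shared_region
    x2, y2 = min(x + w, sw), min(y + h, sh)
    return (x, y, max(0, x2 - x), max(0, y2 - y))

prefs = PrivacySettings(shared_region=(0, 0, 800, 600), update_seconds=30.0,
                        blur_radius=4, notify_on_capture=True)
print(visible_region((1280, 1024), prefs))   # (0, 0, 800, 600)
print(should_publish(0.0, 12.0, prefs))      # False: only 12s have elapsed
```

    Framing the controls as data rather than behavior keeps the capture pipeline simple: every published miniature is just the clipped, blurred, rate-limited result of applying one user's settings.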