
    Bringing tabletop technologies to kindergarten children

    Taking computer technology away from the desktop and into a more physical, manipulative space is known to provide many benefits and is generally considered to result in a system that is easier to learn and more natural to use. This paper describes a design solution that allows kindergarten children to benefit from the new pedagogical possibilities that tangible interaction and tabletop technologies offer for manipulative learning. After analysing children's cognitive and psychomotor skills, we designed and tuned a prototype game suitable for children aged 3 to 4 years. Our prototype uniquely combines low-cost tangible interaction and tabletop technology with tutored learning. The design was based on observing children using the technology, letting them play freely with the application during three play sessions. These observational sessions informed the design decisions for the game whilst also confirming the children's enjoyment of the prototype.

    GestureGPT: Zero-shot Interactive Gesture Understanding and Grounding with Large Language Model Agents

    Current gesture recognition systems primarily focus on identifying gestures within a predefined set, leaving a gap in connecting these gestures to interactive GUI elements or system functions (e.g., linking a 'thumb-up' gesture to a 'like' button). We introduce GestureGPT, a novel zero-shot gesture understanding and grounding framework leveraging large language models (LLMs). Gesture descriptions are formulated based on hand landmark coordinates from gesture videos and fed into our dual-agent dialogue system. A gesture agent deciphers these descriptions and queries about the interaction context (e.g., interface, history, gaze data), which a context agent organizes and provides. Following iterative exchanges, the gesture agent discerns user intent, grounding it to an interactive function. We validated the gesture description module using public first-view and third-view gesture datasets and tested the whole system in two real-world settings: video streaming and smart home IoT control. The highest zero-shot Top-5 grounding accuracies are 80.11% for video streaming and 90.78% for smart home tasks, showing the potential of this new gesture-understanding paradigm.
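    The dual-agent loop described in the abstract can be pictured in a few lines of Python. This is a hedged sketch only: the names (describe_landmarks, ContextAgent, the chat callable) and the prompt format are assumptions made for illustration, not GestureGPT's actual prompts or API.

```python
# Hedged sketch of the dual-agent gesture-grounding loop. All names here
# (describe_landmarks, ContextAgent, the chat callable) are illustrative
# assumptions, not GestureGPT's actual API or prompts.

def describe_landmarks(landmarks) -> str:
    """Turn hand-landmark coordinates into a natural-language description.

    A real system would compute finger flexion and orientation from the
    landmark points; a canned example description stands in here.
    """
    return "thumb extended upward, remaining four fingers curled"

class ContextAgent:
    """Organizes interaction context (interface state, history, gaze data)."""

    def __init__(self, interface_state: str, history: list, gaze: str):
        self.context = {"interface": interface_state,
                        "history": history, "gaze": gaze}

    def answer(self, question: str) -> str:
        # In the paper this agent is itself LLM-driven; a dict dump stands in.
        return f"{question} -> {self.context}"

def ground_gesture(landmarks, ctx: ContextAgent, chat, max_turns: int = 3):
    """Iteratively query context until the gesture agent commits to an
    interactive function (e.g., mapping thumb-up to a 'like' button)."""
    transcript = [f"Observed gesture: {describe_landmarks(landmarks)}"]
    for _ in range(max_turns):
        reply = chat("\n".join(transcript)
                     + "\nAsk ONE context question, or reply 'FUNCTION: <name>'.")
        if reply.startswith("FUNCTION:"):
            return reply.removeprefix("FUNCTION:").strip()
        transcript += [f"Q: {reply}", f"A: {ctx.answer(reply)}"]
    return None  # no confident grounding within the turn budget
```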

    Smart Vehicle Proxemics: A Conceptual Framework Operationalizing Proxemics in the Context of Outside-the-Vehicle Interactions

    We introduce smart vehicle proxemics, a conceptual framework for interactive vehicular applications that operationalizes proxemics for outside-the-vehicle interactions. We identify four zones around the vehicle affording different kinds of interactions and discuss the corresponding conceptual space along three dimensions (physical distance, interaction paradigm, and goal). We study the dimensions of this framework and synthesize our findings regarding drivers’ preferences for (i) information to obtain from their vehicles at a distance, (ii) system functions of their vehicles to control remotely, and (iii) devices (e.g., smartphones, smartglasses, smart key fobs) for interactions outside the vehicle. We discuss the positioning of smart vehicle proxemics in the context of proxemic interactions more generally, and expand on the dichotomy and complementarity of outside-the-vehicle and inside-the-vehicle interactions for new applications enabled by smart vehicle proxemics.
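    To make the framework's three dimensions concrete, here is a minimal sketch that encodes zones around the vehicle as records and looks one up by distance. The zone names and distance cut-offs are invented for illustration and are not the paper's values.

```python
from dataclasses import dataclass

# Illustrative encoding of the framework's three dimensions (physical
# distance, interaction paradigm, goal). The four zone names and the
# distance cut-offs are assumptions for this sketch, not the paper's values.

@dataclass(frozen=True)
class ProxemicZone:
    name: str
    max_distance_m: float  # outer boundary of the zone
    paradigm: str          # how the driver interacts in this zone
    goal: str              # what the driver wants to accomplish

ZONES = [
    ProxemicZone("personal", 1.5, "touch / key fob", "unlock, start"),
    ProxemicZone("near", 4.0, "gesture / smartphone", "check status"),
    ProxemicZone("far", 10.0, "smartphone app", "precondition cabin"),
    ProxemicZone("remote", float("inf"), "smartphone / cloud", "locate, monitor"),
]

def zone_for(distance_m: float) -> ProxemicZone:
    """Return the innermost zone whose boundary contains the distance."""
    return next(z for z in ZONES if distance_m <= z.max_distance_m)

print(zone_for(3.0).paradigm)  # -> "gesture / smartphone"
```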

    Self-Powered Gesture Recognition with Ambient Light

    We present a self-powered module for gesture recognition that utilizes small, low-cost photodiodes for both energy harvesting and gesture sensing. Operating in photovoltaic mode, the photodiodes harvest energy from ambient light. Meanwhile, the instantaneously harvested power of each photodiode is monitored and exploited as a cue for sensing finger gestures in proximity. The harvested power from all photodiodes is aggregated to drive the whole gesture-recognition module, including a micro-controller running the recognition algorithm. We design a robust, lightweight algorithm to recognize finger gestures in the presence of ambient light fluctuations. We fabricate two prototypes to facilitate users’ interaction with smart glasses and smart watches. Results show 99.7%/98.3% overall precision/recall in recognizing five gestures on the glasses and 99.2%/97.5% precision/recall in recognizing seven gestures on the watch. The system consumes 34.6 µW/74.3 µW for the glasses/watch and thus can be powered by the energy harvested from ambient light. We also test the system’s robustness under various light intensities, light directions, and ambient light fluctuations. The system maintains high recognition accuracy (>96%) in all tested settings.
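    The core sensing idea lends itself to a short sketch: monitor each photodiode's harvested power, divide out slow ambient drift, and threshold the result into an occlusion map that a lightweight classifier can label. Everything below (function names, window size, threshold) is an assumption for illustration, not the paper's algorithm.

```python
import numpy as np

# Hedged sketch of the sensing idea: each photodiode's instantaneously
# harvested power doubles as a proximity signal, since a hovering finger
# shadows the diode and drops its output. The normalization and the 0.7
# threshold below are illustrative assumptions, not the paper's algorithm.

def normalize(power: np.ndarray, window: int = 50) -> np.ndarray:
    """Divide out slow ambient-light drift with a moving-average baseline.

    power: array of shape (samples, n_diodes), harvested power in microwatts.
    Returns values near 1.0 for unshadowed diodes, lower when shadowed.
    """
    kernel = np.ones(window) / window
    baseline = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, power)
    return power / np.maximum(baseline, 1e-9)

def shadow_map(normalized: np.ndarray, threshold: float = 0.7) -> np.ndarray:
    """Binary occlusion map: True where a finger shadows a diode.

    A swipe then appears as the True region moving across diode indices
    over time, which a lightweight on-micro-controller classifier
    (e.g., template matching) can label as one of the gesture classes.
    """
    return normalized < threshold
```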