4 research outputs found

    Hyper-personalized Wearable Sensor Fusion for Contextual Interaction

    Contextual user interactions with devices and applications today are largely confined to location or on-screen context, and to the device at hand. This disclosure describes a context framework that, with user permission, integrates wearable and stationary sensor inputs together with traditional digital context into a larger computing ecosystem to deliver content across a range of proactive ambient computing use cases. Devices and apps register their sensors with a context engine and send periodic data updates to it. Using machine learning models, the context engine updates the user context based on sensor and external data, and provides that context to registered devices and apps, which modify their behavior or surface content accordingly.
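    The disclosure leaves the engine's interface unspecified; the minimal Python sketch below illustrates the register/update/notify flow it describes. All names (ContextEngine, register_sensor, push_reading, the model object) are hypothetical stand-ins, not an API from the disclosure.

        # Hypothetical sketch of the context-engine flow; names are illustrative.
        class ContextEngine:
            def __init__(self, model):
                self.model = model        # assumed ML model mapping readings -> a context label
                self.sensors = {}         # sensor_id -> latest reading
                self.subscribers = []     # callbacks for registered devices/apps
                self.user_context = None

            def register_sensor(self, sensor_id):
                # Devices and apps register their sensors with the engine.
                self.sensors[sensor_id] = None

            def subscribe(self, callback):
                # Devices and apps register to receive user-context updates.
                self.subscribers.append(callback)

            def push_reading(self, sensor_id, reading):
                # Periodic data update from a wearable or stationary sensor.
                self.sensors[sensor_id] = reading
                self._refresh_context()

            def _refresh_context(self):
                readings = {k: v for k, v in self.sensors.items() if v is not None}
                new_context = self.model.predict(readings)
                if new_context != self.user_context:
                    self.user_context = new_context
                    for notify in self.subscribers:
                        notify(new_context)  # apps modify behavior or surface content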

    Change detection for optimized semantic video analysis

    Semantic analysis or annotation of videos is most useful when done frequently enough to capture the significant moments of a video, but not so frequently that annotations become busy and repetitive. With current techniques, semantic analysis is done too often, overloading the semantic analyzer and overwhelming the viewer with frequent, repetitive, or similar annotations of frames that differ only insubstantially. This disclosure presents techniques that detect substantial changes in a video and run semantic analysis only on those changes. Timely and relevant annotations are presented to viewers without overwhelming them and without overloading the semantic analyzer.
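    As a concrete illustration of the gating idea, the sketch below runs a cheap frame-difference check and invokes the expensive semantic analyzer only when the change exceeds a threshold. The change metric (mean absolute pixel difference) and the threshold value are assumptions for illustration, not specifics from the disclosure.

        import numpy as np

        def mean_abs_diff(frame_a, frame_b):
            # Cheap change metric: mean absolute per-pixel difference.
            return np.mean(np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16)))

        def annotate_video(frames, analyze, threshold=12.0):
            # analyze() stands in for the expensive semantic analyzer.
            annotations = []
            last_analyzed = None
            for i, frame in enumerate(frames):
                if last_analyzed is None or mean_abs_diff(frame, last_analyzed) > threshold:
                    annotations.append((i, analyze(frame)))  # analyze only substantial changes
                    last_analyzed = frame
            return annotations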

    Controlling the density of user-generated content in augmented reality

    In a multi-user augmented reality (AR) environment, a user can insert virtual objects that other users can see. If users place objects without constraint, the AR view can become dense, overwhelming, and hard to understand. Per the techniques of this disclosure, the density of objects in an AR environment is constrained by maintaining a minimum distance between existing and newly placed objects.
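    A minimal sketch of that constraint, assuming 3D object positions and an illustrative 1-meter minimum radius (the disclosure does not fix a value):

        import math

        MIN_DISTANCE = 1.0  # meters; hypothetical value

        def try_place(existing_positions, new_position):
            # Accept a new virtual object only if it keeps the minimum
            # distance from every existing object; otherwise reject it.
            if all(math.dist(p, new_position) >= MIN_DISTANCE for p in existing_positions):
                existing_positions.append(new_position)
                return True
            return False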

    Synesthetic Soundtrack

    This disclosure describes techniques to generate an audio experience or soundscape corresponding to the visual field of a user. With user permission, objects within the feed of a head-mounted camera are semantically identified using computer vision techniques. Based on the detected objects, a unique audio experience is generated, shaped by the world around the user and by the physical items they engage with.
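    One way the detection-to-audio mapping could look, sketched under the assumption of a simple label-to-sound lookup table; detect_objects() and the sound file names stand in for the computer-vision and audio components, which the disclosure leaves open.

        # Hypothetical mapping from detected object labels to sound assets.
        SOUND_FOR_LABEL = {
            "tree": "wind_in_leaves.ogg",
            "cup": "soft_chime.ogg",
            "dog": "playful_motif.ogg",
        }

        def soundscape_for_frame(frame, detect_objects):
            # detect_objects(frame) -> list of semantic labels (assumed interface).
            labels = detect_objects(frame)
            return [SOUND_FOR_LABEL[label] for label in labels if label in SOUND_FOR_LABEL]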