48 research outputs found

    I Am Your Father

    Get PDF
    Star Wars is supposed to be a generic mythological story with archetypes and narrative structures that transcend all cultures, both in space and in time. A post about the influence of movies on our culture, from In All Things, an online journal for critical reflection on faith, culture, art, and every ordinary-yet-graced square inch of God's creation. https://inallthings.org/i-am-your-father

    Urban Constellating

    Get PDF
    Urban Constellating took place on 13 February 2015 in Leeds, West Yorkshire. It was one of a series of events hosted by the Media and Place research cluster at Leeds Beckett University. The text was written by Zoë Thompson and Lynne Hibberd. Thanks to everyone who took part.

    Visual Estimation of Fingertip Pressure on Diverse Surfaces using Easily Captured Data

    Full text link
    People often use their hands to make contact with the world and apply pressure. Machine perception of this important human activity could be widely applied. Prior research has shown that deep models can estimate hand pressure based on a single RGB image. Yet, evaluations have been limited to controlled settings, since performance relies on training data with high-resolution pressure measurements that are difficult to obtain. We present a novel approach that enables diverse data to be captured with only an RGB camera and a cooperative participant. Our key insight is that people can be prompted to perform actions that correspond with categorical labels describing contact pressure (contact labels), and that the resulting weakly labeled data can be used to train models that perform well under varied conditions. We demonstrate the effectiveness of our approach by training on a novel dataset with 51 participants making fingertip contact with instrumented and uninstrumented objects. Our network, ContactLabelNet, dramatically outperforms prior work, performs well under diverse conditions, and matches or exceeds the performance of human annotators.
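    A minimal sketch of how such weakly labeled training could look, assuming a PyTorch setup. The class name ContactLabelNetSketch, the number of pressure categories, and the ResNet backbone are illustrative assumptions, not the authors' released model; the point is only that categorical contact labels reduce the problem to ordinary classification on RGB images.

    ```python
    # Sketch (not the authors' code): train a pressure-category classifier on RGB
    # images using weak categorical "contact labels" instead of dense pressure maps.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    NUM_PRESSURE_CLASSES = 4  # e.g. no-contact / low / medium / high (assumed granularity)

    class ContactLabelNetSketch(nn.Module):
        def __init__(self, num_classes: int = NUM_PRESSURE_CLASSES):
            super().__init__()
            backbone = resnet18(weights=None)  # RGB image encoder
            backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
            self.net = backbone

        def forward(self, rgb: torch.Tensor) -> torch.Tensor:
            # rgb: (B, 3, H, W) -> per-image pressure-category logits
            return self.net(rgb)

    model = ContactLabelNetSketch()
    criterion = nn.CrossEntropyLoss()  # weak labels -> plain classification loss
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    images = torch.randn(8, 3, 224, 224)                      # stand-in batch
    contact_labels = torch.randint(0, NUM_PRESSURE_CLASSES, (8,))
    loss = criterion(model(images), contact_labels)
    loss.backward()
    optimizer.step()
    ```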

    Force-Aware Interface via Electromyography for Natural VR/AR Interaction

    Full text link
    While tremendous advances in visual and auditory realism have been made for virtual and augmented reality (VR/AR), introducing a plausible sense of physicality into the virtual world remains challenging. Closing the gap between real-world physicality and immersive virtual experience requires a closed interaction loop: applying user-exerted physical forces to the virtual environment and generating haptic sensations back to the users. However, existing VR/AR solutions either completely ignore the force inputs from the users or rely on obtrusive sensing devices that compromise user experience. By identifying users' muscle activation patterns while engaging in VR/AR, we design a learning-based neural interface for natural and intuitive force inputs. Specifically, we show that lightweight electromyography sensors, resting non-invasively on users' forearm skin, inform and establish a robust understanding of their complex hand activities. Fuelled by a neural-network-based model, our interface can decode finger-wise forces in real time with 3.3% mean error, and generalize to new users with little calibration. Through an interactive psychophysical study, we show that human perception of virtual objects' physical properties, such as stiffness, can be significantly enhanced by our interface. We further demonstrate that our interface enables ubiquitous control via finger tapping. Ultimately, we envision our findings to push forward research towards more realistic physicality in future VR/AR. Comment: ACM Transactions on Graphics (SIGGRAPH Asia 2022).
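    A minimal sketch of the kind of EMG-to-force regression the abstract describes, under stated assumptions: the channel count, window length, and GRU-based architecture here are illustrative stand-ins, not the paper's model; the mean absolute error line only mirrors the style of metric the abstract reports.

    ```python
    # Sketch (assumptions, not the paper's model): regress finger-wise forces from
    # a window of forearm-EMG channels with a small recurrent network.
    import torch
    import torch.nn as nn

    NUM_EMG_CHANNELS = 8   # assumed sensor count
    WINDOW = 200           # assumed samples per inference window
    NUM_FINGERS = 5

    class EMGForceDecoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.rnn = nn.GRU(NUM_EMG_CHANNELS, 128, batch_first=True)
            self.head = nn.Linear(128, NUM_FINGERS)  # one force value per finger

        def forward(self, emg: torch.Tensor) -> torch.Tensor:
            # emg: (B, WINDOW, NUM_EMG_CHANNELS) -> (B, NUM_FINGERS) normalized forces
            _, h = self.rnn(emg)
            return self.head(h[-1])

    decoder = EMGForceDecoder()
    emg_batch = torch.randn(4, WINDOW, NUM_EMG_CHANNELS)
    target_forces = torch.rand(4, NUM_FINGERS)        # stand-in ground-truth forces in [0, 1]
    pred = decoder(emg_batch)
    loss = nn.functional.mse_loss(pred, target_forces)
    mean_abs_error = (pred - target_forces).abs().mean()  # the kind of error the paper reports
    ```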

    The Cord Weekly (March 19, 1997)

    Get PDF

    Volume 38 - Issue 00 - Thursday, August 29, 2002

    Get PDF
    The Rose Thorn, Rose-Hulman's independent student newspaper. https://scholar.rose-hulman.edu/rosethorn/1277/thumbnail.jp

    CHORE: Contact, Human and Object REconstruction from a single RGB image

    Full text link
    While most works in computer vision and learning have focused on perceiving 3D humans from single images in isolation, in this work we focus on capturing 3D humans interacting with objects. The problem is extremely challenging due to heavy occlusions between human and object, diverse interaction types and depth ambiguity. In this paper, we introduce CHORE, a novel method that learns to jointly reconstruct human and object from a single image. CHORE takes inspiration from recent advances in implicit surface learning and classical model-based fitting. We compute a neural reconstruction of human and object represented implicitly with two unsigned distance fields, and additionally predict a correspondence field to a parametric body as well as an object pose field. This allows us to robustly fit a parametric body model and a 3D object template, while reasoning about interactions. Furthermore, prior pixel-aligned implicit learning methods use synthetic data and make assumptions that are not met in real data. We propose a simple yet effective depth-aware scaling that allows more efficient shape learning on real data. Our experiments show that our joint reconstruction learned with the proposed strategy significantly outperforms the SOTA. Our code and models will be released to foster future research in this direction. Comment: 19 pages, 7 figures.
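    A minimal sketch of the fitting idea the abstract outlines: using predicted unsigned distance fields to pull a body model and an object template toward their observed surfaces. Everything here is a hypothetical simplification of that idea (the parameterizations, the callables human_udf/object_udf and body_vertices/object_vertices, and the plain Adam loop are assumptions), not the released CHORE code, which also uses correspondence and pose fields.

    ```python
    # Sketch (illustrative, not the released CHORE code): jointly fit body parameters
    # and an object pose by minimizing predicted unsigned distances at the fitted surfaces.
    import torch

    def fit_human_and_object(human_udf, object_udf, body_vertices, object_vertices,
                             steps: int = 200, lr: float = 1e-2):
        body_params = torch.zeros(10, requires_grad=True)  # assumed body shape/pose vector
        object_pose = torch.zeros(6, requires_grad=True)   # assumed 6-DoF object pose
        optim = torch.optim.Adam([body_params, object_pose], lr=lr)
        for _ in range(steps):
            optim.zero_grad()
            # Pull each fitted surface toward the zero level set of its distance field.
            loss = human_udf(body_vertices(body_params)).mean() \
                 + object_udf(object_vertices(object_pose)).mean()
            loss.backward()
            optim.step()
        return body_params.detach(), object_pose.detach()
    ```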

    BEHAVE: Dataset and Method for Tracking Human Object Interactions

    Get PDF