14 research outputs found

    Lightweight real-time hand segmentation leveraging MediaPipe landmark detection

    Real-time hand segmentation is a key process in applications that require human–computer interaction, such as gesture recognition or augmented reality systems. However, the infinite shapes and orientations that hands can adopt, their variability in skin pigmentation, and the self-occlusions that continuously appear in images make hand segmentation a truly complex problem, especially under uncontrolled lighting conditions and backgrounds. Robust, real-time hand segmentation algorithms are essential for immersive augmented reality and mixed reality experiences, since they allow collisions and occlusions to be interpreted correctly. In this paper, we present a simple but powerful algorithm based on the MediaPipe Hands solution, a highly optimized neural network. The algorithm processes the landmarks provided by MediaPipe using morphological and logical operators to obtain the masks that allow dynamic updating of the skin color model. Experiments comparing the influence of the color space on skin segmentation identified CIELab as the best option. An average intersection over union of 0.869 was achieved on the demanding Ego2Hands dataset, running at 90 frames per second on a conventional computer without any hardware acceleration. Finally, the proposed segmentation procedure was implemented in an augmented reality application to add hand occlusion for improved user immersion. An open-source implementation of the algorithm is publicly available at https://github.com/itap-robotica-medica/lightweight-hand-segmentation.

    Funded by the Ministerio de Ciencia e Innovación (under Grant Agreement No. RTC2019-007350-1). Open-access publication funded by the Consorcio de Bibliotecas Universitarias de Castilla y León (BUCLE), under Operational Programme 2014ES16RFOP009 FEDER 2014-2020 of Castilla y León, Action 20007-CL - Apoyo Consorcio BUCLE.
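    The pipeline described above — landmark seeds grown with morphological operators, a dynamically updated CIELab skin-color model, and IoU as the evaluation metric — can be sketched with NumPy alone. This is an illustrative sketch only: the function names, the square dilation, and the per-channel Gaussian threshold are assumptions for exposition, not the authors' implementation.

    ```python
    import numpy as np

    def landmark_mask(shape, landmarks, radius=2):
        """Rasterize landmark pixel coordinates into a binary seed mask,
        grown by a square dilation (stand-in for morphological operators)."""
        mask = np.zeros(shape, dtype=bool)
        for y, x in landmarks:
            y0, y1 = max(0, y - radius), min(shape[0], y + radius + 1)
            x0, x1 = max(0, x - radius), min(shape[1], x + radius + 1)
            mask[y0:y1, x0:x1] = True
        return mask

    def skin_threshold(lab_image, seed_mask, k=2.5):
        """Fit a per-channel Gaussian skin model on seed pixels (CIELab values)
        and keep pixels within k standard deviations on every channel."""
        seed = lab_image[seed_mask]                   # (N, 3) Lab samples
        mu, sigma = seed.mean(axis=0), seed.std(axis=0) + 1e-6
        dist = np.abs(lab_image - mu) / sigma         # normalized deviation
        return (dist <= k).all(axis=-1)

    def iou(pred, gt):
        """Intersection over union between two boolean masks."""
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return inter / union if union else 1.0
    ```

    Refitting the skin model on each frame's landmark seeds is what makes the color model "dynamic": it tracks lighting changes without a fixed skin-tone prior.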

    Computational Design of Wiring Layout on Tight Suits with Minimal Motion Resistance

    An increasing number of electronics are embedded directly in clothing to monitor human status (e.g., skeletal motion) or provide haptic feedback. A specific challenge in prototyping and fabricating such clothing is designing the wiring layout while minimizing interference with human motion. We address this challenge by formulating the topological optimization problem on the clothing surface as a deformation-weighted Steiner tree problem on a 3D clothing mesh. Our method proposes an energy function that minimizes the strain energy in the wiring area under different motions, regularized by the total wire length. We built a physical prototype to verify the effectiveness of our method and conducted a user study with participants including both design experts and smart clothing users. On three types of commercial smart clothing products, the optimized layout reduced wire strain energy by an average of 77% across 248 actions compared to a baseline design, and by 18% compared to an expert design. Comment: This work is accepted at SIGGRAPH ASIA 2023 (Conference Track).
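    A deformation-weighted Steiner tree can be approximated with standard graph tools: treat each mesh edge's weight as its strain energy plus a length regularizer, then greedily connect terminals (sensor/electronics attachment points) by cheapest paths. The sketch below is a generic Steiner heuristic under that assumed edge weighting, not the paper's optimizer; `adj` maps each vertex to `(neighbor, weight)` pairs.

    ```python
    import heapq

    def dijkstra(adj, src):
        """Shortest distances and parent pointers from src over a
        weighted adjacency-list graph."""
        dist = {v: float("inf") for v in adj}
        parent = {src: None}
        dist[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    parent[v] = u
                    heapq.heappush(pq, (d + w, v))
        return dist, parent

    def steiner_wiring(adj, terminals):
        """Greedy Steiner heuristic: grow the wiring tree by repeatedly
        attaching the terminal cheapest to reach from any tree vertex."""
        tree_nodes = {terminals[0]}
        tree_edges = set()
        remaining = set(terminals[1:])
        while remaining:
            best = None
            for root in tree_nodes:
                dist, parent = dijkstra(adj, root)
                for t in remaining:
                    if best is None or dist[t] < best[0]:
                        best = (dist[t], t, parent)
            _, t, parent = best
            v = t  # walk the path back to the tree, adding its edges
            while parent[v] is not None:
                tree_edges.add(frozenset((v, parent[v])))
                tree_nodes.add(v)
                v = parent[v]
            remaining.discard(t)
        return tree_edges
    ```

    With edge weight `w = strain(e) + lam * length(e)`, lowering `lam` favors routes through low-deformation regions of the garment even if they are longer, which mirrors the paper's strain-versus-length trade-off.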

    WristSketcher: Creating Dynamic Sketches in AR with a Sensing Wristband

    Restricted by the limited interaction area of native AR glasses (e.g., touch bars), it is challenging to create sketches with AR glasses. Recent works have attempted to use mobile devices (e.g., tablets) or mid-air bare-hand gestures to expand the interactive space and serve as 2D/3D sketching input interfaces for AR glasses. Between them, mobile devices allow for accurate sketching but are often heavy to carry, while sketching with bare hands is zero-burden but can be inaccurate due to arm instability. In addition, mid-air bare-hand sketching can easily lead to social misunderstandings, and its prolonged use can cause arm fatigue. As a new attempt, in this work we present WristSketcher, a new AR system based on a flexible sensing wristband for creating 2D dynamic sketches, featuring an almost zero-burden authoring model for accurate and comfortable sketch creation in real-world scenarios. Specifically, we have streamlined the interaction space from mid-air to the surface of a lightweight sensing wristband, and implemented AR sketching and associated interaction commands by developing a gesture recognition method based on the sensed pressure points on the wristband. The set of interactive gestures used by WristSketcher was determined by a heuristic study of user preferences. Moreover, we endow WristSketcher with animation creation capabilities, allowing it to produce dynamic and expressive sketches. Experimental results demonstrate that WristSketcher i) faithfully recognizes users' gesture interactions with a high accuracy of 96.0%; ii) achieves higher sketching accuracy than freehand sketching; iii) achieves high user satisfaction in ease of use, usability and functionality; and iv) shows innovation potential in art creation, memory aids, and entertainment applications.
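    Recognizing gestures from wristband pressure points amounts to classifying vectors of sensor readings. The abstract does not specify the recognizer, so the sketch below uses a deliberately simple nearest-centroid classifier over flattened pressure frames as an illustrative stand-in; the class name and data layout are assumptions.

    ```python
    import numpy as np

    class PressureGestureClassifier:
        """Nearest-centroid classifier over wristband pressure frames:
        each gesture is represented by the mean of its training frames."""

        def fit(self, frames, labels):
            frames = np.asarray(frames, dtype=float)
            labels = np.asarray(labels)
            self.labels_ = sorted(set(labels.tolist()))
            # one template (mean pressure pattern) per gesture label
            self.centroids_ = np.stack(
                [frames[labels == g].mean(axis=0) for g in self.labels_])
            return self

        def predict(self, frames):
            frames = np.asarray(frames, dtype=float)
            # Euclidean distance from each frame to every gesture template
            d = np.linalg.norm(
                frames[:, None, :] - self.centroids_[None, :, :], axis=-1)
            return [self.labels_[i] for i in d.argmin(axis=1)]
    ```

    A template-based scheme like this needs only a handful of examples per gesture, which fits the paper's workflow of choosing the gesture set from a user-preference study before training.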

    Robust elbow angle prediction with aging soft sensors via output-level domain adaptation

    Wearable devices equipped with soft sensors provide a promising solution for body movement monitoring. Specifically, body movements such as elbow flexion can be captured by monitoring the resistance changes of the stretched soft sensors. However, in addition to stretching, the resistance of a soft sensor is also influenced by its aging, which makes resistance a less stable indicator of the elbow angle. In this paper, we leverage recent progress in deep learning and address this issue by formulating the aging-invariant prediction of elbow angles as a domain adaptation problem. Specifically, we define the soft sensor data (i.e., resistance values) collected at different aging levels as different domains and adapt a regression neural network across them to learn domain-invariant features. However, unlike the popular pairwise domain adaptation problem that involves only one source and one target domain, ours is more challenging as it has "infinite" target domains due to the non-stop aging. To address this challenge, we propose a novel output-level domain adaptation approach that builds on the fact that elbow angles lie in a fixed range regardless of aging. Experimental results show that our method enables robust and accurate prediction of elbow angles with aging soft sensors, significantly outperforming supervised learning methods that fail to generalize to aged sensor data.
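    The key observation — elbow angles lie in a fixed physical range regardless of sensor age — can be turned into an output-level penalty on predictions for unlabeled aged-sensor data, added to the usual supervised loss on the source domain. The sketch below is an illustrative formulation only; the angle range, quadratic penalty shape, and weighting `lam` are assumptions, not the paper's exact loss.

    ```python
    import numpy as np

    ANGLE_MIN, ANGLE_MAX = 0.0, 150.0  # assumed elbow flexion range (degrees)

    def range_penalty(pred):
        """Mean squared distance of each prediction from the valid angle
        range: zero inside [ANGLE_MIN, ANGLE_MAX], quadratic outside."""
        below = np.minimum(pred - ANGLE_MIN, 0.0)
        above = np.maximum(pred - ANGLE_MAX, 0.0)
        return np.mean(below ** 2 + above ** 2)

    def total_loss(src_pred, src_true, tgt_pred, lam=0.1):
        """Supervised MSE on the labeled source domain plus the
        output-level range penalty on unlabeled target predictions."""
        mse = np.mean((np.asarray(src_pred) - np.asarray(src_true)) ** 2)
        return mse + lam * range_penalty(np.asarray(tgt_pred))
    ```

    Because the penalty needs no target-domain labels, it applies to every future aging level, which is what lets the formulation cope with "infinite" target domains.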