
    ML277 specifically enhances the fully activated open state of KCNQ1 by modulating VSD-pore coupling

    Upon membrane depolarization, the KCNQ1 potassium channel opens at the intermediate (IO) and activated (AO) states of stepwise voltage-sensing domain (VSD) activation. In the heart, KCNQ1 associates with KCNE1 subunits to form…

    The Effect of Social Chatbot Avatar Presentation on User Self-disclosure

    The emergence of artificial intelligence has boosted the development and use of chatbots that can satisfy both users' task-oriented needs, such as information search for purchases, and their social needs, such as self-disclosure for rapport-building. While much research has focused on their use in commercial contexts, little attention has been paid to social chatbots for psychotherapy, where facilitating relationship formation is crucial to chatbot design. Inspired by prevalent chatbot applications and drawing on the literature on visual cues and self-disclosure, this paper aims to 1) explore the effects of different presentations of social chatbot avatars (text, profile, and background) on users' self-disclosure, along with the mediating role of self-awareness, and 2) understand the moderating role of chatbot gaze direction (direct gaze and averted gaze). The proposed studies will contribute theoretically to the literature on human-robot interaction. The findings will also provide substantial practical implications for chatbot design.

    Occluded Human Body Capture with Self-Supervised Spatial-Temporal Motion Prior

    Although significant progress has been achieved on monocular marker-less human motion capture in recent years, it is still hard for state-of-the-art methods to obtain satisfactory results in occlusion scenarios. There are two main reasons: one is that occluded motion capture is inherently ambiguous, as various 3D poses can map to the same 2D observations, which often results in unreliable estimation; the other is that there is not enough occluded human data for training a robust model. To address these obstacles, our key idea is to employ non-occluded human data to learn a joint-level spatial-temporal motion prior for occluded humans with a self-supervised strategy. To further reduce the gap between synthetic and real occlusion data, we build the first 3D occluded motion dataset (OcMotion), which can be used for both training and testing. We encode the motions in 2D maps and synthesize occlusions on non-occluded data for the self-supervised training. A spatial-temporal layer is then designed to learn joint-level correlations. The learned prior reduces the ambiguities of occlusions and is robust to diverse occlusion types; it is then adopted to assist occluded human motion capture. Experimental results show that our method can generate accurate and coherent human motions from occluded videos with good generalization ability and runtime efficiency. The dataset and code are publicly available at https://github.com/boycehbz/CHOMP.
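
    The abstract above only sketches the training recipe at a high level. Below is a minimal, hypothetical illustration (not the authors' released code) of that idea: a motion clip is encoded as a 2D map over (time, joints) with xyz channels, occlusions are synthesized by masking random joints, and a small spatial-temporal network is trained to reconstruct the masked joints from the visible ones. The model, tensor shapes, masking scheme, and hyperparameters are all assumptions made for illustration.

# Hedged sketch (not the authors' code): learn a joint-level spatial-temporal
# motion prior by self-supervision. A motion clip is a 2D map of shape
# (3, time, joints); occlusions are synthesized by zeroing random joints and the
# network is trained to recover them. All names and shapes are assumptions.
import torch
import torch.nn as nn

NUM_JOINTS = 24   # e.g. an SMPL-style skeleton (assumption)
SEQ_LEN = 64      # frames per training clip (assumption)


class SpatialTemporalPrior(nn.Module):
    """Toy joint-level prior: 2D convolutions over the (time, joint) map."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),   # mix time and joints
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),   # back to xyz per joint
        )

    def forward(self, motion_map: torch.Tensor) -> torch.Tensor:
        # motion_map: (batch, 3, SEQ_LEN, NUM_JOINTS)
        return self.net(motion_map)


def synthesize_occlusion(motion_map: torch.Tensor, drop_prob: float = 0.3):
    """Randomly zero out whole joints per frame to mimic occlusion."""
    batch, _, t, j = motion_map.shape
    keep = (torch.rand(batch, 1, t, j, device=motion_map.device) > drop_prob).float()
    return motion_map * keep, keep


def training_step(model, optimizer, clean_motion):
    occluded, keep_mask = synthesize_occlusion(clean_motion)
    recon = model(occluded)
    # Supervise only the masked joints; the self-supervised target is the
    # original, non-occluded motion.
    loss = ((recon - clean_motion) ** 2 * (1.0 - keep_mask)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = SpatialTemporalPrior()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    fake_motion = torch.randn(8, 3, SEQ_LEN, NUM_JOINTS)  # stand-in for real mocap clips
    print(training_step(model, optimizer, fake_motion))

    At inference time, such a prior would be queried with partially observed joint maps so that the reconstruction fills in the occluded joints; how the paper couples this prior to the motion-capture pipeline is described in the full text, not in this sketch.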