
    White lies in hand: are other-oriented lies modified by hand gestures? Possibly not

    Previous studies have shown that the hand-over-heart gesture is related to being more honest as opposed to using self-centred dishonesty. We assumed that the hand-over-heart gesture would also relate to other-oriented dishonesty, even though the latter differs markedly from self-centred lying. In Study 1 (N = 83), we showed that performing a hand-over-heart gesture diminished the tendency to use other-oriented white lies, and that the fingers-crossed-behind-one's-back gesture was not related to higher dishonesty. We then pre-registered and conducted Study 2 (N = 88), which was designed to higher methodological standards than Study 1. Contrary to the findings of Study 1, we found that using the hand-over-heart gesture did not result in refraining from other-oriented white lies. We discuss the findings of this failed replication, which indicate the importance of strict methodological guidelines in conducting research, and also reflect on the relatively small effect sizes associated with some findings in embodied cognition.

    Hormonal Health: Period Tracking Apps, Wellness, and Self-Management within Surveillance Capitalism

    Period tracking is an increasingly widespread practice, and its emphasis is shifting from monitoring fertility to encompassing a broader picture of users' health. Delving into the data of one's menstrual cycle, and the hormones presumed to be intimately linked with it, is a practice that is reshaping ideas about health and wellness, while also shaping subjects and subjectivities that succeed under conditions of surveillance capitalism. Through close examination of six extended interviews, this article elaborates a version of period tracking that sidesteps fertility and, in doing so, participates in the "queering" of menstrual technologies. Apps can facilitate the integration of institutional medical expertise and quotidian embodied experience within a broader approach to the self as a management project. We introduce the concept of "hormonal health" to describe a way of caring for, and knowing about, bodies: one that weaves together mental and physical health, correlates subjective and objective information, and calls into question the boundary between illness and wellness. For those we spoke with, menstrual cycles are understood to affect selfhood beyond any simplistic body-mind division or reproductive imperative, engendering complex techniques of self-management, including monitoring, hypothesizing, intervening in medical appointments, adjusting schedules, and interpreting social interactions. Such techniques empower their proponents, but not within conditions of their choosing. In addition to problems with data privacy and profit, these techniques perpetuate individualized solutions and the internalization of pressures in a gender-stratified, neoliberal context, facilitating success within flawed structures.

    A Comprehensive Review of Data-Driven Co-Speech Gesture Generation

    Gestures that accompany speech are an essential part of natural and efficient embodied human communication. The automatic generation of such co-speech gestures is a long-standing problem in computer animation and is considered an enabling technology in film, games, virtual social spaces, and for interaction with social robots. The problem is made challenging by the idiosyncratic and non-periodic nature of human co-speech gesture motion, and by the great diversity of communicative functions that gestures encompass. Gesture generation has seen surging interest recently, owing to the emergence of more and larger datasets of human gesture motion, combined with strides in deep-learning-based generative models that benefit from the growing availability of data. This review article summarizes co-speech gesture generation research, with a particular focus on deep generative models. First, we articulate the theory describing human gesticulation and how it complements speech. Next, we briefly discuss rule-based and classical statistical gesture synthesis, before delving into deep learning approaches. We employ the choice of input modalities as an organizing principle, examining systems that generate gestures from audio, text, and non-linguistic input. We also chronicle the evolution of the related training datasets in terms of size, diversity, motion quality, and collection method. Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech, in interaction with other speakers, and in the environment; performing gesture evaluation; and integration of gesture synthesis into applications. We highlight recent approaches to tackling the various key challenges, as well as the limitations of these approaches, and point toward areas of future development.
    Comment: Accepted for EUROGRAPHICS 202
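As a toy illustration of the rule-based baselines the review mentions (this is not any system from the review, and the mapping and threshold are invented for the example): per-frame speech energy can be smoothed into an envelope, and frames above a threshold can trigger "beat" gestures whose amplitude follows the envelope. Deep generative systems replace such hand-crafted mappings with models conditioned on audio or text features.

```python
# Toy audio-driven "beat gesture" baseline: speech energy above a threshold
# triggers a beat; amplitude follows a smoothed energy envelope.

def smooth(values, alpha=0.5):
    """Exponential moving average over a sequence of per-frame energies."""
    out, prev = [], 0.0
    for v in values:
        prev = alpha * v + (1 - alpha) * prev
        out.append(prev)
    return out

def beat_gestures(energy, threshold=0.3):
    """Map per-frame speech energy to per-frame gesture amplitude.

    Returns (frame_index, amplitude) pairs for frames exceeding the
    threshold; silent stretches produce no gesture (rest pose).
    """
    envelope = smooth(energy)
    return [(i, round(e, 3)) for i, e in enumerate(envelope) if e > threshold]

# Frames 1-2 and 5 carry speech energy; frame 3 rides the decaying envelope.
print(beat_gestures([0.0, 0.8, 0.9, 0.1, 0.0, 0.7]))
```

Even this trivial mapping shows why the problem is hard: the output is periodic in tendency and carries none of the semantic or iconic gesture functions the review catalogues.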

    Death and Paperwork Reduction

    How does government value people's time? Often the valuation is implicit, even mysterious. But in patches of the federal administrative state, paperwork burdens are quantified in hours and often monetized. When agencies do monetize, they look to how the labor market values the time of the people faced with paperwork. The result is that some people's time is valued over ten times more than other people's time. In contrast, when agencies monetize the value of statistical life for cost-benefit analysis, they look to how people faced with a risk of death subjectively value its reduction. In practice, agencies assign the same value to every statistical life saved by a given policy. This Article establishes these patterns of agency behavior and suggests that there is no satisfying justification for them. Welfarist and egalitarian principles, along with the logic of statistical life valuation, lean against the use of market wages to monetize a person's time doing government paperwork. The impact of this practice might be limited, given the modest ambition of today's paperwork reduction efforts. But time-related burdens—and benefits—are key consequences of government decisions in countless contexts. If we want to scale up a thoughtful process for valuing people's time in the future, we will need new foundations.
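The wage-based practice the Article criticizes reduces to simple arithmetic; the sketch below uses invented numbers (not drawn from any agency) purely to make the over-tenfold disparity concrete.

```python
# Hypothetical illustration of wage-based monetization of paperwork time.
# All figures are invented for the example.

def monetized_burden(hours, hourly_wage):
    """Wage-based monetization: burden hours times the filer's market wage."""
    return hours * hourly_wage

# The same one-hour form, valued by the market wage of two different filers:
low = monetized_burden(1.0, 11.0)    # e.g. a minimum-wage filer
high = monetized_burden(1.0, 120.0)  # e.g. a highly paid professional

# Identical time spent, yet a >10x difference in how the state values it.
print(high / low > 10)  # True
```

By contrast, a single value of statistical life would assign both filers the same figure, which is exactly the asymmetry the Article highlights.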

    Detecting human engagement propensity in human-robot interaction

    Images from a robot's simple RGB camera stream are processed in order to estimate a person's engagement propensity in human-robot interaction scenarios. To compute the final estimate, deep-learning-based techniques are used to extract auxiliary information such as the person's estimated pose, the type of pose, body orientation, head orientation, and the appearance of the hands.
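One plausible way to combine such per-cue outputs into a single score is a weighted fusion; the sketch below is an assumption for illustration (the cue names, weights, and fusion rule are not taken from the thesis).

```python
# Hedged sketch: fuse per-cue estimates (each already normalized to [0, 1]
# by upstream deep-learning models) into one engagement-propensity score.
# Weights are illustrative assumptions, not the thesis' actual values.

CUE_WEIGHTS = {
    "body_orientation": 0.35,  # is the body facing the robot?
    "head_orientation": 0.35,  # is the head turned toward the robot?
    "pose_openness": 0.20,     # open vs. closed posture
    "hand_visibility": 0.10,   # hands visible / gesturing
}

def engagement_score(cues):
    """Weighted average of cue scores; missing cues count as 0."""
    total = sum(CUE_WEIGHTS[name] * cues.get(name, 0.0) for name in CUE_WEIGHTS)
    return round(total, 3)

score = engagement_score({
    "body_orientation": 0.9,
    "head_orientation": 1.0,
    "pose_openness": 0.5,
    "hand_visibility": 0.0,
})
print(score)
```

A real system would likely learn the fusion jointly rather than fix weights by hand, but the structure (auxiliary cues feeding one final estimate) matches the pipeline the abstract describes.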

    Understanding and designing for control in camera operation

    Cinematographers often use supportive tools to craft desired camera moves. Recent technological advances have added new tools to the palette, such as gimbals, drones, or robots. The combination of motor-driven actuation, computer vision, and machine learning in such systems has also made new interaction techniques possible. In particular, a content-based interaction style was introduced alongside the established axis-based style. On the one hand, content-based co-creation between humans and automated systems made it easier to reach high-level goals. On the other hand, the increased use of automation also introduced negative side effects.
Creatives usually want to feel in control while executing the camera motion and, in the end, as the authors of the recorded shots. While automation can assist experts or enable novices, it also takes away desired control from operators. Thus, if we want to support cinematographers with new tools and interaction techniques, the following question arises: how should we design interfaces for camera motion control that, despite being increasingly automated, provide cinematographers with an experience of control? Camera control has been studied for decades, especially in virtual environments. Applying content-based interaction to physical environments opens up new design opportunities but also faces less-researched, domain-specific challenges. To suit the needs of cinematographers, designs need to be crafted with care. In particular, they must adapt to the constraints of recording on location, which makes an interplay with established practices essential. Previous work has mainly focused on a technology-centered understanding of camera travel, which consequently influenced the design of camera control systems. In contrast, this thesis contributes to an understanding of the motives of cinematographers and how they operate on set, and it provides a user-centered foundation informing cinematography-specific research and design.
The contribution of this thesis is threefold. First, we present ethnographic studies of expert users and their shooting practices on location. These studies highlight the challenges of introducing automation into a creative task (assistance vs. feeling in control). Second, we report on a domain-specific prototyping toolkit for in-situ deployment. The toolkit provides open-source software for low-cost replication, enabling the exploration of design alternatives. To better inform design decisions, we further introduce an evaluation framework for estimating the resulting quality and sense of control. By extending established methodologies with a recent neuroscientific technique, it provides data on explicit as well as implicit levels and is designed to be applicable to other domains of HCI. Third, we present evaluations of designs based on our toolkit and framework. We explored a dynamic interplay of manual control with various degrees of automation, and we examined different content-based interaction styles. Occlusion caused by graphical elements was identified and addressed through visual reduction strategies and mid-air gestures. Our studies demonstrate that high degrees of quality and sense of control are achievable with our tools, which also support creativity and established practices.
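The axis-based vs. content-based distinction can be made concrete with a small sketch (not the thesis' toolkit; the proportional controller and its gain are assumptions for illustration). In axis-based control the operator commands pan/tilt velocities directly; in content-based control the operator specifies where the tracked subject should sit in the frame, and the system derives the axis commands.

```python
# Content-based control sketch: derive (pan, tilt) velocities from the error
# between the tracked subject's on-screen position and the operator's target.
# A simple proportional controller stands in for the automation.

def content_based_command(subject_px, target_px, frame_size, gain=1.5):
    """Return normalized (pan, tilt) velocities; positive pan pans right.

    subject_px, target_px: (x, y) pixel positions; frame_size: (width, height).
    """
    w, h = frame_size
    error_x = (subject_px[0] - target_px[0]) / w  # normalized horizontal error
    error_y = (subject_px[1] - target_px[1]) / h  # normalized vertical error
    return (gain * error_x, gain * error_y)

# Subject drifted right of the desired frame position: pan right to follow.
pan, tilt = content_based_command((1280, 360), (640, 360), (1920, 1080))
print(pan > 0 and tilt == 0)  # True
```

The tension the thesis studies lives in exactly this layer: the controller relieves the operator of the axis-level work, but also decides the motion profile, which is where sense of control can be lost.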

    Analysis and synthesis of expressive theatrical movements

    This thesis addresses the analysis and generation of expressive movements for virtual human characters. Based on previous results from three different research areas (perception of emotions and biological motion, automatic recognition of affect, and computer character animation), a low-dimensional motion representation is proposed. This representation consists of the spatio-temporal trajectories of the end-effectors (i.e., head, hands, and feet) and the pelvis. We have argued that this representation is both suitable and sufficient for characterizing the underlying expressive content of human motion and for controlling the generation of expressive whole-body movements. To support these claims, this thesis proposes: (i) a new motion-capture database inspired by physical theatre, containing three categories of motion (locomotion, theatrical, and improvised movements) performed by several actors; (ii) an automatic classification framework designed to qualitatively and quantitatively assess the amount of emotion contained in the data, which shows that the proposed low-dimensional representation preserves most of the motion cues salient to the expression of affect and emotions; (iii) a motion generation system implemented both for reconstructing whole-body movements from the low-dimensional representation and for producing novel expressive end-effector trajectories. A quantitative and qualitative evaluation of the generated whole-body motions shows that these motions are as expressive as movements recorded from human actors.
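The low-dimensional representation amounts to a projection of full-body motion capture onto six joints; the sketch below shows that assumed structure (joint names and data layout are illustrative, not the thesis' code).

```python
# Hedged sketch of the end-effector representation: keep only the
# spatio-temporal trajectories of head, hands, feet, and pelvis out of a
# full-body skeleton, discarding intermediate joints.

END_EFFECTORS = ("head", "left_hand", "right_hand",
                 "left_foot", "right_foot", "pelvis")

def reduce_motion(frames):
    """Project full-body motion onto the 6-joint representation.

    frames: list of dicts mapping joint name -> (x, y, z) position.
    Returns the same frames restricted to end-effectors and pelvis.
    """
    return [{j: f[j] for j in END_EFFECTORS if j in f} for f in frames]

full_frame = {
    "head": (0.0, 1.7, 0.0), "neck": (0.0, 1.5, 0.0), "spine": (0.0, 1.2, 0.0),
    "pelvis": (0.0, 1.0, 0.0), "left_hand": (-0.5, 1.1, 0.2),
    "right_hand": (0.5, 1.1, 0.2), "left_foot": (-0.2, 0.0, 0.0),
    "right_foot": (0.2, 0.0, 0.0),
}
reduced = reduce_motion([full_frame])
print(sorted(reduced[0]))  # six joints remain; neck and spine are dropped
```

The generation direction described in the abstract then runs the other way: given only these six trajectories, reconstruct plausible whole-body motion.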