
    Geographic information extraction from texts

    A large volume of unstructured texts, containing valuable geographic information, is available online. This information – provided implicitly or explicitly – is useful not only for scientific studies (e.g., spatial humanities) but also for many practical applications (e.g., geographic information retrieval). Although considerable progress has been made in geographic information extraction from texts, unsolved challenges and issues remain, ranging from methods, systems, and data to applications and privacy. This workshop will therefore provide a timely opportunity to discuss recent advances, new ideas, and concepts, and to identify research gaps in geographic information extraction.

    Modelling the relationship between gesture motion and meaning

    There are many ways to say “Hello,” be it a wave, a nod, or a bow. We greet others not only with words, but also with our bodies. Embodied communication permeates our interactions. A fist bump, thumbs-up, or pat on the back can be even more meaningful than hearing “good job!” A friend crossing their arms with a scowl, turning away from you, or stiffening up can feel like a harsh rejection. Social communication is not exclusively linguistic, but a multi-sensory affair. It’s not that communication without these bodily cues is impossible, but it is impoverished. Embodiment is a fundamental human experience. Expressing ourselves through our bodies provides a powerful channel through which we convey a plethora of meta-social information. And integral to communication, expression, and social engagement is our use of conversational gesture. We use gestures to express extra-linguistic information, to emphasize our point, and to embody mental and linguistic metaphors that add depth and color to social interaction. Compared to human-human conversation, the gesture behaviour of virtual humans is limited, depending on the approach taken to automate the performances of these characters. The generation of nonverbal behaviour for virtual humans can be roughly classified as either: 1) data-driven approaches that learn a mapping from aspects of the verbal channel, such as prosody, to gestures; or 2) rule-based approaches that are often tailored by designers for specific applications. This thesis is an interdisciplinary exploration that bridges these two approaches and brings data-driven analyses to observational gesture research. By marrying a rich history of gesture research in behavioral psychology with data-driven techniques, this body of work brings rigorous computational methods to gesture classification, analysis, and generation.
It addresses how researchers can exploit computational methods to make virtual humans gesture with the same richness, complexity, and apparent effortlessness as you and I. Throughout this work the central focus is on metaphoric gestures. These gestures are capable of conveying rich, nuanced, multi-dimensional meaning, and they raise several challenges in their generation, including establishing and interpreting a gesture’s communicative meaning and selecting a performance to convey it. As such, effectively utilizing these gestures remains an open challenge in virtual agent research. This thesis explores how metaphoric gestures are interpreted by an observer, how one can generate such rich gestures using a mapping between utterance meaning and gesture, and how one can use data-driven techniques to explore the mapping between utterance and metaphoric gestures. The thesis begins in Chapter 1 by outlining the interdisciplinary space of gesture research in psychology and gesture generation for virtual agents. It then presents several studies that address assumptions about the need for rich, metaphoric gestures and the risk of false implicature when gestural meaning is ignored in gesture generation. In Chapter 2, two studies on metaphoric gestures that embody multiple metaphors argue three critical points that inform the rest of the thesis: that people form rich inferences from metaphoric gestures; that these inferences are informed by cultural context; and, more importantly, that any approach to analyzing the relation between utterance and metaphoric gesture needs to take into account that multiple metaphors may be conveyed by a single gesture. A third study, presented in Chapter 3, highlights the risk of false implicature and discusses it in the context of current subjective evaluations of the qualitative influence of gesture on viewers.
Chapters 4 and 5 then present a data-driven analysis approach to recovering an interpretable, explicit mapping from utterance to metaphor. The approach, described in detail in Chapter 4, clusters gestural motion and relates those clusters to the semantic analysis of the associated utterances. Chapter 5 then demonstrates how this approach can be used both as a framework for data-driven techniques in the study of gesture and as the basis of a gesture generation approach for virtual humans. The framework used in the last two chapters ties together the main themes of this thesis: how we can use observational behavioral gesture research to inform data-driven analysis methods, how embodied metaphor relates to fine-grained gestural motion, and how to exploit this relationship to generate rich, communicatively nuanced gestures on virtual agents. While gestures show huge variation, the goal of this thesis is to start to characterize and codify that variation using modern data-driven techniques. The final chapter of this thesis reflects on the many challenges and obstacles the field of gesture generation continues to face. The potential for applications of virtual agents to have broad impacts on our daily lives increases with the growing pervasiveness of digital interfaces, technical breakthroughs, and collaborative interdisciplinary research efforts. The thesis concludes with an optimistic vision of applications for virtual agents with deep models of non-verbal social behaviour and their potential to encourage multi-disciplinary collaboration.
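The analysis pipeline summarized in this abstract (cluster gestural motion features, then relate each cluster to the semantics of co-occurring utterances) can be illustrated with a minimal toy sketch. Everything below is an assumption for illustration only: the two-dimensional motion features, the use of plain k-means, and the majority-vote mapping from clusters to metaphor tags are stand-ins, not the thesis's actual features or method.

```python
import random
from collections import Counter

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over lists of feature vectors (hypothetical gesture features)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centre.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # Recompute each centre as the mean of its group (keep old centre if empty).
        centers = [
            [sum(col) / len(g) for col in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return [min(range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            for p in points]

def cluster_to_metaphor(labels, tags):
    """Relate each motion cluster to the most frequent co-occurring semantic tag."""
    mapping = {}
    for lab in set(labels):
        co_tags = [t for l, t in zip(labels, tags) if l == lab]
        mapping[lab] = Counter(co_tags).most_common(1)[0][0]
    return mapping
```

On a toy corpus where each gesture clip carries a feature vector and a hand-annotated metaphor tag, `kmeans` groups the motion and `cluster_to_metaphor` recovers a cluster-to-metaphor table, a rough analogue of the explicit mapping the thesis describes.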

    Teardrops on My Face: Automatic Weeping Detection from Nonverbal Behavior

    Human emotional tears are a powerful socio-emotional signal. Yet, they have received relatively little attention in empirical research compared to facial expressions or body posture. While humans are highly sensitive to others' tears, to date, no automatic means exist for detecting spontaneous weeping. This paper employed facial and postural features extracted using four pre-trained classifiers (FACET, Affdex, OpenFace, OpenPose) to train a Support Vector Machine (SVM) to distinguish spontaneous weepers from non-weepers. Results showed that weeping can be accurately inferred from nonverbal behavior. Importantly, this distinction can be made before the appearance of visible tears on the face. However, features from at least two classifiers need to be combined, with the best models blending three or four classifiers to achieve near-perfect performance (97% accuracy). We discuss how direct and indirect tear detection methods may help to yield important new insights into the antecedents and consequences of emotional tears, and how affective computing could benefit from the ability to recognize and respond to this uniquely human signal.
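The fusion step described above (features from several pre-trained trackers combined into one sample before classification) can be sketched as follows. The tracker names and feature values are hypothetical, and a tiny perceptron stands in for the paper's SVM; this is an illustrative sketch under those assumptions, not the study's implementation.

```python
def fuse_features(tracker_outputs):
    """Concatenate feature vectors from several trackers into one sample.
    Tracker names are sorted so the feature layout is stable across samples."""
    fused = []
    for name in sorted(tracker_outputs):
        fused.extend(tracker_outputs[name])
    return fused

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Tiny linear classifier standing in for the paper's SVM.
    Labels are +1 (weeper) / -1 (non-weeper)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # mistake-driven update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

The key point the abstract makes survives even in this toy form: the classifier only sees the fused vector, so signals from face and posture trackers can be combined in a single decision.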

    Artificial and Human Intelligence in Mental Health

    While technology has dramatically changed medical practice, various aspects of mental health practice and diagnosis have remained almost unchanged for decades. Here we argue that artificial intelligence, with its capacity to learn and infer from data the workings of the human mind, may rapidly change this scenario. However, this process will not happen without friction and will prompt an explicit reflection on the overarching goals and foundational aspects of mental health. We suggest that the converse relation is also very likely to happen. The application of artificial intelligence to a field that relates to the foundations of what makes us human (our volition, our thoughts, our pains and pleasures) may shift artificial intelligence back to its earliest days, when it was mostly conceived of as a laboratory to explore the limits and possibilities of human intelligence.
    Fil: Sigman, Mariano. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina. Universidad Torcuato Di Tella; Argentina.
    Fil: Fernandez Slezak, Diego. Consejo Nacional de Investigaciones Científicas y Técnicas. Oficina de Coordinación Administrativa Ciudad Universitaria. Instituto de Investigación en Ciencias de la Computación. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Instituto de Investigación en Ciencias de la Computación; Argentina.
    Fil: Drucaroff, Lucas Javier. Fundación para la Lucha contra las Enfermedades Neurológicas de la Infancia; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina.
    Fil: Sidarta, Ribeiro. No específica.
    Fil: Carrillo, Facundo. Consejo Nacional de Investigaciones Científicas y Técnicas. Oficina de Coordinación Administrativa Ciudad Universitaria. Instituto de Investigación en Ciencias de la Computación. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Instituto de Investigación en Ciencias de la Computación; Argentina.

    Neurological and Mental Disorders

    Mental disorders can result from disruption of neuronal circuitry, damage to neuronal and non-neuronal cells, altered circuitry in different regions of the brain, and changes in the permeability of the blood-brain barrier. Early identification of these impairments through investigative means could help to improve the outcome for many brain and behaviour disease states. The chapters in this book describe how these abnormalities can lead to neurological and mental diseases such as ADHD (Attention Deficit Hyperactivity Disorder), anxiety disorders, Alzheimer’s disease, and personality and eating disorders. Psycho-social traumas, especially during childhood, increase the incidence of amnesia and transient global amnesia, leading to the temporary inability to create new memories. Early detection of these disorders could benefit many complex diseases such as schizophrenia and depression.

    Creative Technologies for Behaviour Change

    Keywords: behaviour change; motivational interviewing; motivation; physical activity; exercise; qualitative research; computer-assisted therapy; robotics; social robotics; virtual coaching; video coaching; functional imagery training
    This thesis presents innovative uses of technology to support motivation, using motivational interviewing (MI) and functional imagery training (FIT) scripts developed specifically for remote delivery. MI scripts aimed to develop discrepancy, evoke solutions, and promote self-efficacy. FIT scripts included multi-sensory mental imagery exercises at key points in the MI scripts. Four methods of delivery were developed: a human video counsellor, a NAO robot programmed with Choregraphe software, a video robot counsellor for comparison with the human video counsellor, and a life-sized, two-dimensional 'holographic' projection. Four empirical studies tested these developments in participants wanting to become more physically active. Study 1 (N=18) and Study 2 (N=20) used qualitative methods to explore the usability and acceptability of MI delivered by a pre-recorded human video counsellor and a NAO robot, respectively. Analysis of participants' verbal dialogue with the video counsellor in Study 1 showed high levels of change talk, an important ingredient of effective MI. In both studies, participants reported that voicing their goals aloud was helpful, but they were somewhat frustrated by the lack of personalised response. Participants positively appraised the non-judgemental aspect of the interview with the robot. Study 3 tested whether virtual FIT would be more acceptable and effective than virtual MI. Ninety-eight participants received FIT or MI delivered by a video robot and were compared to a wait-list control group. In Study 4, 104 participants were randomized to a monologue version of FIT delivered by a human counsellor projected as a two-dimensional, life-size hologram, or on a computer screen, or to a wait-list control condition.
Neither Study 3 nor Study 4 found any quantitative effect of virtual counselling on physical activity, self-efficacy, or motivation. As in Studies 1 and 2, although participants found the technological interaction somewhat impersonal, qualitative responses were largely positive: participants liked the opportunity to voice their goals, reported a motivational boost, and perceived the virtual coaches as non-judgmental. This research has shown that people perceive benefits from speaking aloud about their goals and problems, and even from engaging silently in imagery-based counselling. There is potential to deliver a brief motivational intervention that is fully automated and acceptable to participants.

    Adapting a virtual advisor’s verbal conversation based on predicted user preferences: A study of neutral, empathic and tailored dialogue

    © 2020 by the authors. Licensee MDPI, Basel, Switzerland. Virtual agents that improve the lives of humans need to be more than user-aware and adaptive to the user’s current state and behavior. They also need to apply expertise gained from experience, driving their adaptive behavior through a deep understanding of the user’s features (such as gender, culture, personality, and psychological state). Our work extends FAtiMA (Fearnot AffecTive Mind Architecture) by adding an Adaptive Engine to the cognitive agent architecture. We use machine learning to acquire the agent’s expertise, capturing a collection of user profiles into a user model and developing the agent’s expertise from that model. In this paper, we describe a study evaluating the Adaptive Engine, which compares the benefit (i.e., reduced stress, increased rapport) of tailoring dialogue to the specific user (Adaptive group) with dialogues that are either empathic (Empathic group) or neutral (Neutral group). Results showed a significant reduction in stress in the Empathic and Neutral groups, but not in the Adaptive group. Analyses of rule accuracy, participants’ dialogue preferences, and individual differences reveal that the three groups had different needs for empathic dialogue and highlight the importance and challenges of getting the tailoring right.
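One simple way to picture the tailoring idea (rules learned over user features select the dialogue variant) is a most-specific-rule lookup. The feature names, rule format, and style labels below are purely illustrative assumptions; they do not reflect FAtiMA's or the Adaptive Engine's actual API or learned rules.

```python
def choose_dialogue_style(profile, rules, default="neutral"):
    """Pick a dialogue style for a user profile from learned rules.

    `rules` maps a frozenset of (feature, value) conditions to a style name;
    among the rules whose conditions all hold for the profile, the most
    specific one (largest number of conditions) wins, else the default.
    All feature names and styles here are illustrative, not FAtiMA's.
    """
    best_style, best_size = default, 0
    for conditions, style in rules.items():
        if all(profile.get(f) == v for f, v in conditions) and len(conditions) > best_size:
            best_style, best_size = style, len(conditions)
    return best_style
```

Under this toy scheme, a user matched only by a general rule gets the empathic variant, while a profile matching a more specific rule would receive the tailored one, mirroring the paper's three conditions (neutral, empathic, tailored).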

    Understanding and designing for control in camera operation

    Cinematographers often use supportive tools to craft desired camera moves. Recent technological advances have added new tools to the palette, such as gimbals, drones, and robots. The combination of motor-driven actuation, computer vision, and machine learning in such systems has also rendered new interaction techniques possible. In particular, a content-based interaction style was introduced in addition to the established axis-based style. On the one hand, content-based co-creation between humans and automated systems made it easier to reach high-level goals. On the other hand, the increased use of automation also introduced negative side effects.
Creatives usually want to feel in control while executing the camera motion and, in the end, to feel like the authors of the recorded shots. While automation can assist experts or enable novices, it unfortunately also takes desired control away from operators. Thus, if we want to support cinematographers with new tools and interaction techniques, the following question arises: How should we design interfaces for camera motion control that, despite being increasingly automated, provide cinematographers with an experience of control? Camera control has been studied for decades, especially in virtual environments. Applying content-based interaction to physical environments opens up new design opportunities but also faces less-researched, domain-specific challenges. To suit the needs of cinematographers, designs need to be crafted with care. In particular, they must adapt to the constraints of recording on location, which makes an interplay with established practices essential. Previous work has mainly focused on a technology-centered understanding of camera travel, which consequently influenced the design of camera control systems. In contrast, this thesis contributes to the understanding of the motives of cinematographers and how they operate on set, and provides a user-centered foundation informing cinematography-specific research and design. The contribution of this thesis is threefold: First, we present ethnographic studies on expert users and their shooting practices on location. These studies highlight the challenges of introducing automation to a creative task (assistance vs. feeling in control). Second, we report on a domain-specific prototyping toolkit for in-situ deployment. The toolkit provides open-source software for low-cost replication, enabling the exploration of design alternatives. To better inform design decisions, we further introduce an evaluation framework for estimating the resulting quality and sense of control. By extending established methodologies with a recent neuroscientific technique, it provides data on explicit as well as implicit levels and is designed to be applicable to other domains of HCI. Third, we present evaluations of designs based on our toolkit and framework. We explored a dynamic interplay of manual control with various degrees of automation. Further, we examined different content-based interaction styles. Here, occlusion due to graphical elements was found and addressed by exploring visual reduction strategies and mid-air gestures. Our studies demonstrate that high degrees of quality and sense of control are achievable with our tools, which also support creativity and established practices.
