961 research outputs found

    Interpersonal Emotion Regulation and Online Social Support for Depression: A Review

    Online social interactions have become commonplace as social media sites like Facebook and Twitter have risen in popularity. People frequently share life events, emotions, and even mental health concerns, and there is evidence to suggest that in many cases online sharing can be beneficial in decreasing negative emotions and widening people's social support networks. Research is increasingly focused on the relationship between social media and mental health concerns like depression, but findings are often mixed, or even contradictory, and many questions remain open for future research. Understanding how people with depression use social media and interact with others online to regulate their symptoms is important in guiding future research and developing online interventions. This literature review proposes that interpersonal emotion regulation can be applied as a framework by which to better understand how people with depression interact online and seek out social support, and addresses directions for future research.

    Online avatar based interactions

    The GridWorld project uses 3D graphics to develop an online multi-user visual chat system. GridWorld addresses how conversations in a virtual environment can be facilitated and enhanced by an abstract visual interface design. The visual interface was developed from research and examination of existing ideas, methodologies, and applications for the development of user embodiment, chat/virtual space, and interface usability towards the visualization of communication.

    Sensing, interpreting, and anticipating human social behaviour in the real world

    Low-level nonverbal social signals like glances, utterances, facial expressions and body language are central to human communicative situations and have been shown to be connected to important high-level constructs, such as emotions, turn-taking, rapport, or leadership. A prerequisite for the creation of social machines that are able to support humans in e.g. education, psychotherapy, or human resources is the ability to automatically sense, interpret, and anticipate human nonverbal behaviour. While promising results have been shown in controlled settings, automatically analysing unconstrained situations, e.g. in daily-life settings, remains challenging. Furthermore, anticipation of nonverbal behaviour in social situations is still largely unexplored. The goal of this thesis is to move closer to the vision of social machines in the real world. It makes fundamental contributions along the three dimensions of sensing, interpreting and anticipating nonverbal behaviour in social interactions. First, robust recognition of low-level nonverbal behaviour lays the groundwork for all further analysis steps. Advancing human visual behaviour sensing is especially relevant as the current state of the art is still not satisfactory in many daily-life situations. While many social interactions take place in groups, current methods for unsupervised eye contact detection can only handle dyadic interactions. We propose a novel unsupervised method for multi-person eye contact detection by exploiting the connection between gaze and speaking turns. Furthermore, we make use of mobile device engagement to address the problem of calibration drift that occurs in daily-life usage of mobile eye trackers. Second, we improve the interpretation of social signals in terms of higher level social behaviours. In particular, we propose the first dataset and method for emotion recognition from bodily expressions of freely moving, unaugmented dyads. 
Furthermore, we are the first to study low rapport detection in group interactions, as well as to investigate a cross-dataset evaluation setting for the emergent leadership detection task. Third, human visual behaviour is special because it functions as a social signal and also determines what a person is seeing at a given moment in time. Being able to anticipate human gaze opens up the possibility for machines to more seamlessly share attention with humans, or to intervene in a timely manner if humans are about to overlook important aspects of the environment. We are the first to propose methods for the anticipation of eye contact in dyadic conversations, as well as in the context of mobile device interactions during daily life, thereby paving the way for interfaces that are able to proactively intervene and support interacting humans.
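The multi-person eye contact method is only summarised above; the core weak-supervision idea is that listeners tend to look at the current speaker, so speaking turns can stand in for manual gaze labels. A minimal sketch of that idea follows. The synthetic data, the feature dimensionality, and the logistic-regression classifier are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_eye_contact_detector(gaze_features, speaker_ids, target_person):
    """Train a 'looking at target_person' detector without manual labels:
    frames where target_person is speaking are treated as positives
    (listeners tend to look at the speaker), all other frames as negatives."""
    y = (speaker_ids == target_person).astype(int)
    return LogisticRegression().fit(gaze_features, y)

# Toy data: 200 frames of 4-dim gaze features in a 3-person group.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
speakers = rng.integers(0, 3, size=200)  # who speaks in each frame

detector = train_eye_contact_detector(X, speakers, target_person=1)
scores = detector.predict_proba(X)[:, 1]  # P(eye contact with person 1)
print(scores.shape)  # (200,)
```

With real gaze features the positive frames would cluster around the speaker's direction, so the classifier learns an eye-contact boundary despite never seeing an annotated label.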

    Real-time generation and adaptation of social companion robot behaviors

    Social robots will be part of our future homes. They will assist us in everyday tasks, entertain us, and provide helpful advice. However, the technology still faces challenges that must be overcome to equip the machine with social competencies and make it a socially intelligent and accepted housemate. An essential skill of every social robot is verbal and non-verbal communication. In contrast to voice assistants, smartphones, and smart home technology, which are already part of many people's lives today, social robots have an embodiment that raises expectations towards the machine. Their anthropomorphic or zoomorphic appearance suggests they can communicate naturally with speech, gestures, or facial expressions and understand corresponding human behaviors. In addition, robots also need to consider individual users' preferences: everybody is shaped by their culture, social norms, and life experiences, resulting in different expectations towards communication with a robot. However, robots do not have human intuition - they must be equipped with the corresponding algorithmic solutions to these problems. This thesis investigates the use of reinforcement learning to adapt the robot's verbal and non-verbal communication to the user's needs and preferences. Such non-functional adaptation of the robot's behaviors primarily aims to improve the user experience and the robot's perceived social intelligence. The literature has not yet provided a holistic view of the overall challenge: real-time adaptation requires control over the robot's multimodal behavior generation, an understanding of human feedback, and an algorithmic basis for machine learning. Thus, this thesis develops a conceptual framework for designing real-time non-functional social robot behavior adaptation with reinforcement learning. It provides a higher-level view from the system designer's perspective and guidance from the start to the end. 
It illustrates the process of modeling, simulating, and evaluating such adaptation processes. Specifically, it guides the integration of human feedback and social signals to equip the machine with social awareness. The conceptual framework is put into practice for several use cases, resulting in technical proofs of concept and research prototypes. They are evaluated in the lab and in in-situ studies. These approaches address typical activities in domestic environments, focusing on the robot's expression of personality, persona, politeness, and humor. Within this scope, the robot adapts its spoken utterances, prosody, and animations based on explicit or implicit human feedback.
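The reinforcement-learning loop described above (the robot selects a behaviour, observes user feedback, and updates its preferences) can be illustrated with a toy epsilon-greedy bandit. The behaviour styles, reward values, and simulated user below are my own assumptions; the thesis develops a far richer framework.

```python
import random

class BehaviorBandit:
    """Epsilon-greedy bandit: the robot chooses among discrete behaviour
    styles and learns which one the user prefers from scalar feedback
    (e.g. explicit ratings or sensed social signals)."""

    def __init__(self, styles, epsilon=0.1):
        self.styles = styles
        self.epsilon = epsilon
        self.values = {s: 0.0 for s in styles}   # estimated mean feedback
        self.counts = {s: 0 for s in styles}

    def choose(self):
        if random.random() < self.epsilon:       # explore
            return random.choice(self.styles)
        return max(self.styles, key=lambda s: self.values[s])  # exploit

    def update(self, style, reward):
        self.counts[style] += 1
        # Incremental mean of the observed feedback for this style.
        self.values[style] += (reward - self.values[style]) / self.counts[style]

random.seed(1)
bandit = BehaviorBandit(["formal", "humorous", "neutral"])
for _ in range(500):
    style = bandit.choose()
    # Simulated user who consistently prefers humorous behaviour.
    reward = 1.0 if style == "humorous" else 0.2
    bandit.update(style, reward)

print(max(bandit.values, key=bandit.values.get))  # converges to "humorous"
```

The same choose/feedback/update skeleton carries over when the arms are utterance variants or animation styles and the reward comes from sensed social signals rather than a simulated user.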

    Presence 2005: the eighth annual international workshop on presence, 21-23 September, 2005 University College London (Conference proceedings)

    OVERVIEW (taken from the CALL FOR PAPERS) Academics and practitioners with an interest in the concept of (tele)presence are invited to submit their work for presentation at PRESENCE 2005 at University College London in London, England, September 21-23, 2005. The eighth in a series of highly successful international workshops, PRESENCE 2005 will provide an open discussion forum to share ideas regarding concepts and theories, measurement techniques, technology, and applications related to presence, the psychological state or subjective perception in which a person fails to accurately and completely acknowledge the role of technology in an experience, including the sense of 'being there' experienced by users of advanced media such as virtual reality. The concept of presence in virtual environments has been around for at least 15 years, and the earlier idea of telepresence at least since Minsky's seminal paper in 1980. Recently there has been a burst of funded research activity in this area for the first time with the European FET Presence Research initiative. What do we really know about presence and its determinants? How can presence be successfully delivered with today's technology? This conference invites papers that are based on empirical results from studies of presence and related issues and/or which contribute to the technology for the delivery of presence. Papers that make substantial advances in theoretical understanding of presence are also welcome. The interest is not solely in virtual environments but in mixed reality environments. Submissions will be reviewed more rigorously than in previous conferences. High quality papers are therefore sought which make substantial contributions to the field. Approximately 20 papers will be selected for two successive special issues for the journal Presence: Teleoperators and Virtual Environments. PRESENCE 2005 takes place in London and is hosted by University College London. 
The conference is organized by ISPR, the International Society for Presence Research, and is supported by the European Commission's FET Presence Research Initiative through the Presencia and IST OMNIPRES projects, and by University College London.

    The magic window: balancing privacy and awareness in office settings

    Co-workers who are physically distributed in the same building often obtain information about others through the windows in office doors. Using the information gathered by looking through the window, they can determine whether it is a good time to initiate a conversation with the occupant. There are, however, two problems with ordinary glass windows. First, there are times when the window does not provide enough information, such as when the occupant is away. Second, there is potential to violate the occupant's privacy; as a result of the privacy risk, people often cover their windows entirely. If office windows are to work effectively as a support for collaboration, there must be a balance between awareness and privacy. In this research, I augmented the functions of a physical office window with a computer-mediated replacement called the Magic Window. The Magic Window collects video of the occupant, mediates the signal in various ways, and then presents the altered view on a screen that replaces the glass window. The Magic Window provides a better balance of awareness and privacy in office settings by allowing the occupant to differentiate the amount of awareness information based on the viewer. The Magic Window system was tested in an eight-month field trial. The trial showed that the augmented window did provide a balance of privacy and awareness, and also raised a number of issues that will aid the future design of co-present media spaces.
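The per-viewer mediation idea can be sketched as follows; the trust levels and the block-averaging filter are hypothetical stand-ins for the Magic Window's actual, occupant-configured masking techniques.

```python
import numpy as np

# Hypothetical mapping from viewer category to a distortion level;
# block size 1 means an unaltered view, larger blocks reveal less.
TRUST_BLOCK_SIZE = {"close_colleague": 1, "coworker": 8, "stranger": 32}

def mediate_frame(frame, viewer):
    """Pixelate a grayscale frame by block-averaging: the coarser the
    blocks, the less awareness information the viewer receives."""
    b = TRUST_BLOCK_SIZE[viewer]
    if b == 1:
        return frame.copy()
    h, w = frame.shape
    h2, w2 = h - h % b, w - w % b          # crop to a multiple of b
    blocks = frame[:h2, :w2].reshape(h2 // b, b, w2 // b, b)
    means = blocks.mean(axis=(1, 3))       # one value per b-by-b block
    return np.repeat(np.repeat(means, b, axis=0), b, axis=1)

frame = np.random.rand(64, 64)             # stand-in for a camera frame
out = mediate_frame(frame, "stranger")
print(out.shape)  # (64, 64)
```

A stranger's view thus collapses the 64x64 frame to a handful of averaged blocks: enough to tell presence from absence, but not enough to read a screen or a face.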

    Digital Traces of the Mind: Using Smartphones to Capture Signals of Well-Being in Individuals

    General context and questions: Adolescents and young adults typically use their smartphone several hours a day. Although there are concerns about how such behaviour might affect their well-being, the popularity of these powerful devices also opens novel opportunities for monitoring well-being in daily life. If successful, monitoring well-being in daily life would make it possible to develop future interventions that provide personalized support to individuals at the moment they require it (just-in-time adaptive interventions). Taking an interdisciplinary approach with insights from communication, computational, and psychological science, this dissertation investigated the relation between smartphone app use and well-being and developed machine learning models to estimate an individual's well-being based on how they interact with their smartphone. To elucidate the relation between smartphone trace data and well-being and to contribute to the development of technologies for monitoring well-being in future clinical practice, this dissertation addressed two overarching questions. RQ1: Can we find empirical support for theoretically motivated relations between smartphone trace data and well-being in individuals? RQ2: Can we use smartphone trace data to monitor well-being in individuals? Aims: The first aim of this dissertation was to quantify the relation between the collected smartphone trace data and momentary well-being, at the sample level but also for each individual, following recent conceptual insights and empirical findings in psychological, communication, and computational science. A strength of this personalized (or idiographic) approach is that it captures how individuals differ in how smartphone app use relates to their well-being.
Considering such interindividual differences is important to determine whether some individuals might benefit from spending more time on their smartphone apps whereas others do not, or even experience adverse effects. The second aim of this dissertation was to develop models for monitoring well-being in daily life. The present work pursued this transdisciplinary aim by taking a machine learning approach and evaluating to what extent we might estimate an individual's well-being based on their smartphone trace data. If such traces can be used for this purpose by helping to pinpoint when individuals are unwell, they might be a useful data source for developing future interventions that provide personalized support to individuals at the moment they require it (just-in-time adaptive interventions). With this aim, the dissertation follows current developments in psychoinformatics and psychiatry, where many research resources are invested in using smartphone traces and similar data (obtained with smartphone sensors and wearables) to develop technologies for detecting whether an individual is currently unwell or will be in the future. Data collection and analysis: This work combined novel data collection techniques (digital phenotyping and experience sampling methodology) for measuring smartphone use and well-being in the daily lives of 247 student participants. For a period of up to four months, a dedicated application installed on participants' smartphones collected smartphone trace data. In the same time period, participants completed a brief smartphone-based well-being survey five times a day (for 30 days in the first month and 30 days in the fourth month; up to 300 assessments in total). At each measurement, this survey comprised questions about the participants' momentary level of procrastination, stress, and fatigue, while sleep duration was measured in the morning.
Taking a time-series and machine learning approach to analysing these data, I provide the following contributions: Chapter 2 investigates the person-specific relation between passively logged usage of different application types and momentary subjective procrastination; Chapter 3 develops machine learning methodology to estimate sleep duration using smartphone trace data; Chapter 4 combines machine learning and explainable artificial intelligence to discover smartphone-tracked digital markers of momentary subjective stress; Chapter 5 uses a personalized machine learning approach to evaluate whether smartphone trace data contains behavioral signs of fatigue. Collectively, these empirical studies provide preliminary answers to the overarching questions of this dissertation. Summary of results: With respect to the theoretically motivated relations between smartphone trace data and well-being (RQ1), we found that different patterns in smartphone trace data, from time spent on social network, messenger, video, and game applications to smartphone-tracked sleep proxies, are related to well-being in individuals. The strength and nature of this relation depends on the individual and the app usage pattern under consideration. The relation between smartphone app use patterns and well-being is limited in most individuals, but relatively strong in a minority. Whereas some individuals might benefit from using specific app types, others might experience decreases in well-being when spending more time on these apps. With respect to the question of whether we might use smartphone trace data to monitor well-being in individuals (RQ2), we found that smartphone trace data might be useful for this purpose in some individuals and to some extent. They appear most relevant in the context of sleep monitoring (Chapter 3) and have the potential to be included as one of several data sources for monitoring momentary procrastination (Chapter 2), stress (Chapter 4), and fatigue (Chapter 5) in daily life.
Outlook: Future interdisciplinary research is needed to investigate whether the relationship between smartphone use and well-being depends on the nature of the activities performed on these devices, the content they present, and the context in which they are used. Answering these questions is essential to unravel the complex puzzle of developing technologies for monitoring well-being in daily life.
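The personalized (idiographic) modelling approach described above can be illustrated with a toy per-individual model: one regressor per participant, mapping app-usage features to momentary well-being, so each person gets their own coefficients. The synthetic data, feature set, and ridge-regression choice are illustrative assumptions, not the dissertation's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_personalized_models(per_person_data):
    """Fit one model per individual (idiographic approach): app-usage
    features -> momentary well-being, with person-specific coefficients."""
    return {pid: Ridge(alpha=1.0).fit(X, y)
            for pid, (X, y) in per_person_data.items()}

# Synthetic stand-in: 3 participants, 100 momentary assessments each,
# 4 features (e.g. minutes on 4 app types before each assessment).
rng = np.random.default_rng(42)
data = {}
for pid in range(3):
    X = rng.normal(size=(100, 4))
    w = rng.normal(size=4)                        # person-specific relation
    y = X @ w + rng.normal(scale=0.1, size=100)   # momentary well-being
    data[pid] = (X, y)

models = fit_personalized_models(data)
print(len(models))  # 3
```

Because each participant gets their own coefficient vector, the fitted models can differ in sign and magnitude, which is exactly the interindividual variation the idiographic approach is designed to expose.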

    KEER2022

    Pretitle: KEER2022. Diversities. Resource description: 25 July 202

    Affective Brain-Computer Interfaces
