418 research outputs found

    What aspects of realism and faithfulness are relevant to supporting non-verbal communication through 3D mediums

    This thesis investigates which aspects of realism and faithfulness are relevant to supporting non-verbal communication through visual mediums. The mediums examined are 2D video, 3D computer graphics (CGI) and video-based 3D reconstruction; the latter is 3D CGI derived from multiple streams of 2D video. People’s ability to identify the behaviour of primates through gross non-verbal communication is compared across 2D video and 3D CGI. Findings suggest that 3D CGI performs as well as 2D video for the identification of gross non-verbal behaviour; however, user feedback points to a lack of understanding of intent. Secondly, the ability to detect truthfulness in humans is examined across 2D video and video-based 3D reconstruction. The effort of doing this is measured by studying changes in the level of oxygenation of the prefrontal cortex. The discussion links to the literature to propose that the tendency to over-trust is inversely proportional to the range of non-verbal resources communicated through a medium, perhaps because “tells” are hidden. The third study identifies that video-based 3D reconstruction can illustrate subtle facial muscle movements on a par with 2D video, but identifies issues with the display of lower facial detail due to a reconstruction error called droop. It is hoped that the combination of these strands of research will help users and application developers make more informed decisions when selecting which type of virtual character to implement for a particular application, thereby contributing to the fields of virtual characters and virtual environments/serious gaming by giving readers a greater understanding of virtual characters’ ability to convey non-verbal behaviour.

    To Affinity and Beyond: Interactive Digital Humans as a Human Computer Interface

    The field of human computer interaction is increasingly exploring the use of more natural, human-like user interfaces to build intelligent agents to aid in everyday life. This is coupled with a move towards people using ever more realistic avatars to represent themselves in their digital lives. As the ability to produce emotionally engaging digital human representations is only now becoming technically possible, there is little research into how to approach such tasks. This is due to both technical complexity and operational implementation cost. This is now changing, as we are at a nexus point with new approaches, faster graphics processing and enabling new technologies in machine learning and computer vision becoming available. I articulate the issues that must be addressed for such digital humans to be considered successfully located on the other side of the phenomenon known as the Uncanny Valley. My results show that a complex mix of perceived and contextual aspects affects sense-making about digital humans, and they highlight previously undocumented effects of interactivity on affinity. Users are willing to accept digital humans as a new form of user interface, and they react to them emotionally in previously unanticipated ways. My research shows that it is possible to build an effective interactive digital human that crosses the Uncanny Valley. I directly explore what is required to build a visually realistic digital human as a primary research question, and I explore whether such a realistic face provides sufficient benefit to justify the challenges involved in building it. I conducted a Delphi study to inform the research approaches and then produced a complex digital human character based on these insights. This interactive and realistic digital human avatar represents a major technical undertaking involving multiple teams around the world. Finally, I explored a framework for examining the ethical implications and signpost future research areas.

    Revisiting E-topia: Theoretical Approaches and Empirical Findings on Online Anonymity

    As social hierarchies along identity markers such as gender, race, and age are replicated within participatory spaces, the question arises as to how online participation and its modes of identity reconfiguration might affect this dilemma. This paper first revisits the discussions about cyberdemocracy in the 1990s, which focused on the liberating effects of anonymity facilitating an inclusive sphere of equals. It then moves on to the arguments of cyberfeminist debates, which criticized the naivety of cyberdemocracy by pointing to the persistence of offline inequalities in cyberspace. Current discussions pick up this criticism and focus on visual re-embodiment and the persistence of identity online. After giving an overview of these theoretical debates, the paper turns to empirical findings on the effects of online anonymity. Various studies from different disciplines show that anonymity has both democratic and anti-democratic effects: it liberates democratic subjects and at the same time contributes to new modes of domination. Thus, the theoretical accounts of optimistic cyberdemocrats and pessimistic cyberfeminists together contribute to a holistic understanding of online anonymity in participatory spaces.

    Generation of Virtual Humans for Virtual Reality, Medicine, and Domestic Assistance

    Achenbach J. Generation of Virtual Humans for Virtual Reality, Medicine, and Domestic Assistance. Bielefeld: Universität Bielefeld; 2019. Virtual humans are employed in various applications, including computer games, special effects in movies, virtual try-ons, medical surgery planning, and virtual assistance. This thesis deals with virtual humans and their computer-aided generation for different purposes. In a first step, we derive a technique to digitally clone the face of a scanned person. Fitting a facial template model to 3D-scanner data is a powerful technique for generating face avatars, in particular in the presence of noisy and incomplete measurements. Consequently, there are many approaches to the underlying non-rigid registration task, and these are typically composed of very similar algorithmic building blocks. By providing a thorough analysis of the different design choices, we derive a face-matching technique tailored to high-quality reconstructions from high-resolution scanner data. We then extend this approach in two ways: an anisotropic bending model allows us to reconstruct facial details more accurately, and a simultaneous constrained fitting of eyes and eyelids considerably improves the reconstruction of the eye region. Next, we extend this work to full bodies and present a complete pipeline for creating animatable virtual humans by fitting a holistic template character. Due to the careful selection of techniques and technology, our reconstructed humans are quite realistic in terms of both geometry and texture. Since we represent our models as single-layer triangle meshes and animate them through standard skeleton-based skinning and facial blendshapes, our characters can be used in standard VR engines out of the box. By optimizing computation time and minimizing manual intervention, our reconstruction pipeline can process entire characters in less than ten minutes.
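    A standard building block of the registration pipelines described above is an initial rigid alignment of the template to the scan. The sketch below shows the classic Kabsch/Procrustes least-squares solution for corresponding point sets; it is offered only as an illustration of this generic building block, not of the thesis's actual fitting method, and all names and dimensions are assumed.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid (rotation + translation) alignment of two
    corresponding 3D point sets (the Kabsch/Procrustes solution)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Demo: recover a known rotation + translation from noiseless correspondences.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, 2.0, 3.0])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
```

Non-rigid stages then typically minimize a combination of such closest-point distances and deformation-smoothness terms, starting from this rigid estimate.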
In the following part of this thesis, we build on our template-fitting method and deal with the problem of inferring the skin surface of a head from a given skull, and vice versa. Starting with a method for the automated estimation of a human face from given skull remains, we extend this approach to bidirectional facial reconstruction in order to also estimate the skull from a given scan of the skin surface. This is based on a multilinear model that describes the correlation between the skull and the facial soft-tissue thickness on the one hand and the head/face surface geometry on the other. We demonstrate the versatility of our novel multilinear model by estimating faces from given skulls, as well as skulls from given faces, within just a couple of seconds. To foster further research in this direction, we have made our multilinear model publicly available. In a last part, we generate assistive virtual humans that are employed as stimuli for an interdisciplinary study, in which we shed light on user preferences for the visual attributes of virtual assistants in a variety of smart home contexts.
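To make the bidirectional estimation idea concrete, the following minimal sketch evaluates a multilinear model in the forward direction and inverts one mode by linear least squares. The core tensor, mode sizes, and coefficient layout are entirely hypothetical illustrations, not the authors' published model.

```python
import numpy as np

# Hypothetical sizes: V surface vertices, S skull modes, T tissue modes.
V, S, T = 300, 8, 4
rng = np.random.default_rng(0)

# Core tensor: contracts skull and tissue coefficients into face geometry.
core = rng.normal(size=(3 * V, S, T))

def synthesize(skull_w, tissue_w):
    """Evaluate the multilinear model for given coefficient vectors."""
    return np.einsum('vst,s,t->v', core, skull_w, tissue_w)

# Forward direction: face geometry from known coefficients.
skull_true = rng.normal(size=S)
tissue_w = np.ones(T)
face = synthesize(skull_true, tissue_w)

# Inverse direction: estimate skull coefficients from an observed face,
# holding the tissue coefficients fixed -- a linear least-squares problem.
A = np.einsum('vst,t->vs', core, tissue_w)     # (3V, S) design matrix
skull_est, *_ = np.linalg.lstsq(A, face, rcond=None)
```

Because the model is linear in each mode once the others are fixed, both estimation directions reduce to small least-squares solves, which is consistent with the few-second runtimes reported above.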

    A review on visual privacy preservation techniques for active and assisted living

    This paper reviews the state of the art in visual privacy protection techniques, with particular attention paid to techniques applicable to the field of Active and Assisted Living (AAL). A novel taxonomy with which state-of-the-art visual privacy protection methods can be classified is introduced. Perceptual obfuscation methods, a category in this taxonomy, are highlighted; these visual privacy preservation techniques are particularly relevant to scenarios involving video-based AAL monitoring. Obfuscation against machine learning models is also explored. A high-level classification scheme of privacy by design, as defined by experts in privacy and data protection law, is connected to the proposed taxonomy of visual privacy preservation techniques. Finally, we note open questions that exist in the field and introduce the reader to some exciting avenues for future research in the area of visual privacy.
    Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work is part of the visuAAL project on Privacy-Aware and Acceptable Video-Based Technologies and Services for Active and Assisted Living (https://www.visuaal-itn.eu/). This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 861091. The authors would also like to acknowledge the contribution of COST Action CA19121 - GoodBrother, Network on Privacy-Aware Audio- and Video-Based Applications for Active and Assisted Living (https://goodbrother.eu/), supported by COST (European Cooperation in Science and Technology) (https://www.cost.eu/).

    State of the art in privacy preservation in video data

    Active and Assisted Living (AAL) technologies and services are a possible solution to address the crucial challenges regarding health and social care resulting from demographic changes and current economic conditions. AAL systems aim to improve quality of life and support the independent and healthy living of older and frail people. AAL monitoring systems are composed of networks of sensors (worn by the users or embedded in their environment), processing elements, and actuators that analyse the environment and its occupants to extract knowledge and to detect events, such as anomalous behaviours, launch alarms to tele-care centres, or support activities of daily living, among others. Therefore, innovation in AAL can address healthcare and social demands while generating economic opportunities. Recently, there have been far-reaching advancements in the development of video-based devices with improved processing capabilities, heightened quality, wireless data transfer, and increased interoperability with Internet of Things (IoT) devices. Computer vision makes it possible to monitor an environment and report on visual information, which is commonly the most straightforward and human-like way of describing an event, a person, an object, interactions and actions. Therefore, cameras can offer more intelligent solutions for AAL, but they may be considered intrusive by some end users. The General Data Protection Regulation (GDPR) establishes the obligation for technologies to meet the principles of data protection by design and by default. More specifically, Article 25 of the GDPR requires that organizations must "implement appropriate technical and organizational measures [...] which are designed to implement data protection principles [...] , in an effective manner and to integrate the necessary safeguards into [data] processing.” Thus, AAL solutions must consider privacy-by-design methodologies in order to protect the fundamental rights of those being monitored.
Different methods have been proposed in recent years to preserve visual privacy for identity protection. However, in many AAL applications, where mostly only one person would be present (e.g. an older person living alone), user identification might not be an issue; concerns relate more to the disclosure of appearance (e.g. whether the person is dressed or naked) and behaviour, which we call bodily privacy. Visual obfuscation techniques, such as image filters, facial de-identification, body abstraction, and gait anonymization, can be employed to protect privacy and can be agreed upon with the users, ensuring they feel comfortable. Moreover, it is difficult to ensure a high level of security and privacy during the transmission of video data. If data is transmitted over several network domains using different transmission technologies and protocols, and finally processed at a remote location and stored on a server in a data center, it becomes demanding to implement and guarantee the highest level of protection over the entire transmission and storage system and for the whole lifetime of the data. The development of video technologies, the increase in data rates and processing speeds, the wide use of the Internet and cloud computing, as well as highly efficient video compression methods, have made video encryption even more challenging. Consequently, efficient and robust encryption of multimedia data, together with efficient compression methods, are important prerequisites for achieving secure and efficient video transmission and storage.
This publication is based upon work from COST Action GoodBrother - Network on Privacy-Aware Audio- and Video-Based Applications for Active and Assisted Living (CA19121), supported by COST (European Cooperation in Science and Technology). COST (European Cooperation in Science and Technology) is a funding agency for research and innovation networks.
Our Actions help connect research initiatives across Europe and enable scientists to grow their ideas by sharing them with their peers. This boosts their research, career and innovation. www.cost.e
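The image filters mentioned among the visual obfuscation techniques above can be as simple as block-wise pixelation of a sensitive region. The sketch below is a generic illustration of that idea, not a method from either of the reviewed works; the frame, block size, and function name are all assumptions.

```python
import numpy as np

def pixelate(region, block=8):
    """Obfuscate an image region by replacing each block x block cell
    with its mean colour (a simple perceptual image filter)."""
    h, w = region.shape[:2]
    h2, w2 = h - h % block, w - w % block      # crop to block multiples
    r = region[:h2, :w2].astype(float)
    r = r.reshape(h2 // block, block, w2 // block, block, -1)
    return r.mean(axis=(1, 3))                 # one colour per cell

# Demo on a synthetic 64x64 RGB frame.
frame = (np.arange(64 * 64 * 3).reshape(64, 64, 3) % 256).astype(np.uint8)
coarse = pixelate(frame, block=8)
```

In an AAL pipeline, such a filter would typically be applied only to detected face or body regions, trading off the recognizability of identity and appearance against the ability to still monitor activity.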

    Creating a Virtual Mirror for Motor Learning in Virtual Reality

    Waltemate T. Creating a Virtual Mirror for Motor Learning in Virtual Reality. Bielefeld: Universität Bielefeld; 2018.

    Leaving RL

    This diploma thesis sets out to show how adolescents construct identities on the Web 2.0. The social network Facebook and the online role-playing game World of Warcraft served as the basis of the analysis. An analysis of these two virtual worlds is intended to show that young people in particular make increasing use of the Internet to present certain aspects of their identities, to make them public, and, where appropriate, to change them. The theoretical part of this thesis argues that the identity of every person should be regarded as something multiple, malleable, and changeable. It is fundamentally assumed that most people do not display their true selves, but rather hide them behind a wide variety of masks or roles. On closer examination of the networks mentioned above, it becomes apparent that, depending on the social situation, participants change not only their masks but their respective identities along with them. Various roles are adopted, by means of which different aspects of the desired identities can be emphasised or concealed. With Facebook and World of Warcraft, two of these social and cultural forms of self-presentation were selected in which members of the digital generation in particular are active. Both networks allow precisely this generation, that is, people born into the Internet age, to experiment with the most diverse identities.