
    Evaluating Perceived Trust From Procedurally Animated Gaze

    Adventure role-playing games (RPGs) provide players with increasingly expansive worlds, compelling storylines, and meaningful fictional character interactions. Despite the fast-growing richness of these worlds, the majority of interactions between the player and non-player characters (NPCs) remain scripted. In this paper we propose using an NPC’s animations to reflect how they feel toward the player and, as a proof of concept, investigate the potential for a straightforward gaze model to convey trust. Through two perceptual experiments, we find that viewers can distinguish between high- and low-trust animations, that viewers associate the gaze differences specifically with trust and not with an unrelated attitude (aggression), and that the effect can hold across different facial expressions and scene contexts, even when participants view only a short (five-second) clip. With an additional experiment, we explore the extent to which trust is uniquely conveyed over other attitudes associated with gaze, such as interest, unfriendliness, and admiration.
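
    The abstract does not give the gaze model's parameters, so the sketch below is purely illustrative: a minimal trust-conditioned gaze controller, assuming (hypothetically) that higher trust produces more frequent and longer mutual gaze and fewer aversions. All parameter ranges are invented for the example.

        import random

        class TrustGazeModel:
            """Toy procedural gaze controller. Higher trust yields more
            frequent, longer mutual gaze; lower trust yields more gaze
            aversion. Parameter ranges are illustrative, not from the paper."""

            def __init__(self, trust: float):
                self.trust = max(0.0, min(1.0, trust))  # clamp to [0, 1]

            def _lerp(self, low_trust_value: float, high_trust_value: float) -> float:
                # Interpolate a parameter between its low- and high-trust settings.
                return low_trust_value + (high_trust_value - low_trust_value) * self.trust

            def next_fixation(self):
                """Return (target, duration_in_seconds) for the next fixation."""
                if random.random() < self._lerp(0.3, 0.8):  # probability of mutual gaze
                    target = "player_eyes"
                    duration = random.uniform(self._lerp(0.4, 1.0), self._lerp(1.2, 3.0))
                else:
                    target = random.choice(("down", "left", "right"))
                    duration = random.uniform(0.3, 1.0)
                return target, duration

    For a five-second clip of the kind shown to participants, one would sample fixations from such a model until their durations sum to five seconds and retarget the character's eyes accordingly.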

    Human or Robot?: Investigating voice, appearance and gesture motion realism of conversational social agents

    Research on the creation of virtual humans enables increasing automation of their behavior, including synthesis of verbal and nonverbal behavior. As the achievable realism of different aspects of agent design evolves asynchronously, it is important to understand whether and how divergence in realism between behavioral channels can elicit negative user responses. Specifically, in this work we investigate whether autonomous virtual agents relying on synthetic text-to-speech voices should portray a corresponding level of realism in the non-verbal channels of motion and visual appearance, or whether, alternatively, the best available realism of each channel should be used. In two perceptual studies, we assess how the realism of voice, motion, and appearance influences the perceived match of speech and gesture motion, as well as the agent's likability and human-likeness. Our results suggest that maximizing realism of voice and motion is preferable even when this leads to realism mismatches, but for visual appearance, lower realism may be preferable. (A video abstract can be found at https://youtu.be/arfZZ-hxD1Y.)

    Trust in collaborative robots: An experimental investigation of non-verbal cues in a virtual human-robot interaction setting

    This thesis reports the development of non-verbal HRI (Human-Robot Interaction) behaviors on a robotic manipulator and evaluates the role of trust in collaborative assembly tasks. Toward this end, we developed four non-verbal HRI behaviors, namely gazing, head nodding, tilting, and shaking, on a UR5 robotic manipulator, and used them under different degrees of user trust in the robot's actions. Specifically, we gave the cobot a head-on-neck posture using its last three links together with the Robotiq gripper. The gaze behavior directed the gripper toward a desired point in space, complementing the head nodding and shaking behaviors. We designed a remote setup in which subjects interacted with the cobot via Zoom teleconferencing. In a simple collaborative scenario, the efficacy of these behaviors was assessed in terms of their impact on the formation of trust between the robot and the user and on task performance. Nineteen people of varying ages and genders participated in the experiment.
    M.S. - Master of Science
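
    The thesis's control code is not reproduced here, but the gaze behavior it describes, pointing the gripper-mounted "head" at a target point, reduces to simple geometry. A minimal sketch, assuming the head position and target are known in a shared world frame; the function names and nod parameters are hypothetical, not the thesis's UR5 joint mapping.

        import math

        def gaze_joint_targets(head_pos, target_pos):
            """Pan (yaw) and tilt (pitch) angles, in radians, that point a
            head-on-neck structure located at head_pos toward target_pos.
            Both positions are (x, y, z) tuples in a shared world frame."""
            dx = target_pos[0] - head_pos[0]
            dy = target_pos[1] - head_pos[1]
            dz = target_pos[2] - head_pos[2]
            yaw = math.atan2(dy, dx)                    # rotation about the vertical axis
            pitch = math.atan2(dz, math.hypot(dx, dy))  # elevation of the line of sight
            return yaw, pitch

        def nod_trajectory(amplitude=0.25, cycles=2, steps=40):
            """Pitch offsets (radians) tracing a smooth head nod; shaking is
            the same profile applied to yaw. Parameters are invented."""
            return [amplitude * math.sin(2 * math.pi * cycles * i / steps)
                    for i in range(steps + 1)]

    In a setup like the one described, the yaw and pitch targets would be mapped onto the manipulator's last joints so the gripper "looks at" the assembly part or the participant's video feed.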

    Privacy-Preserving Gaze Data Streaming in Immersive Interactive Virtual Reality: Robustness and User Experience

    Eye tracking is routinely being incorporated into virtual reality (VR) systems. Prior research has shown that eye tracking data, if exposed, can be used for re-identification attacks [14]. The state of our knowledge about currently existing privacy mechanisms is limited to privacy-utility trade-off curves based on data-centric metrics of utility, such as prediction error, and black-box threat models. We propose that for interactive VR applications, it is essential to consider user-centric notions of utility and a variety of threat models. We develop a methodology to evaluate real-time privacy mechanisms for interactive VR applications that incorporates subjective user experience and task performance metrics. We evaluate selected privacy mechanisms using this methodology and find that re-identification accuracy can be decreased to as low as 14% while maintaining a high usability score and reasonable task performance. Finally, we elucidate three threat scenarios (black-box, black-box with exemplars, and white-box) and assess how well the different privacy mechanisms hold up to these adversarial scenarios. This work advances the state of the art in VR privacy by providing a methodology for end-to-end assessment of the risk of re-identification attacks and potential mitigating solutions. (To appear in IEEE Transactions on Visualization and Computer Graphics.)
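
    The abstract does not name the specific privacy mechanisms evaluated. One generic real-time mechanism from this literature is additive noise on the streamed gaze signal, sketched below with an invented sigma parameter; stronger noise typically lowers re-identification accuracy at some cost to task performance and user experience.

        import numpy as np

        def noisy_gaze_stream(samples, sigma_deg=1.0, seed=None):
            """Yield 2D gaze directions (degrees of visual angle) with i.i.d.
            Gaussian noise added per sample. Additive noise is one generic
            example of a real-time privacy mechanism, not necessarily one of
            the mechanisms the paper evaluates."""
            rng = np.random.default_rng(seed)
            for x, y in samples:
                yield (x + rng.normal(0.0, sigma_deg),
                       y + rng.normal(0.0, sigma_deg))

        # Example: perturb a recorded trace before streaming it to an application.
        trace = [(1.2, -0.4), (1.5, -0.1), (2.0, 0.3)]
        protected = list(noisy_gaze_stream(trace, sigma_deg=0.5, seed=42))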


    The Procedural Justice Industrial Complex

    The singular focus on procedural justice police reform is dangerous. Procedurally just law enforcement encounters provide an empirically proven subjective sense of fairness and legitimacy, while obscuring substantively unjust outcomes emanating from a fundamentally unjust system. The deceptive simplicity of procedural justice – that a polite cop is a lawful cop – promotes a false consciousness among would-be reformers that progress has been made, evokes a false sense of legitimacy divorced from objective indicia of lawfulness or morality, and claims the mantle of “reform” in the process. It is not just that procedural justice is a suboptimal type of reform; it is the type of reform that actively frustrates other reforms by dressing up policing with the perception of correctness and legitimacy. And yet, procedural justice dominates police reform policy. Virtually all current federally funded police reform proposals support procedural justice trainings to the exclusion of proposals to address police brutality, eliminate discriminatory overpolicing, demilitarize departments, and end qualified immunity. As a result, a growing procedural justice industrial complex has taken shape. This multilayered public-private partnership between government agencies, academic institutions, and for-profit training companies increasingly helps police departments “protect their brand” and “reduce liability” through procedural politeness, while requiring no changes to unlawful, unnecessary, and violent police behavior. This Article provides the first comprehensive account of this growing complex, charting its roots in community policing and its evolution into a cottage industry of private, for-profit purveyors offering costly procedural justice trainings to departments flush with federal grant money. This Article also challenges the dominant scholarly narrative supporting these procedural justice policies, interrogating its role in promoting unnecessary ubiquitous police presence and justifying new racially discriminatory practices like “hot spots policing” and “precision policing.” In doing so, the Article applies these process-oriented critiques to five substantive police reform proposals, exploring how this singular focus on procedural justice distinctly frustrates more necessary transformative reforms in the areas of discriminatory policing, police brutality, police accountability, legal reform, and police abolition.

    An Actor-Centric Approach to Facial Animation Control by Neural Networks For Non-Player Characters in Video Games

    Game developers increasingly consider the degree to which character animation emulates facial expressions found in cinema. Employing animators and actors to produce cinematic facial animation by mixing motion capture and hand-crafted animation is labor-intensive and therefore expensive. Emotion corpora and neural network controllers have shown promise toward developing autonomous animation that does not rely on motion capture. Previous research and practice in the disciplines of Computer Science, Psychology, and the Performing Arts have provided frameworks on which to build a workflow toward creating an emotion AI system that can animate the facial mesh of a 3D non-player character by deploying a combination of related theories and methods. However, past investigations and their resulting production methods largely ignore the emotion generation systems that have evolved in the performing arts for more than a century. We find very little research that embraces the intellectual process of trained actors as complex collaborators from which to understand and model the training of a neural network for character animation. This investigation demonstrates a workflow design that integrates knowledge from the performing arts and the affective branches of the social and biological sciences. Our workflow proceeds from developing and annotating a fictional scenario with actors, to producing a video emotion corpus, to designing, training, and validating a neural network, to analyzing the emotion data annotation of the corpus and neural network, and finally to determining the resemblant behavior of its autonomous animation control of a 3D character facial mesh. The resulting workflow includes a method for developing a neural network architecture whose initial efficacy as a facial emotion expression simulator has been tested and validated as substantially resemblant to the character behavior developed by a human actor.
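
    The dissertation's network architecture is not described in the abstract, so the following is only a generic stand-in: a small feed-forward network mapping a per-frame emotion annotation vector to blendshape weights for a facial mesh. The 8-emotion and 52-blendshape dimensions, the layer sizes, and the choice of PyTorch are all assumptions made for the sketch.

        import torch
        import torch.nn as nn

        class EmotionToBlendshapes(nn.Module):
            """Minimal stand-in for a facial-animation controller: maps an
            emotion annotation vector (e.g., per-frame intensities of labeled
            emotions) to blendshape weights driving a 3D facial mesh.
            Dimensions and layer sizes are invented, not the paper's."""

            def __init__(self, n_emotions: int = 8, n_blendshapes: int = 52):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(n_emotions, 64),
                    nn.ReLU(),
                    nn.Linear(64, n_blendshapes),
                    nn.Sigmoid(),  # keep blendshape weights in [0, 1]
                )

            def forward(self, emotion: torch.Tensor) -> torch.Tensor:
                return self.net(emotion)

        # Example: drive the mesh from a single (hypothetical) annotated frame.
        model = EmotionToBlendshapes()
        frame_emotion = torch.rand(1, 8)   # per-frame emotion annotation
        weights = model(frame_emotion)     # (1, 52) blendshape weights

    In a workflow like the one described, such a controller would be trained (for instance with a mean-squared-error loss) on frames from the actor-derived video emotion corpus, then driven at runtime by the character's emotion state.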