3,212 research outputs found

    Towards virtual communities on the Web: Actors and audience

    We report on ongoing research in a virtual reality environment where visitors can interact with agents that help them obtain information, perform certain transactions, and collaborate with them to get tasks done. Our environment models a theatre in our hometown. We discuss attempts to let this environment evolve into a theatre community where we have not only goal-directed visitors, but also visitors who are not sure whether they want to buy or just want information, or visitors who simply want to look around. We show that we need a multi-user and multi-agent environment to realize our goals. Since our environment models a theatre, it is also interesting to investigate the roles of performers and audience in this environment. For that reason we discuss capabilities and personalities of agents. Some notes on the historical development of networked communities are included.

    On combining the facial movements of a talking head

    We present work on Obie, an embodied conversational agent framework. An embodied conversational agent, or talking head, consists of three main components. The graphical part consists of a face model and a facial muscle model. Besides the graphical part, we have implemented an emotion model and a mapping from emotions to facial expressions. The animation part of the framework focuses on temporally combining different facial movements. In this paper we propose a scheme for combining facial movements on a 3D talking head.
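
    As a rough illustration of the non-graphical components named above, the sketch below maps a 2D emotion state to blend weights over six basic expressions and temporally combines overlapping facial movements. The valence/arousal parameterisation, anchor coordinates, and easing function are our own assumptions, not Obie's actual API.

```python
import numpy as np

def emotion_to_expression_weights(valence, arousal):
    """Map a 2D (valence, arousal) emotion state to blend weights over
    six basic expressions; anchor coordinates are illustrative only."""
    anchors = {
        "joy":      ( 0.8,  0.5),
        "sadness":  (-0.7, -0.4),
        "anger":    (-0.6,  0.7),
        "fear":     (-0.5,  0.8),
        "surprise": ( 0.2,  0.9),
        "disgust":  (-0.6,  0.2),
    }
    # Inverse-distance weighting: expressions whose anchor lies closer
    # to the current emotion state receive a larger share of the blend.
    inv = {name: 1.0 / (np.hypot(valence - v, arousal - a) + 1e-6)
           for name, (v, a) in anchors.items()}
    total = sum(inv.values())
    return {name: w / total for name, w in inv.items()}

def combine_movements(tracks, t):
    """Temporally combine overlapping facial movements. Each track is
    (start, end, target) with end > start; targets active at time t
    are blended with a raised-cosine ease so movements merge smoothly."""
    pose, total = 0.0, 0.0
    for start, end, target in tracks:
        if start <= t <= end:
            phase = (t - start) / (end - start)
            w = 0.5 - 0.5 * np.cos(2.0 * np.pi * phase)  # ease in and out
            pose += w * target
            total += w
    return pose / total if total > 0 else 0.0

# A smile fading in while a brow raise fades out, sampled mid-overlap.
print(emotion_to_expression_weights(valence=0.6, arousal=0.4))
print(combine_movements([(0.0, 1.0, 1.0), (0.5, 1.5, 0.4)], t=0.75))
```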

    HeadOn: Real-time Reenactment of Human Portrait Videos

    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show that it enables much greater flexibility in creating realistic reenacted output videos. (Video: https://www.youtube.com/watch?v=7Dg49wv2c_g; presented at SIGGRAPH 2018.)
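
    The view- and pose-dependent texturing step can be pictured as weighting captured target-video frames by how close their pose is to the requested novel pose. The sketch below follows that reading; the 6-DoF pose vector, distance measure, and Gaussian weighting are our assumptions, not HeadOn's actual algorithm.

```python
import numpy as np

def pose_distance(pose_a, pose_b):
    """Euclidean distance between two 6-DoF pose vectors
    (3 rotation, 3 translation), used as a simple proximity measure."""
    return np.linalg.norm(np.asarray(pose_a) - np.asarray(pose_b))

def blend_textures(captured, query_pose, k=4, sigma=0.25):
    """captured: list of (pose, texture) pairs sampled from the target
    video. Returns a texture for a novel query pose as a Gaussian-
    weighted blend of the k nearest captured frames."""
    ranked = sorted(captured, key=lambda pt: pose_distance(pt[0], query_pose))
    nearest = ranked[:k]
    weights = np.array([np.exp(-(pose_distance(p, query_pose) / sigma) ** 2)
                        for p, _ in nearest])
    weights /= weights.sum()
    return sum(w * tex for w, (_, tex) in zip(weights, nearest))

# Toy usage: ten captured frames with random poses and 4x4 RGB textures.
frames = [(np.random.randn(6), np.random.rand(4, 4, 3)) for _ in range(10)]
novel_texture = blend_textures(frames, query_pose=np.zeros(6))
```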

    Lip syncing method for realistic expressive 3D face model

    Lip synchronization of 3D face models is now used in a multitude of important fields. It brings a more human, social, and dramatic reality to computer games, films, and interactive multimedia, and is growing in use and importance. A high level of realism is demanded by applications such as computer games and cinema. Authoring lip syncing with complex and subtle expressions is still difficult and fraught with problems in terms of realism. This research proposes a lip-syncing method for a realistic, expressive 3D face model. Animating lips requires a 3D face model capable of representing the myriad shapes the human face assumes during speech, and a method to produce the correct lip shape at the correct time. The paper presents a 3D face model designed to support lip syncing aligned with an input audio file. It deforms using a Raised Cosine Deformation (RCD) function that is grafted onto the input facial geometry. The face model is based on the MPEG-4 Facial Animation (FA) standard. The paper proposes a method to animate the 3D face model over time, creating animated lip syncing using a canonical set of visemes for all pairwise combinations of a reduced phoneme set called ProPhone. The proposed research integrates emotion, drawing on Ekman's model and Plutchik's wheel, together with emotive eye movements implemented through the Emotional Eye Movements Markup Language (EEMML), to produce a realistic 3D face model.
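
    As a hedged sketch of what a Raised Cosine Deformation can look like, the function below displaces mesh vertices near a control point (e.g., a lip landmark) with a raised-cosine falloff, so the deformation blends smoothly into the surrounding face; the exact parameterisation used in the paper may differ.

```python
import numpy as np

def rcd_deform(vertices, center, direction, amplitude, radius):
    """Displace mesh vertices near `center` along `direction`.
    vertices: (N, 3) array; center, direction: (3,) arrays.
    The displacement magnitude follows a raised cosine of the distance
    to the center: full amplitude at the center, zero at `radius`."""
    vertices = np.asarray(vertices, dtype=float)
    d = np.linalg.norm(vertices - center, axis=1)
    falloff = np.where(
        d < radius,
        0.5 * (1.0 + np.cos(np.pi * d / radius)),  # 1 at center, 0 at radius
        0.0,
    )
    return vertices + amplitude * falloff[:, None] * np.asarray(direction)

# Driving a viseme: pull lower-lip vertices downward for an open-mouth shape.
lips = np.array([[0.0, -0.02, 0.09], [0.01, -0.025, 0.09]])
opened = rcd_deform(lips,
                    center=np.array([0.0, -0.02, 0.09]),
                    direction=np.array([0.0, -1.0, 0.0]),
                    amplitude=0.01, radius=0.05)
print(opened)
```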

    Virtual and Augmented Reality in Finance: State Visibility of Events and Risk

    The recent financial crisis and its aftermath motivate our rethinking of the role of Information and Communication Technologies (ICT) as a driver for change in global finance and a critical factor for success and sustainability. We attribute the recent financial crisis that hit the global market, causing a drastic economic slowdown and recession, to a lack of state visibility of risk, inadequate response to events, and slow dynamic system adaptation to events. There is evidence that ICT is not yet sufficiently developed to create business value and business intelligence capable of counteracting devastating events. The aim of this chapter is to assess the potential of Virtual Reality and Augmented Reality (VR/AR) technologies in supporting the dynamics of global financial systems and in addressing the grand challenges posed by unexpected events and crises. In this chapter we first survey traditional AR/VR uses. Secondly, we describe early attempts to use 3D/VR/AR technologies in finance. Thirdly, we consider the case study of mediating the visibility of the financial state and explore the various dimensions of the problem. Fourthly, we assess the potential of AR/VR technologies in raising the perception of the financial state (including financial risk). We conclude the chapter with a summary and a research agenda for developing technologies capable of increasing the perception of the financial state and risk and counteracting devastating events.

    Framework for proximal personified interfaces


    Multispace behavioral model for face-based affective social agents

    This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces: knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial feature level. Personality and mood use findings in behavioral psychology to relate the perception of personality types and emotional states to facial actions and expressions through two-dimensional models of personality and emotion. Knowledge encapsulates the tasks to be performed and the decision-making process using a specially designed XML-based language. While the geometry space provides an MPEG-4-compatible set of parameters for low-level control, the behavioral extensions available through the three higher-level spaces provide flexible means of designing complicated personality types, facial expressions, and dynamic interactive scenarios.
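
    As a rough illustration of how the higher-level spaces could drive the geometry space, the sketch below combines 2D personality and mood coordinates into facial-feature values standing in for MPEG-4 parameters, with the knowledge space reduced to a simple bias term; every matrix and coefficient is invented for this example.

```python
import numpy as np

# Rows: facial feature parameters (brow raise, lip-corner pull, eyelid
# opening); columns: the two dimensions of each 2D psychological space.
# These mappings are illustrative, not the paper's calibrated values.
PERSONALITY_TO_FEATURES = np.array([
    [ 0.2, -0.1],   # brow raise
    [ 0.5,  0.1],   # lip-corner pull (smiling baseline)
    [ 0.1,  0.3],   # eyelid opening
])
MOOD_TO_FEATURES = np.array([
    [-0.3,  0.6],
    [ 0.7,  0.2],
    [ 0.0,  0.4],
])

def geometry_parameters(personality, mood, knowledge_bias=None):
    """Blend personality (a stable trait) and mood (a transient state)
    into facial feature values; `knowledge_bias` lets the task layer
    nudge the face (e.g., raise the brows while asking a question)."""
    p = PERSONALITY_TO_FEATURES @ np.asarray(personality)
    m = MOOD_TO_FEATURES @ np.asarray(mood)
    features = 0.4 * p + 0.6 * m  # mood dominates moment to moment
    if knowledge_bias is not None:
        features = features + np.asarray(knowledge_bias)
    return np.clip(features, -1.0, 1.0)

# An extroverted, calm agent in a mildly positive, alert mood:
print(geometry_parameters(personality=[0.8, -0.2], mood=[0.3, 0.5]))
```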