10 research outputs found

    A high-level architecture for believable social agents

    The creation of virtual humans capable of behaving and interacting realistically with each other requires the development of autonomous believable social agents. Standard goal-oriented approaches are not well suited to this task because they do not take into account important characteristics identified by the social sciences. The paper tackles the issue of a general social reasoning mechanism, discussing its basic functional requirements from a sociological perspective, and proposing a high-level architecture based on roles, norms, values and types.

    Nonverbal communication interface for collaborative virtual environments

    Nonverbal communication is an important aspect of real-life face-to-face interaction and one of the most efficient ways to convey emotions; users should therefore be provided with the means to replicate it in the virtual world. Because articulated embodiments are well suited to provide body communication in virtual environments, this paper first reviews some of the advantages and disadvantages of complex embodiments. After a brief introduction to nonverbal communication theories, we present our solution, taking into account the practical limitations of input devices and social science aspects. We introduce our sample of actions and their implementation using our VLNET (Virtual Life Network) networked virtual environment, and discuss the results of an informal evaluation experiment.

    Requirements for an architecture for believable social agents

    No full text
    This paper introduces four sociological concepts which we argue are important for the creation of autonomous social agents capable of behaving and interacting realistically with each other as virtual humans. A list of functional requirements based on these concepts is then proposed.

    Specifying MPEG-4 body behaviors

    No full text
    The MPEG-4 standard specifies a set of low-level animation parameters for body animation, but does not provide any high-level functionality for the control of avatars or embodied agents. In this paper we discuss the required features for a script format allowing designers to easily specify complex bodily behaviors, and describe a system and its associated syntax - Body Animation Script (BAS) - which fulfills these requirements in a flexible way. The described architecture allows the organization and parametrization of predefined MPEG-4 animations and their integration with real-time algorithmic animations, such as pointing at a specific location or walking. This system has been implemented at EPFL in the framework of the EU SoNG project, in order to allow intelligent software agents to control their 3D graphical representation and end-users to trigger rich nonverbal behaviors from an online interface. It has been integrated into AML - the Avatar Markup Language.
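    The abstract above does not show the actual BAS syntax, but the core idea - mixing predefined animation clips with parametrized, algorithmically generated actions in one script - can be sketched. The names below (`play`, `point_at`, `BodyScript`) are illustrative assumptions, not the real BAS interface:

    ```python
    # Toy model of a BAS-like script: a sequence mixing predefined MPEG-4
    # clips with parametrized algorithmic actions. All names here are
    # hypothetical; the real BAS syntax is not given in the abstract.
    from dataclasses import dataclass, field

    @dataclass
    class BodyScript:
        actions: list = field(default_factory=list)

        def play(self, clip, speed=1.0):
            # Reference a predefined animation clip by name.
            self.actions.append(("clip", clip, speed))
            return self

        def point_at(self, x, y, z):
            # Algorithmic action: computed at runtime from a target location.
            self.actions.append(("point", (x, y, z)))
            return self

    # Chain predefined and algorithmic actions into one behavior.
    script = BodyScript().play("nod").point_at(1.0, 0.5, 2.0)
    print(script.actions)
    ```

    The chained style reflects the paper's stated goal of letting designers specify complex behaviors easily, without hand-editing low-level animation parameters.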

    Avatar Markup Language

    No full text
    Synchronization of speech, facial expressions and body gestures is one of the most critical problems in realistic avatar animation in virtual environments. In this paper, we address this problem by proposing a new high-level animation language to describe avatar animation. The Avatar Markup Language (AML), based on XML, encapsulates Text to Speech, Facial Animation and Body Animation in a unified manner with appropriate synchronization. We use low-level animation parameters, defined by the MPEG-4 standard, to demonstrate the use of the AML. However, the AML itself is independent of any such low-level parameters. AML can be effectively used by intelligent software agents to control their 3D graphical representations in virtual environments. With the help of the associated tools, AML also makes it quick and easy to create and share 3D avatar animations. We also discuss how the language has been developed and used within the SoNG project framework. The tools developed to use AML in a real-time animation system incorporating intelligent agents and 3D avatars are also discussed subsequently.
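    The abstract describes AML as an XML format unifying speech, face and body tracks with timing for synchronization. A minimal sketch of that idea, assuming invented element and attribute names (the real AML schema is not shown here):

    ```python
    # Hypothetical AML-style document built with the standard library.
    # Element names ("avatar-animation", "speech", "face", "body") and
    # attributes ("expression", "gesture", "start") are assumptions made
    # for illustration, not the actual AML vocabulary.
    import xml.etree.ElementTree as ET

    def build_animation(utterance, gesture, expression, gesture_start_ms=200):
        """Pair an utterance with synchronized facial and body tracks,
        each carrying a start offset in milliseconds."""
        root = ET.Element("avatar-animation")
        speech = ET.SubElement(root, "speech")
        speech.text = utterance
        ET.SubElement(root, "face", {"expression": expression, "start": "0"})
        ET.SubElement(root, "body", {"gesture": gesture,
                                     "start": str(gesture_start_ms)})
        return root

    doc = build_animation("Hello there!", gesture="wave", expression="smile")
    print(ET.tostring(doc, encoding="unicode"))
    ```

    Keeping all three tracks in one document with explicit offsets is what lets a player schedule TTS, facial animation and gestures against a common clock, which is the synchronization problem the paper targets.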

    Research Alert: Object-Focused Interaction in Collaborative Virtual Environments

    No full text
    This paper explores and evaluates the support for object-focused interaction provided by a desktop Collaborative Virtual Environment. An experimental “design” task was conducted, and video recordings of the participants’ activities facilitated an observational analysis of interaction in, and through, the virtual world. Observations include: problems due to “fragmented” views of embodiments in relation to shared objects; participants compensating with spoken accounts of their actions; and difficulties in understanding others’ perspectives. Implications and proposals for the design of CVEs drawn from these observations are: the use of semidistorted views to support peripheral awareness; more explicit or exaggerated representations of actions than are provided by pseudohumanoid avatars; and navigation techniques that are sensitive to the actions of others. The paper also presents some examples of the ways in which these proposals might be realized.