    Design of a virtual human presenter

    We have created a virtual human presenter that accepts speech texts with embedded commands as input. The presenter performs in real-time 3D animation synchronized with speech. The system was developed on the Jack animated-agent system, which provides a 3D graphical environment for controlling articulated figures, including detailed human models.

    Design of a Virtual Human Presenter

    We created a virtual human presenter based on extensions to the Jack™ animated-agent system. Inputs to the presenter system take the form of speech texts with embedded commands, most of which relate to the virtual presenter's body language. The system then makes the agent act as a presenter with presentation skills, in real-time 3D animation synchronized with speech output. The presenter can give presentations with virtual visual aids, in virtual 3D environments, or even on the WWW.
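
    The abstract does not give Jack's command syntax, so the sketch below only illustrates the input style it describes: speech text with inline commands that are stripped from the spoken channel and routed to the animation channel. The brace markup and command names are invented for illustration.

        import re

        # Hypothetical markup: commands embedded in the speech text in braces,
        # e.g. "{gesture wave}" or "{point left}". The real Jack command
        # syntax is not given in the abstract.
        COMMAND = re.compile(r"\{(\w+)(?:\s+(\w+))?\}")

        def parse_presenter_input(text):
            """Split annotated speech text into (utterance, commands) pairs."""
            segments = []
            for chunk in text.split("."):
                chunk = chunk.strip()
                if not chunk:
                    continue
                commands = COMMAND.findall(chunk)
                utterance = COMMAND.sub("", chunk).strip()
                segments.append((utterance, commands))
            return segments

        script = "Welcome {gesture wave}. The chart {point left} shows our results."
        for utterance, commands in parse_presenter_input(script):
            print(utterance, "->", commands)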

    Design of a Virtual Assistant to Improve Interaction Between the Audience and the Presenter

    This article presents a novel design of a Virtual Assistant as part of a human-machine interaction system that improves communication between the presenter and the audience, intended for education and general presentations (e.g., auditoriums with 200 people). The main goal of the proposed model is a framework of interaction that raises the audience's level of attention during key parts of the presentation; in this manner, the collaboration between the presenter and the Virtual Assistant could improve learning among the public. The design of the Virtual Assistant relies on non-anthropomorphic forms with ‘live’ characteristics, producing an intuitive and self-explanatory interface. A set of intuitive and useful virtual interactions to support the presenter was designed and validated with several types of audience through a psychological study based on a discrete-emotions questionnaire, confirming the adequacy of the proposed solution. The human-machine interaction system supporting the Virtual Assistant should automatically recognize the attention level of the audience from audiovisual sources and synchronize the Virtual Assistant with the presentation. The system involves a complex artificial-intelligence architecture embracing perception of high-level features from audio and video, knowledge representation and reasoning for pervasive and affective computing, and reinforcement learning that teaches the intelligent agent to decide on the best strategy to increase the audience's level of attention.
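
    As a rough sketch of the reinforcement-learning component described above, the agent below observes a coarse audience-attention level and picks an intervention strategy for the Virtual Assistant. The states, actions, and reward scheme are invented for illustration; the paper's actual formulation may differ.

        import random

        STATES = ["low", "medium", "high"]            # estimated audience attention
        ACTIONS = ["animate", "highlight_slide", "stay_idle"]

        # Tabular Q-learning over (attention state, intervention) pairs.
        q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
        alpha, gamma, epsilon = 0.1, 0.9, 0.2

        def choose_action(state):
            """Epsilon-greedy choice of the next intervention."""
            if random.random() < epsilon:             # explore occasionally
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: q[(state, a)])

        def update(state, action, reward, next_state):
            """One-step Q-learning update after observing the effect."""
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

    A natural reward signal could be the measured change in attention after an intervention, which is what would tie the audio/video perception pipeline to the learning loop.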

    Presenting in Virtual Worlds: Towards an Architecture for a 3D Presenter explaining 2D-Presented Information

    Entertainment, education, and training are changing because of multi-party interaction technology. In the past we have seen the introduction of embodied agents and robots that take the role of a museum guide, a news presenter, a teacher, a receptionist, or someone trying to sell you insurance, houses, or tickets. In all these cases the embodied agent needs to explain and describe. In this paper we contribute the design of a 3D virtual presenter that uses different output channels to present and explain; speech and animation (posture, pointing, and involuntary movements) are among these channels. The behavior is scripted and synchronized with the display of a 2D presentation with associated text and regions that can be pointed at (sheets, drawings, and paintings). In this paper the emphasis is on the interaction between the 3D presenter and the 2D presentation.

    Presenting in Virtual Worlds: An Architecture for a 3D Anthropomorphic Presenter

    Multiparty-interaction technology is changing entertainment, education, and training. Deployed examples of such technology include embodied agents and robots that act as a museum guide, a news presenter, a teacher, a receptionist, or someone trying to sell you insurance, homes, or tickets. In all these cases, the embodied agent needs to explain and describe. This article describes the design of a 3D virtual presenter that uses different output channels (including speech and animation of posture, pointing, and involuntary movements) to present and explain. The behavior is scripted and synchronized with a 2D display containing associated text and regions (slides, drawings, and paintings) at which the presenter can point. This article is part of a special issue on interactive entertainment.
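
    A minimal sketch of the scripted, synchronized behaviour both articles describe: each script entry pairs an utterance with an optional pointing target, a named region on the current 2D sheet. The field names and the region format are illustrative assumptions, not the authors' actual script language.

        from dataclasses import dataclass, field
        from typing import Optional

        @dataclass
        class ScriptEntry:
            speech: str
            point_at: Optional[str] = None      # region id on the current 2D sheet
            posture: str = "neutral"            # e.g. an idle-motion preset

        @dataclass
        class Sheet:
            image: str
            regions: dict = field(default_factory=dict)   # name -> (x, y, w, h)

        sheet = Sheet("results.png", regions={"chart": (40, 60, 300, 200)})
        script = [
            ScriptEntry("This chart summarises the results.", point_at="chart"),
            ScriptEntry("Note the upward trend.", posture="lean_forward"),
        ]

        # Play back: speech and pointing are issued together so the animation
        # channel stays synchronized with the 2D display.
        for entry in script:
            target = sheet.regions.get(entry.point_at) if entry.point_at else None
            print(f"say: {entry.speech!r}  point at: {target}  posture: {entry.posture}")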

    A Role-Based Approach for Orchestrating Emergent Configurations in the Internet of Things

    The Internet of Things (IoT) is envisioned as a global network of connected things enabling ubiquitous machine-to-machine (M2M) communication. With estimates of billions of sensors and devices to be connected in the coming years, the IoT has been advocated as having great potential to impact the way we live and the way we work. However, the connectivity aspect in itself only accounts for the underlying M2M infrastructure. To properly support engineering IoT systems and applications, it is key to orchestrate heterogeneous 'things' in a seamless, adaptive, and dynamic manner, so that the system can exhibit goal-directed behaviour and take appropriate actions. Yet this form of interaction between things needs to take a user-centric approach and must not sidestep the users' requirements. To this end, contextualisation is an important feature of the system, allowing it to infer user activities and prompt the user with relevant information and interactions even in the absence of intentional commands. In this work we propose a role-based model for emergent configurations of connected systems as a means to model, manage, and reason about IoT systems, including the user's interaction with them. We put a special focus on integrating the user perspective in order to guide the emergent configurations so that system goals are aligned with the users' intentions. We discuss related scientific and technical challenges and provide several use cases outlining the concept of emergent configurations.
    Comment: In Proceedings of the Second International Workshop on the Internet of Agents @AAMAS201
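
    The role-based idea lends itself to a small sketch: 'things' advertise capabilities, roles declare the capabilities they require, and an emergent configuration is an assignment of things to roles serving a user goal. The names and the greedy matching below are illustrative assumptions, not the paper's formal model.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Thing:
            name: str
            capabilities: frozenset

        @dataclass(frozen=True)
        class Role:
            name: str
            requires: frozenset

        def configure(goal_roles, things):
            """Greedily assign each role the first free thing that can play it."""
            assignment, free = {}, list(things)
            for role in goal_roles:
                for thing in free:
                    if role.requires <= thing.capabilities:
                        assignment[role.name] = thing.name
                        free.remove(thing)
                        break
            return assignment

        things = [Thing("hall_sensor", frozenset({"motion"})),
                  Thing("lamp", frozenset({"light", "dim"}))]
        roles = [Role("detector", frozenset({"motion"})),
                 Role("actuator", frozenset({"light"}))]
        print(configure(roles, things))   # {'detector': 'hall_sensor', 'actuator': 'lamp'}

    A real system would re-run this matching as things join and leave, which is where the adaptive, emergent behaviour would come from.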

    Framework to Enhance Teaching and Learning in System Analysis and Unified Modelling Language

    Systems Analysis modelling is considered foundational for Information and Communication Technology (ICT) students, with introductory and advanced units included in nearly all ICT and computer science degrees. Despite this, novice systems analysts (learners) find modelling and systems thinking difficult to learn and master, which makes teaching the fundamentals frustrating and time-intensive. This paper discusses the foundational problems that learners face when learning Systems Analysis modelling. Through a systematic literature review, a framework is proposed based on the key problems that novice learners experience. In this proposed framework, a sequence of activities has been developed to facilitate understanding of requirements, solutions, and incremental modelling. An example illustrates how the framework could be used to incorporate visualization and gaming elements into a Systems Analysis classroom, thereby improving motivation and learning. Through this work, a greater understanding of the approach to teaching modelling within the computer science classroom is provided, as well as a framework to guide future teaching activities.

    Refining personal and social presence in virtual meetings

    Virtual worlds show promise for conducting meetings and conferences without the need for physical travel. Current experience suggests the major limitation to the more widespread adoption and acceptance of virtual conferences is the failure of existing environments to provide a sense of immersion and engagement, or of ‘being there’. These limitations are largely related to the appearance and control of avatars, and to the absence of means to convey the non-verbal cues of facial expression and body language. This paper reports on a study involving the use of a mass-market motion sensor (Kinect™) and the mapping of participant action in the real world to avatar behaviour in the virtual world, coupled with full-motion video representation of participants’ faces on their avatars to resolve both identity and facial-expression issues. The outcomes of a small-group trial meeting based on this technology show a very positive reaction from participants, and the potential for further exploration of these concepts.
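
    A sketch of the real-to-virtual mapping the study relies on: skeleton joints tracked by the sensor drive the corresponding avatar bones each frame, with light smoothing to mask sensor jitter. The joint names follow the Kinect skeleton convention, but the avatar interface and the smoothing scheme are assumptions for illustration.

        # Hypothetical avatar interface: get_rotation/set_rotation take a bone
        # name and a 3-component rotation; a real client would use the virtual
        # world's own API.
        JOINT_TO_BONE = {
            "Head": "head",
            "ShoulderLeft": "l_shoulder",
            "ElbowLeft": "l_elbow",
            "HandLeft": "l_hand",
        }

        def update_avatar(avatar, tracked_joints, smoothing=0.3):
            """Blend each tracked joint rotation into the mapped avatar bone."""
            for joint, bone in JOINT_TO_BONE.items():
                rotation = tracked_joints.get(joint)
                if rotation is None:          # joint lost this frame: keep last pose
                    continue
                current = avatar.get_rotation(bone)
                blended = [c + smoothing * (r - c) for c, r in zip(current, rotation)]
                avatar.set_rotation(bone, blended)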