    A semantic memory bank assisted by an embodied conversational agents for mobile devices

    Alzheimer’s disease is a type of dementia that causes memory loss and seriously interferes with intellectual abilities. It currently has no cure, and the therapeutic efficacy of existing medication is limited. However, there is evidence that non-pharmacological treatments can be useful for stimulating cognitive abilities. In the last few years, several studies have focused on describing and understanding how Virtual Coaches (VC) could be key drivers for health promotion in home care settings, and VC are receiving growing attention as a medical innovation. In this paper, we propose an approach that exploits semantic technologies and Embodied Conversational Agents to help patients train their cognitive abilities using mobile devices. Semantic technologies are used to provide knowledge about the memory of a specific person: the system exploits the structured data stored in a linked data repository and takes advantage of the flexibility provided by ontologies to define search domains and expand the agent’s capabilities. Our Memory Bank Embodied Conversational Agent (MBECA) interacts with the patient and eases the interaction with new devices. The framework is oriented to Alzheimer’s patients, caregivers, and therapists.
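
    As an illustration of the kind of retrieval such an agent performs, the Python sketch below queries a personal linked data store for memory items. It is a minimal sketch, not the authors' implementation: the ontology terms (mem:Person, mem:remembers, mem:label), the file patient_memory.ttl, and the choice of the rdflib library are all illustrative assumptions.

        # Minimal sketch: an agent pulling memory items from a linked data
        # repository. Ontology terms and file names are hypothetical placeholders.
        from rdflib import Graph

        g = Graph()
        g.parse("patient_memory.ttl", format="turtle")  # personal memory store

        query = """
        PREFIX mem: <http://example.org/memory#>
        SELECT ?label WHERE {
            ?person a mem:Person ;
                    mem:remembers ?item .
            ?item mem:label ?label .
        }
        """

        for row in g.query(query):
            # Each label could become a prompt the agent speaks to the patient,
            # e.g. "Do you remember your granddaughter's birthday?"
            print(row.label)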

    Building Embodied Conversational Agents: Observations on human nonverbal behaviour as a resource for the development of artificial characters

    "Wow this is so cool!" This is what I most probably yelled, back in the 90s, when my first computer program on our MSX computer turned out to do exactly what I wanted it to do. The program contained the following instruction: COLOR 10(1.1) After hitting enter, it would change the screen color from light blue to dark yellow. A few years after that experience, Microsoft Windows was introduced. Windows came with an intuitive graphical user interface that was designed to allow all people, so also those who would not consider themselves to be experienced computer addicts, to interact with the computer. This was a major step forward in human-computer interaction, as from that point forward no complex programming skills were required anymore to perform such actions as adapting the screen color. Changing the background was just a matter of pointing the mouse to the desired color on a color palette. "Wow this is so cool!". This is what I shouted, again, 20 years later. This time my new smartphone successfully skipped to the next song on Spotify because I literally told my smartphone, with my voice, to do so. Being able to operate your smartphone with natural language through voice-control can be extremely handy, for instance when listening to music while showering. Again, the option to handle a computer with voice instructions turned out to be a significant optimization in human-computer interaction. From now on, computers could be instructed without the use of a screen, mouse or keyboard, and instead could operate successfully simply by telling the machine what to do. In other words, I have personally witnessed how, within only a few decades, the way people interact with computers has changed drastically, starting as a rather technical and abstract enterprise to becoming something that was both natural and intuitive, and did not require any advanced computer background. Accordingly, while computers used to be machines that could only be operated by technically-oriented individuals, they had gradually changed into devices that are part of many people’s household, just as much as a television, a vacuum cleaner or a microwave oven. The introduction of voice control is a significant feature of the newer generation of interfaces in the sense that these have become more "antropomorphic" and try to mimic the way people interact in daily life, where indeed the voice is a universally used device that humans exploit in their exchanges with others. The question then arises whether it would be possible to go even one step further, where people, like in science-fiction movies, interact with avatars or humanoid robots, whereby users can have a proper conversation with a computer-simulated human that is indistinguishable from a real human. An interaction with a human-like representation of a computer that behaves, talks and reacts like a real person would imply that the computer is able to not only produce and understand messages transmitted auditorily through the voice, but also could rely on the perception and generation of different forms of body language, such as facial expressions, gestures or body posture. At the time of writing, developments of this next step in human-computer interaction are in full swing, but the type of such interactions is still rather constrained when compared to the way humans have their exchanges with other humans. It is interesting to reflect on how such future humanmachine interactions may look like. 
When we consider other products that have been created in history, it is sometimes striking to see that some of them have been inspired by things that can be observed in our environment, yet at the same time do not have to be exact copies of those phenomena. For instance, an airplane has wings just as birds do, yet the wings of an airplane do not make the typical movements a bird would produce to fly. Moreover, an airplane has wheels, whereas a bird has legs. At the same time, the airplane has made it possible for humans to cover long distances in a fast and smooth manner in a way that was unthinkable before it was invented. The example of the airplane shows how new technologies can have "unnatural" properties, but can nonetheless be very beneficial and impactful for human beings. This dissertation centers on the practical question of how virtual humans can be programmed to act more human-like. The four studies presented in this dissertation all share the same underlying question: how can parts of human behavior be captured, such that computers can use them to become more human-like? Each study differs in method, perspective and specific questions, but all aim to gain insights and directions that would help further the development of human-like computer behavior and investigate (the simulation of) human conversational behavior. The rest of this introductory chapter gives a general overview of virtual humans (also known as embodied conversational agents), their potential uses and the engineering challenges, followed by an overview of the four studies.

    Towards the Use of Dialog Systems to Facilitate Inclusive Education

    Continuous advances in information technologies have made it possible to access learning contents from anywhere, at any time, and almost instantaneously. However, accessibility is not always a main objective in the design of educational applications, particularly with regard to facilitating their adoption by people with disabilities. Different technologies have recently emerged to foster the accessibility of computers and new mobile devices, favoring more natural communication between the student and educative systems. This chapter describes innovative uses of multimodal dialog systems in education, with special emphasis on the advantages they provide for creating inclusive applications and learning activities.
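
    One way such a system can support inclusion is by rendering the same content through whichever output modalities a student can use. The sketch below is a minimal illustration, not taken from the chapter; the profile fields, the present() helper, and the pyttsx3 text-to-speech dependency are assumptions of mine.

        # Minimal sketch: choosing output modalities from an accessibility profile.
        # Profile fields and the pyttsx3 dependency are illustrative assumptions.
        import pyttsx3

        def present(content: str, profile: dict) -> None:
            """Render the same learning content in every modality the student can use."""
            if profile.get("visual", True):
                print(content)                 # on-screen text
            if profile.get("auditory", False):
                engine = pyttsx3.init()        # offline text-to-speech
                engine.say(content)
                engine.runAndWait()

        # A student with low vision receives the spoken rendering instead of text.
        present("Lesson 3: adding fractions", {"visual": False, "auditory": True})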

    Automotive Intelligence Embedded in Electric Connected Autonomous and Shared Vehicles Technology for Sustainable Green Mobility

    The digitalization of the automotive sector is accelerating the technological convergence of perception, computing, connectivity, propulsion, and data fusion for electric connected autonomous and shared (ECAS) vehicles. This brings cutting-edge computing paradigms with embedded cognitive capabilities into vehicle domains and data infrastructure to provide holistic intrinsic and extrinsic intelligence for new mobility applications. Digital technologies are a significant enabler in achieving the sustainability goals of the green transformation of the mobility and transportation sectors. Innovation occurs predominantly in ECAS vehicles’ architecture, operations, intelligent functions, and automotive digital infrastructure. The traditional ownership model is moving toward multimodal and shared mobility services. ECAS vehicle technology allows for the development of virtual automotive functions that run on shared hardware platforms with data unlocking value, and for introducing new, shared computing-based automotive features. Vehicle automation, vehicle electrification, and vehicle-to-everything (V2X) communication are facilitated by the convergence of artificial intelligence (AI), cellular/wireless connectivity, edge computing, the Internet of things (IoT), the Internet of intelligent things (IoIT), digital twins (DTs), virtual/augmented reality (VR/AR) and distributed ledger technologies (DLTs). Vehicles become more intelligent and connected, functioning as edge micro-servers on wheels, powered by sensors/actuators, hardware (HW), software (SW) and smart virtual functions that are integrated into the digital infrastructure. Electrification, automation, connectivity, digitalization, decarbonization, decentralization, and standardization are the main drivers that unlock intelligent vehicles’ potential for sustainable green mobility applications. ECAS vehicles act as autonomous agents that use swarm intelligence to communicate and exchange information, either directly or indirectly, with each other and with the infrastructure, accessing independent services such as energy, high-definition maps, routes, infrastructure information, traffic lights, tolls, and parking (micropayments), and finding emergent/intelligent solutions. The article gives an overview of advances in AI technologies and applications to realize intelligent functions and optimize vehicle performance, control, and decision-making for future ECAS vehicles, to support the acceleration of deployment in various mobility scenarios. ECAS vehicles, systems, sub-systems, and components are subject to stringent regulatory frameworks, which set rigorous requirements for autonomous vehicles. An in-depth assessment of existing standards, regulations, and laws, including a thorough gap analysis, is required, and global guidelines must be provided on how to fulfill the requirements. The trustworthiness of ECAS vehicle technology, including AI-based HW/SW and algorithms, is necessary for developing ECAS systems across the entire automotive ecosystem. The safety and transparency of AI-based technology and the explainability of the purpose, use, benefits, and limitations of AI systems are critical for fulfilling trustworthiness requirements.
The article presents the evolution of ECAS vehicles toward domain-controller, zonal, and federated vehicle/edge/cloud-centric architectures based on distributed intelligence at the vehicle and infrastructure levels, and the role of AI techniques and methods in implementing the different autonomous driving and optimization functions for sustainable green mobility.
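
    As a rough illustration of the swarm-style, indirect information exchange described above, the Python sketch below lets vehicle agents share hazard reports over a common channel and adapt their plans accordingly. The message fields and the in-memory channel are illustrative assumptions; real deployments use standardized message sets (such as ETSI CAM/DENM) over cellular or wireless V2X links.

        # Minimal sketch of V2X-style coordination among vehicle agents.
        # Field names and the in-memory "channel" are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class V2XMessage:
            sender_id: str
            position: tuple              # (latitude, longitude)
            speed_kmh: float
            hazard: str | None = None    # e.g. "ice", "accident", or None

        channel: list[V2XMessage] = []   # stand-in for the radio channel

        def broadcast(msg: V2XMessage) -> None:
            channel.append(msg)

        def plan_route(vehicle_id: str) -> str:
            # Swarm-style indirect coordination: each vehicle reads what the
            # others reported and adapts its own behavior.
            hazards = [m for m in channel if m.hazard and m.sender_id != vehicle_id]
            return "reroute" if hazards else "continue"

        broadcast(V2XMessage("car-17", (48.1, 11.6), 0.0, hazard="accident"))
        print(plan_route("car-42"))      # -> "reroute"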

    Application-driven visual computing towards Industry 4.0

    The thesis gathers contributions in three fields:
    1. Interactive Virtual Agents (IVA): autonomous, modular, scalable, ubiquitous, and appealing to the user. These IVAs can interact with users in a natural way.
    2. Immersive VR/AR Environments: VR applied to production planning, product design, process simulation, testing, and verification. The Virtual Operator shows how VR and co-bots can work together in a safe environment; in the Augmented Operator, AR presents relevant information to the worker in a non-intrusive way.
    3. Interactive Management of 3D Models: online management and visualization of multimedia CAD models through automatic conversion of CAD models to the Web. Web3D technology enables the visualization of, and interaction with, these models on low-power mobile devices.
    In addition, these contributions have made it possible to analyze the challenges posed by Industry 4.0. The thesis has provided a proof of concept for some of these challenges in human factors, simulation, visualization, and model integration.
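
    The CAD-to-Web conversion mentioned in point 3 can be sketched in a few lines. The example below is a minimal illustration under assumptions of mine, not the thesis's own converter: it uses the trimesh Python library and an STL input file to produce a glTF binary (.glb), a format that Web3D viewers on low-power mobile devices can display.

        # Minimal sketch: automatic conversion of a CAD/mesh model to a
        # web-friendly format. Library choice and file names are assumptions.
        import trimesh

        mesh = trimesh.load("part.stl")   # load the source CAD/mesh file
        mesh.export("part.glb")           # write glTF binary for Web3D viewers
        print(f"exported {len(mesh.vertices)} vertices")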

    The SEMAINE API: a component integration framework for a naturally interacting and emotionally competent embodied conversational agent

    The present thesis addresses the topic area of Embodied Conversational Agents (ECAs) with capabilities for natural interaction with a human user and emotional competence with respect to the perception and generation of emotional expressivity. The focus is on the technological underpinnings that facilitate the implementation of a real-time system with these capabilities, built from re-usable components. The thesis comprises three main contributions. First, it describes a new component integration framework, the SEMAINE API, which makes it easy to build emotion-oriented systems from components that interact with one another using standard and pre-standard XML representations. Second, it presents a prepare-and-trigger system architecture which substantially speeds up the time to animation for system utterances that can be pre-planned. Third, it reports on the W3C Emotion Markup Language, an upcoming web standard for representing emotions in technological systems. We assess critical aspects of system performance, showing that the framework provides a good basis for implementing real-time interactive ECA systems, and illustrate by means of three examples that the SEMAINE API makes it easy to build new emotion-oriented systems from new and existing components.
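
    The prepare-and-trigger idea can be illustrated compactly. The sketch below is in Python rather than the Java-based SEMAINE framework, and the synthesize() placeholder merely stands in for expensive speech and animation rendering; it shows only the architectural principle that pre-planned utterances are rendered ahead of time, so triggering them later is near-instant.

        # Minimal sketch of a prepare-and-trigger architecture; synthesize()
        # is a stand-in for slow TTS/animation rendering, not a real API.
        import time

        prepared: dict[str, bytes] = {}   # utterance id -> pre-rendered output

        def synthesize(text: str) -> bytes:
            time.sleep(0.5)               # simulate expensive rendering
            return text.encode()

        def prepare(uid: str, text: str) -> None:
            """Render an utterance ahead of time, e.g. while the user is still speaking."""
            prepared[uid] = synthesize(text)

        def trigger(uid: str) -> bytes:
            """Play a prepared utterance immediately; empty if nothing was prepared."""
            return prepared.pop(uid, b"")

        prepare("greet", "Hello, nice to meet you!")
        start = time.perf_counter()
        audio = trigger("greet")          # returns without the 0.5 s synthesis delay
        print(f"triggered in {time.perf_counter() - start:.4f} s")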

    On the Development of Adaptive and User-Centred Interactive Multimodal Interfaces

    Multimodal systems have attracted increasing attention in recent years, which has made possible important improvements in the technologies for the recognition, processing, and generation of multimodal information. However, many issues related to multimodality remain unclear, for example, the principles that would make it possible to resemble human-human multimodal communication. This chapter focuses on some of the most important challenges that researchers have recently envisioned for future multimodal interfaces. It also describes current efforts to develop intelligent, adaptive, proactive, portable and affective multimodal interfaces.
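
    One concrete example of the "processing" stage mentioned above is the fusion of recognition hypotheses from several modalities. The sketch below shows simple weighted late fusion of speech and gesture hypotheses; the weights, the hypothesis format, and the fuse() helper are illustrative assumptions, not a technique taken from the chapter.

        # Minimal sketch: weighted late fusion of two modalities' hypotheses.
        # Weights and hypothesis format are illustrative assumptions.
        from collections import defaultdict

        def fuse(speech_hyps, gesture_hyps, w_speech=0.7, w_gesture=0.3):
            """Combine (intent, confidence) lists from two recognizers."""
            scores = defaultdict(float)
            for intent, conf in speech_hyps:
                scores[intent] += w_speech * conf
            for intent, conf in gesture_hyps:
                scores[intent] += w_gesture * conf
            return max(scores, key=scores.get)

        speech = [("select_item", 0.6), ("delete_item", 0.4)]
        gesture = [("delete_item", 0.9)]
        print(fuse(speech, gesture))   # "delete_item" (0.55) beats "select_item" (0.42)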