Agents for educational games and simulations
This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulations (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from the submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaption and convergence, and agent applications.
Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media, including text, images, 3D graphics, audio, and video, are produced, distributed, shared, managed, and consumed online through various networks, such as the Internet, fiber, WiFi, WiMAX, GPRS, 3G, and so on, in a convergent manner [1]. This white paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted with a bewildering range of media, services, and applications, and with technological innovations concerning media formats, wireless networks, and terminal types and capabilities; there is little evidence that the pace of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so regularly, searching in more than 160 Exabytes of content. In the near future these numbers are expected to rise exponentially: Internet content is expected to grow by at least a factor of 6, to more than 990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged that in the near- to mid-term future, the Internet will provide the means to share and distribute (new) multimedia content and services with superior quality and striking flexibility, in a trusted and personalized way, improving citizens' quality of life, working conditions, edutainment, and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content, community networks, and the use of peer-to-peer (P2P) overlays are expected to generate new models of interaction and cooperation, and to support enhanced perceived quality of experience (PQoE) and innovative applications "on the move", such as virtual collaboration environments, personalised services and media, virtual sport groups, online gaming, and edutainment. In this context, interaction with content, combined with interactive multimedia search capabilities across distributed repositories, opportunistic P2P networks, and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects in Framework Programme 6 (FP6) and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily contributed to this white paper, aiming to describe the status, the state of the art, the challenges, and the way ahead in the area of content-aware media delivery platforms.
Real-time generation and adaptation of social companion robot behaviors
Social robots will be part of our future homes.
They will assist us in everyday tasks, entertain us, and provide helpful advice.
However, the technology still faces challenges that must be overcome to equip the machine with social competencies and make it a socially intelligent and accepted housemate.
An essential skill of every social robot is verbal and non-verbal communication.
In contrast to voice assistants, smartphones, and smart home technology, which are already part of many people's lives today, social robots have an embodiment that raises expectations towards the machine.
Their anthropomorphic or zoomorphic appearance suggests they can communicate naturally with speech, gestures, or facial expressions and understand corresponding human behaviors.
In addition, robots also need to consider individual users' preferences: everybody is shaped by their culture, social norms, and life experiences, resulting in different expectations towards communication with a robot.
However, robots do not have human intuition - they must be equipped with the corresponding algorithmic solutions to these problems.
This thesis investigates the use of reinforcement learning to adapt the robot's verbal and non-verbal communication to the user's needs and preferences.
Such non-functional adaptation of the robot's behaviors primarily aims to improve the user experience and the robot's perceived social intelligence.
The literature has not yet provided a holistic view of the overall challenge: real-time adaptation requires control over the robot's multimodal behavior generation, an understanding of human feedback, and an algorithmic basis for machine learning.
Thus, this thesis develops a conceptual framework for designing real-time non-functional social robot behavior adaptation with reinforcement learning.
It provides a higher-level view from the system designer's perspective and guidance from the start to the end.
It illustrates the process of modeling, simulating, and evaluating such adaptation processes.
Specifically, it guides the integration of human feedback and social signals to equip the machine with social awareness.
The conceptual framework is put into practice for several use cases, resulting in technical proofs of concept and research prototypes.
They are evaluated in the lab and in in-situ studies.
These approaches address typical activities in domestic environments, focussing on the robot's expression of personality, persona, politeness, and humor.
Within this scope, the robot adapts its spoken utterances, prosody, and animations based on explicit or implicit human feedback.
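The adaptation loop described in the abstract above, in which the robot selects a communication style and updates its preferences from explicit or implicit user feedback, can be illustrated with a minimal multi-armed bandit sketch. The politeness-style arms, binary reward model, and epsilon-greedy strategy below are illustrative assumptions for exposition, not the thesis's actual implementation.

```python
import random

# Hypothetical sketch: the robot chooses among discrete communication
# styles ("arms") and updates a value estimate per style from binary
# user feedback (reward). Epsilon-greedy is one simple strategy; the
# thesis applies reinforcement learning more broadly.

STYLES = ["direct", "polite", "humorous"]

class StyleBandit:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {s: 0 for s in STYLES}
        self.values = {s: 0.0 for s in STYLES}

    def choose(self):
        # Explore with probability epsilon, otherwise exploit the
        # style with the highest current value estimate.
        if random.random() < self.epsilon:
            return random.choice(STYLES)
        return max(STYLES, key=lambda s: self.values[s])

    def update(self, style, reward):
        # Incremental mean update of the chosen style's value estimate.
        self.counts[style] += 1
        n = self.counts[style]
        self.values[style] += (reward - self.values[style]) / n

# Simulated user who consistently prefers the "polite" style and
# gives an explicit thumbs-up (reward 1.0) only for it.
random.seed(0)
bandit = StyleBandit()
for _ in range(500):
    style = bandit.choose()
    reward = 1.0 if style == "polite" else 0.0
    bandit.update(style, reward)
```

After a few hundred interactions the value estimate for the preferred style dominates, so the robot exploits it most of the time while still occasionally exploring the alternatives.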
MediaSync: Handbook on Multimedia Synchronization
This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is getting renewed attention to overcome remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, the availability of a reference book on mediasync becomes necessary. This book fills the gap in this context. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space, from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also approach the challenges behind ensuring the best mediated experiences, by providing adequate synchronization between the media elements that constitute these experiences.
Example Based Caricature Synthesis
The likeness of a caricature to the original face image is an essential and often overlooked aspect of caricature production. In this paper we present an example-based caricature synthesis technique consisting of shape exaggeration, relationship exaggeration, and optimization for likeness. Rather than relying on a large training set of caricature-face pairs, our shape exaggeration step is based on only one or a small number of examples of facial features. The relationship exaggeration step introduces two definitions that facilitate global facial feature synthesis. The first is the T-Shape rule, which describes the relative relationship between the facial elements in an intuitive manner. The second is the so-called proportions, which characterize the facial features in proportion form. Finally, we introduce a similarity metric based on the Modified Hausdorff Distance (MHD) as the likeness metric, which allows us to optimize the configuration of facial elements, maximizing likeness while satisfying a number of constraints. The effectiveness of our algorithm is demonstrated with experimental results.
Framework for the Integration of Mobile Device Features in PLM
Currently, companies cover their business processes with stationary workstations, while mobile business applications have limited relevance. Companies can cover their overall business processes more time-efficiently and cost-effectively when they integrate mobile users into workflows using mobile device features. The objective is a framework that can be used to model and control business applications for PLM processes using mobile device features, allowing a totally new user experience.
Human-Computer Interaction
In this book the reader will find a collection of 31 papers presenting different facets of Human-Computer Interaction, the result of research projects and experiments as well as new approaches to designing user interfaces. The book is organized according to the following main topics, in sequential order: new interaction paradigms, multimodality, usability studies on several interaction mechanisms, human factors, universal design, and development methodologies and tools.
- …