    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans on the basis of human models. These interfaces should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, from enabling computers to understand human behavior.

    A dynamic texture based approach to recognition of facial actions and their temporal models

    In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the dynamics and the appearance in the face region of an input video are compared: an extended version of Motion History Images and a novel method based on Nonrigid Registration using Free-Form Deformations (FFDs). The extracted motion representation is used to derive motion orientation histogram descriptors in both the spatial and temporal domain. Per AU, a combination of discriminative, frame-based GentleBoost ensemble learners and dynamic, generative Hidden Markov Models detects the presence of the AU in question and its temporal segments in an input image sequence. When tested for recognition of all 27 lower and upper face AUs, occurring alone or in combination in 264 sequences from the MMI facial expression database, the proposed method achieved an average event recognition accuracy of 89.2 percent for the MHI method and 94.3 percent for the FFD method. The generalization performance of the FFD method has been tested using the Cohn-Kanade database. Finally, we also explored the performance on spontaneous expressions in the Sensitive Artificial Listener data set.
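The Motion History Image representation compared above can be sketched in a few lines: each pixel stores a decaying timestamp of the last time it moved, so a single image encodes both where and how recently motion occurred. A minimal NumPy sketch, where the decay constant, difference threshold, and toy frames are illustrative assumptions rather than the paper's parameters:

```python
import numpy as np

def motion_history_image(frames, tau=10.0, diff_threshold=25):
    """Compute a Motion History Image (MHI) from grayscale frames.

    Pixels that moved in the most recent frame pair are set to tau;
    all other pixels decay by 1 per frame toward zero.
    """
    mhi = np.zeros(frames[0].shape, dtype=np.float64)
    for prev, curr in zip(frames, frames[1:]):
        motion = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > diff_threshold
        mhi = np.where(motion, tau, np.maximum(mhi - 1.0, 0.0))
    return mhi

# Toy example: a bright square moving right across a dark background.
frames = []
for step in range(5):
    f = np.zeros((32, 32), dtype=np.uint8)
    f[10:20, 2 + 4 * step: 12 + 4 * step] = 255
    frames.append(f)

mhi = motion_history_image(frames)
# The most recent motion holds the highest values; older motion has decayed.
```

Descriptors such as the motion orientation histograms mentioned in the abstract would then be computed from the gradients of an image like this one.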

    Designing interactive ambient multimedia applications: requirements and implementation challenges

    Ambient intelligence opens new possibilities for interactive multimedia, leading towards applications where the selection, generation and playback of multimedia content can be directed and influenced by multiple users in an ambient sensor network. In this paper, we derive the basic requirements for a flexible infrastructure that can support the integration of multimedia and ambient intelligence, and enable rapid tailoring of interactive multimedia applications. We describe our implementation of the proposed infrastructure, and demonstrate its functionality through several prototype applications.

    Pillows as adaptive interfaces in ambient environments

    We have developed a set of small interactive throw pillows containing intelligent touch-sensing surfaces, in order to explore new ways to model the environment, participants, artefacts, and their interactions, in the context of expressive non-verbal interaction. We present the overall architecture of the environment, describing a model of the user, the interface (the interactive pillows and the devices they can interact with) and the context engine. We describe the representation and process modules of the context engine and demonstrate how they support real-time adaptation. We present an evaluation of the current prototype and conclude with plans for future work.
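The context-engine architecture described above can be illustrated with a hypothetical rule-based sketch: sensor events from the touch-sensing surfaces update a shared context, and registered adaptation rules fire when their conditions hold. All names and thresholds here are illustrative assumptions, not the actual system's API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ContextEngine:
    """Hypothetical context engine: merges sensor events into a shared
    context dict and evaluates adaptation rules against it."""
    context: dict = field(default_factory=dict)
    rules: list = field(default_factory=list)

    def add_rule(self, condition: Callable[[dict], bool],
                 action: Callable[[dict], str]) -> None:
        self.rules.append((condition, action))

    def sense(self, event: dict) -> list:
        """Merge a sensor event into the context and run matching rules."""
        self.context.update(event)
        return [action(self.context)
                for condition, action in self.rules
                if condition(self.context)]

engine = ContextEngine()
# Illustrative rule: a firm squeeze of a pillow dims the room lights.
engine.add_rule(lambda ctx: ctx.get("squeeze_pressure", 0) > 0.8,
                lambda ctx: "dim_lights")
actions = engine.sense({"pillow_id": "p1", "squeeze_pressure": 0.9})
# actions == ["dim_lights"]
```

The separation between the context representation (the dict) and the process module (the rules) mirrors, in miniature, the representation/process split described in the abstract.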

    Timing is everything: A spatio-temporal approach to the analysis of facial actions

    This thesis presents a fully automatic facial expression analysis system based on the Facial Action Coding System (FACS). FACS is the best known and the most commonly used system to describe facial activity in terms of facial muscle actions (i.e., action units, AUs). We will present our research on the analysis of the morphological, spatio-temporal and behavioural aspects of facial expressions. In contrast with most other researchers in the field, who use appearance-based techniques, we use a geometric feature-based approach. We will argue that this approach is more suitable for analysing the temporal dynamics of facial expressions. Our system is capable of explicitly exploring the temporal aspects of facial expressions from an input colour video in terms of their onset (start), apex (peak) and offset (end). The fully automatic system presented here detects 20 facial points in the first frame and tracks them throughout the video. From the tracked points we compute geometry-based features which serve as the input to the remainder of our system. The AU activation detection system uses GentleBoost feature selection and a Support Vector Machine (SVM) classifier to find which AUs were present in an expression. Temporal dynamics of active AUs are recognised by a hybrid GentleBoost-SVM-Hidden Markov model classifier. The system is capable of analysing 23 out of 27 existing AUs with high accuracy. The main contributions of the work presented in this thesis are the following: we have created a method for fully automatic AU analysis with state-of-the-art recognition results. We have proposed, for the first time, a method for recognition of the four temporal phases of an AU. We have built the largest comprehensive database of facial expressions to date. We also present, for the first time in the literature, two studies on the automatic distinction between posed and spontaneous expressions.
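The geometry-based features described above can be illustrated with a minimal sketch: from a set of tracked facial points, pairwise Euclidean distances form a per-frame feature vector that a classifier such as an SVM could consume. The point count matches the 20 tracked points, but the random coordinates and the distance-only feature set are illustrative assumptions, not the thesis' exact features:

```python
import numpy as np
from itertools import combinations

def geometric_features(points):
    """Pairwise Euclidean distances between tracked facial points.

    `points` is an (N, 2) array of (x, y) coordinates for one frame;
    the N*(N-1)/2 distances form a simple geometry-based feature vector.
    """
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

# 20 tracked points per frame -> a 190-dimensional feature vector.
frame_points = np.random.default_rng(0).uniform(0, 100, size=(20, 2))
features = geometric_features(frame_points)
```

Stacking such vectors over time would give the spatio-temporal input that the GentleBoost feature selection and SVM/HMM stages described in the abstract operate on.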

    Social computing and contextual information techniques for the development of collaborative learning activities

    Educational innovation is a field whose processes have been greatly enriched by the use of Information and Communication Technologies (ICT). Thanks to technological advances, learning models in which information comes from many different sources are now common. Likewise, student-student, student-device and device-device collaboration adds value to the learning process by fostering aspects such as communication, the pursuit of common goals and the sharing of resources. Within educational innovation, a major challenge is the development of tools that facilitate the creation of innovative collaborative learning processes which, through the use of contextual information, improve the achievement of learning objectives compared with individualized processes and keep students engaged. Moreover, developing solutions that ease the work of teachers, developers and technicians while making educational processes more attractive to students is an ambitious challenge in which the perspectives of Ambient Intelligence and Social Computing play a key role. The doctoral dissertation presented here describes and evaluates CAFCLA, a framework specially conceived for the design, development and implementation of collaborative learning activities that make use of contextual information, based on the paradigms of Ambient Intelligence and Social Computing. CAFCLA is a flexible framework that covers the entire process of developing collaborative learning activities and hides from its users the difficulties involved in using and integrating multiple technologies. To evaluate the validity of the proposal, CAFCLA has supported the implementation of three distinct use cases. These experimental use cases have shown that, among other benefits, the use of Social Computing personalizes the learning process, encourages collaboration, improves relationships, increases commitment, promotes behaviour change in users and sustains their involvement over time. In addition, to demonstrate the flexibility of the framework, the use cases were developed in different scenarios (a museum, a public building and the home), proposed different types of learning (serious games, a recommendation system and a WebQuest) and pursued different learning objectives (academic, social and energy efficiency).
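The kind of API a framework like CAFCLA could expose can be sketched hypothetically: a teacher composes a collaborative activity from context-triggered steps without touching the underlying sensing technology. Every name below is an illustrative assumption, not CAFCLA's actual interface:

```python
class Activity:
    """Hypothetical context-aware learning activity: tasks unlock when a
    group's sensed location matches a step's trigger."""

    def __init__(self, name):
        self.name = name
        self.steps = []  # (location, task) pairs, triggered by context

    def add_step(self, location, task):
        self.steps.append((location, task))
        return self  # allow fluent chaining when composing an activity

    def on_location(self, current_location):
        """Return the tasks unlocked when a group reaches a location."""
        return [task for loc, task in self.steps if loc == current_location]

# Illustrative museum WebQuest, in the spirit of the use cases described.
tour = (Activity("museum-webquest")
        .add_step("room-1", "Photograph the oldest exhibit as a team")
        .add_step("room-2", "Answer the quiz about Roman pottery"))
tasks = tour.on_location("room-1")
# tasks == ["Photograph the oldest exhibit as a team"]
```

The point of such an abstraction is the one the abstract makes: the activity author works with locations and tasks, while the framework handles the positioning and communication technologies underneath.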

    Ambient intelligence drives open innovation
