
    Learning an Intrinsic Garment Space for Interactive Authoring of Garment Animation

    Authoring dynamic garment shapes for character animation driven by body motion is one of the fundamental steps in the CG industry. Established workflows are either time- and labor-consuming (i.e., manual editing on dense frames with controllers) or lack keyframe-level control (i.e., physically based simulation). Not surprisingly, garment authoring remains a bottleneck in many production pipelines. We present a deep-learning-based approach for semi-automatic authoring of garment animation, wherein the user provides the desired garment shape in a selection of keyframes, while our system infers a latent representation of its motion-independent intrinsic parameters (e.g., gravity, cloth material). Given new character motions, the latent representation allows a plausible garment animation to be generated automatically at interactive rates. Having factored out character motion, the learned intrinsic garment space enables smooth transitions between keyframes on a new motion sequence. Technically, we learn an intrinsic garment space with a motion-driven autoencoder network, in which the encoder maps garment shapes to the intrinsic space conditioned on body motion, while the decoder acts as a differentiable simulator that generates garment shapes according to changes in character body motion and intrinsic parameters. We evaluate our approach qualitatively and quantitatively on common garment types. Experiments demonstrate that our system can significantly improve current garment authoring workflows via an interactive user interface. Compared with the standard CG pipeline, our system significantly reduces the ratio of required keyframes from 20% to 1--2%.
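
    As a rough illustration of the architecture this abstract describes, the following PyTorch sketch pairs a motion-conditioned encoder with a decoder that regenerates garment shapes from the intrinsic code and a new body motion. All dimensions, layer sizes, and names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a motion-conditioned garment autoencoder (PyTorch).
# Shapes, layer sizes, and names are illustrative assumptions.
import torch
import torch.nn as nn

class MotionConditionedAE(nn.Module):
    def __init__(self, garment_dim=3000, motion_dim=128, latent_dim=16):
        super().__init__()
        # Encoder: garment shape + body motion -> intrinsic latent code
        self.encoder = nn.Sequential(
            nn.Linear(garment_dim + motion_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # Decoder: intrinsic code + body motion -> garment shape
        # (a differentiable stand-in for a cloth simulator)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + motion_dim, 512), nn.ReLU(),
            nn.Linear(512, garment_dim),
        )

    def forward(self, garment, motion):
        z = self.encoder(torch.cat([garment, motion], dim=-1))
        recon = self.decoder(torch.cat([z, motion], dim=-1))
        return recon, z

# Authoring loop: infer z from a keyframe, then reuse it on new motions.
model = MotionConditionedAE()
keyframe_shape = torch.randn(1, 3000)   # flattened garment vertices (toy data)
keyframe_motion = torch.randn(1, 128)   # body-motion descriptor (toy data)
_, z = model(keyframe_shape, keyframe_motion)
new_motion = torch.randn(1, 128)
predicted_shape = model.decoder(torch.cat([z, new_motion], dim=-1))
```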

    Designing wheelchair-based movement games

    People who use wheelchairs have access to fewer sports and other physically stimulating leisure activities than nondisabled persons, and often lead sedentary lifestyles that negatively affect their health. While motion-based video games have demonstrated great potential for encouraging physical activity among nondisabled players, the accessibility of motion-based games is limited for persons with mobility disabilities, which also limits access to the potential health benefits of playing these games. In our work, we address this issue through the design of wheelchair-accessible motion-based game controls. We present KINECTWheels, a toolkit designed to integrate wheelchair movements into motion-based games. Building on the toolkit, we developed Cupcake Heaven, a wheelchair-based video game designed for older adults who use wheelchairs, and we created Wheelchair Revolution, a motion-based dance game that is accessible to both persons using wheelchairs and nondisabled players. Evaluation results show that KINECTWheels can be applied to make motion-based games wheelchair-accessible, and that wheelchair-based games engage broad audiences in physically stimulating play. By applying the wheelchair as an enabling technology in games, our work has the potential to encourage players of all ages to develop a positive relationship with their wheelchair.
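
    The abstract does not detail how KINECTWheels maps wheelchair movement to game input, but a minimal, hypothetical mapping might look like the sketch below, where tracked wheelchair poses are reduced to coarse game commands; the pose format, thresholds, and command names are all assumptions for illustration.

```python
# Hedged sketch: turning tracked wheelchair motion into game input.
# The tracking source, thresholds, and command names are assumptions,
# not the KINECTWheels API.
import math

def wheelchair_command(prev_pose, curr_pose, move_eps=0.05, turn_eps=0.1):
    """Map the change between two tracked poses (x, y, heading in radians)
    to a coarse game command."""
    dx = curr_pose[0] - prev_pose[0]
    dy = curr_pose[1] - prev_pose[1]
    dtheta = curr_pose[2] - prev_pose[2]
    if abs(dtheta) > turn_eps:
        return "TURN_LEFT" if dtheta > 0 else "TURN_RIGHT"
    if math.hypot(dx, dy) > move_eps:
        return "FORWARD"
    return "IDLE"

print(wheelchair_command((0.0, 0.0, 0.0), (0.1, 0.0, 0.0)))  # FORWARD
```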

    A Survey on Video-based Graphics and Video Visualization


    Physiological and behavior monitoring systems for smart healthcare environments: a review

    Healthcare optimization has become increasingly important in the current era, in which population ageing and the demand for higher-quality healthcare services pose numerous challenges. The implementation of the Internet of Things (IoT) in the healthcare ecosystem has been one of the best solutions to address these challenges and thereby prevent and diagnose possible health impairments. The remote monitoring of environmental parameters, and of how they can cause or mediate disease, together with the monitoring of human daily activities and physiological parameters, is among the vast applications of IoT in healthcare that have attracted extensive attention from academia and industry. Assisted and smart tailored environments are possible with the implementation of such technologies, bringing personal healthcare to individuals while they live in their preferred environments. In this paper we address several requirements for the development of such environments, namely the deployment of physiological-sign monitoring systems, daily activity recognition techniques, and indoor air quality monitoring solutions. The machine learning methods most used in the literature for activity recognition and body motion analysis are also reviewed. Furthermore, the importance of physical and cognitive training of the elderly population through the implementation of exergames and immersive environments is also addressed.
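
    As a hedged illustration of the kind of machine learning methods the review surveys for activity recognition, the sketch below trains an off-the-shelf classifier on toy accelerometer-style features; the synthetic data and feature layout are assumptions for demonstration only, not from the reviewed systems.

```python
# Toy activity-recognition pipeline with scikit-learn.
# Features stand in for per-window accelerometer statistics
# (e.g., mean/std/energy per axis); the data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
walking = rng.normal(loc=1.0, scale=0.3, size=(100, 9))  # 9 window features
resting = rng.normal(loc=0.1, scale=0.1, size=(100, 9))
X = np.vstack([walking, resting])
y = np.array([1] * 100 + [0] * 100)  # 1 = walking, 0 = resting

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```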

    Design of educational games with user participation: a systematic review

    Innovative methodologies and digital pedagogical tools have been increasingly demanded by educators. Among them, the use of digital games stands out as a strategy for engagement, immersion, and the transformation of the classroom into a learning experience connected to the students' universe. Involving users in game design aims to correct errors and optimize features before testing and mass-market release. This paper presents a systematic review of the recent scientific literature on educational games developed or validated with user participation, identifying the main techniques used and the stages at which users are brought into the process. The games found were developed or evaluated using eleven different methods, applied in the design, development, or validation phases, at different educational levels and with participants of varied profiles. The results indicate that most educational games are evaluated through questionnaires answered by undergraduate students in the product validation stage. Data analysis also identified a set of 35 design aspects highlighted by researchers, co-designers, and end users, of which the most recurrent are described.

    Learning-based 3D human motion capture and animation synthesis

    Realistic virtual human avatars are a crucial element in a wide range of applications, from 3D animated movies to emerging AR/VR technologies. However, producing believable 3D motion for such avatars is widely known to be a challenging task. A traditional 3D human motion generation pipeline consists of several stages, each requiring expensive equipment and skilled human labor, limiting its usage beyond the entertainment industry despite its massive potential benefits. This thesis explores alternative solutions to reduce the complexity of the traditional 3D animation pipeline. To this end, it presents several novel ways to perform 3D human motion capture, synthesis, and control, focusing on learning-based methods to bypass the critical bottlenecks of the classical animation approach. First, a new 3D pose estimation method from in-the-wild monocular images is proposed, eliminating the need for the multi-camera setup of traditional motion capture systems. Second, the thesis explores several data-driven designs for believable 3D human motion synthesis and control that can potentially reduce the need for manual animation. In particular, the problem of speech-driven 3D gesture synthesis is chosen as the case study due to its uniquely ambiguous nature. Improved motion generation quality is achieved by introducing a novel adversarial objective that rates the difference between real and synthetic data. A novel motion generation strategy is also introduced by combining a classical database search algorithm with a powerful deep learning method, resulting in greater motion control variation than purely predictive counterparts. Furthermore, this thesis contributes a new way of collecting a large-scale 3D motion dataset through learning-based monocular estimation methods. This result demonstrates the promising capability of learning-based monocular approaches and shows the prospect of combining these learning-based modules into an integrated 3D animation framework. The presented learning-based solutions open the possibility of democratizing the traditional 3D animation system, enabling it with low-cost equipment, e.g., a single RGB camera. Finally, this thesis also discusses the potential further integration of these learning-based approaches to enhance 3D animation technology.
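
    As a rough sketch of the adversarial objective described above, the following PyTorch fragment trains a discriminator to rate real versus synthesized motion while the generator learns to fool it; the dimensions, architectures, and feature choices are illustrative assumptions, not the thesis implementation.

```python
# Hedged sketch of an adversarial objective for speech-driven motion
# synthesis (PyTorch). All sizes and modules are assumptions.
import torch
import torch.nn as nn

motion_dim, speech_dim = 96, 32  # per-frame pose and audio-feature sizes (assumed)

generator = nn.GRU(speech_dim, motion_dim, batch_first=True)
discriminator = nn.Sequential(
    nn.Linear(motion_dim, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1)
)
bce = nn.BCEWithLogitsLoss()

speech = torch.randn(8, 120, speech_dim)      # batch of speech feature sequences
real_motion = torch.randn(8, 120, motion_dim) # matching captured motion (toy data)

fake_motion, _ = generator(speech)

# Discriminator: score real frames as real, synthetic frames as fake.
d_loss = bce(discriminator(real_motion), torch.ones(8, 120, 1)) + \
         bce(discriminator(fake_motion.detach()), torch.zeros(8, 120, 1))

# Generator: produce motion the discriminator rates as real.
g_loss = bce(discriminator(fake_motion), torch.ones(8, 120, 1))
```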

    Aesthetic potential of human-computer interaction in performing arts

    Human-computer interaction (HCI) is a multidisciplinary area that studies the communication between users and computers. In this thesis, we examine whether and how HCI, when incorporated into staged performances, can generate new possibilities for artistic expression on stage. We define and study four areas of technology-enhanced performance that have been strongly influenced by HCI techniques: multimedia expression, body representation, body augmentation, and interactive environments. We trace relevant artistic practices that contributed to the exploration of these topics and then present new forms of creative expression that emerged after the incorporation of HCI techniques. We present and discuss novel practices such as the performer and the media acting as one responsive entity, real-time control of virtual characters, on-body projections, body augmentation through human-machine systems, and interactive stage design. The thesis concludes with concrete examples of these novel practices implemented in performance pieces. We present and discuss technology-augmented dance pieces developed during this master's degree, as well as a software tool for aesthetic visualisation of movement data, and discuss its application in video creation, staged performances, and interactive installations.
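
    As a loose illustration of the kind of movement-data visualisation tool mentioned above, the following matplotlib sketch renders a fading motion trail from a toy joint trajectory; the data and styling are assumptions for demonstration, not the thesis software.

```python
# Hedged sketch: fading motion trail from a tracked joint trajectory.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 4 * np.pi, 300)
# Toy 2D trajectory of a tracked joint (e.g., a dancer's wrist).
x, y = np.sin(t) * t / 10, np.cos(1.3 * t) * t / 12

fig, ax = plt.subplots(figsize=(5, 5), facecolor="black")
ax.set_facecolor("black")
ax.axis("off")
# Fade older segments so the trail suggests the movement over time.
for i in range(1, len(t)):
    ax.plot(x[i - 1:i + 1], y[i - 1:i + 1],
            color=(1.0, 1.0, 1.0, i / len(t)), linewidth=2)
plt.show()
```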

    Towards a framework for socially interactive robots

    In recent decades, research in the field of social robotics has grown considerably. The development of different types of robots and their roles within society are gradually expanding. Robots endowed with social skills are intended for diverse applications: for example, as interactive teachers and educational assistants, to support diabetes management in children, to help older people with special needs, as interactive actors in the theatre, or even as assistants in hotels and shopping centres. The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning, and computer vision. The work presented in this thesis aims to add a new layer to these previous developments: the human-robot interaction layer, which focuses on the social capabilities a robot must display when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), showing distinctive personality and character, and learning social competencies.

    In this doctoral thesis, we try to contribute our grain of sand to the basic questions that arise when we think about social robots: (1) How do humans communicate with (or operate) social robots? and (2) How should social robots act with us? Along these lines, the work has been developed in two phases: in the first, we focused on exploring, from a practical point of view, several ways humans use to communicate with robots in a natural manner; in the second, we investigated how social robots should act with the user.

    Regarding the first phase, we developed three natural user interfaces intended to make interaction with social robots more natural. To test these interfaces, two applications with different uses were developed: guide robots and a humanoid-robot control system for entertainment purposes. Working on these applications allowed us to endow our robots with some basic skills, such as navigation, robot-to-robot communication, and speech recognition and understanding.

    In the second phase, we focused on identifying and developing the basic behaviour modules that this kind of robot needs in order to be socially believable and trustworthy while acting as a social agent. A framework for socially interactive robots has been developed that allows robots to express different types of emotions and to display natural, human-like body language according to the task at hand and the environmental conditions.

    The different development stages of our social robots have been validated through public performances. Exposing our robots to the public in these performances has become an essential tool for qualitatively measuring the social acceptance of the prototypes we are developing. In the same way that robots need a physical body to interact with the environment and become intelligent, social robots need to participate socially in the real tasks for which they have been developed in order to improve their sociability.
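
    As a purely hypothetical illustration of how such a framework might select an emotional body-language profile from task state and environmental conditions, consider the sketch below; the states, inputs, and thresholds are invented for demonstration and are not the thesis architecture.

```python
# Hedged sketch: choosing a body-language profile for a social robot
# from task outcome and surroundings. All rules are illustrative.
def select_expression(task_success: bool, noise_level: float, crowd: int) -> str:
    """Return a body-language profile name for the robot's next action."""
    if not task_success:
        return "apologetic"        # slumped posture, slow gestures
    if crowd > 10 or noise_level > 0.7:
        return "excited"           # wide gestures, faster motion
    return "calm"                  # neutral posture, relaxed gestures

print(select_expression(task_success=True, noise_level=0.8, crowd=3))  # excited
```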