35 research outputs found

    Virtual human modelling and animation for real-time sign language visualisation

    Magister Scientiae - MSc
    This thesis investigates the modelling and animation of virtual humans for real-time sign language visualisation. Sign languages are fully developed natural languages used by Deaf communities all over the world. These languages are communicated in a visual-gestural modality by the use of manual and non-manual gestures and are completely different from spoken languages. Manual gestures include the use of hand shapes, hand movements, hand locations and orientations of the palm in space. Non-manual gestures include the use of facial expressions, eye-gazes, head and upper body movements. Both manual and non-manual gestures must be performed for sign languages to be correctly understood and interpreted. To effectively visualise sign languages, a virtual human system must have models of adequate quality and be able to perform both manual and non-manual gesture animations in real-time. Our goal was to develop a methodology and establish an open framework by using various standards and open technologies to model and animate virtual humans of adequate quality to effectively visualise sign languages. This open framework is to be used in a Machine Translation system that translates from a verbal language such as English to any sign language. Standards and technologies we employed include H-Anim, MakeHuman, Blender, Python and SignWriting. We found it necessary to adapt and extend H-Anim to effectively visualise sign languages. The adaptations and extensions we made to H-Anim include imposing joint rotational limits, developing flexible hands and the addition of facial bones based on the MPEG-4 Facial Definition Parameters facial feature points for facial animation.
By using these standards and technologies, we found that we could circumvent a few difficult problems, such as: modelling high quality virtual humans; adapting and extending H-Anim; creating a sign language animation action vocabulary; blending between animations in an action vocabulary; sharing animation action data between our virtual humans; and effectively visualising South African Sign Language.
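The joint-rotational-limit extension to H-Anim described above can be illustrated with a minimal standalone Python sketch. The joint names follow H-Anim conventions, but the limit values and data layout are illustrative assumptions, not taken from the thesis:

```python
# Sketch of imposing joint rotational limits on an H-Anim-style skeleton.
# Limit values here are illustrative assumptions, not anatomical reference data.

def clamp(value, lower, upper):
    """Clamp a single Euler angle (degrees) into its allowed range."""
    return max(lower, min(upper, value))

# Per-joint rotational limits in degrees: (min, max) for each Euler axis.
JOINT_LIMITS = {
    "r_elbow": {"x": (0.0, 145.0), "y": (0.0, 0.0), "z": (0.0, 0.0)},
    "r_wrist": {"x": (-70.0, 80.0), "y": (-20.0, 30.0), "z": (-90.0, 90.0)},
}

def apply_joint_limits(joint, rotation):
    """Return the rotation dict with each axis clamped to the joint's limits."""
    limits = JOINT_LIMITS[joint]
    return {axis: clamp(angle, *limits[axis]) for axis, angle in rotation.items()}

# A hyper-extended elbow pose is pulled back into the allowed range.
pose = apply_joint_limits("r_elbow", {"x": 160.0, "y": -5.0, "z": 0.0})
print(pose)  # {'x': 145.0, 'y': 0.0, 'z': 0.0}
```

In a Blender-scripted pipeline such as the one the thesis describes, the same clamping would be applied to each bone's rotation before every animation frame is posed.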

    Evaluating Extensible 3D (X3D) Graphics For Use in Software Visualisation

    3D web software visualisation has always been expensive, special purpose, and hard to program. Most of the technologies used require large amounts of scripting, are not reliable on all platforms, are binary formats, or are no longer maintained. We can make end-user web software visualisation of object-oriented programs cheap, portable, and easy by using Extensible 3D (X3D) Graphics, a new open standard. In this thesis we outline our experience with X3D and discuss the suitability of X3D as an output format for software visualisation.
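To make the output-format idea concrete, here is a small Python sketch that emits an X3D scene in its XML encoding, mapping each class of an object-oriented program to a box whose height encodes its method count. The class-to-box mapping is an illustrative assumption, not the thesis's actual visualisation:

```python
# Sketch: generate X3D (XML encoding) for a toy software visualisation.
# Each class becomes a DEF-named Transform containing a Box scaled by method count.
import xml.etree.ElementTree as ET

def class_box(name, method_count, x_offset):
    """Build a Transform/Shape/Box subtree representing one class."""
    transform = ET.Element("Transform", DEF=name, translation=f"{x_offset} 0 0")
    shape = ET.SubElement(transform, "Shape")
    ET.SubElement(shape, "Box", size=f"1 {method_count} 1")
    return transform

def scene_for_classes(classes):
    """classes: list of (name, method_count) pairs -> X3D document string."""
    x3d = ET.Element("X3D", profile="Interchange", version="3.2")
    scene = ET.SubElement(x3d, "Scene")
    for i, (name, methods) in enumerate(classes):
        scene.append(class_box(name, methods, x_offset=2 * i))
    return ET.tostring(x3d, encoding="unicode")

doc = scene_for_classes([("Parser", 12), ("Lexer", 5)])
print(doc)
```

Because X3D is plain XML, output like this can be produced by any language's standard XML tooling and rendered by any conforming browser plugin or viewer, which is the portability argument the abstract makes.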

    Serious games: design and development

    With the growth of the video game industry, interest in video game research has increased, leading to the study of Serious Games. Serious Games are generally perceived as games that use video games’ capabilities to immerse players, for purposes other than entertainment. These purposes include education and training, among others. By using Serious Games for education, teachers could capture students’ attention in the same way that video games often do, making the learning process more efficient. Additionally, by exploiting the potential of these virtual worlds, it is possible to experience situations that would otherwise be very difficult to experience in the real world, mainly for reasons of cost, safety and time. Serious Games research and development is still very scarce. However, there is now a large number of platforms and tools available that can be used to develop Serious Games and video games in general. For instance, web browsers can now provide easy access to realistic 3D virtual worlds. This grants video game developers the tools to create compelling, rich environments that can be accessed by anyone with an internet connection. Additionally, other development platforms can be used to achieve different goals: desktop technologies provide greater processing power and achieve better results in terms of visual quality, as well as in creating more accurate simulations. This dissertation describes the design and development of two Serious Games, one for PC, developed with XNA, and another for the web, developed with WebGL.

    The Role of Emotional and Facial Expression in Synthesised Sign Language Avatars

    This thesis explores the role that underlying emotional facial expressions might have with regard to understandability in sign language avatars. Focusing specifically on Irish Sign Language (ISL), we examine the Deaf community’s requirement for a visual-gestural language as well as some linguistic attributes of ISL which we consider fundamental to this research. Unlike spoken languages, visual-gestural languages such as ISL have no standard written representation. Given this, we compare current methods of written representation for signed languages as we consider which, if any, is the most suitable transcription method for the medical receptionist dialogue corpus. A growing body of work is emerging from the field of sign language avatar synthesis. These works are now at a point where they can benefit greatly from introducing methods currently used in the field of humanoid animation and, more specifically, the application of morphs to represent facial expression. The hypothesis underpinning this research is that augmenting an existing avatar (eSIGN) with various combinations of the 7 widely accepted universal emotions identified by Ekman (1999), delivered as underlying facial expressions, will make that avatar more human-like. This research takes as given that this is a factor in improving usability and understandability for ISL users. Using human evaluation methods (Huenerfauth et al., 2008), the research compares an augmented set of avatar utterances against a baseline set in 2 key areas: comprehension and naturalness of facial configuration. We outline our approach to the evaluation, including our choice of ISL participants, interview environment, and evaluation methodology. Remarkably, the results of this manual evaluation show that there was very little difference between the comprehension scores of the baseline avatars and those augmented with underlying emotional facial expressions (EFEs).
However, after comparing the comprehension results for the synthetic human avatar “Anna” against the caricature-type avatar “Luna”, the synthetic human avatar Anna was the clear winner. The qualitative feedback gave us insight into why comprehension scores were not higher for each avatar, and we feel that this feedback will be invaluable to the research community in the future development of sign language avatars. Other questions asked in the evaluation focused on sign language avatar technology in a more general manner. Significantly, participant feedback on these questions indicates a rise in the level of literacy amongst Deaf adults as a result of mobile technology.
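The "application of morphs" the abstract refers to is the standard blend-shape technique from humanoid animation: a neutral face mesh is displaced toward one or more sculpted emotion targets by per-emotion weights. A minimal standalone sketch, where the vertex data and the emotion name are toy assumptions:

```python
# Sketch of morph-target (blend shape) facial animation: linearly blend
# per-target vertex offsets onto a neutral mesh. Toy data, two vertices only.

def blend_morphs(neutral, morphs, weights):
    """Blend morph targets onto the neutral vertices.

    neutral: list of (x, y, z) vertex positions
    morphs:  {name: list of (x, y, z) target positions, same vertex order}
    weights: {name: blend weight in [0, 1]}
    """
    result = []
    for i, v in enumerate(neutral):
        out = list(v)
        for name, target in morphs.items():
            w = weights.get(name, 0.0)
            for axis in range(3):
                out[axis] += w * (target[i][axis] - v[axis])
        result.append(tuple(out))
    return result

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
morphs = {"happiness": [(0.0, 0.2, 0.0), (1.0, 0.4, 0.0)]}
posed = blend_morphs(neutral, morphs, {"happiness": 0.5})
print(posed)  # [(0.0, 0.1, 0.0), (1.0, 0.2, 0.0)]
```

Because the offsets add linearly, several emotion morphs can be active at once with different weights, which is what makes "various combinations" of Ekman's universal emotions straightforward to deliver as underlying expressions.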

    Real-Time Set Editing in a Virtual Production Environment with an Innovative Interface

    This bachelor thesis describes a prototypical implementation of a 3D user interface for intuitive real-time set editing in virtual production. The approach is evaluated qualitatively by a user group of industry representatives who tested the device and filled in a questionnaire. The share of virtual elements created with computer graphics technology has been growing steadily across all areas of the entertainment industry in recent years. Nevertheless, editing virtual elements can still be a costly process in terms of time and money. With the appearance of new input devices and improved tracking technologies, it is interesting to evaluate whether a real-time editing process could improve this situation. Currently bound to experts at special workstations, the workflow could instead become intuitive and real-time, enabling everybody on a film set to influence the digital editing process and work collaboratively on a scene consisting of virtual and real elements.

    High quality dynamic reflectance and surface reconstruction from video

    The creation of high quality animations of real-world human actors has long been a challenging problem in computer graphics. It involves the modeling of the shape of the virtual actors, creating their motion, and the reproduction of very fine dynamic details. In order to render the actor under arbitrary lighting, reflectance properties must be modeled for each point on the surface. These steps, usually performed manually by professional modelers, are time consuming and cumbersome. In this thesis, we show that algorithmic solutions to some of the problems that arise in the creation of high quality animations of real-world people are possible using multi-view video data. First, we present a novel spatio-temporal approach to create a personalized avatar from multi-view video data of a moving person. Thereafter, we propose two enhancements to a method that captures human shape, motion and reflectance properties of a moving human using eight multi-view video streams. Afterwards we extend this work: in order to add very fine dynamic details, such as wrinkles and folds in the clothing, to the geometric models, we make use of the multi-view video recordings and present a statistical method that can passively capture the fine-grain details of time-varying scene geometry. Finally, in order to reconstruct structured shape and animation of the subject from video, we present a dense 3D correspondence finding method that enables spatio-temporally coherent reconstruction of surface animations directly from multi-view video data. These algorithmic solutions can be combined to constitute a complete animation pipeline for acquisition, reconstruction and rendering of high quality virtual actors from multi-view video data. They can also be used individually in systems that require the solution of a specific algorithmic sub-problem.
The results demonstrate that, using multi-view video data, it is possible to find a model description that enables realistic appearance of animated virtual actors under different lighting conditions and exhibits high quality dynamic details in the geometry.
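The core of per-point reflectance capture can be shown in miniature: each surface point's reflectance parameters are fitted to the intensities observed for that point across the multi-view frames. The sketch below assumes a plain Lambertian model (a single albedo fitted by least squares to known shading factors); the thesis estimates richer dynamic reflectance, so treat this only as the shape of the problem:

```python
# Sketch: least-squares albedo estimate for one surface point under a
# Lambertian assumption, minimising sum over frames of (I_k - a * s_k)^2.
# Closed form: a = sum(I_k * s_k) / sum(s_k^2).

def estimate_albedo(intensities, shading):
    """Fit the albedo a for one surface point.

    intensities: observed pixel values I_k across frames/views
    shading:     corresponding n.l shading factors s_k (non-negative)
    """
    num = sum(i * s for i, s in zip(intensities, shading))
    den = sum(s * s for s in shading)
    return num / den

# Observations consistent with a true albedo of 0.8 under three shadings.
obs = [0.40, 0.64, 0.08]
sh = [0.5, 0.8, 0.1]
print(estimate_albedo(obs, sh))  # approximately 0.8
```

With noisy real footage the same normal-equation structure holds per point, only with many more observations, and extends to multi-lobe BRDF models by fitting several parameters instead of one.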

    Intermediated reality

    Real-time solutions for reducing the gap between the virtual and physical worlds for photorealistic interactive Augmented Reality (AR) are presented. First, a method of texture deformation with image inpainting provides a proof of concept for convincingly re-animating fixed physical objects through digital displays with a seamless visual appearance. This, in combination with novel methods for image-based retargeting of real shadows to deformed virtual poses and environment illumination estimation using inconspicuous flat Fresnel lenses, brings real-world props to life in compelling, practical ways. Live AR animation capability provides the key basis for deformation of real-world physical facial props driven by interactive facial performance capture. This enables Intermediated Reality (IR): a tele-present AR framework that drives mediated communication and collaboration for multiple users through the remote possession of toys brought to life. The IR framework provides the foundation for prototype applications in physical avatar chat communication, stop-motion animation movie production, and immersive video games. Specifically, a new approach is demonstrated that reduces the number of physical configurations needed for a stop-motion animation movie by generating the in-between frames digitally in AR. AR-generated frames preserve the props’ natural appearance and achieve smooth transitions between real-world keyframes and digitally generated in-betweens. Finally, the methods integrate across the entire Reality-Virtuality Continuum to target new game experiences called Multi-Reality games. This gaming experience makes an evolutionary step toward the convergence of real and virtual game characters for visceral digital experiences.
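The digital in-betweening idea above amounts to interpolating the prop's pose between two physically configured keyframes. A minimal standalone sketch, using per-axis linear interpolation of Euler rotations as a simplifying assumption (a production system would more likely slerp quaternions):

```python
# Sketch: generate digital in-between poses between two stop-motion keyframes.
# Poses are {joint: (rx, ry, rz)} Euler rotations in degrees; toy data only.

def lerp(a, b, t):
    """Linear interpolation between a and b at parameter t in [0, 1]."""
    return a + (b - a) * t

def inbetween_poses(key_a, key_b, count):
    """Return `count` evenly spaced poses strictly between two keyframe poses."""
    frames = []
    for k in range(1, count + 1):
        t = k / (count + 1)
        frames.append({
            joint: tuple(lerp(a, b, t) for a, b in zip(key_a[joint], key_b[joint]))
            for joint in key_a
        })
    return frames

a = {"head": (0.0, 0.0, 0.0)}
b = {"head": (0.0, 30.0, 0.0)}
mids = inbetween_poses(a, b, 2)
print(mids)
```

Every generated pose the AR layer renders is one physical reconfiguration of the prop the animator no longer has to make, which is exactly the labour saving the abstract claims.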