
    Morphing arquitectónico: transformaciones entre las casas usonianas de Frank Lloyd Wright (Architectural morphing: transformations between the Usonian houses of Frank Lloyd Wright)

    This thesis investigates the process of transforming architectural form, analyzing a specific technique called morphing. Morphing is a technique used in computer graphics to transform a shape between two or more given objects. From a technical point of view, the existing methodologies and applications are reviewed and updated, together with their specific characteristics and their impact on architecture. From a practical point of view, a series of models of Frank Lloyd Wright's Usonian houses is used to experiment with the technique and explore what can be obtained from its design logic. As a result of this analysis, a generic methodology for the procedure of an architectural morphing is obtained.
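    As an illustration of the core operation behind morphing, the sketch below linearly interpolates between two shapes whose vertices are in one-to-one correspondence. The function name and the placeholder geometry are assumptions for illustration only, not the thesis's implementation, which also has to establish the correspondence between the two architectural models in the first place.

```python
# A minimal sketch of geometric morphing between two shapes with a
# vertex-to-vertex correspondence (illustrative only).
import numpy as np

def morph(source_vertices, target_vertices, t):
    """Linearly interpolate between two corresponding vertex arrays.

    source_vertices, target_vertices: (N, 3) arrays with matching topology.
    t: interpolation parameter in [0, 1]; 0 yields the source, 1 the target.
    """
    source = np.asarray(source_vertices, dtype=float)
    target = np.asarray(target_vertices, dtype=float)
    if source.shape != target.shape:
        raise ValueError("morphing requires a vertex-to-vertex correspondence")
    return (1.0 - t) * source + t * target

# Example: a sequence of in-between forms between two placeholder "house" models.
house_a = np.random.rand(8, 3)  # placeholder geometry, not a real Usonian model
house_b = np.random.rand(8, 3)  # placeholder geometry
sequence = [morph(house_a, house_b, t) for t in np.linspace(0.0, 1.0, 5)]
```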

    Virtual human modelling and animation for real-time sign language visualisation

    Magister Scientiae - MSc. This thesis investigates the modelling and animation of virtual humans for real-time sign language visualisation. Sign languages are fully developed natural languages used by Deaf communities all over the world. These languages are communicated in a visual-gestural modality by the use of manual and non-manual gestures and are completely different from spoken languages. Manual gestures include the use of hand shapes, hand movements, hand locations and orientations of the palm in space. Non-manual gestures include the use of facial expressions, eye-gazes, head and upper body movements. Both manual and non-manual gestures must be performed for sign languages to be correctly understood and interpreted. To effectively visualise sign languages, a virtual human system must have models of adequate quality and be able to perform both manual and non-manual gesture animations in real-time. Our goal was to develop a methodology and establish an open framework by using various standards and open technologies to model and animate virtual humans of adequate quality to effectively visualise sign languages. This open framework is to be used in a Machine Translation system that translates from a verbal language such as English to any sign language. Standards and technologies we employed include H-Anim, MakeHuman, Blender, Python and SignWriting. We found it necessary to adapt and extend H-Anim to effectively visualise sign languages. The adaptations and extensions we made to H-Anim include imposing joint rotational limits, developing flexible hands and the addition of facial bones based on the MPEG-4 Facial Definition Parameters facial feature points for facial animation. By using these standards and technologies, we found that we could circumvent a few difficult problems, such as: modelling high quality virtual humans; adapting and extending H-Anim; creating a sign language animation action vocabulary; blending between animations in an action vocabulary; sharing animation action data between our virtual humans; and effectively visualising South African Sign Language.
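    One of the H-Anim adaptations mentioned above, imposing joint rotational limits, can be sketched as a clamping step applied before a rotation is written to a joint. The joint names and limit values below are illustrative assumptions, not figures from the thesis.

```python
# A minimal sketch of joint rotational limits for an articulated figure
# (illustrative values only; not taken from the thesis or the H-Anim spec).
JOINT_LIMITS_DEG = {
    # joint: (min_x, max_x, min_y, max_y, min_z, max_z) Euler limits in degrees
    "r_elbow": (0.0, 150.0, -80.0, 80.0, 0.0, 0.0),
    "r_wrist": (-70.0, 80.0, -20.0, 30.0, -90.0, 90.0),
}

def clamp_rotation(joint_name, rotation_deg):
    """Clamp an (x, y, z) Euler rotation to the joint's configured limits."""
    limits = JOINT_LIMITS_DEG.get(joint_name)
    if limits is None:
        return rotation_deg  # no limits configured for this joint
    clamped = []
    for axis, angle in enumerate(rotation_deg):
        lo, hi = limits[2 * axis], limits[2 * axis + 1]
        clamped.append(min(max(angle, lo), hi))
    return tuple(clamped)

# Example: an exaggerated elbow rotation is pulled back into the valid range.
print(clamp_rotation("r_elbow", (170.0, 0.0, -10.0)))  # -> (150.0, 0.0, 0.0)
```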

    Generating, animating, and rendering varied individuals for real-time crowds

    To simulate realistic crowds of virtual humans in real time, three main requirements must be satisfied. First, quantity, i.e., the ability to simulate thousands of characters. Second, quality, because each virtual human composing a crowd needs to look unique in its appearance and animation. Finally, efficiency, because an operation that is cheap for a single virtual human becomes extremely costly when applied to a large crowd. Developing an architecture able to manage all three aspects is a challenging problem that we have addressed in our research. Our first contribution is an efficient and versatile architecture called YaQ, able to simulate thousands of characters in real time. This platform, developed at EPFL-VRLab, results from several years of research and integrates state-of-the-art techniques at all levels: YaQ aims at providing efficient algorithms and real-time solutions for globally and massively populating large-scale empty environments. YaQ thus fits various application domains, such as video games and virtual reality. Our architecture is especially efficient in managing the large quantity of data used to simulate crowds. In order to simulate large crowds, many instances of a small set of human templates have to be generated. From this starting point, if no care is taken to vary each character individually, many clones appear in the crowd. We present several algorithms to make each individual in the crowd unique. Firstly, we introduce a new method to distinguish the body parts of a human and apply detailed color variety and patterns to each of them. Secondly, we present two techniques to modify the shape and profile of a virtual human: a simple and efficient method for attaching accessories to individuals, and efficient tools to scale the skeleton and mesh of an instance. Finally, we also contribute to varying individuals' animation by introducing variations in the upper body movements, allowing characters to make a phone call, keep a hand in a pocket, or carry heavy accessories. To achieve quantity in a crowd, levels of detail need to be used. We explore the most adequate solutions for simulating large crowds with levels of detail, while avoiding disturbing switches between two different representations of a virtual human. To do so, we develop solutions to make most variety techniques scalable to all levels of detail.
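    The sketch below illustrates, under stated assumptions, two of the ideas described above: instancing a small set of human templates with per-body-part color variation, and selecting a representation by distance for levels of detail. The template names, palettes, and distance thresholds are invented for illustration and are not taken from YaQ.

```python
# A minimal sketch of crowd variety through instancing plus per-body-part
# colors, and distance-based level-of-detail selection (illustrative only).
import random

TEMPLATES = ["template_male_01", "template_female_01"]  # hypothetical assets
BODY_PART_PALETTES = {
    "torso": [(0.8, 0.2, 0.2), (0.2, 0.4, 0.8), (0.3, 0.6, 0.3)],
    "legs":  [(0.1, 0.1, 0.1), (0.3, 0.25, 0.2), (0.5, 0.5, 0.55)],
    "hair":  [(0.05, 0.05, 0.05), (0.4, 0.3, 0.1), (0.7, 0.6, 0.4)],
}

def spawn_individual(rng):
    """Create one crowd individual: a template plus per-body-part colors."""
    return {
        "template": rng.choice(TEMPLATES),
        "colors": {part: rng.choice(palette)
                   for part, palette in BODY_PART_PALETTES.items()},
    }

def level_of_detail(distance_to_camera):
    """Pick a representation by distance; coarser ones for far-away agents."""
    if distance_to_camera < 10.0:
        return "deformable_mesh"
    if distance_to_camera < 50.0:
        return "rigid_mesh"
    return "impostor"

rng = random.Random(42)
crowd = [spawn_individual(rng) for _ in range(1000)]      # varied individuals
lods = [level_of_detail(d) for d in (5.0, 30.0, 120.0)]   # per-distance choices
```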

    The use of mobile phones as service-delivery devices in a sign language machine translation system

    Masters of Science. This thesis investigates the use of mobile phones as service-delivery devices in a sign language machine translation system. Four sign language visualization methods were evaluated on mobile phones. Three of the methods were synthetic sign language visualization methods. Three factors were considered: the intelligibility of sign language as rendered by the method; the power consumption; and the bandwidth usage associated with each method. The average intelligibility rate was 65%, with some methods achieving intelligibility rates of up to 92%. The average size was 162 KB and, on average, the power consumption increased to 180% of the idle state, across all methods. This research forms part of the Integration of Signed and Verbal Communication: South African Sign Language Recognition and Animation (SASL) project at the University of the Western Cape and serves as an integration platform for the group's research. In order to perform this research, a machine translation system that uses mobile phones as service-delivery devices was developed, as well as a 3D avatar for mobile phones. It was concluded that mobile phones are suitable service-delivery platforms for sign language machine translation systems.
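    A minimal sketch of how the three evaluation factors could be aggregated per visualization method is given below. The method names and individual figures are placeholders chosen only so that their means roughly reproduce the averages quoted above; they are not the measurements reported in the thesis.

```python
# A minimal sketch of aggregating per-method evaluation results
# (placeholder figures, not the thesis's data).
RESULTS = [
    # (method, intelligibility %, data size in KB, power as % of idle state)
    ("method_a", 92.0, 120.0, 160.0),
    ("method_b", 65.0, 180.0, 190.0),
    ("method_c", 55.0, 150.0, 175.0),
    ("method_d", 48.0, 200.0, 195.0),
]

def averages(results):
    """Return mean intelligibility, size and power across all methods."""
    n = len(results)
    intelligibility = sum(r[1] for r in results) / n
    size_kb = sum(r[2] for r in results) / n
    power_pct = sum(r[3] for r in results) / n
    return intelligibility, size_kb, power_pct

print(averages(RESULTS))  # -> (65.0, 162.5, 180.0)
```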