771 research outputs found

    Visual Perception of Dynamic Properties and Events: Collisions and Throws

    Get PDF
The central topic of this dissertation is visual perception of dynamic events. The topic is worthy of interest, as witnessed by its long tradition in the history of Experimental Psychology, starting with the seminal work of Albert Michotte (1881-1965) on phenomenal causality. Thus, the topic I chose is not original in itself. However, a distinctive element of novelty in my dissertation is the use of Computer Graphics techniques as a means of creating realistic experimental stimuli for psychological experiments. Besides reducing the gap between laboratory experiments and everyday experience, this may reveal the importance of experimental variables that have traditionally been ignored in research on visual perception of dynamic events. The reader should be aware that this dissertation comprises several lines of research, all intrinsically connected with the central topic of visual perception of dynamic events. In some of the experiments I investigate visual perception of dynamic events, whereas in others I investigate cognition of the same events. Two dynamic events are studied in particular: horizontal collisions and throws. Moreover, the results of the experiments are discussed not only in relation to their theoretical implications for psychological models, but also in relation to their potential applications to Physics education and Computer Graphics. As a result, the content of the dissertation is quite heterogeneous, but I hope to provide the reader with a broad, multidisciplinary perspective on the subject at hand. The dissertation is composed of five chapters, which may be divided into three groups. (i) In Chapters 1-3, after presenting the theoretical background of visual perception of dynamic events, I investigate the influence of the dynamic properties of virtual objects on visual perception of horizontal collisions. The results of this research bear on the old and still active debate on phenomenal causality. (ii) In Chapter 4 I present research on the Naïve Physics of horizontal collisions between virtual spheres differing in simulated mass and velocity. In this chapter I take a more cognitive (rather than perceptual) perspective on dynamic events, investigating how people reason about the proposed physical event. (iii) In Chapter 5 I present research on visual perception of virtual throwing animations, which are complex and rarely studied dynamic events. This chapter stands out for its multidisciplinary nature, as in it I discuss how the results can be applied to Computer Graphics. The research presented in this last chapter was conducted as part of my doctoral studies, while I was a visiting PhD student in the Graphics, Vision, and Visualisation Group at Trinity College Dublin, where I collaborated with Professor Carol O’Sullivan and Doctor Ludovic Hoyet, computer scientists working on applications of visual perception to Computer Graphics. In more detail, in Chapter 1 I discuss the theoretical background of visual perception of dynamic events and phenomenal causality. Firstly, I focus on Michotte’s classical work. Secondly, I discuss some prominent issues that have long been debated in this field of research. Lastly, I present White’s schema-matching model of visual perception of dynamic events, discussing its similarities to and differences from Michotte’s model. This chapter is intended to serve as a theoretical point of reference for the entire dissertation.
In Chapter 2 I discuss the hypothesis that the visually perceived dynamic properties of objects involved in dynamic events influence visual perception of the events themselves. Firstly, I attempt to refute two popular arguments against this hypothesis. Then, I highlight the evolutionary advantage of visual perception of dynamic properties, discussing their possible influence on visual perception of dynamic events. Lastly, I discuss Runeson’s KSD model in relation to the presented hypothesis. In Chapter 3 I present three experiments that confirm the hypothesis discussed in Chapter 2. In particular, I show that the simulated material (Experiment 1) and size (Experiments 2 and 3) of virtual objects involved in horizontal collisions strongly influence how observers perceive the event. I also discuss the theoretical implications of these findings with reference to Michotte’s and White’s models. In Chapter 4 I present research on the Naïve Physics of horizontal collisions. Firstly, I discuss the general importance of studying Naïve Physics for improving basic Physics education. Secondly, I present Information Integration Theory and Functional Measurement methodology as suitable tools for assessing students’ intuitive knowledge of physical events, highlighting their advantages over multiple-choice surveys. Lastly, I present two experiments (conducted using Information Integration Theory and Functional Measurement) on the Naïve Physics of horizontal collisions between simulated spheres differing in size, velocity, and material. The importance of the results for Physics instruction is also discussed. Finally, in Chapter 5 I present research on visual perception of edited virtual throwing animations. First I discuss the relations between visual perception of dynamic events (human motion in particular) and Computer Graphics. Then I present two experiments on observers’ sensitivity to anomalies in realistic virtual throwing animations, discussing the importance of the results for the videogame and movie industries.
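
Since several of these experiments hinge on how simulated mass and velocity determine collision outcomes, a worked example may help. The following is a minimal sketch, not taken from the dissertation, of the one-dimensional collision equations typically used to generate launching-style stimuli; the function name and the restitution parameter are illustrative assumptions.

```python
def collision_1d(m1, v1, m2, v2, restitution=1.0):
    """Post-collision velocities of two spheres colliding head-on.

    Combines conservation of momentum with Newton's law of restitution
    (restitution=1.0 is perfectly elastic, 0.0 perfectly plastic).
    Units are arbitrary but must be consistent.
    """
    # Momentum is conserved: m1*v1 + m2*v2 = m1*u1 + m2*u2.
    # Restitution relates relative speeds: u1 - u2 = -e * (v1 - v2).
    e = restitution
    u1 = (m1 * v1 + m2 * v2 + m2 * e * (v2 - v1)) / (m1 + m2)
    u2 = (m1 * v1 + m2 * v2 + m1 * e * (v1 - v2)) / (m1 + m2)
    return u1, u2

# A heavier sphere strikes a lighter, stationary one: the classic
# Michotte "launching" configuration.
print(collision_1d(m1=2.0, v1=1.0, m2=1.0, v2=0.0))  # (0.333..., 1.333...)
```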

    Towards Perception-based Character Animation

    Get PDF

    Paralympic VR Game Immersive Game using Virtual Reality Technology

    Get PDF
Over the years, virtual reality has been used for a wide range of applications, and considerable research has been conducted to improve its techniques and technology. In recent years, interest in virtual reality has grown, partly due to the emergence of cheaper, more accessible hardware and the increasing amount of available content. One possible application of virtual reality is to let people see situations from a different perspective, which can help change opinions. This thesis uses virtual reality to help people better understand Paralympic sports by allowing them to experience the sports' world from the athletes' perspective. The virtual environment combines computer-generated elements with 360° video; integrating these two components was a central challenge. The thesis focuses on wheelchair basketball, for which a simulator was built using a game engine (Unity 3D). For this simulator, computer-generated elements were modeled and interaction with them was implemented. Besides playing the sport as if in the athletes' shoes, users can watch 360° videos explaining the sport's rules and classification system. They can also interact with some of these videos through virtual elements overlaid on them. User studies were conducted to evaluate the sense of presence, motion sickness, and usability of the system. The results were positive, although some aspects could still be improved.

    Animation and Interaction of Responsive, Expressive, and Tangible 3D Virtual Characters

    Get PDF
This thesis is framed within the field of 3D character animation. Virtual characters are used in many human-computer interaction applications, such as video games and serious games. Within these virtual worlds they move and act in ways similar to humans, controlled by users through some form of interface or by artificial intelligence. This work addresses the challenges of producing smoother movements and more natural behaviors while driving motion in real time, intuitively and accurately. The interaction between virtual characters and intelligent objects is also explored. Together, these lines of research contribute to creating more responsive, expressive, and tangible virtual characters. Navigation within virtual worlds relies on locomotion such as walking and running. To achieve maximum realism, actors' movements are captured and reused to animate virtual characters. This is the philosophy of motion graphs: a structure that embeds motion clips and generates a continuous motion stream by concatenating motion pieces. However, locomotion synthesis with motion graphs involves a tradeoff between the number of possible transitions between different kinds of locomotion and their quality, i.e., how smoothly one pose blends into the next. To overcome this drawback, we propose progressive transitions using Body Part Motion Graphs (BPMGs). This method deals with partial movements and generates specific, synchronized transitions for each body part (group of joints) within a time window. Connectivity within the system is therefore not tied to the similarity of global poses, which lets us find more and better-quality transition points while increasing the speed of response and execution of these transitions compared with standard motion graphs. Secondly, beyond faster transitions and smoother movements, virtual characters also interact with each other and with users by speaking, which requires generating gestures appropriate to the voice they reproduce. Gestures are the nonverbal language that accompanies speech. The credibility of speaking virtual characters is linked to the naturalness of their movements in sync with the voice, particularly its intonation. Consequently, we analyzed the relationship between gestures and speech, and how gestures are performed in accordance with that speech. We defined intensity indicators for both gestures (GSI, Gesture Strength Indicator) and speech (PSI, Pitch Strength Indicator), and studied the relationship in time and intensity between these cues in order to establish synchrony and intensity rules. We then applied these rules in the Gesture Motion Graph (GMG) to select gestures appropriate to the speech input (text tagged from the speech signal). The evaluation of the resulting animations shows the importance of relating the intensity of speech and gestures, beyond mere time synchronization, to generate believable animations. Subsequently, we present BodySpeech, a system that automatically generates gestures and facial animation from a speech signal. This system also includes animation improvements, such as fuller use of the input data and more flexible time synchronization, and new features such as editing the style of the output animations. In addition, the facial animation takes speech intonation into account. Finally, we moved virtual characters from virtual environments into the physical world in order to explore their possibilities for interacting with real objects.
To this end, we present AvatARs: virtual characters that have a tangible representation and are integrated into reality through augmented-reality apps on mobile devices. Users choose a physical object to manipulate in order to control the animation; they can select and configure the animation, and the object also serves as a physical support for the represented virtual character. We then explored the interaction of AvatARs with intelligent physical objects such as the Pleo social robot. Pleo is used to assist hospitalized children in therapy, or simply for play. Despite its benefits, the lack of an emotional relationship and interaction between the children and Pleo eventually makes the children lose interest. We therefore created a mixed-reality scenario in which Vleo (an AvatAR in the form of Pleo, the virtual element) and Pleo (the real element) interact naturally. This scenario was tested, and the results show that AvatARs enhance children's motivation to play with Pleo, opening a new horizon in the interaction between virtual characters and robots.
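
To make the motion-graph tradeoff concrete, here is a minimal sketch, assuming joint positions stored as NumPy arrays; it is an illustration, not code from the thesis, and the joint groups and distance threshold are invented. It shows the core idea behind per-body-part transitions: computing a separate pose distance for each joint group yields more candidate transition points than demanding that the full pose match at once.

```python
import numpy as np

# Hypothetical joint groups; a real skeleton would list its own joint indices.
BODY_PARTS = {"legs": [0, 1, 2, 3], "torso": [4, 5], "arms": [6, 7, 8, 9]}

def part_distance(clip_a, clip_b, fa, fb, joints):
    """Euclidean distance between two poses restricted to one joint group.
    clip_* are (frames, joints, 3) arrays of joint positions."""
    return np.linalg.norm(clip_a[fa, joints].ravel() - clip_b[fb, joints].ravel())

def partwise_transitions(clip_a, clip_b, threshold):
    """Collect, per body part, the frame pairs similar enough to transition.
    A standard motion graph requires ALL parts to match at the SAME frame
    pair; a BPMG-style scheme lets each part transition in its own window."""
    candidates = {part: [] for part in BODY_PARTS}
    for fa in range(clip_a.shape[0]):
        for fb in range(clip_b.shape[0]):
            for part, joints in BODY_PARTS.items():
                if part_distance(clip_a, clip_b, fa, fb, joints) < threshold:
                    candidates[part].append((fa, fb))
    return candidates

# Example with two random 30-frame clips of a 10-joint skeleton.
rng = np.random.default_rng(0)
a, b = rng.normal(size=(30, 10, 3)), rng.normal(size=(30, 10, 3))
print({p: len(v) for p, v in partwise_transitions(a, b, 5.0).items()})
```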

    Animation de personnages 3D par le sketching 2D

    Get PDF
Free-form animation allows for exaggerated and artistic styles of motion, such as stretching character limbs or animating imaginary creatures such as dragons. Creating these animations requires tools flexible enough to shape characters into arbitrary poses and to control motion at any instant in time. The current approach to free-form animation is keyframing: a manual task in which animators deform characters at individual instants in time by clicking and dragging individual body parts, one at a time. While this approach is flexible, it is challenging to create quality animations that follow high-level artistic principles, as keyframing tools provide only localized control, both spatially and temporally. When drawing poses and motions, artists rely on various sketch-based abstractions that help fulfill high-level aesthetic and artistic principles. For instance, animators draw lines of action to create more readable and expressive poses, and they sketch motion abstractions such as semi-circles and loops to coordinate bouncing and rolling motions. Unfortunately, these drawing tools are not part of today's free-form animation tool set. The fact that we cannot use the same artistic drawing tools when animating 3D characters has an important consequence: 3D animation tools are not involved in the creative process. Instead, animators create by first drawing on paper, and only later are 3D animation tools used to realize the pose or animation. The reason these artistic tools (the line of action and motion abstractions) are missing from the current animation tool set is that we lack a formal understanding relating the character's shape, possibly over time, to the shape of the drawn abstraction. Hence, the main contribution of this thesis is a formal understanding of pose and motion abstractions (lines of action and motion abstractions), together with a set of algorithms that allow these tools to be used in a free-form setting. As a result, the techniques described in this thesis support exaggerated poses and movements that may include squash and stretch, and can be used with various character morphologies. These pose and animation drafting tools can also be extended: for instance, an animator can sketch and compose different layers of motion on top of one another, add twist around strokes, or turn strokes into elastic ribbons. The main contributions of this thesis are: the line of action, which facilitates expressive posing by directly sketching the overall flow of the character's pose; the space-time curve, which allows drafting a fully coordinated movement with a single stroke, applicable to arbitrary characters; a fast and robust skeletal line-matching algorithm that supports squash and stretch; and elastic lines of action with dynamically constrained bones for driving the motion of a multi-legged character with a single moving 2D line.
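
As a rough illustration of the kind of geometry such tools involve (a sketch under stated assumptions, not the thesis's actual algorithm), the following places a chain of bones along a sketched 2D stroke by arc length, scaling the bone lengths uniformly so the chain spans the whole stroke; the uniform scaling is a crude stand-in for squash and stretch.

```python
import numpy as np

def fit_chain_to_stroke(stroke, bone_lengths):
    """Place a bone chain along a sketched 2D stroke.

    stroke: (n, 2) array of points sampled along the drawn line.
    bone_lengths: rest lengths of the bones, root to tip.
    Returns (len(bone_lengths) + 1, 2) joint positions.
    """
    # Cumulative arc length along the stroke.
    seg = np.linalg.norm(np.diff(stroke, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])

    # Scale factor so the chain exactly covers the stroke (squash/stretch).
    scale = arc[-1] / sum(bone_lengths)
    targets = np.concatenate([[0.0], np.cumsum(np.asarray(bone_lengths) * scale)])

    # Interpolate x and y independently over arc length.
    xs = np.interp(targets, arc, stroke[:, 0])
    ys = np.interp(targets, arc, stroke[:, 1])
    return np.stack([xs, ys], axis=1)

# Example: bend a three-bone spine along a quarter-circle line of action.
theta = np.linspace(0.0, np.pi / 2, 50)
stroke = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(fit_chain_to_stroke(stroke, [0.4, 0.4, 0.4]))
```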

    Human Expressivity in the Control and Integration of Computationally Generated Audio

    Get PDF
While physics-based synthesis offers a wide range of benefits for the real-time generation of sound in interactive environments, it is difficult to incorporate the nuanced and complex behaviour that enhances sound in a narrative or aesthetic context. The work presented in this thesis explores real-time human performance as a means of stylistically augmenting computational sound models. Transdisciplinary in nature, the thesis builds upon previous work in sound synthesis, film sound theory, and physical sound interaction. Human performance is investigated on two levels at which it can enhance the aesthetic value of computational models: first, the real-time manipulation of an idiosyncratic parameter space to generate unique sound effects; second, the performance of physical source models in synchrony with moving images. In the former, various mapping techniques were evaluated for controlling a model of a creaking door, based on a proposed extension of practical synthesis techniques. In the latter, audio post-production professionals with extensive experience in performing Foley were asked to perform the soundtrack to a physics-based animation using bespoke physical interfaces and synthesis engines. The resulting dataset was used to gain insight into the stylistic features afforded by performed sound synchronisation and into potential ways of integrating them into an interactive environment such as a game engine. Interacting with practical synthesis models that have been extended to incorporate performability enables the rapid generation of unique and expressive sound effects while maintaining a believable source-sound relationship. Performatively authoring the behaviour of sound models makes it possible to enhance the relationship between sound and image, both stylistically and perceptually, in ways precluded by one-to-one mappings between physics-based parameters. Mediation layers are required to facilitate performed behaviour: in the design of the model on one hand, and in the integration of such behaviours into interactive environments on the other. This thesis provides examples of how such a system could be implemented, and makes some interesting observations about the design of physical interfaces for performing environmental sound and the creative exploitation of model constraints.
Funding: Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Centre in Media and Arts Technology (ref: EP/G03723X/1)
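
The "mediation layer" idea can be pictured with a toy sketch; this is an illustrative construction, not code from the thesis, and the parameter names are invented. A performer's raw control stream is smoothed and then fanned out to several parameters of a hypothetical stick-slip friction model such as a creaking door, so one expressive gesture shapes multiple synthesis parameters at once.

```python
class MediationLayer:
    """Toy mapping from a performer's control value (0..1) to parameters
    of a hypothetical creaking-door friction model."""

    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing  # one-pole low-pass coefficient
        self.state = 0.0

    def update(self, control):
        # Exponentially smooth the incoming gesture to avoid zipper noise.
        self.state = self.smoothing * self.state + (1.0 - self.smoothing) * control
        s = self.state
        # One-to-many mapping: a single gesture drives several parameters.
        return {
            "normal_force": 0.2 + 0.8 * s,         # pressing harder on the hinge
            "slip_velocity": 0.05 + 0.5 * s ** 2,  # faster opening at high effort
            "resonance_damping": 0.9 - 0.4 * s,    # brighter creak when forced
        }

# Feed in an accelerating gesture and watch the parameters evolve.
layer = MediationLayer()
for control in (0.0, 0.3, 0.8, 1.0):
    print(layer.update(control))
```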