3,928 research outputs found

    A biomechanical model of the face including muscles for the prediction of deformations during speech production

    A 3D biomechanical finite element model of the face is presented. Muscles are represented by piece-wise uniaxial tension cable elements linking the insertion points. These insertion points are entities distinct from the nodes of the finite element mesh, which makes it possible to change either the mesh or the muscle implementation independently of the other. Lip/teeth and upper-lip/lower-lip contacts are also modeled. Simulations of smiling and of an Orbicularis Oris activation are presented and interpreted. The importance of a proper account of contacts and of an accurate anatomical description is shown
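The piece-wise uniaxial cable representation can be sketched as follows. This is a minimal illustration, not the paper's finite element formulation: the geometry and tension value are hypothetical, and a real model would also handle activation dynamics and coupling to the mesh.

```python
import numpy as np

def cable_muscle_forces(points, tension):
    """Forces a piece-wise uniaxial tension cable exerts at its points.

    points  : (n, 3) array of insertion/via point coordinates along the cable
    tension : scalar cable tension, assumed uniform along the cable
    Each point is pulled toward its neighbours along the segment axes.
    """
    points = np.asarray(points, dtype=float)
    forces = np.zeros_like(points)
    for i in range(len(points) - 1):
        seg = points[i + 1] - points[i]
        u = seg / np.linalg.norm(seg)    # unit vector along the segment
        forces[i] += tension * u         # pulled toward the next point
        forces[i + 1] -= tension * u     # pulled toward the previous point
    return forces

# Two-segment cable: the end points are pulled inward, while the middle
# point feels the resultant of the two segment tensions.
pts = [[0.0, 0.0, 0.0], [1.0, 0.5, 0.0], [2.0, 0.0, 0.0]]
f = cable_muscle_forces(pts, tension=1.0)
```

Because the tension is uniform and internal to the cable, the forces sum to zero over the whole cable, as expected for a self-equilibrated muscle element.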

    A novel RBF-based predictive tool for facial distraction surgery in growing children with syndromic craniosynostosis

    PURPOSE: Predicting changes in face shape from corrective surgery is challenging in growing children with syndromic craniosynostosis. A prediction tool mimicking composite bone and skin movement during facial distraction would be useful for surgical audit and planning. To model surgery, we used a radial basis function (RBF) that is smooth and continuous throughout space whilst corresponding to measured distraction at landmarks. Our aim is to showcase the pipeline for a novel landmark-based, RBF-driven simulation for facial distraction surgery in children. METHODS: An individual's dataset comprised manually placed skin and bone landmarks on operated and unoperated regions. Surgical warps were produced for 'older' monobloc, 'older' bipartition and 'younger' bipartition groups by applying a weighted least-squares RBF fitted to the average landmarks and change vectors. A 'normalisation' warp, from fitting an RBF to craniometric landmark differences from the average, was applied to each dataset before the surgical warp. The normalisation was finally reversed to obtain the individual prediction. Predictions were compared to actual post-operative outcomes. RESULTS: The averaged change vectors for all groups showed skin and bone movements characteristic of the operations. Normalisation for shape-size removed individual asymmetry, size and proportion differences but retained typical pre-operative shape features. The surgical warps removed the average syndromic features. Reversing the normalisation reintroduced the individual's variation into the prediction. The mid-facial regions were well predicted for all groups. Forehead and brow regions were less well predicted. CONCLUSIONS: Our novel, landmark-based, weighted RBF can predict the outcome of facial distraction in younger and older children with a variety of head and face shapes. It can replicate the surgical reality of composite bone and skin movement jointly in one model. The potential applications include audit of existing patient outcomes, and prediction of outcomes for new patients to aid surgical planning
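The kind of RBF-driven warp described above can be sketched with SciPy's general-purpose `RBFInterpolator`. The landmark coordinates, displacement vectors, and smoothing value below are illustrative stand-ins, not the paper's data or its specific weighting scheme; a `smoothing` value above zero relaxes exact interpolation toward a least-squares fit, loosely analogous to the weighted least-squares RBF in the abstract.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# Hypothetical landmark data: pre-operative positions and
# surgery-induced displacement ("change") vectors.
landmarks = rng.uniform(-1, 1, size=(30, 3))          # skin + bone landmarks
displacements = 0.1 * rng.standard_normal((30, 3))    # change vectors

# Smooth warp, continuous throughout space, that approximates the
# measured distraction at the landmarks.
warp = RBFInterpolator(landmarks, displacements,
                       kernel="thin_plate_spline", smoothing=1e-3)

# Apply the warp to arbitrary surface points (e.g. a face mesh).
mesh_points = rng.uniform(-1, 1, size=(100, 3))
predicted = mesh_points + warp(mesh_points)
```

Because the RBF is defined everywhere in space, the same fitted warp moves bone and skin points jointly, which is the property the paper exploits for composite movement.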

    Emergency Eye Simulation Model


    A physically-based muscle and skin model for facial animation

    Facial animation is a popular area of research that has been active for over thirty years, yet automatically creating realistic facial expressions remains an unsolved goal. This work furthers the state of the art in computer facial animation by introducing a new muscle and skin model, together with a method for transferring a full muscle and bone animation setup from one head mesh to another with very little user input. The developed muscle model allows muscles of any shape to be accurately simulated, preserving volume during contraction and interacting with surrounding muscles and skin in a lifelike manner. The muscles can drive a rigid body model of a jaw, giving realistic, physically-based movement to all areas of the face. The skin model has multiple layers, mimicking the natural structure of skin; it connects onto the muscle model and is deformed realistically by the movements of the muscles and underlying bones. The skin smoothly transfers underlying movements into skin surface movements and propagates forces smoothly across the face. Once a head model has been set up with muscles and bones, moving this muscle and bone set to another head is straightforward with the developed techniques. The software employs principles from forensic reconstruction, using specific landmarks on the head to map the bones and muscles to the new head model; once the muscles and skull have been transferred, they provide animation capabilities on the new mesh within minutes
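The landmark-based transfer described above can be illustrated with a deliberately simplified stand-in: a least-squares affine map fitted to corresponding head landmarks, then applied to muscle attachment points. The thesis's forensic-reconstruction mapping is more elaborate; all names and data below are synthetic.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map taking source landmarks to target landmarks.

    src, dst : (n, 3) arrays of corresponding anatomical landmarks
    Returns (A, t) such that dst ~= src @ A.T + t.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (4, 3) solution
    return M[:3].T, M[3]

# Map muscle attachment points defined on head A onto head B, using
# hypothetical corresponding landmarks picked on both meshes.
rng = np.random.default_rng(1)
lm_a = rng.uniform(-1, 1, (10, 3))                 # landmarks on head A
A_true = np.eye(3) * 1.2                           # head B is 20% larger
t_true = np.array([0.1, -0.2, 0.05])
lm_b = lm_a @ A_true.T + t_true                    # landmarks on head B
A, t = fit_affine(lm_a, lm_b)
muscle_pts_b = lm_a @ A.T + t                      # transferred attachments
```

An affine fit captures global scale and pose differences between heads; local shape differences would need a non-rigid mapping, which is where the forensic landmark scheme in the thesis does more work.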

    Full Measures

    “Full Measures” is a 3D animated short film about a pianist’s struggle to write the music he desires before his deadline. With musical creatures taunting him, will he defeat his nightmares in time? “Full Measures” has two meanings: its literal definition is “to perform a task as well as possible,” and the second is a play on words meaning “passages with heavily written music.” It is a phrase I found through speaking with musicians at the Eastman School of Music. In life we all aspire to achieve what we want to create, and often the greatest obstacles are constructs in our minds; we must overcome these walls to accomplish the things we desire. It is the same with our pianist: this film attempts to represent his struggle by taking the audience on a fanciful journey inside his mind. This thesis outlines the whole process of making this animation from concept to completion. It describes my intentions, obstacles, efforts, and successes throughout the entire production

    Development and evaluation of an interactive virtual audience for a public speaking training application

    Introduction: Fear of public speaking is the most common social fear. Virtual reality (VR) training applications are a promising tool to improve public speaking skills. To be successful, applications should feature high scenario fidelity. One way to improve it is to implement realistic speaker-audience interactive behavior. Objective: The study aimed to develop and evaluate a realistic and interactive audience for a VR public speaking training application. First, an observation study on real speaker-audience interactive behavior patterns was conducted. Second, the identified patterns were implemented in the VR application. Finally, an evaluation study identified users’ perceptions of the training application. Observation Study (1): Because of the lack of data on real speaker-audience interactive behavior, the first research question to be answered was “What speaker-audience interaction patterns can be identified in real life?”. A structured, non-participant, overt observation study was conducted. A real audience was video recorded, and the content was analyzed. The sample yielded N = 6,484 observed interaction patterns. It was found that speakers initiate dialogues more often than audience members do, and the analysis characterized how audience members react to speakers’ facial expressions and gestures. 
Implementation Study (2): To find efficient ways of implementing the results of the observation study in the training application, the second research question was formulated as: “How can speaker-audience interaction patterns be implemented into the virtual public speaking application?”. The hardware setup comprised a CAVE, Infitec glasses, and ART head tracking. The software was realized with 3D-Excite RTT DeltaGen 12.2. To answer the second research question, several possible technical solutions were explored systematically until efficient solutions were found. As a result, self-created audio recognition, Kinect motion recognition, Affectiva facial recognition, and manual question generation were implemented to provide interactive audience behavior in the public speaking training application. Evaluation Study (3): To find out whether implementing interactive behavior patterns met users’ expectations, the third research question was formulated as “How does interactivity of a virtual public speaking application affect user experience?”. An experimental, cross-sectional user study was conducted with N = 57 participants (65% men, 35% women; mean age = 25.98 years, SD = 4.68) who used either an interactive or a non-interactive VR application condition. Results revealed a significant difference in users’ perception of the two conditions. General Conclusions: Speaker-audience interaction patterns that can be observed in real life were incorporated into a VR application that helps people to overcome the fear of public speaking and train their public speaking skills. The findings showed a high relevance of interactivity for VR public speaking applications. Although questions from the audience were still regulated manually, the newly designed audience could interact with the speakers. Thus, the presented VR application is of potential value in helping people to train their public speaking skills. 
The questions from the audience were still regulated manually by an operator, and the study was conducted with participants who did not suffer from high degrees of public speaking fear. Future work may use more advanced technology, such as speech recognition, 3D recordings, or live 3D streams of an actual person, and include participants with high degrees of public speaking fear

    CGAMES'2009


    A Domain-Specific Modeling approach for a simulation-driven validation of gamified learning environments: a case study about teaching the mimicry of emotions to children with autism

    Game elements are rarely explicit when designing serious games or gamified learning activities. We think that the overall design, including instructional design aspects and gamification elements, should be validated by the involved experts at an early stage of the general design-and-development process. We tackle this challenge by giving a Domain-Specific Modeling orientation to our proposals: a metamodeling formalism to capture the gamified instructional design model, and a specific validation process involving domain experts. The validation includes a static verification, in which the formalism is used to model concrete learning sessions based on information from real situations described by experts, and a dynamic verification, in which a simplified simulator 'executes' the learning session scenarios with the experts. These propositions are part of the EmoTED research project on a learning application for the mimicry of emotions for children with ASD. It aims to reinforce face-to-face teaching sessions with therapists through training sessions at home under the supervision of the children's parents. This case study grounds our proposals and their experimentation
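As a loose illustration of what 'executing' a learning-session scenario might look like, the sketch below replays a toy gamified session step by step. The `Activity` and `Session` structures are invented for illustration; they are not the EmoTED metamodel or its simulator.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """One step of a gamified learning session (hypothetical structure)."""
    name: str
    emotion: str        # emotion to be mimicked, e.g. "joy"
    points: int = 0     # gamification reward for completing the step

@dataclass
class Session:
    """An ordered scenario of activities that a simulator can replay."""
    activities: list = field(default_factory=list)

    def execute(self):
        """Replay the scenario, logging each step and the gamified score."""
        log, score = [], 0
        for a in self.activities:
            score += a.points
            log.append(f"{a.name}: mimic '{a.emotion}' (+{a.points})")
        return log, score

s = Session([Activity("warm-up", "joy", 1), Activity("task", "anger", 3)])
log, score = s.execute()
```

A dynamic verification in this spirit lets domain experts watch the scenario unfold step by step and object to an activity order or reward scheme before any development effort is spent.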

    The Effect of Instructor Enthusiasm on University Students’ Achievement Motivation Levels (Öğretim Elemanı Coşkusunun Üniversite Öğrencilerinin Başarı Güdüsü Düzeyleri Üzerindeki Etkisi)

    The purpose of this study is to analyze university students’ level of achievement motivation in terms of instructor enthusiasm and variables such as gender, grade level, academic achievement, course attendance, and the dependence of students’ course attendance on the instructor’s enthusiasm. The study was conducted with 334 university students. To collect the necessary data, the “Instructor Enthusiasm Assessment Form”, the “Achievement Motivation Scale” and the “Student Personal Information Form” were used. To analyze the data, independent-samples t-tests, one-way analysis of variance and the Tukey test were administered. The results revealed that students who perceived high instructor enthusiasm had a significantly higher level of achievement motivation than students who perceived low instructor enthusiasm. The level of achievement motivation was significantly higher among female students than among males. Students who regularly attended classes had significantly higher levels of achievement motivation than absentees. Students who stated that their regular attendance depended on the instructors’ enthusiasm had significantly higher levels of achievement motivation than students stating that it did not. Achievement motivation levels were significantly higher among students with high academic achievement than among those with low academic achievement. Students at higher grade levels had significantly higher levels of achievement motivation than those at lower grade levels
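The statistical pipeline named in the abstract (independent-samples t-test, one-way ANOVA, Tukey's test) can be sketched with SciPy on synthetic scores. The group means, spreads, and sizes below are invented stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_ind, f_oneway, tukey_hsd

rng = np.random.default_rng(2)
# Hypothetical achievement-motivation scores; the real study used
# questionnaire data from 334 students.
high_enthusiasm = rng.normal(4.1, 0.5, 60)
low_enthusiasm = rng.normal(3.6, 0.5, 60)

# Independent-samples t-test: high- vs low-perceived enthusiasm groups.
t_stat, p_t = ttest_ind(high_enthusiasm, low_enthusiasm)

# One-way ANOVA across three grade levels, followed by Tukey's HSD
# to locate which pairs of groups differ.
year1 = rng.normal(3.5, 0.5, 40)
year2 = rng.normal(3.8, 0.5, 40)
year3 = rng.normal(4.0, 0.5, 40)
f_stat, p_f = f_oneway(year1, year2, year3)
tukey = tukey_hsd(year1, year2, year3)   # pairwise comparison results
```

The ANOVA only says that at least one group differs; Tukey's HSD is the follow-up that identifies the differing pairs while controlling the family-wise error rate, which is presumably why the study reports both.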