
    Modeling the Speed and Timing of American Sign Language to Generate Realistic Animations

    While there are many Deaf or Hard of Hearing (DHH) individuals with excellent reading literacy, there are also some DHH individuals with lower English literacy. American Sign Language (ASL) is not simply a method of representing English sentences: it is possible for an individual to be fluent in ASL while having limited fluency in English. To overcome this barrier, we aim to make it easier to generate ASL animations for websites by using motion-capture data recorded from human signers to build predictive models for ASL animations; our goal is to automate this aspect of animation synthesis to create realistic animations. This dissertation consists of several parts. Part I defines key terminology for timing and speed parameters and surveys prior linguistic and computational research on ASL. Next, the motion-capture data that our lab recorded from human signers is discussed, with details of how we enhanced this corpus to make it useful for speed and timing research. Finally, we present the process of adding layers of linguistic annotation and processing this data for speed and timing research. Part II presents our research on data-driven predictive models for various speed and timing parameters of ASL animations. The focus is on predicting (1) the existence of a pause after each ASL sign, (2) the duration of these pauses, and (3) the change of speed of each ASL sign within a sentence. We measure the quality of the proposed models by comparing them with state-of-the-art rule-based models. Furthermore, using these models, we synthesized ASL animation stimuli and conducted a user-based evaluation with DHH individuals to measure the usability of the resulting animations. Finally, Part III presents research on whether the timing parameters individuals prefer for animation may differ from those in recordings of human signers. It also includes research investigating the distribution of acceleration curves in recordings of human signers and whether utilizing a similar set of curves in ASL animations leads to measurable improvements in DHH users' perception of animation quality.
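The abstract describes data-driven models that predict, for each sign, whether a pause follows and how long it lasts. The dissertation's actual features and model families are not given here, so the sketch below is a minimal hypothetical baseline: it learns a per-feature pause probability and mean pause duration from an invented annotated corpus, using a single made-up feature (whether the sign ends a clause).

```python
# Minimal sketch of a data-driven pause model for sign-animation timing.
# The feature (clause boundary) and the training data are hypothetical;
# this does not reproduce the dissertation's actual models or corpus.

from collections import defaultdict

def train_pause_model(examples):
    """examples: list of (is_clause_boundary, pause_seconds) pairs,
    one per sign, from a (hypothetical) annotated corpus.
    Returns feature -> (pause probability, mean pause duration)."""
    stats = defaultdict(lambda: [0, 0, 0.0])  # feature -> [n, n_pause, total_dur]
    for boundary, dur in examples:
        s = stats[boundary]
        s[0] += 1
        if dur > 0:
            s[1] += 1
            s[2] += dur
    model = {}
    for feat, (n, n_pause, total) in stats.items():
        p = n_pause / n
        mean_dur = total / n_pause if n_pause else 0.0
        model[feat] = (p, mean_dur)
    return model

def predict(model, feature, threshold=0.5):
    """Predict (pause inserted?, pause duration) for one sign."""
    p, mean_dur = model.get(feature, (0.0, 0.0))
    return (p >= threshold, mean_dur if p >= threshold else 0.0)

corpus = [(True, 0.4), (True, 0.3), (True, 0.0), (False, 0.0), (False, 0.1)]
model = train_pause_model(corpus)
print(predict(model, True))  # a clause boundary makes a pause likely
```

A rule-based baseline, by contrast, would insert a fixed pause at every clause boundary; the point of the data-driven version is that both the probability and the duration come from recordings of human signers.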

    KEER2022

    Additional title: KEER2022. Diversities. Resource description: 25 July 202

    Privacy protecting biometric authentication systems

    As biometrics gains popularity and proliferates into daily life, there is increased concern over the loss of privacy and the potential misuse of biometric data held in central repositories. The major concerns are (i) the use of biometrics to track people, (ii) the non-revocability of biometrics (e.g., if a fingerprint is compromised, it cannot be canceled or reissued), and (iii) the disclosure of sensitive information, such as race, gender, and health problems, that may be revealed by biometric traits. The straightforward suggestion of keeping the biometric data in a user-owned token (e.g., a smart card) does not completely solve the problem, since malicious users can claim that their token is broken to avoid biometric verification altogether. Together, these concerns have created a need for privacy-preserving biometric authentication methods in recent years. In this dissertation, we survey existing privacy-preserving biometric systems and implement and analyze the fuzzy vault in particular; we propose a new privacy-preserving approach; and we study the discriminative capability of online signatures as it relates to the success of using online signatures in the available privacy-preserving biometric verification systems. Our privacy-preserving authentication scheme combines multiple biometric traits to obtain a multi-biometric template that hides the constituent biometrics and allows the creation of non-unique identifiers for a person, such that linking separate template databases is impossible. We provide two separate realizations of the framework: one uses two separate fingerprints of the same individual to obtain a combined biometric template, while the other combines a fingerprint with a vocal pass-phrase. We show that both realizations of the framework are successful in verifying a person's identity given both biometric traits, while preserving privacy (i.e., biometric data is protected and the combined identifier cannot be used to track people). The fuzzy vault emerged as a promising construct for protecting biometric templates. It combines biometrics and cryptography in order to get the benefits of both fields: while biometrics provides non-repudiation and convenience, cryptography guarantees privacy and adjustable levels of security. On the other hand, the fuzzy vault is a general construct for unordered data, and as such, it is not straightforward how it can be used with different biometric traits. In the scope of this thesis, we demonstrate realizations of the fuzzy vault using fingerprints and online signatures such that authentication can be performed while biometric templates remain protected. We then demonstrate how to use the fuzzy vault for secret sharing using biometrics. Secret sharing schemes are cryptographic constructs in which a secret is split into shares and distributed among the participants in such a way that it is reconstructed and revealed only when a necessary number of shareholders come together (e.g., in joint bank accounts). The revealed secret can then be used for encryption or authentication. Finally, we implemented correlation attacks that can be used to unlock the vault, showing that further measures are needed to protect the fuzzy vault against such attacks. The discriminative capability of a biometric modality is based on its uniqueness (entropy) and is an important factor in choosing a biometric for a large-scale deployment or a cryptographic application. We present an individuality model for online signatures in order to substantiate their applicability in biometric authentication. To build our model, we adopt the Fourier-domain representation of the signature and propose a matching algorithm. Signature individuality is measured as the probability of a coincidental match between two arbitrary signatures, where model parameters are estimated using a large signature database. Based on this preliminary model and the estimated parameters, we conclude that an average online signature provides a high level of security for authentication purposes. Finally, we provide a public online signature database along with associated testing protocols that can be used for testing signature verification systems.
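The fuzzy vault construct described above can be illustrated with a toy implementation: a secret is hidden in a polynomial over a prime field, genuine (unordered) biometric features are stored as points on that polynomial, and chaff points off the polynomial hide which entries are genuine. A query that shares enough features with the enrollment set selects enough genuine points to re-interpolate the polynomial. The field size, feature encoding, and parameters below are illustrative inventions, not the thesis's actual construction, which also needs error correction and careful chaff statistics.

```python
# Toy fuzzy vault over a small prime field. Illustrative only: real vaults
# use large fields, error-correcting codes, and carefully distributed chaff.

import random

P = 97  # small prime field for the sketch

def poly_eval(coeffs, x):
    """Evaluate a polynomial (lowest-degree coefficient first) mod P."""
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def lock(coeffs, features, n_chaff=20):
    """Genuine feature points lie on the secret polynomial; chaff does not."""
    vault = [(x, poly_eval(coeffs, x)) for x in features]
    used = set(features)
    while len(vault) < len(features) + n_chaff:
        x = random.randrange(P)
        if x in used:
            continue
        y = random.randrange(P)
        if y != poly_eval(coeffs, x):  # chaff must lie off the polynomial
            used.add(x)
            vault.append((x, y))
    random.shuffle(vault)
    return vault

def unlock(vault, query_features, degree):
    """Match query features against vault abscissas; interpolate at x = 0."""
    pts = [(x, y) for (x, y) in vault if x in query_features][: degree + 1]
    if len(pts) < degree + 1:
        return None  # not enough overlap with the enrolled biometric
    # Lagrange interpolation at x = 0 (mod P) recovers the constant term.
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

secret = 42
coeffs = [secret, 7, 13]       # secret hidden in the constant term
genuine = [5, 17, 29, 41, 56]  # stand-in for quantized minutiae features
vault = lock(coeffs, genuine)
assert unlock(vault, {5, 17, 41}, degree=2) == secret  # enough overlap
assert unlock(vault, {5, 17}, degree=2) is None        # too few matches
```

The correlation attacks mentioned in the abstract exploit the fact that if the same biometric locks two vaults, the genuine points repeat across them while the chaff does not, which is why unlinkability needs additional protection.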

    Improvisatory music and painting interface

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004. Includes bibliographical references (p. 101-104). Shaping collective free improvisations in order to obtain solid and succinct works with surprising and synchronized events is not an easy task. This thesis is a proposal towards that goal. It presents the theoretical, philosophical, and technical framework of the Improvisatory Music and Painting Interface (IMPI) system: a new computer program for the creation of audiovisual improvisations performed in real time by ensembles of acoustic musicians. The coordination of these improvisations is obtained using a graphical language. This language is employed by one "conductor" in order to generate musical scores and abstract visual animations in real time. Doodling on a digital tablet following the syntax of the language allows both the creation of musical material with different levels of improvisatory participation from the ensemble and the manipulation of the projected graphics in coordination with the music. The generated musical information is displayed in several formats on multiple computer screens that members of the ensemble play from. The digital graphics are also projected on a screen to be seen by an audience. This system is intended for a non-tonal, non-rhythmic, and texture-oriented musical style, which means that strong emphasis is put on the control of timbral qualities and continuum transitions. One of the main goals of the system is the translation of planned compositional elements (such as precise structure and synchronization between instruments) into the improvisatory domain. The graphics that IMPI generates are organic, fluid, vivid, dynamic, and unified with the music. The concept of controlled improvisation as well as the paradigm of the relationships between acoustic and visual material are both analyzed from an aesthetic point of view. The theoretical section is accompanied by descriptions of historic and contemporary works that have influenced IMPI. Hugo Solís García. S.M.

    Constructing 3D faces from natural language interface

    This thesis presents a system by which 3D images of human faces can be constructed using a natural language interface. The driving force behind the project was the need to create a system whereby a machine could produce artistic images from verbal or composed descriptions. This research is the first to look at constructing and modifying facial image artwork using a natural language interface. Specialised modules have been developed to control the geometry of 3D polygonal head models in a commercial modeller from natural language descriptions. These modules were produced from research on human physiognomy, 3D modelling techniques and tools, facial modelling, and natural language processing. [Continues.]
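The thesis's modules that map descriptions onto head-model geometry are not detailed in the abstract, so the sketch below is a hypothetical illustration of the core idea: descriptive phrases accumulate offsets on named morph parameters. The phrase table, parameter names, and naive substring matching are all inventions; a real system would use proper natural language parsing and a commercial modeller's API.

```python
# Hypothetical sketch: mapping verbal face descriptions to morph parameters.
# Rule phrases and parameter names are invented for illustration; the naive
# substring match stands in for real natural language processing.

MORPH_RULES = {
    "wide": {"face_width": +0.3},
    "narrow": {"face_width": -0.3},
    "long": {"face_length": +0.3},
    "large nose": {"nose_scale": +0.4},
    "thin lips": {"lip_thickness": -0.3},
}

def describe_to_params(description, base=None):
    """Accumulate morph-target offsets for every rule phrase found in the
    (lower-cased) description, starting from a neutral base face."""
    params = dict(base or {"face_width": 0.0, "face_length": 0.0,
                           "nose_scale": 0.0, "lip_thickness": 0.0})
    text = description.lower()
    for phrase, offsets in MORPH_RULES.items():
        if phrase in text:
            for key, delta in offsets.items():
                params[key] = params.get(key, 0.0) + delta
    return params

assert describe_to_params("a wide face")["face_width"] == 0.3
print(describe_to_params("A wide face with a large nose"))
```

The resulting parameter dictionary would then drive morph targets or vertex groups on the polygonal head model.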

    Animation and Interaction of Responsive, Expressive, and Tangible 3D Virtual Characters

    This thesis is framed within the field of 3D Character Animation. Virtual characters are used in many Human Computer Interaction applications, such as video games and serious games. Within these virtual worlds they move and act in ways similar to humans, controlled by users through some form of interface or by artificial intelligence. This work addresses the challenges of developing smoother movements and more natural behaviors, driving motions in real time intuitively and accurately. The interaction between virtual characters and intelligent objects is also explored. Through this research, the work contributes to creating more responsive, expressive, and tangible virtual characters. Navigation within virtual worlds uses locomotion such as walking and running. To achieve maximum realism, actors' movements are captured and reused to animate virtual characters. This is the philosophy of motion graphs: a structure that embeds movements and generates a continuous motion stream by concatenating motion pieces. However, locomotion synthesis using motion graphs involves a tradeoff between the number of possible transitions between different kinds of locomotion and the quality of these transitions, meaning smooth transitions between poses. To overcome this drawback, we propose the method of progressive transitions using Body Part Motion Graphs (BPMGs). This method deals with partial movements and generates specific, synchronized transitions for each body part (a group of joints) within a window of time. Therefore, the connectivity within the system is not tied to the similarity between global poses, allowing us to find more and better-quality transition points while increasing the speed of response and execution of these transitions, in contrast to the standard motion graph method. Secondly, beyond achieving faster transitions and smoother movements, virtual characters also interact with each other and with users by speaking.
This interaction requires the creation of gestures appropriate to the voice being reproduced. Gestures are the nonverbal language that accompanies spoken language. The credibility of virtual characters when speaking is linked to the naturalness of their movements in sync with the voice, in both speech and intonation. Consequently, we analyzed the relationship between gestures and speech, and the gestures performed according to that speech. We defined intensity indicators for both gestures (GSI, Gesture Strength Indicator) and speech (PSI, Pitch Strength Indicator), and studied the relationship in time and intensity between these cues in order to establish synchronicity and intensity rules. We then applied these rules to select gestures appropriate to the speech input (tagged text derived from the speech signal) in the Gesture Motion Graph (GMG). The evaluation of the resulting animations shows the importance of relating the intensity of speech and gestures, beyond time synchronization, to generate believable animations. Subsequently, we present a system for the automatic generation of gestures and facial animation from a speech signal: BodySpeech. This system also includes animation improvements such as increased use of the input data, more flexible time synchronization, and new features like editing the style of the output animations. In addition, the facial animation takes speech intonation into account. Finally, we have moved virtual characters from virtual environments to the physical world in order to explore their possibilities for interaction with real objects. To this end, we present AvatARs, virtual characters that have a tangible representation and are integrated into reality through augmented-reality apps on mobile devices. Users manipulate a physical object to control the animation, selecting and configuring it; the object also serves as a physical support for the represented virtual character.
Then, we explored the interaction of AvatARs with intelligent physical objects such as the Pleo social robot. Pleo is used to assist hospitalized children in therapy or simply for play. Despite its benefits, there is a lack of emotional relationship and interaction between the children and Pleo, which eventually makes the children lose interest. This is why we have created a mixed-reality scenario where Vleo (an AvatAR in the form of Pleo, the virtual element) and Pleo (the real element) interact naturally. This scenario has been tested, and the results conclude that AvatARs enhance children's motivation to play with Pleo, opening a new horizon in the interaction between virtual characters and robots.
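The GSI/PSI rules described above pair speech segments with gestures of matching intensity. The abstract does not give the indicator formulas or selection rule, so the sketch below is a hypothetical stand-in: it picks, from a small invented gesture library, the gesture whose strength indicator is closest to the speech segment's pitch strength indicator, rejecting matches whose mismatch exceeds a tolerance.

```python
# Illustrative sketch of intensity-matched gesture selection in the spirit
# of the GSI/PSI rules. The indicator values, gesture library, and the
# tolerance rule are hypothetical stand-ins for the thesis's actual method.

def select_gesture(psi, gestures, tolerance=0.2):
    """Pick the gesture whose GSI is closest to the segment's PSI; reject
    candidates whose intensity mismatch exceeds the tolerance."""
    best = min(gestures, key=lambda g: abs(g["gsi"] - psi))
    if abs(best["gsi"] - psi) > tolerance:
        return None  # caller falls back to an idle/neutral gesture
    return best

library = [
    {"name": "small_nod", "gsi": 0.2},
    {"name": "open_palm", "gsi": 0.5},
    {"name": "emphatic_beat", "gsi": 0.9},
]
assert select_gesture(0.55, library)["name"] == "open_palm"
```

In the actual GMG, this intensity rule would operate alongside the graph's connectivity constraints, so a candidate gesture must both match the speech intensity and be reachable as a smooth transition from the current pose.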

    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.

    Proceedings of the 7th Sound and Music Computing Conference

    Proceedings of SMC2010, the 7th Sound and Music Computing Conference, July 21-24, 2010.

    Metamorphosis of recognition: representation of a fictional city

    Every person has their own city, an imaginary city that combines details of real or fictional places, dream worlds, or personal fantasies. It's our own way of fitting into reality. We absorb a variety of stimuli and keep what has truly left an echo in our souls — cultural habits, exotic experiences, people, feelings, memories, favourite routes, or just small architectural details… By connecting our impressions, we build our world. It no longer belongs to just one place or country; it is global, mixed from different cultural aspects.
With every new contact we naturally change, and our minds change as well. This work is my imaginary city. With this project I want to show parts of my imagination and to encourage people to share their «real» cities, which, from my point of view, will help us understand each other better. Parts of the fictional city were spread across the streets of Lisbon, as this place has a magical influence on me. The fictional city is represented through several pieces, made using a variety of visual techniques. At the end of this project, I found that the metamorphosis of recognition can indeed make us aware of familiar aspects of our everyday lives when we look at — or travel through, even if only visually — a fictional city.

    Functional Animation: Interactive Animation in Digital Artifacts
