
    Generation of Whole-Body Expressive Movement Based on Somatical Theories

    An automatic choreography method for generating lifelike body movements is proposed. The method is based on somatic theories that are conventionally used to assess a person's psychological and developmental state by analyzing body movement. The idea of this paper is to apply the theories in the inverse direction: to facilitate the generation of artificial body movements that are plausible with respect to the evolutionary, developmental, and emotional states of robots or other non-living movers. The paper reviews somatic theories and describes a strategy for implementing automatic body-movement generation. In addition, a psychological experiment is reported that verifies the expressive power of body-movement rhythm. The method facilitates choreographing the body movements of humanoids, animal-shaped robots, and computer-graphics characters in video games.

    Experimental investigation of automatic processes in visual perception (NĂ€gemistaju automaatsete protsesside eksperimentaalne uurimine)

    The electronic version of this thesis does not contain the publications. The research presented and discussed in the thesis is an experimental exploration of processes in visual perception, all of which display a considerable degree of automaticity. These processes are targeted from different angles using different experimental paradigms and stimuli, and by measuring both behavioural and brain responses. In the first three empirical studies, the focus is on motion detection, regarded as one of the most basic processes shaped by evolution. Study I investigated how the motion information of an object is processed in the presence of background motion.
Although it is widely believed that no motion can be perceived without establishing a frame of reference with other objects or with motion in the background, our results found no support for the relative-motion principle. This finding speaks in favour of a simple, automatic process of motion detection that is largely insensitive to the surrounding context. Study II shows that the visual system is built to automatically process motion information outside our attentional focus: even when we are concentrating on a task, the brain constantly monitors the surrounding environment. Study III addressed what happens when multiple stimulus qualities (motion and colour) are present and varied, which is the everyday reality of our visual input. We showed that velocity facilitated the detection of colour changes, suggesting that the processing of motion and colour is not entirely isolated. These results also indicate that motion information is hard to ignore and that its processing is initiated rather automatically. The fourth empirical study focuses on another example of visual input that is processed in a largely automatic way and carries high survival value: emotional expressions. In Study IV, participants detected emotional facial expressions faster and more easily than neutral ones, with a tendency towards more automatic attention to angry faces. In addition, we investigated the emergence of visual mismatch negativity (vMMN), one of the most objective and efficient measures for analysing automatic processes in the brain. Study II and Study IV proposed several methodological improvements for registering this automatic change-detection mechanism. Study V is an important contribution to the vMMN research field, as it is the first comprehensive review and meta-analysis of vMMN studies in psychiatric and neurological disorders.
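The vMMN mentioned above is conventionally quantified as a difference wave: the averaged brain response to rare "deviant" stimuli minus the response to frequent "standard" stimuli. A minimal numerical sketch of that computation, using simulated data (all arrays, window boundaries, and amplitudes here are invented for illustration, not taken from the studies above):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 100, 300          # trials x time points (e.g. 300 ms at 1 kHz)

# Simulated single-trial EEG epochs: noise, plus an extra negativity for deviants
standard_epochs = rng.normal(0.0, 1.0, (n_trials, n_samples))
deviant_epochs = rng.normal(0.0, 1.0, (n_trials, n_samples))
deviant_epochs[:, 150:250] -= 0.5       # deviants carry a negativity around 150-250 ms

# Average over trials to obtain the event-related potentials (ERPs)
erp_standard = standard_epochs.mean(axis=0)
erp_deviant = deviant_epochs.mean(axis=0)

# The difference wave; a negative deflection in this window is the vMMN signature
# of automatic change detection
difference_wave = erp_deviant - erp_standard
vmmn_amplitude = difference_wave[150:250].mean()
print(round(vmmn_amplitude, 2))
```

In real experiments the epochs would come from recorded EEG segmented around stimulus onsets, but the deviant-minus-standard subtraction is the same.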

    Affective games: a multimodal classification system

    Affective gaming is a relatively new field of research that exploits human emotions to influence gameplay for an enhanced player experience. Changes in a player's psychological state are reflected in their behaviour and physiology, so recognising such variation is a core element of affective games. Complementary sources of affect offer more reliable recognition, especially in contexts where one modality is partial or unavailable. As multimodal recognition systems, affect-aware games are subject to the practical difficulties faced by traditional trained classifiers. In addition, inherent game-related challenges in data collection and performance arise while attempting to sustain an acceptable level of immersion. Most existing scenarios employ sensors that restrict freedom of movement, resulting in less realistic experiences. Recent advances offer technology that allows players to communicate more freely and naturally with the game and, furthermore, to control it without input devices. However, the affective game industry is still in its infancy and needs to catch up with the lifelike level of adaptation already provided by graphics and animation.
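One common way to combine complementary affect sources so that recognition degrades gracefully when a modality drops out is decision-level (late) fusion: each modality produces its own class probabilities, and the available ones are averaged. A hypothetical sketch (the modality names, class labels, and probabilities are invented for illustration; this is not the paper's own system):

```python
def fuse_predictions(modality_probs, weights=None):
    """Fuse per-modality class-probability dicts by weighted averaging.

    Modalities reported as None are skipped, mirroring the idea that
    complementary sources keep recognition working when one modality
    is partial or unavailable.
    """
    available = {m: p for m, p in modality_probs.items() if p is not None}
    if not available:
        raise ValueError("no modality available")
    if weights is None:
        weights = {m: 1.0 for m in available}
    total = sum(weights[m] for m in available)
    classes = set().union(*available.values())
    return {
        c: sum(weights[m] * available[m].get(c, 0.0) for m in available) / total
        for c in classes
    }

# Example: the physiological sensor dropped out (None); face and voice still fuse
probs = {
    "face": {"joy": 0.7, "anger": 0.3},
    "voice": {"joy": 0.5, "anger": 0.5},
    "physiology": None,
}
fused = fuse_predictions(probs)
print(max(fused, key=fused.get))  # "joy" wins despite the missing sensor
```

Late fusion is only one design choice; feature-level (early) fusion concatenates raw features instead, at the cost of needing all modalities present.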

    Capture, Learning, and Synthesis of 3D Speaking Styles

    Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance remains unsolved. This is due to the lack of available 3D datasets, models, and standard evaluation metrics. To address this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio from 12 speakers. We then train a neural network on our dataset that factors identity from facial motion. The learned model, VOCA (Voice Operated Character Animation), takes any speech signal as input - even speech in languages other than English - and realistically animates a wide range of adult faces. Conditioning on subject labels during training allows the model to learn a variety of realistic speaking styles. VOCA also provides animator controls to alter speaking style, identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball rotations) during animation. To our knowledge, VOCA is the only realistic 3D facial animation model that is readily applicable to unseen subjects without retargeting. This makes VOCA suitable for tasks such as in-game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance. We make the dataset and model available for research purposes at http://voca.is.tue.mpg.de. Comment: to appear in CVPR 2019.
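The conditioning idea the abstract describes - letting a subject label select a speaking style - is often implemented by appending a one-hot identity code to the per-frame audio features before they enter the network. A minimal sketch of that step (this is not the actual VOCA code; the feature dimension and function name are illustrative assumptions):

```python
import numpy as np

N_SPEAKERS = 12  # the dataset described above has 12 speakers

def condition_on_identity(audio_features, speaker_id):
    """Append a one-hot speaker code to each audio feature frame.

    audio_features: array of shape (frames, feat_dim)
    returns: array of shape (frames, feat_dim + N_SPEAKERS)
    """
    one_hot = np.zeros(N_SPEAKERS)
    one_hot[speaker_id] = 1.0
    tiled = np.tile(one_hot, (audio_features.shape[0], 1))  # (frames, N_SPEAKERS)
    return np.concatenate([audio_features, tiled], axis=1)

# 50 frames of 29-dimensional audio features, conditioned on speaker 3
frames = condition_on_identity(np.zeros((50, 29)), speaker_id=3)
print(frames.shape)  # (50, 41)
```

At test time, swapping the one-hot code is what lets such a model render the same audio in different learned speaking styles.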

    First impressions: A survey on vision-based apparent personality trait analysis

    © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most considered cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches can accurately analyze human faces, body postures, and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and the potential societal impact of such methods, this paper presents an up-to-date review of existing vision-based approaches to apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review aspects of subjectivity in data labeling and evaluation, as well as current datasets and challenges organized to push research in the field forward.