
    ADAPT: The Agent Development and Prototyping Testbed

    We present ADAPT, a flexible platform for designing and authoring functional, purposeful human characters in a rich virtual environment. Our framework incorporates character animation, navigation, and behavior with modular interchangeable components to produce narrative scenes. Our animation system provides locomotion, reaching, gaze tracking, gesturing, sitting, and reactions to external physical forces, and can easily be extended with more functionality due to a decoupled, modular structure. Additionally, our navigation component allows characters to maneuver through a complex environment with predictive steering for dynamic obstacle avoidance. Finally, our behavior framework allows a user to fully leverage a character's animation and navigation capabilities when authoring both individual decision-making and complex interactions between actors, using a centralized, event-driven model.
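
    ADAPT's actual implementation is not reproduced in this listing; the following is only a minimal sketch of the pattern the abstract describes (interchangeable per-capability controllers plus a centralized, event-driven coordinator), with all class and method names hypothetical:

```python
# Minimal sketch of a decoupled character architecture with centralized,
# event-driven coordination, in the spirit of the ADAPT abstract.
# All names here are hypothetical illustrations, not ADAPT's API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Character:
    name: str
    # Interchangeable components keyed by capability slot
    controllers: dict = field(default_factory=dict)

    def add_controller(self, slot: str, controller: Callable[[str], None]):
        # e.g. slot = "locomotion", "gaze", "reach"; swapping one controller
        # leaves the others untouched (the decoupled, modular structure)
        self.controllers[slot] = controller

    def handle(self, slot: str, target: str):
        self.controllers[slot](target)

class EventScheduler:
    """Centralized queue driving multi-actor interactions as events."""
    def __init__(self):
        self.queue = []

    def post(self, character: Character, slot: str, target: str):
        self.queue.append((character, slot, target))

    def tick(self):
        pending, self.queue = self.queue, []
        for character, slot, target in pending:
            character.handle(slot, target)

alice = Character("alice")
alice.add_controller("gaze", lambda t: print(f"alice gazes at {t}"))
alice.add_controller("locomotion", lambda t: print(f"alice walks to {t}"))

scheduler = EventScheduler()
scheduler.post(alice, "gaze", "door")
scheduler.post(alice, "locomotion", "door")
scheduler.tick()  # executes both queued actions in order
```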

    Interfaces for human-centered production and use of computer graphics assets

    The abstract is in the attachment.

    Developing a Controller for Morphologically Different Characters from Human Motion Using Deep Reinforcement Learning

    Master's thesis, Seoul National University Graduate School, College of Engineering, Department of Computer Science and Engineering, August 2022. Advisor: 서진욱. A human motion-based interface fuses operator intuition with the motor capabilities of robots, enabling adaptable robot operation in dangerous environments. However, designing a motion interface for non-humanoid robots, such as quadrupeds or hexapods, is difficult: the morphology and dynamics of the robot differ greatly from those of the human operator, leaving the control strategy ambiguous. We propose a novel control framework that allows human operators to execute various motor skills on a quadrupedal robot through their own motion. Our system first retargets the captured human motion into a corresponding robot motion that carries the operator's intended semantics; supervised learning and post-processing techniques make this retargeting unambiguous and suitable for control-policy training. To enable the robot to track a given retargeted motion, we then obtain a control policy via reinforcement learning that imitates the given reference motion under designed curricula. We additionally enhance the system's performance by introducing a set of experts. Finally, we randomize the domain parameters to transfer the physically simulated motor skills to real-world tasks. We demonstrate that a human operator can perform various motor tasks using our system, including standing, tilting, manipulating, sitting, walking, and steering, on both physically simulated and real quadruped robots.
We also analyze the performance of each system component through an ablation study.
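
    The abstract does not give the thesis's reward formulation, so the sketch below only illustrates the exponentiated tracking-error reward commonly used in motion-imitation reinforcement learning (DeepMimic-style terms); all weights and scales are illustrative assumptions:

```python
# Rough sketch of a motion-imitation reward: the policy is rewarded for
# tracking the retargeted reference motion. Weights/scales are assumptions,
# not the thesis's published values.
import numpy as np

def imitation_reward(q, q_ref, v, v_ref, w_pose=0.65, w_vel=0.35,
                     k_pose=2.0, k_vel=0.1):
    """q, q_ref: joint positions of robot and reference (radians);
    v, v_ref: joint velocities of robot and reference."""
    pose_err = np.sum((q - q_ref) ** 2)
    vel_err = np.sum((v - v_ref) ** 2)
    # Exponentiated errors keep each term in (0, 1]
    return w_pose * np.exp(-k_pose * pose_err) + w_vel * np.exp(-k_vel * vel_err)

# Example: a 12-DoF quadruped close to its reference earns a reward near 1
q_ref = np.zeros(12)
print(imitation_reward(q_ref + 0.05, q_ref, np.zeros(12), np.zeros(12)))
```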

    Dancing Media: The Contagious Movement of Posthuman Bodies (or Towards A Posthuman Theory of Dance)

    My dissertation seeks to define a posthuman theory of dance through a historical study of the dancer as an instrument or technology for exploring emergent visual media, and by positioning screendance as an experimental technique for animating posthuman relation and thought. Commonly understood as ephemeral, dance is produced by assemblages that include bodies but are not limited to them. In this way, dance exceeds the human body. There is a central tension in the practice of dance, between the persistent presumption of the dancing body as a channel for human expression, and dance as a technicity of the body – a discipline and a practice of repeated gesture – that calls into question categories of the human. A posthuman theory of dance invites examination of such tensions and interrogates traditional notions of authenticity, ownership and commodification, as well as the bounded, individual subject who can assess the surrounding world with precise clarity, certain of where the human begins and ends. The guiding historical question for my dissertation is: if it is possible to describe both a modern form of posthuman dance (turn of the 19th-20th century), and a more recent form of posthuman dance (turn of the 20th-21st century), are they part of the same assemblage or are they constituted differently, and if so, how? Throughout my four chapters, I explore an array of case studies from early modernism to advanced capitalism, including Loie Fuller's otherworldly stage dances; the scientific motion studies of Muybridge and Marey; Fritz Lang's dancing maschinenmensch (or the first on-screen dancing machine) in the 1927 film Metropolis; the performances of singer-dancer hologram pop star Hatsune Miku; and American engineering firm Boston Dynamics' dancing military robots. The figure of the "dancing machine" (McCarren) is central to my project, especially given that dance has historically been used as a means of testing machines – from automata to robots to CGI images animated with MoCap – in their capacity to be lively or human-like. In each case, I am interested in how dance continues to be productive of some kind of subjectivity (or interiority, or "soul"), even in the absence of the human body, and how technique and gesture pass between bodies, both virtual and organic, dispersing agency often attributed to the human alone. I propose that a posthuman theory of dance is a necessary intervention in the broad and contradictory field of posthumanism because dance returns us to questions about bodies that are often suspiciously ignored in theories of posthumanism, especially regarding race (and historically racist categories of non/inhumanity), thereby exposing many of posthumanism's biases, appropriations, dispossessions and erasures. Throughout my dissertation, I look to dance as both a concrete example and as a method of thinking through the potentials and limitations of posthumanism.

    Expressive movement generation with machine learning

    Movement is an essential aspect of our lives. Not only do we move to interact with our physical environment, but we also express ourselves and communicate with others through our movements. In an increasingly computerized world where various technologies and devices surround us, our movements are essential parts of our interaction with and consumption of computational devices and artifacts. In this context, incorporating an understanding of our movements within the design of the technologies surrounding us can significantly improve our daily experiences. This need has given rise to the field of movement computing – developing computational models of movement that can perceive, manipulate, and generate movements. In this thesis, we contribute to the field of movement computing by building machine-learning-based solutions for automatic movement generation. In particular, we focus on using machine learning techniques and motion capture data to create controllable, generative movement models. We also contribute to the field by creating datasets, tools, and libraries that we have developed during our research. We start our research by reviewing the works on building automatic movement generation systems using machine learning techniques and motion capture data. Our review covers background topics such as high-level movement characterization, training data, feature representation, machine learning models, and evaluation methods. Building on our literature review, we present WalkNet, an interactive agent walking-movement controller based on neural networks. The expressivity of virtual, animated agents plays an essential role in their believability. Therefore, WalkNet integrates control over the expressive qualities of movement with the goal-oriented behaviour of an animated virtual agent. It allows us to control the generation in real time based on the valence and arousal levels of affect, the movement's walking direction, and the mover's movement signature. Following WalkNet, we look at controlling movement generation using more complex stimuli such as music represented by audio signals (i.e., non-symbolic music). Music-driven dance generation involves a highly non-linear mapping between temporally dense stimuli (i.e., the audio signal) and movements, which makes the movement-modelling problem more challenging. To this end, we present GrooveNet, a real-time machine learning model for music-driven dance generation.
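
    As a rough illustration of the conditioning pattern the WalkNet description implies (pose generation conditioned on valence, arousal, walking direction, and a mover signature), here is a minimal sketch; the architecture, dimensions, and signature encoding are assumptions, not the published model:

```python
# Sketch of a controllable pose generator: the next pose is predicted from
# the previous pose plus a control vector. Sizes and structure are
# illustrative assumptions, not WalkNet's actual design.
import torch
import torch.nn as nn

class ConditionedPoseGenerator(nn.Module):
    def __init__(self, pose_dim=63, ctrl_dim=4, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + ctrl_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, prev_pose, control):
        # control = [valence, arousal, walking direction, signature code]
        x = torch.cat([prev_pose, control], dim=-1)
        return prev_pose + self.net(x)  # predict a pose delta

gen = ConditionedPoseGenerator()
pose = torch.zeros(1, 63)
control = torch.tensor([[0.8, 0.2, 0.0, 1.0]])  # e.g. happy, calm, straight
next_pose = gen(pose, control)  # one autoregressive generation step
```

    Feeding the generated pose back in as `prev_pose` yields a movement sequence whose expressive qualities can be steered frame by frame through the control vector.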

    The Turning, Stretching and Boxing Technique: a Direction Worth Looking Towards

    3D avatar user interfaces (UIs) are now used for many applications; a growing area for their use is serving location-sensitive information to users as they need it while visiting or touring a building. Users communicate directly with an avatar rendered on a display in order to ask a question, get directions or take part in a guided tour, and as a result of this kind of interaction, avatar UIs have become a familiar part of modern human-computer interaction (HCI). However, if the viewer is not in the sweet spot (defined by Raskar et al. (1999) as a stationary viewing position at the optimal 90° angle to a 2D display) of the 2D display, the 3D illusion of the avatar deteriorates, which becomes evident as the user's ability to interpret the avatar's gaze direction towards points of interest (PoI) in the user's real-world surroundings deteriorates as well. This thesis combats the above problem by allowing the user to view the 3D avatar UI from outside the sweet spot without any deterioration in the 3D illusion. The user does not lose their ability to interpret the avatar's gaze direction and thus experiences no loss in the perceived corporeal presence (Holz et al., 2011) of the avatar. This is facilitated by a three-pronged graphical process called the Turning, Stretching and Boxing (TSB) technique, which maintains the avatar's 3D illusion regardless of the user's viewing angle and is achieved by using head-tracking data captured from the user by a Microsoft Kinect. The TSB technique is a contribution of this thesis because of how it is used with an avatar UI, where the user is free to move outside of the sweet spot without losing the 3D illusion of the rendered avatar. The consecutive empirical studies that evaluate the claims of the TSB technique are also contributions of this thesis; those claims are as follows: (1) increased interpretability of the avatar's gaze direction and (2) increased perception of corporeal presence for the avatar. The last of the empirical studies evaluates the use of 3D display technology in conjunction with the TSB technique. The results of Study 1 and Study 2 indicate a significant increase in participants' ability to interpret the avatar's gaze direction when the TSB technique is switched on. The survey from Study 1 shows a significant increase in the perceived corporeal presence of the avatar when the TSB technique is switched on. The results from Study 3 indicate no significant benefit for participants when interpreting the avatar's gaze direction with 3D display technology turned on or off while the TSB technique is switched on.
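
    The actual Turning, Stretching and Boxing transforms are not specified in this abstract; the sketch below shows only the head-tracking geometry such a technique builds on: computing the user's off-axis viewing angle from a tracked head position (e.g. from a Kinect) and counter-rotating the avatar so its apparent gaze direction holds up outside the sweet spot. Function names and conventions are hypothetical:

```python
# Sketch of off-axis viewing geometry for a head-tracked avatar display.
# Coordinates are in display space: the display center is the origin and
# +z points out of the screen toward the viewer (an assumed convention).
import math

def viewing_angle(head_xyz, display_center=(0.0, 0.0, 0.0)):
    """Angle (degrees) between the viewer and the display normal."""
    dx = head_xyz[0] - display_center[0]
    dz = head_xyz[2] - display_center[2]
    return math.degrees(math.atan2(dx, dz))

def avatar_turn(head_xyz):
    # Counter-rotate the avatar by the viewing angle so its apparent gaze
    # stays consistent when the user leaves the 90-degree sweet spot.
    return -viewing_angle(head_xyz)

print(avatar_turn((1.0, 0.0, 2.0)))  # viewer ~26.6 deg off-axis -> turn -26.6
```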

    Designing Touchless Gestural Interfaces for Public Displays

    In the last decade, many authors have investigated and studied touchless and gestural interaction as a novel tool for interacting with computers. Moreover, technological innovations have allowed for installations of interactive displays in private and public places. However, interactivity is usually implemented with touchscreens, whereas technologies able to recognize body gestures are adopted far more rarely, especially in integration with commercial public displays. Nowadays, the opportunity to investigate touchless interfaces for such systems has become concrete and is being studied by many researchers. Indeed, this interaction modality offers the possibility to overcome several issues that cannot be solved by touch-based solutions, e.g. keeping a high hygiene level on the screen surface, as well as providing big displays with interactive capabilities. The main goal of this thesis is to describe the design process for implementing touchless gestural interfaces for public displays. This implies the need to overcome several typical issues of both public displays (e.g. interaction blindness, immediate usability) and touchless interfaces (e.g. communicating touchless interactivity). To this end, a novel Avatar-based Touchless Gestural Interface (ABaToGI) has been developed, and its design process is described in the thesis, along with the user studies conducted for its evaluation. Moreover, the thesis analyzes how the presence of the Avatar may affect user interactions in terms of perceived cognitive workload, and whether it may be able to foster bimanual interactions. Then, as ABaToGI was designed for public displays, it was installed in an actual deployment in order to be evaluated in the wild (i.e. not in a lab setting). The resulting outcomes, along with the previously described studies, have been used to introduce a set of design guidelines for developing future touchless gestural interfaces, with a particular focus on Avatar-based ones. The results of this thesis also provide a basis for future research, which concludes this work.

    Responding to human full-body gestures embedded in motion data streams.

    This research created a neural-network-enabled, artificially intelligent performing agent that was able to learn to dance and recognise movement through a rehearsal and performance process with a human dancer. The agent exhibited emergent dance behaviour and successfully engaged in a live, semi-improvised dance performance with the human dancer.