
    Multi-destination beaming: apparently being in three places at once through robotic and virtual embodiment

    It has been shown that an illusion of ownership over an artificial limb, or even an entire artificial body, can be induced in people through multisensory stimulation, creating the perception that the surrogate body is the person’s own. Such body ownership illusions (BOIs) have been shown to occur with virtual bodies, mannequins, and humanoid robots. In this study, we show that a full-body ownership illusion (full-BOI) can be elicited over not one but multiple artificial bodies concurrently. We demonstrate this with a system that allowed a participant to inhabit and fully control two humanoid robots located in two distinct places and a virtual body in immersive virtual reality, using real-time full-body tracking and two-way audio communication, thereby giving them the illusion of ownership over each body. The participant was embodied in one surrogate body at any given moment and could instantaneously switch between them. While the participant was embodied in one body, a proxy system tracked the currently unoccupied locations and controlled the participant’s remote representations there, so that the tasks at those locations continued in a plausible fashion. To test the efficacy of this system, an exploratory study was carried out with a fully functioning three-destination setup and a simplified version of the proxy for use in a social interaction. The results indicate that the system was physically and psychologically comfortable and was rated highly by participants in terms of usability. Feelings of body ownership and agency were also reported, and these were not influenced by the type of body representation. The results provide clues about BOIs with humanoid robots of different dimensions, along with insight into self-localization and multilocation.
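    The abstract gives no implementation details, so the following is only a rough Python sketch of the switching-plus-proxy logic it describes: the participant’s live tracking drives exactly one body at a time, while proxy controllers keep the unoccupied bodies acting plausibly. All class and method names here are invented for illustration.

```python
# Hypothetical sketch of the embodiment-switching logic; not the
# authors' actual system, whose interfaces are not given in the abstract.

class Destination:
    """One surrogate body (robot or virtual avatar) at a remote location."""
    def __init__(self, name):
        self.name = name

    def apply_pose(self, pose):
        print(f"[{self.name}] driven by live full-body tracking: {pose}")

    def apply_proxy(self):
        # Stand-in for the proxy behaviour: keep the unoccupied body
        # continuing its local task in a plausible way.
        print(f"[{self.name}] proxy continues the local task")


class BeamingSession:
    def __init__(self, destinations):
        self.destinations = destinations
        self.active = 0  # index of the body the participant currently inhabits

    def switch_to(self, index):
        """Instantaneously re-embody the participant in another body."""
        self.active = index

    def tick(self, tracked_pose):
        for i, dest in enumerate(self.destinations):
            if i == self.active:
                dest.apply_pose(tracked_pose)  # participant controls this body
            else:
                dest.apply_proxy()             # proxy controls the rest


session = BeamingSession([Destination("robot A"), Destination("robot B"),
                          Destination("virtual body")])
session.tick("pose@t0")
session.switch_to(1)
session.tick("pose@t1")
```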

    Unifying nonholonomic and holonomic behaviors in human locomotion

    Our motivation is to understand human locomotion in order to better control the locomotion of virtual systems (robots and virtual mannequins). Human locomotion has so far been studied in several disciplines. We consider locomotion at the level of a body frame (position and orientation), rather than through the complexity of a many-joint kinematic system as in other approaches. Our approach concentrates on the computational foundations of human locomotion; the ultimate goal is a model that explains the shape of human locomotion trajectories in space. We start from the behavior of trajectories on the ground during intentional locomotion. When humans walk, they put one foot in front of the other, and consequently the direction of motion is determined by the body orientation. This is what we call the nonholonomic behavior hypothesis. In the case of a sideways step, however, the body orientation is not coupled to the tangential direction of the trajectory, and the hypothesis no longer holds: the locomotion behavior becomes holonomic. The aim of this thesis is to distinguish these two behaviors and to exploit them in neuroscience, robotics, and computer animation. The first part of the thesis determines the configurations of the holonomic behavior through an experimental protocol and an original analytical tool that segments the nonholonomic and holonomic portions of any trajectory. In the second part, we present a model unifying nonholonomic and holonomic behaviors. This model combines the three velocities that generate human locomotion: forward, angular, and lateral. The experimental data from the first part are used in an inverse optimal control approach to find a multi-objective cost function that produces trajectories matching those of natural human locomotion. The last part applies the two behaviors to synthesize human locomotion in computer animation. Each locomotion clip is characterized by its three velocities and is therefore a point in a 3D control space. We collected a library of locomotion clips at different velocities, i.e., points in this 3D space, and structured them as a cloud of tetrahedra. When a desired velocity is given, it is projected into the 3D space and the tetrahedron containing it is found; the new animation is then interpolated from the four locomotion clips at the vertices of the selected tetrahedron. We demonstrate several animation scenarios on a virtual character.
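    The tetrahedral interpolation step lends itself to a short sketch. The Python code below (using NumPy and SciPy; the velocity values are placeholders, not data from the thesis) tetrahedralizes a small clip library in the 3D control space, locates the tetrahedron containing a desired (forward, angular, lateral) velocity, and returns barycentric weights for blending the four clips at its vertices.

```python
import numpy as np
from scipy.spatial import Delaunay

# Placeholder library: each row is the (forward, angular, lateral)
# velocity of one recorded locomotion clip.
velocities = np.array([
    [1.2,  0.0,  0.0],   # straight walk
    [0.8,  0.5,  0.0],   # curved walk
    [0.0,  0.0,  0.6],   # side step
    [1.0, -0.4,  0.2],
    [0.5,  0.2, -0.3],
])

tri = Delaunay(velocities)  # tetrahedralize the 3D control space

def blend_weights(query):
    """Return (clip indices, barycentric weights) of the tetrahedron
    containing `query`; the animation is interpolated from these clips."""
    simplex = int(tri.find_simplex(query))
    if simplex < 0:
        raise ValueError("desired velocity lies outside the sampled library")
    T = tri.transform[simplex]      # precomputed affine transform
    b = T[:3] @ (query - T[3])      # first three barycentric coordinates
    weights = np.append(b, 1.0 - b.sum())
    return tri.simplices[simplex], weights

idx, w = blend_weights(np.array([0.7, 0.06, 0.1]))
print(idx, w)  # four library clips and their interpolation weights
```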

    Multi-touch Detection and Semantic Response on Non-parametric Rear-projection Surfaces

    The ability of human beings to physically touch our surroundings has had a profound impact on our daily lives. Young children learn to explore their world by touch; likewise, many simulation and training applications benefit from natural touch interactivity. As a result, modern interfaces supporting touch input are ubiquitous. Typically, such interfaces are implemented on integrated touch-display surfaces with simple geometry that can be mathematically parameterized, such as planes and spheres; for more complicated non-parametric surfaces, such parameterizations are not available. In this dissertation, we introduce a method for generalizable optical multi-touch detection and semantic response on uninstrumented non-parametric rear-projection surfaces, using an infrared-light-based multi-camera, multi-projector platform. In this paradigm, touch input allows users to manipulate complex virtual 3D content that is registered to and displayed on a physical 3D object. Detected touches trigger responses with specific semantic meaning in the context of the virtual content, such as animations or audio responses. The broad problem of touch detection and response decomposes into three components: determining whether a touch has occurred, determining where it occurred, and determining how to respond. Our fundamental contribution is the design and implementation of a relational lookup table architecture that addresses these challenges by encoding the coordinate relationships among the cameras, the projectors, the physical surface, and the virtual content. Detecting touch input primarily involves distinguishing between touches (actual contact events) and hovers (near-contact proximity events). We present and evaluate two algorithms for touch detection and localization built on the lookup table architecture; one of them, a bounded plane sweep, can additionally estimate hover-surface distances, which we explore for interactions above surfaces. The proposed method is designed to operate with low latency and to be generalizable. We demonstrate touch-based interactions on several physical parametric and non-parametric surfaces, and we evaluate both system accuracy and the accuracy of typical users in touching desired targets on these surfaces. In a formative human-subject study, we examine how touch interactions are used in the context of healthcare and present an exploratory application of this method in patient simulation. A second study highlights the advantages of touch input on content-matched physical surfaces: decreased induced cognitive load, increased system usability, and increased user touch performance. In this experiment, novice users were nearly as accurate when touching targets on a 3D head-shaped surface as on a flat surface, and they rated their own accuracy higher.
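    As a toy illustration of the relational lookup table idea (only its general shape is described in this abstract), the sketch below maps a detected infrared blob at a known camera pixel to a precomputed surface point and semantic content region, then classifies the event as a touch or a hover from an estimated height above the surface. All names, thresholds, and table entries are invented; the dissertation’s actual table encodes the camera-projector-surface-content relationships in far more detail.

```python
# Illustrative only: assumed thresholds, not values from the dissertation.
TOUCH_MM = 5.0    # at or below this height, treat the blob as contact
HOVER_MM = 40.0   # up to this height, treat the blob as a hover

# (camera_id, pixel) -> (precomputed surface point, semantic content region)
lookup = {
    (0, (312, 148)): ((0.10, 0.02, 0.31), "left_eye"),
    (0, (415, 220)): ((0.14, 0.05, 0.28), "cheek"),
}

def classify(camera_id, pixel, height_mm):
    """Map a detected IR blob to a semantic touch/hover response."""
    entry = lookup.get((camera_id, pixel))
    if entry is None:
        return None                            # blob not over the surface
    surface_xyz, region = entry
    if height_mm <= TOUCH_MM:
        return ("touch", region, surface_xyz)  # trigger animation/audio
    if height_mm <= HOVER_MM:
        return ("hover", region, surface_xyz)  # above-surface interaction
    return None

print(classify(0, (312, 148), height_mm=2.0))  # -> touch on "left_eye"
```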

    Interactive avatar control: Case studies on physics and performance based character animation

    Master's thesis (Master of Science).

    Analysis and simulation of reaching movements constrained in position and orientation for a virtual humanoid

    The simulation of human movement is an active research topic, with applications in ergonomic analysis to aid the design of workstations. This thesis concerns the automatic generation of reaching tasks in the horizontal plane for a virtual humanoid. Given an objective expressed in the task space, such tasks require coordination of all joints of the mannequin. One of the main difficulties in simulating realistic movements is the natural redundancy of the human body. Our approach focuses on two aspects: the motion of the hand in the task space (spatial path and temporal profile), and the coordination of the different kinematic sub-chains. To characterize human movement, we conducted a motion-capture campaign of reaching gestures constraining the position and orientation of the hand in the horizontal plane. These acquisitions gave us the spatial and temporal evolution of the hand in the task space, in translation and in rotation. Coupled with a replay method, the acquired data also allowed us to analyze the intrinsic relations that link the task space to the joint space of the mannequin. The automatic generation scheme for realistic motion is based on a stack of tasks with a kinematic approach. The working assumption for simulating the gesture is to follow the shortest path in the task space while bounding the cost in the joint space. The scheme is tuned by a set of parameters, resulting in a map of settings that can simulate a class of realistic movements. Finally, the scheme is validated by a quantitative and qualitative comparison between the simulation and the human gesture.
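    The abstract does not name the underlying solver, but one standard kinematic scheme consistent with "follow the shortest path in the task space while bounding the cost in the joint space" is damped least-squares differential inverse kinematics, sketched below with a placeholder Jacobian for a toy two-joint arm.

```python
import numpy as np

def dls_step(J, task_error, damping=0.1):
    """Joint velocity minimizing ||J q_dot - e||^2 + damping^2 ||q_dot||^2:
    track the straight-line task-space error while penalizing (bounding)
    joint-space motion."""
    lam2 = damping ** 2
    return J.T @ np.linalg.solve(J @ J.T + lam2 * np.eye(J.shape[0]),
                                 task_error)

# Placeholder Jacobian of a two-joint planar arm at its current pose.
J = np.array([[-0.5, -0.2],
              [ 0.8,  0.3]])
e = np.array([0.05, -0.02])   # hand position error in the task space
print(dls_step(J, e))         # one integration step of the joint velocities
```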

    The Irresistible Animacy of Lively Artefacts

    This thesis explores the perception of ‘liveliness’, or ‘animacy’, in robotically driven artefacts. This perception is irresistible, pervasive, aesthetically potent, and poorly understood. I argue that the Cartesian rationalist tendencies of robotics and artificial intelligence research cultures, and the associated cognitivist theories of mind, fail to acknowledge the perceptual and instinctual emotional affects that lively artefacts elicit. The thesis examines how we come to see artefacts with particular qualities of motion as alive, and asks which notions of cognition can explain these perceptions. ‘Irresistible animacy’ is our human tendency to be drawn to the primitive and strangely thrilling experience of lively artefacts. I follow two research methodologies: interdisciplinary scholarship, and my artistic practice of building lively artefacts. I have developed an approach that draws on first-order cybernetics’ central animating principle of feedback control and on second-order cybernetics’ concerns with cognition. Its foundations lie in practices of machine making that embody and perform animate behaviour, as both scientific and artistic pursuits; these practices have inspired embodied, embedded, enactive, and extended notions of cognition. My theoretical framework draws upon literature on visual perception, behavioural and social psychology, puppetry, animation, cybernetics, robotics, interaction, and aesthetics. I take as a starting point the understanding that the vertebrate visual system includes active feature-detection for animate agents in our environment and actively constructs the causal and social structure of that environment. I suggest that perceptual ambiguity is at the centre of all animated art forms: ambiguity encourages natural curiosity and interactive participation, and it elicits complex visceral qualities of presence and the uncanny. In the making of my own Lively Artefacts, I demonstrate a series of approaches including abstraction, artificial life algorithms, and reactive techniques.
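    As a toy illustration, not drawn from the thesis, of the feedback-control principle mentioned above: even a single proportional feedback loop with a little noise produces motion that observers readily read as lively, in the sense of hesitating, wobbling pursuit.

```python
import random

def step(agent, target, gain=0.15, jitter=0.05):
    """Move the agent a fraction of the way toward the target, with
    small random perturbations that read as hesitation or wobble."""
    ax, ay = agent
    tx, ty = target
    ax += gain * (tx - ax) + random.uniform(-jitter, jitter)
    ay += gain * (ty - ay) + random.uniform(-jitter, jitter)
    return ax, ay

agent, target = (0.0, 0.0), (1.0, 0.5)
for t in range(5):
    agent = step(agent, target)
    print(f"t={t}: x={agent[0]:.3f}, y={agent[1]:.3f}")
```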