
    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices that monitor users' physiological condition. User satisfaction is key to any product's acceptance, and computer applications and video games provide a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed: we show that it is possible to estimate user emotion with a software-only method.
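    The fuzzy inference step at the heart of such a system can be illustrated with a minimal sketch. The membership function, the event attributes (desirability, expectation), and the single joy rule below are illustrative assumptions in the spirit of FLAME-style appraisal rules, not the paper's actual implementation.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_joy(desirability, expectation):
    """One Mamdani-style rule: joy is high when a desirable event was
    unexpected. Antecedents are combined with min-AND, as in classic
    fuzzy inference. Rule shape and constants are illustrative."""
    desirable = tri(desirability, 0.0, 1.0, 2.0)   # high near desirability = 1
    unexpected = 1.0 - expectation                 # fuzzy negation of expectation
    return min(desirable, unexpected)

# A desirable (0.9) but largely unexpected (expectation 0.2) game event
print(round(estimate_joy(0.9, 0.2), 2))
```

A full model would evaluate many such rules per game event and aggregate their outputs into emotion intensities that evolve over time.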

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    From audiences to mobs : crowd simulation with psychological factors

    Ankara: Department of Computer Engineering and Institute of Engineering and Science, Bilkent University, 2010. Thesis (Ph.D.) -- Bilkent University, 2010. Includes bibliographical references (leaves 90-101). Durupınar, Funda, Ph.D.
    Crowd simulation has a wide range of application areas, such as biological and social modeling, military simulations, computer games and movies. Simulating the behavior of animated virtual crowds has been a challenging task for the computer graphics community. As well as the physical and geometrical aspects, the semantics underlying the motion of real crowds inspire the design and implementation of virtual crowds. Psychology helps us understand the motivations of the individuals constituting a crowd. There has been extensive research on incorporating psychological models into the simulation of autonomous agents. In our study, however, instead of the psychological state of an individual agent as such, we are interested in the overall behavior of a crowd that consists of virtual humans with various psychological states. For this purpose, we incorporate the three basic constituents of affect: personality, emotion and mood. Each of these elements contributes variably to the emergence of different aspects of behavior. We thus examine, by changing the parameters, how groups of people with different characteristics interact with each other and, accordingly, how the global crowd behavior is influenced. In the social psychology literature, crowds are classified as mobs and audiences. Audiences are passive crowds, whereas mobs are active crowds with emotional, irrational and seemingly homogeneous behavior. In this thesis, we examine how audiences turn into mobs and simulate the common properties of mobs to create collective misbehavior. So far, crowd simulation research has focused on panicking crowds among all types of mobs; we extend the state of the art to simulate different types of mobs based on this taxonomy.
    We demonstrate various scenarios that realize the behavior of distinct mob types. Our model is built on top of an existing crowd simulation system, HiDAC (High-Density Autonomous Crowds). HiDAC provides us with the physical and low-level psychological features of crowds. The user normally sets these parameters to model the non-uniformity and diversity of the crowd. In our work, we free the user of the tedious task of low-level parameter tuning and combine all these behaviors into distinct psychological factors. We present the results of our experiments on whether the incorporation of a personality model into HiDAC was perceived as intended.
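    The idea of replacing low-level parameter tuning with a few psychological factors can be sketched as follows. The trait names follow the five-factor (OCEAN) personality model; the parameter names and linear weights are illustrative assumptions, not HiDAC's actual values.

```python
from dataclasses import dataclass

@dataclass
class Personality:              # five-factor (OCEAN) traits, each in [0, 1]
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

def clamp(v, lo=0.0, hi=1.0):
    return max(lo, min(hi, v))

def low_level_parameters(p):
    """Derive low-level crowd-behaviour parameters from personality traits.
    The linear weights below are illustrative assumptions only."""
    return {
        # extraverted agents walk faster and push through the crowd more
        "walking_speed": clamp(0.4 + 0.5 * p.extraversion),
        "pushing":       clamp(0.2 + 0.6 * p.extraversion - 0.4 * p.agreeableness),
        # neurotic agents panic more easily
        "panic":         clamp(0.7 * p.neuroticism),
        # conscientious agents wait at doors rather than shove through
        "patience":      clamp(0.3 + 0.6 * p.conscientiousness),
    }

aggressive = Personality(0.5, 0.2, 0.9, 0.1, 0.6)
print(low_level_parameters(aggressive))
```

With such a mapping, the user specifies one high-level trait vector per group instead of tuning each low-level behaviour by hand.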

    Probabilistic Models of Motor Production

    N. Bernstein defined the ability of the central nervous system (CNS) to control the many degrees of freedom of a physical body, with all its redundancy and flexibility, as the main problem in motor control. He pointed out that man-made mechanisms usually have one, sometimes two, degrees of freedom (DOF); when the number of DOF increases further, it becomes prohibitively hard to control them. The brain, however, seems to perform such control effortlessly. He suggested how the brain might deal with this: when a motor skill is being acquired, the brain artificially limits the degrees of freedom, leaving only one or two. As the skill level increases, the brain gradually "frees" the previously fixed DOF, applying control when needed and in directions which have to be corrected, eventually arriving at a control scheme where all the DOF are "free". This approach of reducing the dimensionality of motor control remains relevant even today. One possible solution to Bernstein's problem is the hypothesis of motor primitives (MPs) - small building blocks that constitute complex movements and facilitate motor learning and task completion. Just as in the visual system, having a homogeneous hierarchical architecture built of similar computational elements may be beneficial. When studying such a complicated object as the brain, it is important to define at which level of detail one works and which questions one aims to answer. David Marr suggested three levels of analysis: 1. computational, analysing which problem the system solves; 2. algorithmic, questioning which representation the system uses and which computations it performs; 3. implementational, finding how such computations are performed by neurons in the brain. In this thesis we stay at the first two levels, seeking the basic representation of motor output. In this work we present a new model of motor primitives that comprises multiple interacting latent dynamical systems, and give it a full Bayesian treatment.
    Modelling within the Bayesian framework, in my opinion, must become the new standard in hypothesis testing in neuroscience. Only the Bayesian framework gives us guarantees when dealing with the inevitable plethora of hidden variables and uncertainty. The special type of coupling of dynamical systems we propose, based on the Product of Experts, has many natural interpretations in the Bayesian framework. If the dynamical systems run in parallel, it yields Bayesian cue integration. If they are organized hierarchically due to serial coupling, we get hierarchical priors over the dynamics. If one of the dynamical systems represents sensory state, we arrive at sensory-motor primitives. The compact representation that follows from the variational treatment allows learning of a library of motor primitives. Once primitives are learned separately, a combined motion can be represented as a matrix of coupling values. We performed a set of experiments to compare different models of motor primitives. In a series of two-alternative forced choice (2AFC) experiments, participants discriminated between natural and synthesised movements, thus running a graphics Turing test. When available, the Bayesian model score predicted the naturalness of the perceived movements. For simple movements, like walking, Bayesian model comparison and psychophysics tests indicate that one dynamical system is sufficient to describe the data. For more complex movements, like walking and waving, motion can be better represented as a set of coupled dynamical systems. We also experimentally confirmed that a Bayesian treatment of model learning on motion data is superior to a simple point estimate of latent parameters. Experiments with non-periodic movements show that they do not benefit from more complex latent dynamics, despite having high kinematic complexity. By having fully Bayesian models, we could quantitatively disentangle the influence of motion dynamics and pose on the perception of naturalness.
    We confirmed that rich and correct dynamics are more important than the kinematic representation. There are numerous further directions of research. In the models we devised for multiple parts, even though the latent dynamics was factorized over a set of interacting systems, the kinematic parts were completely independent. Thus, interaction between the kinematic parts could be mediated only by the latent dynamics interactions. A more flexible model would allow a dense interaction on the kinematic level too. Another important problem relates to the representation of time in Markov chains. Discrete-time Markov chains form an approximation to continuous dynamics. As the time step is assumed to be fixed, we face the problem of time step selection. Time is also not an explicit parameter in Markov chains, which prohibits explicit optimization of time as a parameter and reasoning (inference) about it. For example, in optimal control, boundary conditions are usually set at exact time points, which is not an ecological scenario, where time is usually a parameter of optimization. Making time an explicit parameter in the dynamics may alleviate these issues.
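    The Product of Experts coupling has a particularly simple form for Gaussian predictions, where the product of densities is again Gaussian with a precision-weighted mean; when the experts run in parallel this is exactly Bayesian cue integration. The sketch below illustrates only this identity, not the thesis's variational model.

```python
def product_of_gaussian_experts(means, variances):
    """Combine Gaussian expert predictions as a Product of Experts.
    The product of Gaussian densities is Gaussian: precisions add,
    and the mean is the precision-weighted average of expert means."""
    precisions = [1.0 / v for v in variances]
    combined_var = 1.0 / sum(precisions)
    combined_mean = combined_var * sum(p * m for p, m in zip(precisions, means))
    return combined_mean, combined_var

# Two latent dynamical systems predict the next state with different
# certainty; the sharper (low-variance) prediction dominates the product.
mean, var = product_of_gaussian_experts([0.0, 1.0], [1.0, 0.25])
print(mean, var)
```

Here the expert with variance 0.25 carries four times the precision of the other, so the combined mean lies much closer to its prediction.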

    Perception of Human Movement Based on Modular Movement Primitives

    People can identify and understand human movement from very degraded visual information without effort. A few dots representing the positions of the joints are enough to induce a vivid and stable percept of the underlying movement. Because of this perceptual sensitivity, the realistic animation of 3D characters requires great skill. Studying the constituents of movement that looks natural would not only help these artists, but also bring a better understanding of the underlying information processing in the brain. Analogous to the hurdles in animation, the efforts of roboticists reflect the complexity of motion production: controlling the many degrees of freedom of a body requires time-consuming computations. Modularity is one strategy to address this problem: complex movement can be decomposed into simple primitives, and a few primitives can conversely be used to compose a large number of movements. Many types of movement primitives (MPs) have been proposed at different levels of the information processing hierarchy in the brain. MPs have mostly been proposed for movement production, yet modularity based on primitives might similarly enable robust movement perception. For my thesis, I conducted perceptual experiments based on the assumption of a shared representation of perception and action built on MPs. The three types of MPs I investigated are temporal MPs (TMP), dynamical MPs (DMP), and coupled Gaussian process dynamical models (cGPDM). The MP models were trained on natural movements to generate new movements. I then perceptually validated these artificial movements in different psychophysical experiments. In all experiments I used a two-alternative forced choice paradigm, in which human observers were presented with a movement based on motion-capture data and one generated by an MP model. They were then asked to choose the movement they perceived as more natural.
    In the first experiment I investigated walking movements and found that, in line with previous results, a faithful representation of movement dynamics is more important than a good reconstruction of pose. In the second experiment I investigated the role of prediction in perception using reaching movements. Here, I found that the perceived naturalness of the predictions is similar to the perceived naturalness of the movements themselves obtained in the first experiment. Overall, the MP models are able to produce movement that looks natural, with the TMP achieving both the highest perceptual scores and the highest predictiveness of perceived naturalness among the three model classes, suggesting their suitability for a shared representation of perception and action.
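    The 2AFC paradigm yields a simple perceptual score: the proportion of trials on which the model-generated movement is chosen over the motion-captured one, with 0.5 indicating that observers cannot tell the two apart. A minimal sketch, with made-up trial data:

```python
def naturalness_score(choices):
    """Proportion of 2AFC trials in which the model-generated movement
    was chosen as more natural than the motion-captured one.
    A score near 0.5 means observers cannot distinguish them;
    a score near 0.0 means the synthetic movement is easily detected."""
    return sum(choices) / len(choices)

# 1 = synthetic movement chosen, 0 = natural chosen (illustrative data)
trials = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
print(naturalness_score(trials))
```

In practice such scores are compared across model classes (here TMP, DMP, cGPDM) and tested against the chance level of 0.5.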

    An open learning system for special needs education

    The field of special needs education, in the case of speech and language deficiencies, has seen great success utilizing a number of paper-based systems to help young children experiencing difficulty in language acquisition and understanding. These systems employ card- and paper-based illustrations, which are combined to create scenarios for children in order to expose them to new vocabulary in context. While this success has encouraged the use of such systems for a long time, problems have been identified that need addressing. This paper presents research toward an Open Learning system for special needs education that aims to provide an evolution in language learning in the context of understanding spoken instruction. Users of this Open Learning system benefit from open content with novel presentation of keywords and associated context. The learning algorithm is derived from the field of applied computing in human biology, using the concept of spaced repetition and providing a novel augmentation of the memorization process for special needs education in a global Open Education setting.
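    Spaced repetition schedules reviews at increasing intervals, resetting when recall fails. As a rough illustration of the concept, the scheduler below follows the well-known SM-2 family of algorithms; its constants and reset rule are assumptions for illustration, not the paper's learning algorithm.

```python
def next_interval(previous_interval, quality, ease=2.5):
    """Minimal spaced-repetition scheduler in the spirit of SM-2.
    `previous_interval` is in days; `quality` rates the recall from
    0 (total blackout) to 5 (perfect). Failed recalls (< 3) reset
    the schedule so the item is relearned the next day."""
    if quality < 3:
        return 1                      # forgotten: relearn tomorrow
    if previous_interval == 0:
        return 1                      # first successful review
    if previous_interval == 1:
        return 6                      # second successful review
    return round(previous_interval * ease)  # intervals grow geometrically

print(next_interval(6, 5))   # successful recall after 6 days
```

Each successful recall thus pushes the next review further into the future, concentrating practice on items the learner finds difficult.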

    CLiFF Notes: Research in the Language Information and Computation Laboratory of The University of Pennsylvania

    This report takes its name from the Computational Linguistics Feedback Forum (CLiFF), an informal discussion group for students and faculty. However, the scope of the research covered in this report is broader than the title might suggest; this is the yearly report of the LINC Lab, the Language, Information and Computation Laboratory of the University of Pennsylvania. It may at first be hard to see the threads that bind together the work presented here, work by faculty, graduate students and postdocs in the Computer Science, Psychology, and Linguistics Departments, and the Institute for Research in Cognitive Science. It includes prototypical natural language fields such as Combinatory Categorial Grammars, Tree Adjoining Grammars, syntactic parsing and the syntax-semantics interface, but it extends to statistical methods, plan inference, instruction understanding, intonation, causal reasoning, free word order languages, geometric reasoning, medical informatics, connectionism, and language acquisition. With 48 individual contributors and six projects represented, this is the largest LINC Lab collection to date, and the most diverse.

    Computer Graphics Methods for the Learning of Autonomous Virtual Agents

    There are two primary approaches to the behavioural animation of an Autonomous Virtual Agent (AVA). The first, the behavioural model, defines how the AVA reacts to the current state of its environment. In the second, the cognitive model, the AVA uses a thought process allowing it to deliberate over its possible actions. Despite the success of these approaches in several domains, there are two notable limitations which we address in this thesis. First, cognitive models are traditionally very slow to execute, as a tree search over the mapping states → actions must be performed. On the one hand, an AVA can only make sub-optimal decisions and, on the other hand, the number of AVAs that can be used simultaneously in real time is limited. These constraints restrict their applications to a small set of candidate actions. Second, cognitive and behavioural models can act unexpectedly, producing undesirable behaviour in certain regions of the state space. This is because it may be impossible to test them exhaustively over the entire state space, especially if the state space is continuous. This can be worrisome for end-user applications involving AVAs, such as training simulators for cars and aeronautics. Our contributions include the design of novel learning methods for approximating behavioural and cognitive models. They address the problem of input selection, helped by a novel architecture, ALifeE, including virtual sensors and perception, regardless of the machine learning technique utilized. The input dimensionality must be kept as small as possible due to the "curse of dimensionality", well known in machine learning. Thus, ALifeE simplifies and speeds up the process for the designer.
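    The contrast between the two control styles can be sketched as follows; the toy state, actions, and one-step dynamics are illustrative assumptions. The deliberative policy's cost grows exponentially with search depth, which is the bottleneck that motivates approximating such models with learned ones.

```python
def behavioural_policy(state):
    """Behavioural model: a direct state -> action mapping, cheap to run."""
    return "flee" if state["threat"] > 0.5 else "wander"

def cognitive_policy(state, depth=3):
    """Cognitive model: deliberates by searching a small tree of future
    states. The number of simulated states grows exponentially with
    depth, hence the execution-speed limitation noted above."""
    actions = ["flee", "wander"]

    def simulate(s, a):
        # Toy one-step dynamics: fleeing reduces threat, wandering raises it.
        delta = -0.3 if a == "flee" else 0.1
        return {"threat": max(0.0, s["threat"] + delta)}

    def value(s, d):
        if d == 0:
            return -s["threat"]        # prefer low-threat end states
        return max(value(simulate(s, a), d - 1) for a in actions)

    return max(actions, key=lambda a: value(simulate(state, a), depth - 1))

state = {"threat": 0.8}
print(behavioural_policy(state), cognitive_policy(state))
```

A learned approximation, as proposed in the thesis, would aim to reproduce the cognitive policy's choices at close to the behavioural policy's cost.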

    CLiFF Notes: Research in the Language, Information and Computation Laboratory of the University of Pennsylvania

    One concern of the Computer Graphics Research Lab is simulating human task behavior and understanding why the visualization of the appearance, capabilities and performance of humans is so challenging. Our research has produced a system, called Jack, for the definition, manipulation, animation and human factors analysis of simulated human figures. Jack permits the envisionment of human motion by interactive specification and simultaneous execution of multiple constraints, and is sensitive to such issues as body shape and size, linkage, and plausible motions. Enhanced control is provided by natural behaviors such as looking, reaching, balancing, lifting, stepping, walking, grasping, and so on. Although intended for highly interactive applications, Jack is a foundation for other research. The very ubiquitousness of other people in our lives poses a tantalizing challenge to the computational modeler: people are at once the most common object around us, and yet the most structurally complex. Their everyday movements are amazingly fluid, yet demanding to reproduce, with actions driven not just mechanically by muscles and bones but also cognitively by beliefs and intentions. Our motor systems manage to learn how to make us move without leaving us the burden or pleasure of knowing how we did it. Likewise we learn how to describe the actions and behaviors of others without consciously struggling with the processes of perception, recognition, and language. Present technology lets us approach human appearance and motion through computer graphics modeling and three-dimensional animation, but there is considerable distance to go before purely synthesized figures trick our senses. We seek to build computational models of human-like figures which manifest animacy and convincing behavior.
    Towards this end, we: create an interactive computer graphics human model; endow it with reasonable biomechanical properties; provide it with human-like behaviors; use this simulated figure as an agent to effect changes in its world; and describe and guide its tasks through natural language instructions. There are presently no perfect solutions to any of these problems; ultimately, however, we should be able to give our surrogate human directions that, in conjunction with suitable symbolic reasoning processes, make it appear to behave in a natural, appropriate, and intelligent fashion. Compromises will be essential, due to limits in computation, throughput of display hardware, and demands of real-time interaction, but our algorithms aim to balance the physical device constraints with carefully crafted models, general solutions, and thoughtful organization. The Jack software is built on Silicon Graphics Iris 4D workstations because those systems have 3-D graphics features that greatly aid the process of interacting with highly articulated figures such as the human body. Of course, graphics capabilities themselves do not make a usable system. Our research has therefore focused on software to make the manipulation of a simulated human figure easy for a rather specific user population: human factors design engineers or ergonomics analysts involved in visualizing and assessing human motor performance, fit, reach, view, and other physical tasks in a workplace environment. The software also happens to be quite usable by others, including graduate students and animators. The point, however, is that program design has tried to take into account a wide variety of physical problem-oriented tasks, rather than just offer a computer graphics and animation tool for the already computer-sophisticated or skilled animator. As an alternative to interactive specification, a simulation system allows a convenient temporal and spatial parallel programming language for behaviors.
    The Graphics Lab is working with the Natural Language Group to explore the possibility of using natural language instructions, such as those found in assembly or maintenance manuals, to drive the behavior of our animated human agents. (See the CLiFF note entry for the AnimNL group for details.) Even though Jack is under continual development, it has nonetheless already proved to be a substantial computational tool in analyzing human abilities in physical workplaces. It is being applied to actual problems involving space vehicle inhabitants, helicopter pilots, maintenance technicians, foot soldiers, and tractor drivers. This broad range of applications is precisely the target we intended to reach. The general capabilities embedded in Jack attempt to mirror certain aspects of human performance, rather than the specific requirements of the corresponding workplace. We view the Jack system as the basis of a virtual animated agent that can carry out tasks and instructions in a simulated 3D environment. While we have not yet fooled anyone into believing that the Jack figure is real, its behaviors are becoming more reasonable and its repertoire of actions more extensive. When interactive control becomes more labor-intensive than natural language instructional control, we will have reached a significant milestone toward an intelligent agent.