89 research outputs found

    Cognitive Modeling for Computer Animation: A Comparative Review

    Cognitive modeling is a provocative new paradigm that paves the way towards intelligent graphical characters by providing them with logic and reasoning skills. Cognitively empowered, self-animating characters will in the near future see widespread use in the interactive game, multimedia, virtual reality and production animation industries. This review covers three recently published papers from the field of cognitive modeling for computer animation. The approaches and techniques employed are very different. The cognition model in the first paper is built on top of Soar, which is intended as a general cognitive architecture for developing systems that exhibit intelligent behaviors. The second paper uses an active plan tree and a plan library to achieve fast, robust reactivity to changes in the environment. The third paper, based on an AI formalism known as the situation calculus, develops a cognitive modeling language called CML and uses it to specify a behavior outline, or sketch plan, that directs the characters in terms of goals. Instead of presenting each paper in isolation and then comparatively analyzing them, we take a top-down approach, first classifying the field into three categories and then placing each paper in the appropriate category. We hope that this provides a more cohesive, systematic view of the cognitive modeling approaches employed in computer animation.
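
    As a concrete illustration of the plan-library idea used by the second paper, the sketch below shows, in Python, one way a self-animating character could re-select a plan whenever the perceived world state changes. The plan names, predicates and actions are invented for illustration and are not taken from any of the reviewed systems.

```python
# Illustrative sketch of reactive plan selection from a plan library.
# All plans, predicates and actions here are hypothetical examples.
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class Plan:
    name: str
    applicable: Callable[[Dict], bool]  # precondition on the perceived world state
    steps: List[str]                    # primitive actions the animator executes


@dataclass
class Character:
    library: List[Plan]
    active: Optional[Plan] = None

    def react(self, world: Dict) -> List[str]:
        """Drop the active plan if the world no longer supports it,
        then pick the first applicable plan from the library."""
        if self.active is None or not self.active.applicable(world):
            self.active = next((p for p in self.library if p.applicable(world)), None)
        return self.active.steps if self.active else ["idle"]


library = [
    Plan("flee", lambda w: w.get("enemy_near", False), ["turn_away", "run"]),
    Plan("patrol", lambda w: True, ["walk_to_waypoint", "look_around"]),
]
guard = Character(library)
print(guard.react({"enemy_near": False}))  # ['walk_to_waypoint', 'look_around']
print(guard.react({"enemy_near": True}))   # ['turn_away', 'run']
```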

    Models and evaluation of human-machine systems

    "September 1993.""Prepared for: International Atomic Energy Association [sic], Wagramerstrasse 5, P. 0. Box 100 A-1400 Vienna, Austria."Part of appendix A and bibliography missingIncludes bibliographical referencesThe field of human-machine systems and human-machine interfaces is very multidisciplinary. We have to navigate between the knowledge waves brought by several areas of the human learning: cognitive psychology, artificial intelligence, philosophy, linguistics, ergonomy, control systems engineering, neurophysiology, sociology, computer sciences, among others. At the present moment, all these disciplines seek to be close each other to generate synergy. It is necessary to homogenize the different nomenclatures and to make that each one can benefit from the results and advances found in the other. Accidents like TMI, Chernobyl, Challenger, Bhopal, and others demonstrated that the human beings shall deal with complex systems that are created by the technological evolution more carefully. The great American writer Allan Bloom died recently wrote in his book 'The Closing of the American Mind' (1987) about the universities curriculum that are commonly separated in tight departments. This was a necessity of the industrial revolution that put emphasis in practical courses in order to graduate specialists in many fields. However, due the great complexity of our technological world, we feel the necessity to integrate again those disciplines that one day were separated to make possible their fast development. This Report is a modest trial to do this integration in a holistic way, trying to capture the best tendencies in those areas of the human learning mentioned in the first lines above. I expect that it can be useful to those professionals who, like me, would desire to build better human-machine systems in order to avoid those accidents also mentioned above

    A society of mind approach to cognition and metacognition in a cognitive architecture

    This thesis investigates the concept of mind as a control system using the "Society of Agents" metaphor. "Society of Agents" describes the collective behaviours of simple and intelligent agents. A "Society of Mind" is more than a collection of task-oriented and deliberative agents; it is a powerful concept for mind research and can benefit from the use of metacognition. The aim is to develop a self-configurable computational model using the concept of metacognition. A six-tiered SMCA (Society of Mind Cognitive Architecture) control model is designed that relies on a society of agents operating using metrics associated with the principles of artificial economics in animal cognition. This research investigates the concept of metacognition as a powerful catalyst for control, unification and self-reflection. Metacognition is applied to BDI models with respect to planning, reasoning, decision making, self-reflection, problem solving, learning and the general process of cognition in order to improve performance. One perspective on how to develop metacognition in an SMCA model is based on the distinction between metacognitive strategies and metacomponents, or metacognitive aids. Metacognitive strategies denote activities such as metacomprehension (remedial action), metamanagement (self-management) and schema training (meaningful learning over cognitive structures). Metacomponents are aids for the representation of thoughts. Developing an efficient, intelligent and optimal agent through the use of metacognition requires the design of a multi-layered control model that spans simple to complex levels of agent action and behaviour. The SMCA model has been designed and implemented with six layers: reflexive, reactive, deliberative (BDI), learning (Q-learner), metacontrol and metacognition.
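
    To make the six-tier idea easier to picture, here is a rough Python sketch of a layered control loop in the spirit of the SMCA model: each layer may propose an action and higher layers can override lower ones. The arbitration rule, layer functions and percept keys are assumptions made for illustration, not the thesis's implementation.

```python
# Rough sketch of a six-layer control loop (assumed arbitration: the highest
# layer with an opinion wins). Layer logic and percept keys are illustrative.
from typing import Callable, Dict, List, Optional, Tuple

Layer = Callable[[Dict], Optional[str]]  # maps a percept to a proposed action (or None)


def reflexive(p: Dict) -> Optional[str]:
    return "withdraw" if p.get("pain") else None


def reactive(p: Dict) -> Optional[str]:
    return "avoid_obstacle" if p.get("obstacle") else None


def deliberative(p: Dict) -> Optional[str]:
    return p.get("bdi_intention")        # e.g. an intention chosen by a BDI planner


def learning(p: Dict) -> Optional[str]:
    return p.get("q_policy_action")      # e.g. an action suggested by a Q-learner


def metacontrol(p: Dict) -> Optional[str]:
    return None                          # placeholder for arbitration between layers


def metacognition(p: Dict) -> Optional[str]:
    # Self-reflection: ask for replanning when confidence in the current policy is low.
    return "replan" if p.get("confidence", 1.0) < 0.3 else None


LAYERS: List[Tuple[str, Layer]] = [
    ("metacognition", metacognition), ("metacontrol", metacontrol),
    ("learning", learning), ("deliberative", deliberative),
    ("reactive", reactive), ("reflexive", reflexive),
]


def select_action(percept: Dict) -> Tuple[str, str]:
    """Return (layer, action) for the highest layer that proposes an action."""
    for name, layer in LAYERS:
        action = layer(percept)
        if action is not None:
            return name, action
    return "none", "do_nothing"


print(select_action({"obstacle": True}))                                 # ('reactive', 'avoid_obstacle')
print(select_action({"bdi_intention": "goto_goal", "confidence": 0.2}))  # ('metacognition', 'replan')
```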

    Automated generation of geometrically-precise and semantically-informed virtual geographic environments populated with spatially-reasoning agents

    Multi-Agent Geo-Simulation (MAGS) is a paradigm for modeling and simulating dynamic phenomena in a variety of application domains such as transportation, telecommunications, the environment, etc. MAGS is used to study and analyze phenomena involving a large number of simulated actors (implemented as agents) that evolve in, and interact with, an explicit representation of space called a Virtual Geographic Environment (VGE). In order to interact with its geographic environment, which may be dynamic, complex and large-scale, an agent must first have a detailed representation of it. Classical VGEs are generally limited to a geometric representation of the real world, leaving aside the topological and semantic information that characterizes it. This has the consequence, on the one hand, of producing implausible multi-agent simulations and, on the other hand, of reducing the spatial reasoning capabilities of situated agents. Path planning is a typical example of the spatial reasoning an agent may need in a MAGS. Classical path planning approaches are limited to computing an obstacle-free path linking two positions in space. They take into account neither the characteristics of the environment (topological and semantic) nor those of the agents (types and capabilities). Situated agents therefore lack the means to acquire the knowledge about the virtual environment needed to make informed spatial decisions. To address these limitations, we propose a new approach to automatically generate Informed Virtual Geographic Environments (IVGE) using data provided by Geographic Information Systems (GIS), enriched with semantic information, in order to produce accurate and more realistic MAGS. In addition, we present a hierarchical path planning algorithm that takes advantage of the enriched and optimized description of the IVGE to provide agents with a path that accounts for both the characteristics of their virtual environment and their types and capabilities. Finally, we propose an approach to managing knowledge about the virtual environment that aims to support informed decision making and spatial reasoning by situated agents.
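
    As a simplified illustration of what capability-aware path planning over a semantically annotated environment can look like, the Python sketch below restricts a shortest-path search to edges whose terrain label the agent is able to traverse. The graph, terrain labels and capability sets are toy examples and do not reproduce the thesis's hierarchical algorithm or IVGE data structures.

```python
# Toy capability-aware shortest-path search over a semantically labeled graph.
import heapq
from typing import Dict, List, Set, Tuple

# node -> list of (neighbour, cost, terrain label)
Graph = Dict[str, List[Tuple[str, float, str]]]


def plan_path(graph: Graph, start: str, goal: str, capabilities: Set[str]) -> List[str]:
    """Dijkstra search restricted to edges whose terrain the agent can traverse."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, step, terrain in graph.get(node, []):
            if terrain in capabilities and nxt not in visited:
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return []  # no traversable path for this agent type


vge: Graph = {
    "A": [("B", 1.0, "road"), ("C", 2.0, "water")],
    "B": [("D", 5.0, "road")],
    "C": [("D", 1.0, "water")],
}
print(plan_path(vge, "A", "D", {"road"}))           # pedestrian agent: ['A', 'B', 'D']
print(plan_path(vge, "A", "D", {"road", "water"}))  # amphibious agent: ['A', 'C', 'D']
```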

    A Sandbox in Which to Learn and Develop Soar Agents

    It is common for military personnel to leverage simulations (and simulators) as cost-effective tools to train and become proficient at various tasks (e.g., flying an aircraft and/or performing a mission, among others). These training simulations often need to represent humans within the simulated world in a realistic manner. Realistic implies creating simulated humans that exhibit behaviors which mimic real-world decision making and actions. Typically, techniques from the domain of artificial intelligence are used to create the decision-making logic. Although there are several approaches to developing intelligent agents, we focus on leveraging an open-source project called Soar to define agent behavior. This research took an off-the-shelf open-source software product (called the AI sandbox) that facilitates the creation of 3D virtual worlds and interfaced it to the Soar package. Because the world created by the sandbox is rich in features, easily configurable through a simple scripting system, and visually engaging, it is ideal as a learning platform for developing Soar agents more aligned with military simulations. In summary, this research develops a platform (or learning environment) for learning how to develop Soar-based agents.
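
    A rough sketch of the kind of bridge this implies is shown below: percepts from the simulated world are written onto the Soar agent's input link, the agent runs a decision cycle, and the commands it produces are applied back to the world. The sandbox object and its sense()/apply() methods are hypothetical stand-ins for the AI sandbox scripting layer, and the SML calls are written from memory of the standard Soar Markup Language (SML) client interface; exact names should be checked against the Soar release in use.

```python
# Hypothetical world-to-Soar bridge; sandbox.sense()/apply() are invented
# placeholders, and the SML calls should be verified against the Soar docs.
import Python_sml_ClientInterface as sml  # ships with the Soar distribution

kernel = sml.Kernel.CreateKernelInNewThread()
agent = kernel.CreateAgent("scout")
agent.LoadProductions("scout.soar")       # hypothetical rule file defining behavior

input_link = agent.GetInputLink()
enemy_wme = agent.CreateStringWME(input_link, "enemy-visible", "no")


def step(sandbox):
    """One sense-think-act cycle: push percepts in, run Soar, pull commands out."""
    agent.Update(enemy_wme, "yes" if sandbox.sense("enemy") else "no")
    agent.RunSelf(1)                      # run one decision cycle
    for i in range(agent.GetNumberCommands()):
        cmd = agent.GetCommand(i)
        sandbox.apply(cmd.GetCommandName(), cmd.GetParameterValue("target"))
        cmd.AddStatusComplete()           # mark the output command as executed
```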

    Towards perceptual intelligence : statistical modeling of human individual and interactive behaviors

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Architecture, 2000. Includes bibliographical references (p. 279-297). This thesis presents a computational framework for the automatic recognition and prediction of different kinds of human behaviors from video cameras and other sensors, via perceptually intelligent systems that automatically sense and correctly classify human behaviors by means of machine perception and machine learning techniques. In the thesis I develop the statistical machine learning algorithms (dynamic graphical models) necessary for detecting and recognizing individual and interactive behaviors. In the case of interactions, two Hidden Markov Models (HMMs) are coupled in a novel architecture called Coupled Hidden Markov Models (CHMMs) that explicitly captures the interactions between them. The algorithms for learning the parameters from data, as well as for doing inference with these models, are developed and described. Four systems that experimentally evaluate the proposed paradigm are presented: (1) LAFTER, an automatic face detection and tracking system with facial expression recognition; (2) a Tai-Chi gesture recognition system; (3) a pedestrian surveillance system that recognizes typical human-to-human interactions; and (4) a SmartCar for driver maneuver recognition. These systems capture human behaviors of different natures and increasing complexity: first, isolated single-user facial expressions; then two-hand gestures and human-to-human interactions; and finally complex behaviors where human performance is mediated by a machine, more specifically a car. The metric used for quantifying the quality of the behavior models is their accuracy: how well they are able to recognize the behaviors on testing data. Statistical machine learning usually suffers from a lack of data for estimating all the parameters in the models. To alleviate this problem, synthetically generated data are used to bootstrap the models, creating 'prior models' that are further trained using much less real data than would otherwise be required. The Bayesian nature of the approach lets us do so. The predictive power of these models lets us categorize human actions very soon after the beginning of the action. Because of the generic nature of the typical behaviors of each of the implemented systems, there is reason to believe that this approach to modeling human behavior would generalize to other dynamic human-machine systems. This would allow us to automatically recognize people's intended actions, and thus build control systems that dynamically adapt to better suit the human's purposes. By Nuria M. Oliver. Ph.D.
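
    The key modelling idea behind CHMMs is that each chain's next state is conditioned on the previous states of both chains. The numpy sketch below, with toy numbers rather than anything from the thesis, shows that coupling and the common trick of folding the two chains into a single HMM over the joint state for inference.

```python
# Minimal sketch of the coupling idea behind Coupled HMMs (CHMMs): each chain's
# next state depends on the previous states of *both* chains. Toy values only.
import numpy as np

n = 2  # states per chain
# A0[i, j, k] = P(next state k of chain 0 | previous state i of chain 0, j of chain 1)
A0 = np.array([[[0.9, 0.1], [0.6, 0.4]],
               [[0.3, 0.7], [0.2, 0.8]]])
# A1[i, j, l] = P(next state l of chain 1 | previous state i of chain 0, j of chain 1)
A1 = np.array([[[0.8, 0.2], [0.5, 0.5]],
               [[0.4, 0.6], [0.1, 0.9]]])

# Fold the two coupled chains into one HMM over the joint state (i, j),
# which is one common way to make inference tractable.
joint_T = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                joint_T[i * n + j, k * n + l] = A0[i, j, k] * A1[i, j, l]

assert np.allclose(joint_T.sum(axis=1), 1.0)  # each row is a probability distribution
print(joint_T)
```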

    Autonomous scheduling technology for Earth orbital missions

    The development of a dynamic autonomous system (DYASS) of resources for the mission support of near-Earth NASA spacecraft is discussed, and the current NASA space data system is described from a functional perspective. The future (late 1980s and early 1990s) NASA space data system is discussed. The DYASS concept, autonomous process control, and the NASA space data system are introduced. Scheduling and related disciplines are surveyed, and DYASS as a scheduling problem is also discussed. Artificial intelligence and knowledge representation are considered, as well as the NUDGE system and the I-Space system.

    Barriers to Work Place Advancement: the Experience of the White Female Work Force

    Glass Ceiling Report. GlassCeilingBackground17WhiteFemaleWorkForce.pdf: 8,903 downloads before Oct. 1, 2020.

    Algorithms, abstraction and implementation : a massively multilevel theory of strong equivalence of complex systems

    This thesis puts forward a formal theory of levels and algorithms to provide a foundation for those terms as they are used in much of cognitive science and computer science. Abstraction with respect to concreteness is distinguished from abstraction with respect to detail, resulting in three levels of concreteness and a large number of algorithmic levels, which are levels of detail and the primary focus of the theory. An algorithm, or ideal machine, is a set of sequences of states defining a particular level of detail. Rather than one fundamental ideal machine describing the behaviour of a complex system, there are many possible ideal machines, extending Turing's approach to reflect the multiplicity of system descriptions required to express more than weak input-output equivalence of systems. Cognitive science is concerned with stronger equivalence; e.g., do two models go through the same states at some level of description? The state-based definition of algorithms serves as a basis for such strong equivalence and facilitates formal renditions of abstraction and implementation as relations between algorithms. It is possible to prove within the new framework, for example, whether one given algorithm is a valid implementation of another, or whether two unequal algorithms have a common abstraction. Some implications of the theory are discussed, notably a characterisation of connectionist versus classical models.
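
    To make the state-sequence reading concrete, the LaTeX sketch below shows one simplified way such definitions might be written down; it ignores stuttering and step refinement and is not the thesis's exact formalism.

```latex
% Simplified sketch of algorithms as sets of state sequences and of
% implementation as a relation between them (illustrative, not the thesis's).
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $S$ be the set of states visible at some level of detail. An
\emph{algorithm} (ideal machine) at that level is a set of state sequences
$A \subseteq S^{*} \cup S^{\omega}$. Given an abstraction map
$\alpha\colon S_{c} \to S_{a}$ from concrete to abstract states, a concrete
algorithm $C$ \emph{implements} an abstract algorithm $A$ when every concrete
run projects to an abstract run:
\[
  C \models_{\alpha} A
  \quad\Longleftrightarrow\quad
  \forall\, (s_{0}, s_{1}, \dots) \in C:\;
  (\alpha(s_{0}), \alpha(s_{1}), \dots) \in A.
\]
Two systems are then strongly equivalent at this level of description when
they realise the same algorithm $A$, i.e.\ they pass through
$\alpha$-corresponding states, not merely the same input--output pairs.
\end{document}
```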

    Cognition and consciousness : developing a conceptual framework

    English: The role of consciousness within the process of cognition has been very unclear to date. On the one hand, the phenomenological experience of consciousness is very real, but on the other hand, theories of cognition seem to struggle to account for consciousness. Consciousness and related mentalistic phenomena have been neglected to a great extent in the past. It is only recently that cognitive theorists have voiced their concern about the neglect of consciousness within cognitive studies and psychology. This study assumed that the difficulty of accounting for consciousness within cognition is due to the inadequacy of theoretical perspectives. The aim of this study was therefore to develop a conceptual framework within which the relationship between cognition and consciousness can be viewed, and which can also open avenues for theory construction and empirical investigation. To achieve this goal, a particular strategy was followed. The assumption from which the argument in this study originated was that cognition and consciousness must be viewed from a systemic emergentist perspective. From this assumption certain systemic emergentist principles followed, which include emergence, structure, function, the fusion between structure and function, the constitution of systemic wholes, and interaction. Two principles, or perspectives, namely structure and function, were used in a heuristic fashion to discuss approaches to consciousness. These two perspectives need to be incorporated in an understanding and definition of consciousness. The same strategy was followed with the analysis of four mainstream approaches to cognition, namely the information processing approach, the move beyond information processing, symbolicism and connectionism. It was hypothesised that the ability of a particular approach to account for the systemic emergentist principles determines its ability to incorporate consciousness within the process of cognition. The nature of structure, function and emergence was clarified from the perspective of General Systems Theory and Emergent Interactionism. The various approaches to cognition contributed in different ways to the understanding of the systemic emergentist principles. A conceptual model, namely the systemic emergentist model, was developed, based on the principle of a fused function and structure. This means that a system has a microstructure consisting of active and functional elements. The concept of a fused function and structure overcomes the traditional separation of structure and function/process. This fusion enables emergence to take place. Due to the configuration of elements (processes), a system as a whole and its properties emerge. Systems form subsystems in a hierarchical fashion, which allows for interaction between levels of systems. Emergents cannot be reduced to the elements of a system. The model was evaluated against the characteristics of cognition and consciousness determined on the psychological and phenomenological levels of analysis. This showed that consciousness is functional and an integral part of the process of cognition. In terms of the requirements for a conceptual model as a rudimentary explanatory and heuristic device, it was found that the systemic emergentist model was able to satisfy these requirements to a large extent. The model was also able to indicate further avenues for research and to point out certain deficiencies in itself. (An Afrikaans translation of this abstract accompanies the original record.) Thesis (DPhil)--University of Pretoria, 1995. Psychology. DPhil. Unrestricted.