
    Towards general spatial intelligence

    The goal of General Spatial Intelligence is to present a unified theory to support the various aspects of spatial experience, whether physical or cognitive. We acknowledge that GIScience has to assume a particular worldview, resulting from specific positions regarding metaphysics, ontology, epistemology, mind, language, cognition and representation. Implicit positions regarding these domains may allow solutions to isolated problems but often hamper a more encompassing approach. We argue that explicitly defining a worldview allows multi-modal models to be grounded and derived, establishes precise problems, and permits falsifiability. We present an example of such a theory founded on process metaphysics, where the ontological elements are called differences. We show that a worldview has implications regarding the nature of space and, in the case of the chosen metaphysical layer, favours a model of space as true spacetime, i.e. four-dimensionality. Finally, we illustrate the approach using a scenario from psychology and AI-based planning.
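
    The four-dimensional view can be made concrete: under a spacetime ontology, an entity is not an object that moves but the set of spacetime events it occupies. The following is a minimal, hypothetical sketch; the Event class, coordinates, and the meet query are illustrative inventions for a planning-style question, not constructs from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """A point in 4D spacetime: three spatial coordinates plus a time coordinate."""
    x: float
    y: float
    z: float
    t: float

# An entity is represented as the set of events it occupies:
# a four-dimensional trajectory ("worm"), not a moving 3D object.
robot_path  = [Event(t, 0, 0, t) for t in range(5)]   # moves along x over time
door_region = [Event(2, 0, 0, t) for t in range(5)]   # fixed in space, extended in time

def meet(a: list[Event], b: list[Event], eps: float = 0.5) -> bool:
    """Planning-style query: do two spacetime trajectories ever come
    within eps of each other at the same instant?"""
    return any(
        ea.t == eb.t
        and abs(ea.x - eb.x) + abs(ea.y - eb.y) + abs(ea.z - eb.z) < eps
        for ea in a
        for eb in b
    )

print(meet(robot_path, door_region))  # True: the trajectories coincide at t=2
```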

    How could a rational analysis model explain?

    Rational analysis is an influential but contested account of how probabilistic modeling can be used to construct non-mechanistic but self-standing explanatory models of the mind. In this paper, I disentangle and assess several possible explanatory contributions that could be attributed to rational analysis. Although existing models suffer from evidential problems that call their explanatory power into question, I argue that rational analysis modeling can complement mechanistic theorizing by providing models of environmental affordances.
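
    A rational-analysis model derives behavior from environmental statistics rather than from mechanisms. As a hedged illustration (not a model from the paper; the power-law need function follows Anderson-style rational analysis of memory, and the numbers are made up), the sketch below predicts how available a memory should be from how often and how recently it was needed.

```python
def need_odds(uses: list[float], now: float, decay: float = 0.5) -> float:
    """Anderson-style rational analysis of memory: the odds that an item
    is needed now are estimated from its usage history, with each past
    use contributing a power-law-decaying amount of evidence."""
    return sum((now - t) ** (-decay) for t in uses if t < now)

# An item used often and recently should be predicted as more "needed"
# (hence more retrievable) than one used long ago.
recent = need_odds(uses=[90.0, 95.0, 99.0], now=100.0)
stale  = need_odds(uses=[1.0, 5.0, 10.0],  now=100.0)
print(recent > stale)  # True: environmental statistics predict retrievability
```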

    AI and affordances for mental action

    To perceive an affordance is to perceive an object or situation as presenting an opportunity for action. The concept of affordances has been taken up across a wide range of disciplines, including AI. I explore an interesting extension of the concept of affordances in robotics. Among the affordances that artificial systems have been engineered to detect are affordances to deliberate. In psychology, affordances are typically limited to bodily action, so it is noteworthy that AI researchers have found it helpful to extend the concept to encompass mental actions. I propose that psychologists can learn from this extension, and argue that human subjects can perceive mental affordances, such as affordances to attend, affordances to imagine and affordances to count.
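
    In robotics the extension is easy to picture: a controller can detect not only opportunities for bodily action but also an opportunity to deliberate. Below is a minimal, hypothetical sketch; the function, keys, and threshold are invented for illustration and are not from any cited system.

```python
def detect_affordances(percept: dict) -> list[str]:
    """Return perceived affordances, bodily and mental alike."""
    affordances = []
    if percept.get("graspable_object"):
        affordances.append("grasp")          # bodily affordance
    if percept.get("path_clear"):
        affordances.append("move-forward")   # bodily affordance
    # Mental affordance: high uncertainty affords deliberation, i.e. the
    # situation presents an opportunity for a *mental* action rather than
    # (or before) a bodily one.
    if percept.get("pose_uncertainty", 0.0) > 0.3:
        affordances.append("deliberate")
    return affordances

print(detect_affordances({"graspable_object": True, "pose_uncertainty": 0.6}))
# ['grasp', 'deliberate']
```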

    Context-Independent Task Knowledge for Neurosymbolic Reasoning in Cognitive Robotics

    One of the current main goals of artificial intelligence and robotics research is the creation of an artificial assistant capable of flexible, human-like behavior, in order to accomplish everyday tasks. Much of what is, to a human, context-independent task knowledge is what enables this flexibility at multiple levels of cognition. Within this scope, the author analyzes how to acquire, represent, and disambiguate symbolic knowledge that captures context-independent task knowledge abstracted from multiple instances: the thesis elaborates on the problems encountered, implementation constraints, current state-of-the-art practices, and ultimately the solutions newly introduced. The author specifically discusses the acquisition of context-independent task knowledge from large amounts of human-written text and its reusability in the robotics domain; the acquisition of knowledge about the human musculoskeletal dependencies that constrain motion, which allows a better higher-level representation of observed trajectories; and the means of verbalizing partial contextual and instruction knowledge, increasing both the possibilities for interaction with the human and contextual adaptation. All the aforementioned points are supported by evaluation in heterogeneous setups, to show how to make optimal use of combined statistical and symbolic approaches (i.e. neurosymbolic reasoning) in cognitive robotics. This work has been performed to enable context-adaptable artificial assistants, by bringing together knowledge on what is usually regarded as context-independent task knowledge.
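
    One way to picture such a pipeline: task knowledge mined from text is usually underspecified ("pour the milk" does not name a target container), so a statistical component scores candidate groundings and the symbolic layer keeps only those consistent with the robot's current context. A hedged sketch with invented data and names, not the thesis implementation:

```python
# Candidate groundings of a text-derived step, with plausibility scores
# standing in for a learned statistical model trained on human-written text.
text_step = "pour the milk"
candidates = {
    ("milk_carton", "glass"): 0.71,
    ("milk_carton", "bowl"):  0.22,
    ("milk_carton", "sink"):  0.07,
}

# Symbolic context knowledge: which objects are actually present/reachable.
context = {"milk_carton", "glass", "sink"}

def ground(cands: dict, ctx: set) -> tuple:
    """Neurosymbolic combination: filter candidates symbolically (all
    objects must exist in context), then pick the statistically most
    plausible of the survivors."""
    feasible = {g: s for g, s in cands.items() if set(g) <= ctx}
    return max(feasible, key=feasible.get)

print(ground(candidates, context))  # ('milk_carton', 'glass')
```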

    Life is an Adventure! An agent-based reconciliation of narrative and scientific worldviews

    The scientific worldview is based on laws, which are supposed to be certain, objective, and independent of time and context. The narrative worldview, found in literature, myth and religion, is based on stories, which relate the events experienced by a subject in a particular context with an uncertain outcome. This paper argues that the concept of “agent”, supported by the theories of evolution, cybernetics and complex adaptive systems, allows us to reconcile the scientific and narrative perspectives. An agent follows a course of action through its environment with the aim of maximizing its fitness. Navigation along that course combines the strategies of regulation, exploitation and exploration, but needs to cope with often-unforeseen diversions. These can be positive (affordances, opportunities), negative (disturbances, dangers) or neutral (surprises). The resulting sequence of encounters and actions can be conceptualized as an adventure. Thus, the agent appears to play the role of the hero in a tale of challenge and mystery that is very similar to the "monomyth", the basic storyline that underlies all myths and fairy tales according to Campbell [1949]. This narrative dynamic is driven forward in particular by the alternation between prospect (the ability to foresee diversions) and mystery (the possibility of achieving an as yet absent prospect), two aspects of the environment that are particularly attractive to agents. This dynamic generalizes the scientific notion of a deterministic trajectory by introducing a variable “horizon of knowability”: the agent is never fully certain of its further course, but can anticipate it to a degree that depends on its prospect.
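
    The agent-based picture can be rendered as a toy simulation: an agent follows a course of encounters, foresees only a limited horizon ahead, and responds according to each encounter's valence. A minimal sketch; the names, mapping of valences to strategies, and parameters are all illustrative, not from the paper.

```python
import random

random.seed(1)

# A "course of action" as a sequence of encounters; the agent can only
# foresee the next `horizon` steps (its horizon of knowability).
VALENCES = ["affordance", "disturbance", "surprise"]  # positive/negative/neutral
course = [random.choice(VALENCES) for _ in range(10)]

def step(i: int, horizon: int = 2) -> str:
    prospect = course[i + 1 : i + 1 + horizon]   # what the agent can foresee
    encounter = course[i]
    if encounter == "affordance":
        action = "exploit"      # seize the opportunity
    elif encounter == "disturbance":
        action = "regulate"     # counteract the deviation
    else:
        action = "explore"      # a neutral surprise invites exploration
    return f"t={i}: {encounter} -> {action}, prospect={prospect}"

for i in range(len(course)):
    print(step(i))
```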

    CernoCAMAL : a probabilistic computational cognitive architecture

    This thesis presents one possible way to develop a computational cognitive architecture, dubbed CernoCAMAL, that can be used to govern artificial minds probabilistically. The primary aim of the CernoCAMAL research project is to investigate how its predecessor architecture, CAMAL, can be extended to reason probabilistically about domain model objects through perception, and how the probability formalism can be integrated into its BDI (Belief-Desire-Intention) model to coalesce a number of mechanisms and processes. The motivation and impetus for extending CAMAL and developing CernoCAMAL is the considerable evidence that probabilistic thinking and reasoning is linked to cognitive development and plays a role in cognitive functions such as decision making and learning. This leads us to believe that a probabilistic reasoning capability is an essential part of human intelligence; thus, it should be a vital part of any system that attempts to emulate human intelligence computationally. The extensions and augmentations to CAMAL, which are the main contributions of the CernoCAMAL research project, are as follows:
    - The integration of the EBS (Extended Belief Structure), which associates a probability value with every belief statement in order to represent degrees of belief numerically.
    - The inclusion of the CPR (CernoCAMAL Probabilistic Reasoner), which reasons probabilistically over the goal- and task-oriented perceptual feedback generated by reactive sub-systems.
    - The compatibility of the probabilistic BDI model with the affect and motivational models and the affective and motivational valences used throughout CernoCAMAL.
    A succession of experiments in simulation and robotic testbeds is carried out to demonstrate improvements and increased efficacy in CernoCAMAL’s overall cognitive performance. A discussion and critical appraisal of the experimental results, together with a summary, a number of potential future research directions, and some closing remarks conclude the thesis.
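
    The core idea of attaching probabilities to beliefs is easy to sketch: each belief carries a degree of belief updated from perceptual feedback, and intention selection weights desires by how probable their supporting beliefs are. A minimal, hypothetical illustration (all names and numbers invented; this is not the thesis code or the actual EBS/CPR design):

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """EBS-style belief: a statement plus a numeric degree of belief."""
    statement: str
    p: float  # probability in [0, 1]

def update(belief: Belief, p_obs_if_true: float, p_obs_if_false: float) -> None:
    """Bayesian update of the degree of belief from one perceptual observation."""
    prior = belief.p
    evidence = p_obs_if_true * prior + p_obs_if_false * (1 - prior)
    belief.p = p_obs_if_true * prior / evidence

beliefs = {"door_open": Belief("the door is open", 0.5)}
desires = {"go_through_door": ("door_open", 0.9)}  # (supporting belief, strength)

# A reactive sub-system reports an observation consistent with an open door.
update(beliefs["door_open"], p_obs_if_true=0.8, p_obs_if_false=0.2)

# Intention selection: expected value = desire strength * degree of belief.
intention, (belief_key, strength) = max(
    desires.items(), key=lambda kv: kv[1][1] * beliefs[kv[1][0]].p
)
print(intention, round(strength * beliefs[belief_key].p, 2))  # go_through_door 0.72
```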
