
    Effects of Training Data Variation and Temporal Representation in a QSR-Based Action Prediction System

    Understanding of behaviour is a crucial skill for Artificial Intelligence systems expected to interact with external agents – whether other AI systems or humans – in scenarios involving co-operation, such as domestic robots capable of helping out with household jobs, or disaster relief robots expected to collaborate and lend assistance to others. It is useful for such systems to be able to quickly learn and re-use models and skills in new situations. Our work centres around a behaviour-learning system utilising Qualitative Spatial Relations (QSRs) to lessen the amount of training data required by the system, and to aid generalisation. In this paper, we provide an analysis of the advantages provided to our system by the use of QSRs. We provide a comparison of a variety of machine learning techniques utilising both quantitative and qualitative representations, and show the effects of varying amounts of training data and temporal representations upon the system. The subject of our work is the game of simulated RoboCup Soccer Keepaway. Our results show that employing QSRs provides clear advantages in scenarios where training data is limited, and provides for better generalisation performance in classifiers. In addition, we show that adopting a qualitative representation of time can provide significant performance gains for QSR systems.
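    As an illustration of the kind of qualitative abstraction this abstract describes, the following minimal sketch discretises metric Keepaway-style positions into symbolic pairwise relations. The distance thresholds and relation names are assumptions for illustration, not those used in the paper.

```python
# Sketch: discretising a metric multi-agent state into Qualitative
# Spatial Relations (QSRs). Thresholds and labels are illustrative.

def qualitative_distance(p, q, thresholds=(1.0, 5.0, 15.0)):
    """Map the metric distance between two agents to a symbol."""
    d = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    for label, t in zip(("touch", "near", "medium"), thresholds):
        if d <= t:
            return label
    return "far"

def qsr_state(agents):
    """Pairwise qualitative relations form one symbolic training state."""
    names = sorted(agents)
    return {
        (a, b): qualitative_distance(agents[a], agents[b])
        for i, a in enumerate(names)
        for b in names[i + 1:]
    }

state = qsr_state({"keeper1": (0, 0), "keeper2": (3, 4), "taker1": (20, 0)})
```

    Many distinct metric configurations collapse onto the same symbolic state, which is one intuition for why far less training data is needed in the qualitative representation.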

    Learning by observation using Qualitative Spatial Relations

    We present an approach to the problem of learning by observation in spatially-situated tasks, whereby an agent learns to imitate the behaviour of an observed expert, with no direct interaction and limited observations. The form of knowledge representation used for these observations is crucial, and we apply Qualitative Spatial-Relational representations to compress continuous, metric state-spaces into symbolic states to maximise the generalisability of learned models and minimise knowledge engineering. Our system self-configures these representations of the world to discover configurations of features most relevant to the task, and thus build good predictive models. We then show how these models can be employed by situated agents to control their behaviour, closing the loop from observation to practical implementation. We evaluate our approach in the simulated RoboCup Soccer domain and the Real-Time Strategy game Starcraft, and successfully demonstrate how a system using our approach closely mimics the behaviour of both synthetic (AI controlled) players, and also human-controlled players through observation. We further evaluate our work in Reinforcement Learning tasks in these domains, and show that our approach improves the speed at which such models can be learned.
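    One simple way to realise the kind of predictive model built over symbolic states is a frequency-based next-state predictor. The sketch below is an illustrative stand-in, not the system's actual learning mechanism, and the symbolic state labels are hypothetical.

```python
# Sketch: a first-order transition-frequency predictor over symbolic
# QSR states, illustrating prediction from limited observations.
from collections import Counter, defaultdict

class MarkovPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, state_sequence):
        """Count observed transitions between consecutive symbolic states."""
        for s, s_next in zip(state_sequence, state_sequence[1:]):
            self.transitions[s][s_next] += 1

    def predict(self, state):
        """Most frequently observed successor of `state`, or None."""
        if not self.transitions[state]:
            return None
        return self.transitions[state].most_common(1)[0][0]

m = MarkovPredictor()
m.observe(["near", "near", "touch", "near", "touch", "pass"])
```

    Because the states are symbolic, observations of an expert in one metric configuration transfer directly to any other configuration that maps to the same symbols.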

    Unsupervised Human Activity Analysis for Intelligent Mobile Robots

    The success of intelligent mobile robots in daily living environments depends on their ability to understand human movements and behaviours. One goal of recent research is to understand human activities performed in real human environments from long-term observation. We consider a human activity to be a temporally dynamic configuration of a person interacting with key objects within the environment that provide some functionality. This can be a motion trajectory made of a sequence of 2-dimensional points representing a person’s position, as well as more detailed sequences of high-dimensional body poses, a collection of 3-dimensional points representing body joint positions, as estimated from the point of view of the robot. The limited field of view of the robot, restricted by the limitations of its sensory modalities, poses the challenge of understanding human activities from obscured, incomplete and noisy observations. As an embedded system it also has perceptual limitations which restrict the resolution of the human activity representations it can hope to achieve. In this thesis an approach for unsupervised learning of activities implemented on an autonomous mobile robot is presented. This research makes the following novel contributions: 1) A qualitative spatial-temporal vector space encoding of human activities as observed by an autonomous mobile robot. 2) Methods for learning a low-dimensional representation of common and repeated patterns from multiple encoded visual observations. In order to handle the perceptual challenges, multiple abstractions are applied to the robot’s perception data. The human observations are first encoded using a leg-detector, an upper-body image classifier, and a convolutional neural network for pose estimation, while objects within the environment are automatically segmented from a 3-dimensional point cloud representation.
Central to the success of the presented framework is mapping these encodings into an abstract qualitative space in order to generalise patterns invariant to exact quantitative positions within the real world. This is performed using a number of qualitative spatial-temporal representations which capture different aspects of the relations between the human subject and the objects in the environment. The framework auto-generates a vocabulary of discrete spatial-temporal descriptors extracted from the video sequences, and each observation is represented as a vector over this vocabulary. Analogously to information retrieval on text corpora, we use generative probabilistic techniques to recover latent, semantically meaningful, concepts in the encoded observations in an unsupervised manner. The relatively small number of concepts discovered are defined as multinomial distributions over the vocabulary and considered as human activity classes, granting the robot a high-level understanding of visually observed complex scenes. We validate the framework using: 1) a dataset collected from a physical robot autonomously patrolling and performing tasks in an office environment during a six-week deployment, and 2) a high-dimensional “full body pose” dataset captured over multiple days by a mobile robot observing a kitchen area of an office environment from multiple view points. We show that the emergent categories from our framework align well with how humans interpret behaviours and simple activities. Our presented framework models each extended observation as a probabilistic mixture over the learned activities, meaning it can learn human activity models even when embedded in continuous video sequences, without the need for manual temporal segmentation, which can be time-consuming and costly. Finally, we present methods for learning such human activity models in an incremental and continuous setting, using variational inference methods to update the activity distribution online.
This allows the mobile robot to efficiently learn and update its models of human activity over time, discarding the raw data and allowing for life-long learning.
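    The "document" representation fed to the generative model can be sketched as a count vector over an auto-generated vocabulary of qualitative descriptors. The descriptor strings below are invented for illustration; the thesis derives its vocabulary from the qualitative spatial-temporal representations of the video sequences.

```python
# Sketch: bag-of-words encoding of observations over an auto-generated
# vocabulary of qualitative spatial-temporal descriptors. The
# descriptor strings are hypothetical examples.

def build_vocabulary(observations):
    """Collect every descriptor seen and assign it a stable index."""
    vocab = sorted({w for obs in observations for w in obs})
    return {w: i for i, w in enumerate(vocab)}

def encode(observation, vocab):
    """Represent one observation as a count vector over the vocabulary."""
    vec = [0] * len(vocab)
    for w in observation:
        vec[vocab[w]] += 1
    return vec

obs = [
    ["near(person,kettle)", "stationary(person)", "near(person,kettle)"],
    ["far(person,kettle)", "moving(person)"],
]
vocab = build_vocabulary(obs)
vectors = [encode(o, vocab) for o in obs]
```

    A topic model over such vectors (e.g. Latent Dirichlet Allocation) then recovers activity classes as multinomial distributions over this vocabulary, exactly analogous to topics over words in text corpora.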

    A study of nuclear plant heat rate optimization using nonlinear artificial intelligence and linear statistical analysis models

    The emphasis of this dissertation is on developing methods by which a combination of multivariate analysis techniques (MAT) and artificial intelligence (AI) procedures can be adapted to on-line, real-time monitoring systems for improving nuclear plant thermal efficiency. Present-day first-principle models involve performing a heat balance of plant systems and the reactor coolant system. Typical variables involved in the plant data acquisition system usually number one to two thousand. The goal of the current work is twofold. First, simulate the heat rate with MAT and AI computer models. The second objective is to selectively reduce the number of predictors to only the most important variables, induce small perturbations around normal operating levels, and evaluate changes in the magnitude of plant efficiency. It is anticipated that making small changes will improve the thermal efficiency of the plant and lead to supplementary cost savings. This report draws several conclusions. A sensitivity analysis showed that reducing the input variables by dimensionality reduction, i.e., principal component analysis or factor analysis, removes valuable information. Predictors can simply be eliminated from the input space, but dimensionality reduction of the input matrix is not an alternative option. However, perturbation modeling does require data to be standardized and collinear variables removed. Filtering of input data is not recommended except to remove outliers. It is ascertained that perturbation or sensitivity analysis differs from prediction modeling in that two additional requirements are necessary besides the criterion prediction. One is the magnitude of the criterion result given an input perturbation, and the second is the directionality of the model. Directionality is defined as the positive or negative movement of the heat rate (criterion) given a predetermined increase/decrease in predictor value, or input perturbation.
While the criterion prediction is still important, it is directionality that determines whether a model is capturing proper changes in system process information. Final results showed that although the secondary side of a nuclear plant might meet thermodynamic conditions for a steady-flow system, temporal information is needed by the model in order to capture system process information. Modeling of the data is governed by quasi-static range theory, which states that data must be closely spaced (in time) and that prior temporal information is necessary. The conclusion reached is that the perturbation model of a nuclear plant is a time-dependent, dynamic system; all indications to date show it is also nonlinear. Hence a time-dependent nonlinear modeling method, such as a neural network with time-delayed inputs, is needed for sensitivity modeling.
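    The directionality requirement described above can be expressed as a finite-difference perturbation check on a trained model: does a small increase in one predictor move the predicted heat rate in the physically expected direction? The toy surrogate model and the meanings of its inputs below are hypothetical stand-ins, not plant data.

```python
# Sketch: directionality of a model via input perturbation. Returns
# +1, -1 or 0 for the sign of the output change when one predictor
# is nudged around its operating point.

def directionality(model, x, index, delta=0.01):
    """Sign of the change in model output for a small perturbation
    of input `index` around operating point `x`."""
    x_up = list(x)
    x_up[index] += delta
    change = model(x_up) - model(x)
    return (change > 0) - (change < 0)

# Hypothetical heat-rate surrogate: heat rate rises with condenser
# back-pressure (x[0]) and falls with feedwater temperature (x[1]).
def toy_model(x):
    return 10000 + 250 * x[0] - 3 * x[1]

d_backpressure = directionality(toy_model, [2.0, 400.0], 0)
d_feedwater = directionality(toy_model, [2.0, 400.0], 1)
```

    A candidate model that predicts the criterion well but gets such signs wrong is, in the dissertation's terms, failing to capture the system process information.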

    Using spatiotemporal patterns to qualitatively represent and manage dynamic situations of interest: a cognitive and integrative approach

    Dynamic spatiotemporal situations are situations that evolve in space and time. They are part of humans’ daily life. One can be interested in a configuration of situations occurring in the environment and can use it to make decisions. In the literature, such configurations are referred to as “situations of interest” or “spatiotemporal patterns”. In Computer Science, dynamic situations are generated by large-scale data acquisition systems which are deployed everywhere thanks to recent technological advances. Spatiotemporal pattern representation is a research subject which has gained a lot of attention from two main research areas. In spatiotemporal analysis, various works extended query languages to represent patterns and to query them from voluminous databases. In Artificial Intelligence, predicate-based models represent spatiotemporal patterns and detect their instances using rule-based mechanisms. Both approaches suffer several shortcomings. For example, they do not allow for representing dynamic and complex spatiotemporal phenomena due to their limited expressiveness. Furthermore, they do not take into account the human’s mental model of the environment in their representation formalisms. This limits the potential of building agent-based solutions to reason about these patterns.
    In this thesis, we propose a novel approach to represent situations of interest using the concept of spatiotemporal patterns. We use Conceptual Graphs to offer a qualitative representation model of these patterns. Our model is based on the concepts of spatiotemporal events and states to represent dynamic spatiotemporal phenomena. It also incorporates contextual information in order to facilitate building the knowledge base of software agents. Besides, we propose an intelligent proximity tool based on a neuro-fuzzy classifier to support qualitative spatial relations in the pattern model. Finally, we propose a framework to manage spatiotemporal patterns in order to facilitate the integration of our pattern representation model into existing applications in the industry. The main contributions of this thesis are as follows: a qualitative approach to model dynamic spatiotemporal situations of interest using Conceptual Graphs; a cognitive approach to represent spatiotemporal patterns by integrating contextual information; an automated tool to generate qualitative spatial proximity relations based on a neuro-fuzzy classifier; and a platform for detection and management of spatiotemporal patterns using an extension of a Complex Event Processing engine.
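    The generation of qualitative proximity relations from metric distance can be illustrated with hand-rolled fuzzy membership functions. The thesis uses a learned neuro-fuzzy classifier, so the fixed triangular memberships and breakpoints below are simplifying assumptions, not the actual tool.

```python
# Sketch: grading a metric distance against fuzzy membership functions
# and keeping the best-matching qualitative proximity relation.
# Membership shapes and breakpoints are illustrative assumptions.

def tri(d, left, peak, right):
    """Triangular membership function over distance d."""
    if d <= left or d >= right:
        return 0.0
    if d < peak:
        return (d - left) / (peak - left)
    return (right - d) / (right - peak)

MEMBERSHIPS = {
    "close": lambda d: tri(d, -1, 0, 15),
    "near":  lambda d: tri(d, 5, 25, 50),
    "far":   lambda d: tri(d, 30, 80, 1000),
}

def proximity_relation(distance, memberships=MEMBERSHIPS):
    """Return the relation with the highest membership degree."""
    grades = {name: f(distance) for name, f in memberships.items()}
    return max(grades, key=grades.get)
```

    In the thesis the membership shapes are learned from data rather than fixed, which is what makes the proximity tool adaptive to a given application domain.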

    Episodic Memory for Cognitive Robots in Dynamic, Unstructured Environments

    Elements from cognitive psychology have been applied in a variety of ways to artificial intelligence. One of the lesser-studied areas is how episodic memory can assist learning in cognitive robots. In this dissertation, we investigate how episodic memories can assist a cognitive robot in learning which behaviours are suited to different contexts. We demonstrate the learning system in a domestic robot designed to assist human occupants of a house. People are generally good at anticipating the intentions of others. When around people that we are familiar with, we can predict what they are likely to do next, based on what we have observed them doing before. Our ability to record different types of events, and to recall those we know are relevant to the situation at hand, is one reason our cognition is so powerful. For a robot to assist rather than hinder a person, artificial agents require this functionality too. This work makes three main contributions. Since episodic memory requires context, we first propose a novel approach to segmenting a metric map into a collection of rooms and corridors. Our approach is based on identifying critical points on a Generalised Voronoi Diagram and creating regions around these critical points. Our results show state-of-the-art accuracy with 98% precision and 96% recall. Our second contribution is our approach to event recall in episodic memory. We take a novel approach in which events in memory are typed and a unique recall policy is learned for each type of event. These policies are learned incrementally, using only information presented to the agent and without any need to take the agent offline. Ripple Down Rules provide a suitable learning mechanism. Our results show that, when trained appropriately, we achieve near-perfect recall of episodes that match an observation. Finally, we propose a novel approach to how recall policies are trained.
Commonly, an RDR policy is trained using a human guide, where the instructor has the option to discard information that is irrelevant to the situation. However, we show that by using Inductive Logic Programming it is possible to train a recall policy for a given type of event after only a few observations of that type of event.
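    The incremental, exception-based refinement that makes Ripple Down Rules suited to this setting can be sketched with a minimal single-classification RDR tree. The event attributes, rule conditions, and conclusions below are hypothetical, chosen only to show how a policy is patched with exceptions rather than retrained.

```python
# Sketch: a minimal Ripple Down Rules (RDR) structure for recall
# policies. Learning is incremental: a wrong conclusion is fixed by
# attaching an exception rule, never by rewriting existing rules.

class Rule:
    def __init__(self, condition, conclusion):
        self.condition = condition   # predicate over a case dict
        self.conclusion = conclusion
        self.except_rule = None      # consulted when this rule fires
        self.else_rule = None        # consulted when it does not fire

    def classify(self, case):
        if self.condition(case):
            if self.except_rule:
                refined = self.except_rule.classify(case)
                if refined is not None:
                    return refined
            return self.conclusion
        if self.else_rule:
            return self.else_rule.classify(case)
        return None

# Default rule: recall nothing. Two incremental patches follow:
# recall "person_seen" events, except those observed in a corridor.
root = Rule(lambda c: True, "ignore")
root.except_rule = Rule(lambda c: c.get("type") == "person_seen", "recall")
root.except_rule.except_rule = Rule(
    lambda c: c.get("location") == "corridor", "ignore")
```

    Each patch is justified by the single case that exposed the error, which is why only the information presented to the agent is needed and the agent never has to be taken offline.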

    Human-robot spatial interaction using probabilistic qualitative representations

    Current human-aware navigation approaches use a predominantly metric representation of the interaction, which makes them susceptible to changes in the environment. In order to accomplish reliable navigation in ever-changing human-populated environments, the presented work aims to abstract from the underlying metric representation by using Qualitative Spatial Relations (QSR), namely the Qualitative Trajectory Calculus (QTC), for Human-Robot Spatial Interaction (HRSI). So far, this form of representing HRSI has been used to analyse different types of interactions offline. This work extends this representation to be able to classify the interaction type online using incrementally updated QTC state chains, create a belief about the state of the world, and transform this high-level descriptor into low-level movement commands. By using QSRs the system becomes invariant to change in the environment, which is essential for any form of long-term deployment of a robot, but most importantly also allows the transfer of knowledge between similar encounters in different environments to facilitate interaction learning. To create a robust qualitative representation of the interaction, the essence of the movement of the human in relation to the robot, and vice versa, is encoded in two new variants of QTC especially designed for HRSI and evaluated in several user studies. To enable interaction learning and facilitate reasoning, they are employed in a probabilistic framework using Hidden Markov Models (HMMs) for online classification, and evaluated for their appropriateness for the task of human-aware navigation. In order to create a system for an autonomous robot, a perception pipeline for the detection and tracking of humans in the vicinity of the robot is described, which serves as an enabling technology to create incrementally updated QTC state chains in real-time using the robot's sensors.
Using this framework, the abstraction and generalisability of the QTC-based framework is tested by using data from a different study for the classification of automatically generated state chains, which shows the benefits of using such a high-level description language. The detriment of using qualitative states to encode interaction is the severe loss of information that would be necessary to generate behaviour from it. To overcome this issue, so-called Velocity Costmaps are introduced, which restrict the sampling space of a reactive local planner to only allow the generation of trajectories that correspond to the desired QTC state. This results in flexible and agile behaviour generation that is able to produce inherently safe paths. In order to classify the current interaction type online and predict the current state for action selection, the HMMs are evolved into a particle filter especially designed to work with QSRs of any kind. This online belief generation is the basis for a flexible action selection process that is based on data acquired using Learning from Demonstration (LfD) to encode human judgement into the used model. Thereby, the generated behaviour is not only sociable but also legible, and ensures a high level of experienced comfort, as shown in the experiments conducted. LfD itself is a rather underused approach when it comes to human-aware navigation, but it is facilitated by the qualitative model and allows exploitation of expert knowledge for model generation. Hence, the presented work bridges the gap between the speed and flexibility of a sampling-based reactive approach, by using the particle filter and fast action selection, and the legibility of deliberative planners, by using high-level information based on expert knowledge about the unfolding of an interaction.
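    The QTC state chains at the heart of this framework can be illustrated by computing, for each agent, whether it is moving towards (-1), away from (+1), or neither (0) with respect to the other agent. Note that canonical QTC defines these symbols relative to the reference line between the agents at the previous time step; the distance-based variant below is a simplified, illustrative approximation, not the thesis's formulation.

```python
# Sketch: a simplified basic-QTC-style state for a human-robot pair,
# computed from consecutive 2D positions of both agents.
import math

def qtc_state(h_prev, h_now, r_prev, r_now, eps=1e-9):
    def sign(x):
        return 0 if abs(x) < eps else (1 if x > 0 else -1)

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Human symbol: change in distance to the robot's previous position.
    h_sym = sign(dist(h_now, r_prev) - dist(h_prev, r_prev))
    # Robot symbol: change in distance to the human's previous position.
    r_sym = sign(dist(r_now, h_prev) - dist(r_prev, h_prev))
    return (h_sym, r_sym)

# Human steps towards a stationary robot.
state = qtc_state((0, 0), (1, 0), (5, 0), (5, 0))
```

    Chaining such states over time yields the incrementally updated QTC state chains that the HMMs and, later, the particle filter classify online.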

    Lost in trauma: post-traumatic stress disorder, spatial processing and the brain-derived neurotrophic factor gene.

    This study enquired into a puzzling feature of Post-Traumatic Stress Disorder (PTSD): a loss of wayfinding ability (Osofsky et al., 2010; Ehringa et al., 2006; Lubit et al., 2003; Kowitz, 2011; Adler et al., 2009; Handley et al., 2009; Butler et al., 1999). Previous research by Smith et al. (2015) demonstrated that, in cases of PTSD, allocentric processing was impaired. This thesis pursued this line of enquiry and assessed the impact of PTSD and of any trauma exposure on navigation performance using a static perspective taking task and a more ‘active’ navigation paradigm. The study also introduced navigation questionnaires to these assessments, to see how accurate individuals with different experiences of trauma (including combat) were in their perceptions of their own navigation competence (or indeed impairment). Finally, the thesis approached the issue of genetics and explored the influence of the Brain-Derived Neurotrophic Factor (BDNF) gene on experiences of PTSD and on navigation behaviour. In summary, the study’s findings confirmed those of Smith et al. (2015) that PTSD impaired allocentric processing. What is more, this thesis revealed that PTSD also impaired egocentric navigation, and that allocentric navigation performance was also impaired in healthy trauma-exposed individuals who reported no ill-effects from their trauma. The thesis demonstrated for the first time that PTSD brought with it an associative bias which was transferable to navigation behaviour. This was interpreted as being the consequence of a competition for hippocampal resources between trauma processing and navigation in otherwise healthy individuals (Vasterling & Brewin, 2005). When it came to perceptions of navigation competence, healthy trauma-exposed participants were accurate in their self-reported competence, but those with PTSD-related navigation impairment (including those who had been military trained) were not.
Notably, the correlation between self-reported and actual navigation competence was limited to allocentric (not egocentric) navigation competence. This was explained using models of neural processing which present hippocampal-dependent memory systems as being more declarative than associative memory systems (e.g. Morris in Andersen et al., 2007). In the final chapter, the explorative analysis of the BDNF gene produced some noteworthy findings. Zhang et al. (2014) speculated that the relationship between BDNF and PTSD is likely confounded by environmental conditions (i.e. the diversity and extent of trauma exposure and the opportunities individuals have to process it). BDNF did not influence PTSD prevalence or severity in this study, which did not control for such conditions. In terms of navigation, there were no distinct performance disadvantages from carrying the met allele, and this is in line with many similar studies (e.g. Sanchez et al., 2011). Nonetheless, BDNF met carriers showed different patterns of egocentric performance to val/val homozygotes. What is more, met carriers showed an inability to accurately describe their competence at allocentric navigation, and observations were made of data that indicated a delay in their uptake of allocentric strategy during navigation (similar to significant findings of Banner et al., 2011). The observations were consistent with Lövdén et al.’s (2011) suggestion that met carriers may require more ‘obvious’ cues to apply allocentric processing to a given task than val/val homozygotes do. The implications of these genetic differences in approach to allocentric processing are considered in terms of both trauma processing and navigation training interventions.