
    Where creativity comes from: the social spaces of embodied minds

    This paper explores creative design, social interaction and perception. It proposes that creativity at a social level is not a result of many individuals trying to be creative at a personal level, but occurs naturally in the social interaction between comparatively simple minds embodied in a complex world. Particle swarm algorithms can model group interaction in shared spaces, but design space is not necessarily one pre-defined space of set parameters on which everyone can agree, as individual minds are very different. A computational model is proposed that allows a similar swarm to occur between spaces of different description and even dimensionality.
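    The particle swarm idea the abstract builds on can be shown in a minimal sketch. This is the standard PSO update rule on a toy objective in one shared, fixed-dimension space (the baseline the paper generalises beyond); all function names and parameter values here are illustrative assumptions, not the paper's model.

```python
import numpy as np

def sphere(x):
    """Toy objective: squared distance from the origin (minimise)."""
    return float(np.sum(x ** 2))

def pso(objective, dim=2, n_particles=20, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, size=(n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                      # each particle's best-known point
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()  # swarm's best-known point
    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity blends inertia, pull toward personal best, pull toward
        # the swarm best -- the "social interaction" term.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(np.min(pbest_val))

best, val = pso(sphere)
```

    Note the assumption baked into this baseline: every particle shares one parameterisation of the search space. The paper's contribution is precisely to relax that, letting a swarm-like process run between spaces of different description and dimensionality.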

    Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration

    Intrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments. These exploration algorithms have been shown to allow real world robots to acquire skills such as tool use in high-dimensional continuous state and action spaces. However, they have so far assumed that self-generated goals are sampled in a specifically engineered feature space, limiting their autonomy. In this work, we propose to use deep representation learning algorithms to learn an adequate goal space. This is a developmental 2-stage approach: first, in a perceptual learning stage, deep learning algorithms use passive raw sensor observations of world changes to learn a corresponding latent space; then goal exploration happens in a second stage by sampling goals in this latent space. We present experiments where a simulated robot arm interacts with an object, and we show that exploration algorithms using such learned representations can match the performance obtained using engineered representations
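    The two-stage approach described above can be sketched compactly. Here PCA stands in for the deep representation learner purely for illustration, and the synthetic "observations" replace raw sensor data; all names and dimensions are assumptions of this sketch, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (perceptual learning): passive observations of world changes that
# actually lie near a 2-D subspace of a 10-D sensor space.
latent_true = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 10))
observations = latent_true @ mixing + 0.01 * rng.normal(size=(500, 10))

# Learn a latent space from the observations (PCA via SVD, top 2 components,
# standing in for a deep representation learner).
mean = observations.mean(axis=0)
centered = observations - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:2]                      # learned 2-D goal space

# Stage 2 (goal exploration): sample goals in the learned latent space and
# decode them back to sensor space for the policy to pursue.
def sample_goal():
    z = rng.normal(size=2)               # self-generated goal, latent space
    return mean + z @ components         # decoded goal, sensor space

goal = sample_goal()
```

    The point of the design is autonomy: the goal space is discovered from passive observations rather than hand-engineered, yet goal sampling proceeds exactly as it would in an engineered feature space.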

    Building Internal Maps of a Mobile Robot

    Using machine learning to learn from demonstration: application to the AR.Drone quadrotor control

    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of requirements for the degree of Master of Science. December 14, 2015. Developing a robot that can operate autonomously is an active area in robotics research. An autonomously operating robot can have a tremendous number of applications, such as surveillance and inspection; search and rescue; and operating in hazardous environments. Reinforcement learning, a branch of machine learning, provides an attractive framework for developing robust control algorithms, since it is less demanding in terms of both knowledge and programming effort. Given a reward function, reinforcement learning employs a trial-and-error concept to make an agent learn. It is computationally intractable in practice for an agent to learn “de novo”, thus it is important to provide the learning system with “a priori” knowledge. Such prior knowledge would be in the form of demonstrations performed by the teacher. However, prior knowledge does not necessarily guarantee that the agent will perform well. The performance of the agent usually depends on the reward function, since the reward function describes the formal specification of the control task. However, problems arise with complex reward functions that are difficult to specify manually. In order to address these problems, apprenticeship learning via inverse reinforcement learning is used. Apprenticeship learning via inverse reinforcement learning can be used to extract a reward function from the set of demonstrations so that the agent can optimise its performance with respect to that reward function. In this research, a flight controller for the AR.Drone quadrotor was created using a reinforcement learning algorithm and function approximators with some prior knowledge. The agent was able to perform a manoeuvre similar to the one demonstrated by the teacher.
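    The trial-and-error learning the abstract describes can be illustrated with tabular Q-learning on a toy task: a five-cell corridor where reaching the right end yields reward 1. This is a minimal sketch of the reinforcement-learning loop given a known reward function; it is not the dissertation's AR.Drone controller, and every name and hyperparameter here is an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update toward the one-step bootstrapped target.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

# After learning, the greedy policy moves right from every interior state.
policy = np.argmax(Q, axis=1)
```

    In the apprenticeship-learning setting the abstract motivates, the reward `r` would not be hand-coded as above: inverse reinforcement learning would first recover it from teacher demonstrations, and only then would a loop like this optimise against it.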

    Language impairment and colour categories

    Goldstein (1948) reported multiple cases of failure to categorise colours in patients whom he termed amnesic or anomic aphasics. These patients have a particular difficulty in producing perceptual categories in the absence of other aphasic impairments. We hold that neuropsychological evidence supports the view that the task of colour categorisation is logically impossible without labels.

    Adaptive Robotic Information Gathering via Non-Stationary Gaussian Processes

    Robotic Information Gathering (RIG) is a foundational research topic that answers how a robot (team) collects informative data to efficiently build an accurate model of an unknown target function under robot embodiment constraints. RIG has many applications, including but not limited to autonomous exploration and mapping, 3D reconstruction or inspection, search and rescue, and environmental monitoring. A RIG system relies on a probabilistic model's prediction uncertainty to identify critical areas for informative data collection. Gaussian Processes (GPs) with stationary kernels have been widely adopted for spatial modeling. However, real-world spatial data is typically non-stationary -- different locations do not have the same degree of variability. As a result, the prediction uncertainty does not accurately reveal prediction error, limiting the success of RIG algorithms. We propose a family of non-stationary kernels named Attentive Kernel (AK), which is simple, robust, and can extend any existing kernel to a non-stationary one. We evaluate the new kernel in elevation mapping tasks, where AK provides better accuracy and uncertainty quantification over the commonly used stationary kernels and the leading non-stationary kernels. The improved uncertainty quantification guides the downstream informative planner to collect more valuable data around the high-error area, further increasing prediction accuracy. A field experiment demonstrates that the proposed method can guide an Autonomous Surface Vehicle (ASV) to prioritize data collection in locations with significant spatial variations, enabling the model to characterize salient environmental features. Comment: International Journal of Robotics Research (IJRR). arXiv admin note: text overlap with arXiv:2205.0642
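    The prediction uncertainty a RIG planner relies on can be shown with plain GP regression under a stationary RBF kernel, the baseline the abstract argues against. Posterior variance is small near observations and large far from them, which is the signal an informative planner targets. This sketch does not reproduce the Attentive Kernel; all data, lengthscales, and names are assumptions for illustration.

```python
import numpy as np

def rbf(a, b, lengthscale=0.5, variance=1.0):
    """Stationary squared-exponential kernel on 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

# Observations of a 1-D "elevation" field, clustered on the left.
x_train = np.array([0.0, 0.1, 0.2, 0.3])
y_train = np.sin(3 * x_train)
x_test = np.array([0.15, 2.0])          # one point near the data, one far away

noise = 1e-6
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
K_s = rbf(x_train, x_test)
K_ss = rbf(x_test, x_test)

# Standard GP posterior via Cholesky factorisation.
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
mean = K_s.T @ alpha                    # posterior mean at test points
v = np.linalg.solve(L, K_s)
var = np.diag(K_ss - v.T @ v)           # posterior variance at test points
```

    Here `var[0]` (near the data) is close to zero while `var[1]` (far away) is close to the prior variance, so a planner would sample far from existing data. The abstract's point is that with a stationary kernel this variance depends only on distance to observations, not on local variability, which is what the Attentive Kernel is designed to correct.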