    A Pilot Study with a Novel Setup for Collaborative Play of the Humanoid Robot KASPAR with children with autism

    This article describes a pilot study in which a novel experimental setup, involving an autonomous humanoid robot, KASPAR, participating in a collaborative, dyadic video game, was implemented and tested with children with autism, all of whom had impairments in playing socially and communicating with others. The children alternated between playing the collaborative video game with a neurotypical adult and playing the same game with the humanoid robot, being exposed to each condition twice. The equipment and experimental setup were designed to observe whether the children would engage in more collaborative behaviours while playing the video game and interacting with the adult than while performing the same activities with the humanoid robot. The article describes the development of the experimental setup and its first evaluation in a small-scale exploratory pilot study. The purpose of the study was to gain experience with the operational limits of the robot as well as the dyadic video game, to determine what changes should be made to the systems, and to gain experience with analyzing the data from this study in order to conduct a more extensive evaluation in the future. Based on our observations of the children’s experiences in playing the cooperative game, we determined that while the children enjoyed both playing the game and interacting with the robot, the game should be made simpler to play as well as more explicitly collaborative in its mechanics. Also, the robot should be more explicit in its speech as well as more structured in its interactions. Results show that the children found the activity more entertaining, appeared more engaged in playing, and displayed better collaborative behaviours with their partners (for the purposes of this article, ‘partner’ refers to the human or robotic agent that interacts with the children with autism; we are not using the term’s other meanings, which refer to specific relationships or emotional involvement between two individuals) in the second sessions of playing with human adults than during their first sessions. One way of explaining these findings is that the children’s intermediary play session with the humanoid robot affected their subsequent play session with the human adult. However, a longer and more thorough study would have to be conducted in order to better interpret these findings. Furthermore, although the children with autism were more interested in and entertained by the robotic partner, they showed more examples of collaborative play and cooperation while playing with the human adult.

    From single neurons to social brains

    The manufacture of stone tools is an integral part of the human evolutionary trajectory. However, very little research is directed towards the social and cognitive context of the process of manufacture. This article aims to redress this balance by using insights from contemporary neuroscience. Addressing successively more inclusive levels of analysis, we will argue that the relevant unit of analysis when examining the interface between archaeology and neuroscience is not the individual neuron, nor even necessarily the individual brain, but instead the socio-cognitive context in which brains develop and tools are manufactured and used. This context is inextricably linked to the development of unique ontogenetic scheduling, as evidenced by the fossil record of evolving hominin lineages.

    Information recovery from rank-order encoded images

    The time to detection of a visual stimulus by the primate eye is recorded at 100–150 ms. This near-instantaneous recognition occurs in spite of the considerable processing required by the several stages of the visual pathway to recognise and react to a visual scene. How this is achieved is still a matter of speculation. Rank-order codes have been proposed as a means of encoding by the primate eye in the rapid transmission of the initial burst of information from the sensory neurons to the brain. We study the efficiency of rank-order codes in encoding perceptually-important information in an image. VanRullen and Thorpe built a model of the ganglion cell layers of the retina to simulate and study the viability of rank-order as a means of encoding by retinal neurons. We validate their model and quantify the information retrieved from rank-order encoded images in terms of the visually-important information recovered. Towards this goal, we apply the ‘perceptual information preservation algorithm’ proposed by Petrovic and Xydeas, after slight modification. We observe a low information recovery due to losses suffered during the rank-order encoding and decoding processes. We propose to minimise these losses to recover maximum information in minimum time from rank-order encoded images. We first maximise information recovery by using the pseudo-inverse of the filter-bank matrix to minimise losses during rank-order decoding. We then apply the biological principle of lateral inhibition to minimise losses during rank-order encoding. In doing so, we propose the Filter-overlap Correction algorithm. To test the performance of rank-order codes in a biologically realistic model, we design and simulate a model of the foveal-pit ganglion cells of the retina, keeping close to biological parameters. We use this as a rank-order encoder and analyse its performance relative to VanRullen and Thorpe’s retinal model.
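
    As a rough illustration of the encode/decode pipeline this abstract describes, the following minimal Python sketch keeps only the firing order of filter responses and then compares a naive filter-transpose reconstruction with one using the pseudo-inverse of the filter-bank matrix. The filter bank, the rank-to-value decay, and all parameter values are placeholder assumptions for illustration, not the authors' model.

        import numpy as np

        rng = np.random.default_rng(0)
        n_pixels, n_filters = 64, 64
        W = rng.standard_normal((n_filters, n_pixels))   # stand-in filter bank
        image = rng.standard_normal(n_pixels)            # stand-in image patch

        # Encode: keep only the ORDER in which filters fire (largest first).
        responses = W @ image
        order = np.argsort(-np.abs(responses))           # the rank-order code

        # Decode: replace each response by a value that decays with its rank
        # (a common modelling choice), then invert the filter bank.
        decay = 0.9 ** np.arange(n_filters)
        estimate = np.zeros(n_filters)
        estimate[order] = np.sign(responses[order]) * decay

        recon_naive = W.T @ estimate                     # ignores filter overlap
        recon_pinv = np.linalg.pinv(W) @ estimate        # corrects for overlap

    The pseudo-inverse step is what compensates for losses caused by overlapping filters during decoding, which is the motivation the abstract gives for that choice.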

    Tempting food words activate eating simulations

    This study shows that tempting food words activate simulations of eating the food, including simulations of the taste and texture of the food, simulations of eating situations, and simulations of hedonic enjoyment. In a feature listing task, participants generated features that are typically true of four tempting foods (e.g., chips) and four neutral foods (e.g., rice). The resulting features were coded as features of eating simulations if they referred to the taste, texture, and temperature of the food (e.g., “crunchy”; “sticky”), to situations of eating the food (e.g., “movie”; “good for wok dishes”), and to the hedonic experience when eating the food (e.g., “tasty”). Based on the grounded cognition perspective, it was predicted that tempting foods are more likely to be represented in terms of actually eating them, so that participants would list more features referring to eating simulations for tempting than for neutral foods. Confirming this hypothesis, results showed that eating simulation features constituted 53% of the features for tempting foods and 26% of the features for neutral foods. Visual features, in contrast, were mentioned more often for neutral foods (45%) than for tempting foods (19%). Exploratory analyses revealed that the proportion of eating simulation features for tempting foods was positively correlated with the perceived attractiveness of the foods, and negatively correlated with participants’ dieting concerns, suggesting that eating simulations may depend on individuals’ goals with regard to eating. These findings are discussed with regard to their implications for understanding the processes guiding eating behavior, and for interventions designed to reduce the consumption of attractive, unhealthy food.
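
    The proportion-and-correlation analysis reported above can be illustrated with a small hypothetical sketch; the feature codings and attractiveness ratings below are invented for illustration and are not the study's data.

        import numpy as np

        # Each listed feature is coded 1 if it refers to an eating simulation
        # (taste/texture, eating situation, hedonic experience), else 0.
        coded_features = {
            "chips": [1, 1, 1, 0, 1],  # "crunchy", "salty", "movie", "yellow", "tasty"
            "rice":  [0, 0, 1, 0, 0],  # "white", "small", "good for wok dishes", ...
        }
        attractiveness = {"chips": 6.2, "rice": 3.1}   # invented ratings

        foods = list(coded_features)
        sim_prop = np.array([np.mean(coded_features[f]) for f in foods])
        ratings = np.array([attractiveness[f] for f in foods])

        print(dict(zip(foods, sim_prop)))   # proportion of simulation features
        # With a realistic number of foods/participants, the reported
        # correlation would be: r = np.corrcoef(sim_prop, ratings)[0, 1]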

    An aesthetics of touch: investigating the language of design relating to form

    How well can designers communicate qualities of touch? This paper presents evidence that they have some capability to do so, much of which appears to have been learned, but that at present they make limited use of such language. Interviews with graduate designer-makers suggest that they are aware of and value the importance of touch and materiality in their work, but lack a vocabulary for touch to match the detail of their explanations of other aspects of their work, such as their intent or selection of materials. We believe that more attention should be paid to the verbal dialogue that happens in the design process, particularly as other researchers show that even making-based learning has a strong verbal element. However, verbal language alone does not appear to be adequate for a comprehensive language of touch. Graduate designer-makers’ descriptive practices combined non-verbal manipulation within verbal accounts. We thus argue that haptic vocabularies do not simply describe material qualities, but rather are situated competences that physically demonstrate the presence of haptic qualities. Such competences are more important than verbal vocabularies in isolation. Design support for developing and extending haptic competences must take this wide range of considerations into account to comprehensively improve designers’ capabilities.

    Neural coding strategies and mechanisms of competition

    A long-running debate has concerned the question of whether neural representations are encoded using a distributed or a local coding scheme. In both schemes, individual neurons respond to certain specific patterns of pre-synaptic activity. Hence, rather than being dichotomous, both coding schemes are based on the same representational mechanism. We argue that a population of neurons needs to be capable of learning both local and distributed representations, as appropriate to the task, and should be capable of generating both local and distributed codes in response to different stimuli. Many neural network algorithms, which are often employed as models of cognitive processes, fail to meet all these requirements. In contrast, we present a neural network architecture which enables a single algorithm to efficiently learn, and respond using, both types of coding scheme.
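
    The local-versus-distributed distinction can be made concrete with a simple k-winners-take-all response rule; this is our own illustration, not the authors' architecture. With k = 1 a single neuron carries the representation (a local code), while larger k yields a population pattern (a distributed code).

        import numpy as np

        def k_wta(x, W, k):
            """Post-synaptic activity with only the k most driven neurons active."""
            drive = W @ x
            winners = np.argsort(-drive)[:k]
            out = np.zeros_like(drive)
            out[winners] = drive[winners]
            return out

        rng = np.random.default_rng(1)
        W = rng.random((10, 20))              # 10 neurons, 20 pre-synaptic inputs
        x = rng.random(20)
        local_code = k_wta(x, W, k=1)         # single active unit: local code
        distributed_code = k_wta(x, W, k=5)   # population pattern: distributed code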

    Effects of congenital hearing loss and cochlear implantation on audiovisual speech perception in infants and children

    Purpose: Cochlear implantation has recently become available as an intervention strategy for young children with profound hearing impairment. In fact, infants as young as 6 months are now receiving cochlear implants (CIs), and even younger infants are being fitted with hearing aids (HAs). Because early audiovisual experience may be important for the normal development of speech perception, it is important to investigate the effects of a period of auditory deprivation and of amplification type on the multimodal perceptual processes of infants and children. The purpose of this study was to investigate audiovisual perception skills in normal-hearing (NH) infants and children and in deaf infants and children with CIs and HAs of similar chronological ages. Methods: We used an Intermodal Preferential Looking Paradigm to present the same woman’s face articulating two words (‘judge’ and ‘back’) in temporal synchrony on two sides of a TV monitor, along with an auditory presentation of one of the words. Results: The results showed that NH infants and children spontaneously matched auditory and visual information in spoken words; deaf infants and children with HAs did not integrate the audiovisual information; and deaf infants and children with CIs did not initially integrate the audiovisual information but gradually matched the auditory and visual information in spoken words. Conclusions: These results suggest that a period of auditory deprivation affects multimodal perceptual processes that may begin to develop normally after several months of auditory experience.
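
    As a rough illustration of how such preferential-looking trials are commonly scored (our assumption, not necessarily the authors' exact analysis), the dependent measure can be expressed as the proportion of looking time directed at the audio-matched video:

        def matching_proportion(look_match_s, look_nonmatch_s):
            """Fraction of total looking time spent on the audio-matched video."""
            total = look_match_s + look_nonmatch_s
            return look_match_s / total if total else float("nan")

        # An infant looking 7.2 s at the matching side and 4.8 s at the other
        # scores 0.6, above the 0.5 chance level expected without matching.
        score = matching_proportion(7.2, 4.8)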

    Nonlinear Hebbian learning as a unifying principle in receptive field formation

    The development of sensory receptive fields has been modeled in the past by a variety of models, including normative models such as sparse coding or independent component analysis, and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely Nonlinear Hebbian Learning. When Nonlinear Hebbian Learning is applied to natural images, receptive field shapes are strongly constrained by the input statistics and preprocessing, but exhibit only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities, such as auditory models or V2 development, leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
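
    The unifying rule can be written as Δw ∝ f(w·x) x for input x, weight vector w, and a nonlinearity f. A minimal sketch follows, assuming placeholder whitened inputs and a cubic nonlinearity; both are our illustrative choices, not the paper's exact setup.

        import numpy as np

        rng = np.random.default_rng(2)
        dim, n_patches, lr = 64, 5000, 0.01
        X = rng.standard_normal((n_patches, dim))  # stand-in for whitened patches

        w = rng.standard_normal(dim)
        w /= np.linalg.norm(w)

        def f(y):
            return y ** 3                     # one possible nonlinearity

        for x in X:
            y = w @ x                         # post-synaptic activation
            w += lr * f(y) * x                # nonlinear Hebbian step: f(post) * pre
            w /= np.linalg.norm(w)            # normalisation keeps w bounded

    Different choices of f (and of plasticity rule) change the details but, as the abstract argues, the learned receptive field is dominated by the input statistics and preprocessing.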

    The Disunity of Consciousness

    It is commonplace for both philosophers and cognitive scientists to express their allegiance to the "unity of consciousness". This is the claim that a subject’s phenomenal consciousness, at any one moment in time, is a single thing. This view has had a major influence on computational theories of consciousness. In particular, what we call single-track theories dominate the literature: theories which contend that our conscious experience is the result of a single consciousness-making process or mechanism in the brain. We argue that the orthodox view is quite wrong: phenomenal experience is not a unity, in the sense of being a single thing at each instant. It is a multiplicity, an aggregate of phenomenal elements, each of which is the product of a distinct consciousness-making mechanism in the brain. Consequently, cognitive science is in need of a multi-track theory of consciousness: a computational model that acknowledges both the manifold nature of experience and its distributed neural basis.