
    Bridging Between Computer and Robot Vision Through Data Augmentation: A Case Study on Object Recognition

    Despite the impressive progress brought by deep networks in visual object recognition, robot vision is still far from being a solved problem. The most successful convolutional architectures are developed starting from ImageNet, a large-scale collection of images of object categories downloaded from the Web. These images are very different from the situated and embodied visual experience of robots deployed in unconstrained settings. To reduce the gap between these two visual experiences, this paper proposes a simple yet effective data augmentation layer that zooms in on the object of interest and simulates the object detection outcome of a robot vision system. The layer, which can be used with any deep convolutional architecture, yields an increase in object recognition performance of up to 7% in experiments performed on three different benchmark databases. An implementation of our robot data augmentation layer has been made publicly available.
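
    The core idea of such an augmentation is simple: crop the training image around the annotated object, with some random jitter, so the network sees the kind of imperfect, zoomed-in views a robot's on-board detector would hand it. The snippet below is a minimal sketch of that idea in Python, assuming PIL images and ground-truth bounding boxes; the function name, jitter scheme, and output size are illustrative assumptions, not the authors' released implementation.

```python
import random
from PIL import Image

def zoom_on_object(img, bbox, jitter=0.1, out_size=(224, 224)):
    """Crop around a ground-truth bounding box (x0, y0, x1, y1), perturbing
    its corners by up to `jitter` of the box size to mimic the imperfect
    localisation of an on-board object detector, then resize the crop to
    the network's input resolution."""
    x0, y0, x1, y1 = bbox
    w, h = x1 - x0, y1 - y0
    crop = (
        int(max(0, x0 + random.uniform(-jitter, jitter) * w)),
        int(max(0, y0 + random.uniform(-jitter, jitter) * h)),
        int(min(img.width, x1 + random.uniform(-jitter, jitter) * w)),
        int(min(img.height, y1 + random.uniform(-jitter, jitter) * h)),
    )
    return img.crop(crop).resize(out_size)
```

    Applied on the fly during training, a layer of this kind exposes any convolutional backbone to detector-like crops without changing the rest of the pipeline.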

    Low-fi skin vision: A case study in rapid prototyping a sensory substitution system

    We describe the design process we have used to develop a minimal, twenty-vibration-motor Tactile Vision Sensory Substitution (TVSS) system which enables blindfolded subjects to successfully track and bat a rolling ball and thereby experience 'skin vision'. We have employed a low-fi rapid prototyping approach to build this system and argue that this methodology is particularly effective for building embedded interactive systems. We support this argument in two ways: first, by drawing on theoretical insights from robotics, a discipline that also has to deal with the challenge of building complex embedded systems that interact with their environments; second, by using the development of our TVSS as a case study, describing the series of prototypes that led to our successful design and highlighting what we learnt at each stage.
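
    As a rough illustration of the camera-to-skin mapping a system like this performs (assuming a 4x5 motor layout and a peak-brightness mapping, which are not details given in the abstract), a grayscale frame can be reduced to one drive level per vibration motor:

```python
import numpy as np

GRID_ROWS, GRID_COLS = 4, 5  # twenty motors, assumed 4x5 layout on the skin

def frame_to_motor_levels(frame):
    """Downsample an (H, W) grayscale frame in [0, 255] to one level per motor.
    Bright pixels (e.g. the tracked ball) drive the corresponding motor harder;
    each level in [0, 1] could then be scaled to a PWM duty cycle."""
    h, w = frame.shape
    cell_h, cell_w = h // GRID_ROWS, w // GRID_COLS
    levels = np.zeros((GRID_ROWS, GRID_COLS))
    for r in range(GRID_ROWS):
        for c in range(GRID_COLS):
            cell = frame[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
            levels[r, c] = cell.max() / 255.0
    return levels
```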

    How getting noticed helps getting on: successful attention capture doubles children's cooperative play

    Cooperative social interaction is a complex skill that involves maintaining shared attention and continually negotiating a common frame of reference. Privileged in human evolution, cooperation provides support for the development of social-cognitive skills. We hypothesize that providing audio support for capturing playmates' attention will increase cooperative play in groups of young children. Attention capture was manipulated via an audio-augmented toy to boost children's attention bids. Study 1 (48 6- to 11-year-olds) showed that the augmented toy yielded significantly more cooperative play in triads compared to the same toy without augmentation. In Study 2 (33 7- to 9-year-olds) the augmented toy supported greater success of attention bids, which were associated with longer cooperative play, associated in turn with better group narratives. The results show how cooperation requires moment-by-moment coordination of attention and how we can manipulate environments to reveal and support mechanisms of social interaction. Our findings have implications for understanding the role of joint attention in the development of cooperative action and shared understanding.