The implications of embodiment for behavior and cognition: animal and robotic case studies
In this paper, we will argue that if we want to understand the function of
the brain (or the control in the case of robots), we must understand how the
brain is embedded into the physical system, and how the organism interacts with
the real world. While embodiment has often been used in its trivial meaning,
i.e. 'intelligence requires a body', the concept has deeper and more important
implications, concerned with the relation between physical and information
(neural, control) processes. A number of case studies are presented to
illustrate the concept. These involve animals and robots and are concentrated
around locomotion, grasping, and visual perception. A theoretical scheme that
can be used to embed the diverse case studies will be presented. Finally, we
will establish a link between the low-level sensory-motor processes and
cognition. We will present an embodied view on categorization, and propose the
concepts of 'body schema' and 'forward models' as a natural extension of the
embodied approach toward first representations.
Comment: Book chapter in W. Tschacher & C. Bergomi, eds., 'The Implications of
Embodiment: Cognition and Communication', Exeter: Imprint Academic, pp. 31-5
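The 'forward model' concept lends itself to a compact illustration. The following is a minimal sketch, not the chapter's own formulation: it assumes linear dynamics and invented parameter values, and shows only the core idea of pairing an efference copy of a motor command with a predicted sensory outcome.

    import numpy as np

    # Minimal forward-model sketch: predict the sensory consequence of a
    # motor command and compare it with the actual observation. The linear
    # dynamics and all parameter values are illustrative assumptions, not
    # taken from the chapter.

    class ForwardModel:
        def __init__(self, n_state):
            self.A = np.eye(n_state)          # assumed body/environment dynamics
            self.B = 0.1 * np.eye(n_state)    # assumed effect of the motor command

        def predict(self, state, motor_command):
            # An efference copy of the motor command drives the prediction.
            return self.A @ state + self.B @ motor_command

    model = ForwardModel(n_state=2)
    state = np.array([0.0, 0.0])
    command = np.array([1.0, -0.5])

    predicted = model.predict(state, command)
    observed = predicted + np.random.normal(scale=0.01, size=2)  # noisy sensing

    # The prediction error is what an embodied account would use to refine
    # the body schema / forward model.
    print("prediction error:", observed - predicted)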
A Low-Cost Tele-Presence Wheelchair System
This paper presents the architecture and implementation of a tele-presence
wheelchair system based on tele-presence robot, intelligent wheelchair, and
touch screen technologies. The tele-presence wheelchair system consists of a
commercial electric wheelchair, an add-on tele-presence interaction module, and
a touchable live-video-image-based user interface (called TIUI). The
tele-presence interaction module provides video chatting between an elderly
or disabled person and family members or caregivers, and also captures live
video of the environment for tele-operation and
semi-autonomous navigation. The user interface, developed in our lab, allows
an operator to access the system from anywhere and to push the wheelchair by
directly touching its live video image, as if pushing it in person. This
paper also discusses the evaluation of the user experience.
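The abstract does not spell out how a touch point becomes wheelchair motion; the sketch below is one plausible mapping, with the image size, velocity limits, and the send_command stub all assumed for illustration rather than taken from the paper.

    # Hypothetical touch-to-command mapping for a TIUI-style interface:
    # a touch on the live video image is converted into linear and angular
    # velocity for the wheelchair base.

    IMG_W, IMG_H = 640, 480      # assumed video resolution
    MAX_LINEAR = 0.5             # m/s, assumed speed limit
    MAX_ANGULAR = 0.8            # rad/s, assumed turn-rate limit

    def touch_to_command(x_px, y_px):
        """Map a touch point on the video image to (v, w) velocities."""
        # Horizontal offset from image centre -> turning; touching lower in
        # the image (closer to the wheelchair) -> slower forward speed.
        w = -MAX_ANGULAR * (x_px - IMG_W / 2) / (IMG_W / 2)
        v = MAX_LINEAR * (1.0 - y_px / IMG_H)
        return v, w

    def send_command(v, w):
        # Stub: a real system would forward (v, w) to the wheelchair base.
        print(f"linear={v:.2f} m/s, angular={w:.2f} rad/s")

    # Example: the operator touches a point to the upper-right of centre.
    send_command(*touch_to_command(480, 120))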
Lifelong Federated Reinforcement Learning: A Learning Architecture for Navigation in Cloud Robotic Systems
This paper was motivated by the problem of how to make robots fuse and
transfer their experience so that they can effectively use prior knowledge and
quickly adapt to new environments. To address the problem, we present a
learning architecture for navigation in cloud robotic systems: Lifelong
Federated Reinforcement Learning (LFRL). In this work, we propose a knowledge
fusion algorithm for upgrading a shared model deployed on the cloud. Then,
effective transfer learning methods in LFRL are introduced. LFRL is consistent
with human cognitive science and fits well in cloud robotic systems.
Experiments show that LFRL greatly improves the efficiency of reinforcement
learning for robot navigation. The cloud robotic system deployment also shows
that LFRL is capable of fusing prior knowledge. In addition, we release a cloud
robotic navigation-learning website based on LFRL.
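The knowledge fusion algorithm itself is the paper's contribution and is not reproduced here; as a hedged sketch of the general idea, the code below fuses per-robot policy parameters into an upgraded shared cloud model by weighted averaging, with the weighting scheme and all names assumed.

    import numpy as np

    # Cloud-side fusion sketch in the spirit of LFRL: each robot uploads its
    # policy parameters and the cloud fuses them into a shared model.
    # Weighted averaging is an assumption; the paper's algorithm may differ.

    def fuse_models(models, weights=None):
        """Fuse a list of parameter dicts {layer_name: ndarray} into one."""
        if weights is None:
            weights = [1.0 / len(models)] * len(models)
        fused = {}
        for name in models[0]:
            fused[name] = sum(w * m[name] for w, m in zip(weights, models))
        return fused

    # Two robots with toy single-layer "policies".
    robot_a = {"w": np.array([[0.2, 0.8]]), "b": np.array([0.1])}
    robot_b = {"w": np.array([[0.6, 0.4]]), "b": np.array([-0.1])}

    # Weight robots by experience, e.g. episodes completed (assumed metric).
    shared = fuse_models([robot_a, robot_b], weights=[0.25, 0.75])
    print(shared["w"], shared["b"])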
Target Trailing With Safe Navigation With COLREGS for Maritime Autonomous Surface Vehicles
Systems and methods for operating autonomous waterborne vessels in a safe manner. The systems include hardware for identifying the locations and motions of other vessels, as well as the locations of stationary objects that represent navigation hazards. By applying to these data a computational method built on a maritime navigation algorithm that avoids hazards and obeys COLREGS using Velocity Obstacles, the autonomous vessel computes a safe and effective path to follow in order to accomplish a desired navigational end result, while operating so as to avoid hazards and to maintain compliance with standard navigational procedures defined by international agreement. The systems and methods have been successfully demonstrated on water with radar and stereo cameras as the perception sensors, and integrated with a higher-level planner for trailing a maneuvering target.
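The Velocity Obstacles idea named in the abstract admits a compact geometric test; the sketch below implements only that test (is a candidate velocity inside another vessel's collision cone?) and none of the COLREGS rule logic or the patented planner. The combined radius and example numbers are assumptions.

    import numpy as np

    # Velocity Obstacle test: a candidate own-ship velocity is unsafe if the
    # relative velocity points into the collision cone of another vessel,
    # modelled as a moving disc of the given combined radius.

    def in_velocity_obstacle(p_own, v_cand, p_obs, v_obs, radius):
        """True if the candidate velocity leads to collision with a moving disc."""
        d = p_obs - p_own          # relative position
        u = v_cand - v_obs         # relative velocity
        uu = np.dot(u, u)
        if uu == 0.0:
            return np.linalg.norm(d) < radius   # already overlapping?
        t = np.dot(d, u) / uu      # time of closest approach along the ray
        if t <= 0.0:
            return False           # moving apart
        closest = np.linalg.norm(d - t * u)
        return closest < radius

    # Own ship at the origin; contact 200 m east, heading west at 3 m/s.
    p_own, p_obs = np.array([0.0, 0.0]), np.array([200.0, 0.0])
    v_obs = np.array([-3.0, 0.0])

    print(in_velocity_obstacle(p_own, np.array([2.0, 0.0]), p_obs, v_obs, 50.0))  # True: head-on
    print(in_velocity_obstacle(p_own, np.array([0.0, 2.0]), p_obs, v_obs, 50.0))  # False: turning north clears the contact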
A model of ant route navigation driven by scene familiarity
In this paper we propose a model of visually guided route navigation in ants that captures the known properties of real behaviour whilst retaining mechanistic simplicity and thus biological plausibility. For an ant, the coupling of movement and viewing direction means that a familiar view specifies a familiar direction of movement. Since the views experienced along a habitual route will be more familiar, route navigation can be re-cast as a search for familiar views. This search can be performed with a simple scanning routine, a behaviour that ants have been observed to perform. We test the proposed route navigation strategy in simulation, by learning a series of routes through visually cluttered environments consisting of objects that are only distinguishable as silhouettes against the sky. In the first instance we determine view familiarity by exhaustive comparison with the set of views experienced during training. In further experiments we train an artificial neural network to perform familiarity discrimination using the training views. Our results indicate not only that the approach is successful, but also that the learnt routes show many of the characteristics of the routes of desert ants. As such, we believe the model represents the only detailed and complete model of insect route guidance to date. What is more, the model provides a general demonstration that visually guided routes can be produced with parsimonious mechanisms that do not specify when or what to learn, nor separate routes into sequences of waypoints.
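The familiarity-based scanning routine is simple enough to sketch. The toy below replaces panoramic views with 1-D brightness arrays and uses the exhaustive-comparison variant of familiarity described above; everything beyond the two-step recipe (scan headings, move toward the most familiar view) is an assumption for illustration.

    import numpy as np

    # Toy familiarity-based route guidance: at each step, scan a range of
    # headings, compare the view in each direction with stored training
    # views, and move in the most familiar direction.

    rng = np.random.default_rng(0)

    def view_at(heading, noise=0.0):
        """Toy 'retina': the world as seen when facing a given heading."""
        base = np.sin(np.linspace(0, 2 * np.pi, 60) + heading)
        return base + noise * rng.standard_normal(60)

    # Training: store views experienced while facing along the habitual route.
    route_heading = 0.3
    training_views = [view_at(route_heading, noise=0.05) for _ in range(20)]

    def unfamiliarity(view):
        # Exhaustive comparison: distance to the nearest stored training view.
        return min(np.sum((view - t) ** 2) for t in training_views)

    # Scanning routine: sample candidate headings, pick the most familiar one.
    candidates = np.linspace(-np.pi, np.pi, 72)
    scores = [unfamiliarity(view_at(h, noise=0.05)) for h in candidates]
    best = candidates[int(np.argmin(scores))]
    print(f"chosen heading: {best:.2f} rad (route heading was {route_heading})")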
Visual perception system and method for a humanoid robot
A robotic system includes a humanoid robot with robotic joints, each moveable using one or more actuators, and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts the exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.
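The patent's exposure-adaptation algorithm is not given in the abstract; the sketch below shows one conventional way such a loop can work, a proportional controller driving mean image brightness toward a target, with every constant assumed rather than taken from the patent.

    # Exposure-time adaptation sketch: lengthen exposure when the image is
    # too dark, shorten it when too bright, so that feature data is not
    # lost under threshold lighting. All constants are assumptions.

    TARGET_MEAN = 0.5               # desired mean pixel intensity (0..1), assumed
    GAIN = 1.0                      # proportional gain, assumed
    MIN_EXP, MAX_EXP = 1.0, 100.0   # exposure bounds in ms, assumed

    def adapt_exposure(exposure_ms, mean_intensity):
        """One step of proportional exposure control."""
        scale = 1.0 + GAIN * (TARGET_MEAN - mean_intensity)
        return min(MAX_EXP, max(MIN_EXP, exposure_ms * scale))

    # Simulated convergence: brightness roughly proportional to exposure.
    exposure = 5.0
    for _ in range(10):
        mean_intensity = min(1.0, 0.02 * exposure)  # toy camera response
        exposure = adapt_exposure(exposure, mean_intensity)
    print(f"exposure after 10 steps: {exposure:.1f} ms")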
Integrated Collision Avoidance System for Air Vehicle
Collision with ground/water/terrain and with midair obstacles is one of the common causes of severe aircraft accidents. Data from the coremicro AHRS/INS/GPS Integration Unit, a terrain database, and object-detection sensors are processed to produce audio/visual collision-warning messages and to detect and avoid terrain and obstacles through the generation of guidance commands in a closed-loop system. The vision sensors provide further information for the Integrated System, such as terrain recognition and ranging of terrain and obstacles, which plays an important role in improving the Integrated Collision Avoidance System.
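A minimal sketch of the kind of closed-loop terrain check such a system performs: project the aircraft state ahead along its velocity and warn when the predicted altitude falls below terrain height plus a safety margin. The lookahead time, margin, and single-ridge terrain stub are assumptions, not the product's algorithm.

    # Terrain-clearance warning sketch using an assumed terrain lookup.

    LOOKAHEAD_S = 30.0     # seconds of projection, assumed
    MARGIN_M = 150.0       # required clearance above terrain, assumed

    def terrain_height(x, y):
        # Stub for a terrain-database lookup; here a single ridge.
        return 400.0 if 1000.0 <= x <= 3000.0 else 50.0

    def collision_warning(pos, vel):
        """pos = (x, y, altitude) in metres; vel = (vx, vy, vz) in m/s."""
        x, y, alt = (pos[i] + LOOKAHEAD_S * vel[i] for i in range(3))
        return alt < terrain_height(x, y) + MARGIN_M

    # Level flight heading toward the ridge at 60 m/s.
    print(collision_warning((0.0, 0.0, 300.0), (60.0, 0.0, 0.0)))  # True: warn
    print(collision_warning((0.0, 0.0, 800.0), (60.0, 0.0, 0.0)))  # False: clear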