INTELLIGENT VISION-BASED NAVIGATION SYSTEM
This thesis presents a complete vision-based navigation system that can plan and
follow an obstacle-avoiding path to a desired destination on the basis of an internal map
updated with information gathered from its visual sensor.
For vision-based self-localization, the system uses new floor-edges-specific filters
for detecting floor edges and their pose, a new algorithm for determining the orientation of
the robot, and a new procedure for selecting the initial positions in the self-localization
procedure. Self-localization is based on matching visually detected features with those
stored in a prior map.
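The matching step can be illustrated with a minimal sketch (the function names and the nearest-neighbour scoring are assumptions for illustration, not the thesis's actual filters or initial-position selection procedure): detected floor-edge points in the robot frame are transformed by each candidate pose and scored against the prior map.

```python
import math

def transform(pt, pose):
    """Transform a point from the robot frame to the world frame given pose (x, y, theta)."""
    x, y, th = pose
    px, py = pt
    return (x + px * math.cos(th) - py * math.sin(th),
            y + px * math.sin(th) + py * math.cos(th))

def match_score(detected, prior_map, pose):
    """Sum of nearest-neighbour distances from transformed detections to map features."""
    total = 0.0
    for pt in detected:
        wx, wy = transform(pt, pose)
        total += min(math.hypot(wx - mx, wy - my) for mx, my in prior_map)
    return total

def localize(detected, prior_map, candidate_poses):
    """Pick the candidate pose that best explains the visual detections."""
    return min(candidate_poses, key=lambda p: match_score(detected, prior_map, p))
```

In this sketch the candidate poses play the role of the selected initial positions, and the best-matching pose is the self-localization estimate.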
For planning, the system demonstrates for the first time a real-world application of
the neural-resistive grid method to robot navigation. The neural-resistive grid is modified
with a new connectivity scheme that allows the representation of the collision-free space of
a robot with finite dimensions via divergent connections between the spatial memory layer
and the neural-resistive grid layer.
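The general resistive-grid idea (not the thesis's specific divergent-connection scheme for finite robot dimensions) can be sketched as follows: obstacle cells are removed from the grid, the goal node is held at a fixed potential, and repeated relaxation with a leak term approximates the network's steady state, so that steepest ascent from the start yields a collision-free path. The leak value and iteration count below are assumptions for illustration.

```python
def neighbours(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def plan(width, height, obstacles, goal, start, iters=200, leak=1.0):
    # Free cells only; obstacle cells are simply absent from the grid.
    v = {(x, y): 0.0 for x in range(width) for y in range(height)
         if (x, y) not in obstacles}
    for _ in range(iters):
        v[goal] = 1.0                       # goal held at fixed potential
        for cell in v:
            if cell == goal:
                continue
            nbrs = [v[n] for n in neighbours(cell) if n in v]
            # Leak to ground keeps a gradient toward the goal.
            v[cell] = sum(nbrs) / (len(nbrs) + leak)
    path, cur = [start], start
    while cur != goal and len(path) < width * height:
        cur = max((n for n in neighbours(cur) if n in v), key=v.get)
        path.append(cur)
    return path
```

Because each free cell's potential is strictly below its best neighbour's, steepest ascent cannot get trapped anywhere except at the goal.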
A new control system is proposed. It uses a Smith Predictor architecture that has
been modified for navigation applications and for intermittent delayed feedback typical of
artificial vision. A receding horizon control strategy is implemented using Normalised
Radial Basis Function nets as path encoders, to ensure continuous motion during the delay
between measurements.
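A Normalised RBF path encoding can be sketched as follows; the centre placement, basis width, and phase parametrisation are illustrative assumptions rather than the thesis's actual design. Planned via-points are blended by normalised Gaussian activations over a phase variable s in [0, 1], giving a smooth setpoint function the controller can sample continuously between delayed vision measurements.

```python
import math

def nrbf_path(via_points, width=0.1):
    """Encode a sequence of via-points as a smooth function of a phase variable."""
    centres = [i / (len(via_points) - 1) for i in range(len(via_points))]
    def path(s):
        acts = [math.exp(-((s - c) ** 2) / (2 * width ** 2)) for c in centres]
        total = sum(acts)
        norm = [a / total for a in acts]    # normalisation step of the NRBF net
        return tuple(sum(w * p[d] for w, p in zip(norm, via_points))
                     for d in range(len(via_points[0])))
    return path
```

Near each centre the corresponding via-point dominates, so the encoded path passes close to the planned waypoints while remaining smooth in between.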
The system is tested in a simplified environment where an obstacle placed
anywhere is detected visually and integrated into the path planning process.
The results show the validity of the control concept and the crucial importance of a
robust vision-based self-localization process.
Body swarm interface (BOSI): controlling robotic swarms using human bio-signals
Traditionally, robots are controlled using devices such as joysticks, keyboards, mice, and other similar human-computer interface (HCI) devices. Although this approach is effective and practical in some cases, it is restricted to healthy individuals without disabilities, and it requires the user to master the device before use. It also becomes complicated and non-intuitive when multiple robots must be controlled simultaneously with these traditional devices, as in the case of Human Swarm Interfaces (HSI).
This work presents a novel concept of using human bio-signals to control swarms of
robots. With this concept there are two major advantages: Firstly, it gives amputees and
people with certain disabilities the ability to control robotic swarms, which has previously
not been possible. Secondly, it also gives the user a more intuitive interface to control
swarms of robots by using gestures, thoughts, and eye movement.
We measure several bio-signals from the human body, including electroencephalography (EEG), electromyography (EMG), and electrooculography (EOG), using off-the-shelf products. After minimal signal processing, we decode the intended control action using machine learning techniques such as Hidden Markov Models (HMM) and K-Nearest Neighbors (K-NN). We employ formation controllers based on distance and displacement to control the shape and motion of the robotic swarm. Thought and gesture classifications are compared against ground truth, and the resulting pipelines are evaluated with both simulations and hardware experiments with swarms of ground robots and aerial vehicles.
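The K-NN stage of such a decoding pipeline can be sketched as follows; the feature vectors (e.g. mean absolute value per EMG channel) and the command labels are invented for illustration, and the preprocessing and HMM stages are not modelled here.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs.
    Returns the majority label among the k nearest training examples."""
    nearest = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

The decoded label would then be mapped to a swarm-level command (e.g. a target formation) by the formation controller.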
Reductionism ad absurdum: Attneave and Dennett cannot reduce Homunculus (and hence the mind)
Purpose – Neuroscientists act as proxies for implied anthropomorphic signal-processing beings within the brain, Homunculi. The latter examine the arriving neuronal spike-trains to infer internal and external states. But a Homunculus needs a brain of its own, to coordinate its capabilities – a brain that necessarily contains a Homunculus and so on indefinitely. Such infinity is impossible – and in well-cited papers, Attneave and later Dennett claim to eliminate it. How do their approaches differ and do they (in fact) obviate the Homunculi?
Design/methodology/approach – The Attneave and Dennett approaches are carefully scrutinized. To Attneave, Homunculi are effectively “decision-making” neurons that control behaviors. Attneave presumes that Homunculi, when successively nested, become successively “stupider”, limiting their numbers by diminishing their responsibilities. Dennett likewise postulates neuronal Homunculi that become “stupider” – but brain-wards, where greater sophistication might have been expected.
Findings – Attneave’s argument is Reductionist and it simply assumes-away the Homuncular infinity. Dennett’s scheme, which evidently derives from Attneave’s, ultimately involves the same mistakes. Attneave and Dennett fail, because they attempt to reduce intentionality to non-intentionality.
Research limitations/implications – Homunculus has been successively recognized over the centuries by philosophers, psychologists and (some) neuroscientists as a crucial conundrum of cognitive science. It still is.
Practical implications – Cognitive-science researchers need to recognize that Reductionist explanations of cognition may actually devolve to Homunculi, rather than eliminating them.
Originality/value – Two notable Reductionist arguments against the infinity of Homunculi are proven wrong. In their place, a non-Reductionist treatment of the mind, “Emergence”, is discussed as a means of rendering Homunculi irrelevant.
Adaptive Control of Arm Movement based on Cerebellar Model
This study is an attempt to take advantage of a cerebellar model to control a biomimetic arm. Aware that a variety of cerebellar models with different levels of detail have been developed, we focused on a high-level model called MOSAIC. This model is thought to be able to describe cerebellar functionality without getting into the details of the neural circuitry. To understand where this model fits, we glanced over the biology of the cerebellum and a few alternative models. Certainly, the arm control loop is composed of other components as well. We reviewed those elements with an emphasis on modeling for our simulation. Among these models, the arm and the muscle system received the most attention. The musculoskeletal model was tested independently, and by means of optimization techniques a human-like control of the arm through muscle activations was achieved. We have discussed how MOSAIC can solve a control problem and what drawbacks it has. Consequently, toward making practical use of the MOSAIC model, several ideas were developed and tested. In this process, we borrowed concepts and methods from control theory. Specifically, known schemes of adaptive control of a manipulator, linearization, and approximation were utilized. Our final experiment dealt with a modified MOSAIC model to adaptively control the arm. We call this model ORF-MOSAIC (Organized by Receptive Fields MOdular Selection And Identification for Control). With as few as 16 modules, we were able to control the arm in a workspace of 30 x 30 cm. The system was able to adapt to an external field as well as to handle new objects despite delays. The discussion section suggests that there are similarities between microzones in the cerebellum and the modules of this new model.
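The module-selection idea behind MOSAIC can be sketched as follows: each module's forward model predicts the next state, and normalised prediction likelihoods ("responsibilities") weight the modules' controllers. The scalar state and the sigma value are simplifying assumptions for illustration; the actual ORF-MOSAIC receptive-field organisation is not modelled here.

```python
import math

def responsibilities(predictions, observed, sigma=1.0):
    """Softmax-like responsibility signals from Gaussian prediction likelihoods."""
    likes = [math.exp(-((p - observed) ** 2) / (2 * sigma ** 2)) for p in predictions]
    total = sum(likes)
    return [l / total for l in likes]

def blended_command(commands, predictions, observed):
    """Blend the modules' control outputs by their responsibilities."""
    r = responsibilities(predictions, observed)
    return sum(w * u for w, u in zip(r, commands))
```

A module whose forward model predicts the observed state well dominates the blend, which is how the architecture switches between contexts such as different loads.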
User centered neuro-fuzzy energy management through semantic-based optimization
This paper presents a cloud-based building energy management system, underpinned by semantic middleware, that integrates an enhanced sensor network with advanced analytics, accessible through an intuitive Web-based user interface. The proposed solution is described in terms of its three key layers: 1) user interface; 2) intelligence; and 3) interoperability. The system’s intelligence is derived from simulation-based optimized rules, historical sensor data mining, and a fuzzy reasoner. The solution enables interoperability through a semantic knowledge base, which also contributes intelligence through reasoning and inference abilities, and which is enhanced through intelligent rules. Finally, building energy performance monitoring is delivered alongside optimized rule suggestions and a negotiation process in a 3-D Web-based interface using WebGL. The solution has been validated in a real pilot building to illustrate the strength of the approach, where it has shown over 25% energy savings. The relevance of this work in the field is discussed, and it is argued that the proposed solution is mature enough for testing across further buildings.
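The fuzzy-reasoning layer can be illustrated with a minimal sketch; the membership functions, temperature thresholds, and rule consequents below are invented for illustration and are not the system's actual rules.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b and support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def heating_adjust(temp_c):
    """Fuzzify a temperature reading, fire toy rules, and defuzzify
    a heating setpoint adjustment by weighted average."""
    cold = tri(temp_c, 10, 15, 21)
    comfortable = tri(temp_c, 19, 21, 23)
    warm = tri(temp_c, 21, 27, 32)
    # Assumed rule consequents: cold -> +2 C, comfortable -> 0 C, warm -> -2 C
    degrees = [cold, comfortable, warm]
    outputs = [2.0, 0.0, -2.0]
    total = sum(degrees)
    return sum(d * o for d, o in zip(degrees, outputs)) / total if total else 0.0
```

In the described system, such rules would additionally be informed by the simulation-optimized rules and the semantic knowledge base rather than fixed constants.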
TOWARDS THE GROUNDING OF ABSTRACT CATEGORIES IN COGNITIVE ROBOTS
The grounding of language in humanoid robots is a fundamental problem, especially
in social scenarios which involve the interaction of robots with human beings. Indeed,
natural language represents the most natural interface for humans to interact and exchange information about concrete entities like KNIFE and HAMMER and abstract concepts such as MAKE and USE. This research domain is important not only for the advances that it can produce in the design of human-robot communication systems, but also for the implications that it can have for cognitive science.
Abstract words are used in daily conversations among people to describe events and
situations that occur in the environment. Many scholars have suggested that the
distinction between concrete and abstract words is a continuum along which all entities vary in their level of abstractness.
The work presented herein aimed to ground abstract concepts, similarly to concrete
ones, in perception and action systems. This made it possible to investigate how different behavioural and cognitive capabilities can be integrated in a humanoid robot in
order to bootstrap the development of higher-order skills such as the acquisition of
abstract words. To this end, three neuro-robotics models were implemented.
The first neuro-robotics experiment consisted of training a humanoid robot to perform a set of motor primitives (e.g. PUSH, PULL, etc.) that, hierarchically combined, led to the acquisition of higher-order words (e.g. ACCEPT, REJECT). The implementation of this model, based on a feed-forward artificial neural network,
permitted the assessment of the training methodology adopted for the grounding of
language in humanoid robots.
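The hierarchical grounding scheme can be sketched as follows; the primitive-to-command mappings and the word compositions are toy values for illustration, not the model's learned neural-network weights.

```python
# Toy joint-space commands grounding each motor primitive (assumed values).
PRIMITIVES = {
    "PUSH": [(0.2, 0.0)],
    "PULL": [(-0.2, 0.0)],
    "GRAB": [(0.0, 0.5)],
}
# Higher-order words grounded as ordered combinations of primitives (assumed).
HIGHER_ORDER = {
    "REJECT": ["GRAB", "PUSH"],
    "ACCEPT": ["GRAB", "PULL"],
}

def ground(word):
    """Expand a word into the sequence of motor commands that grounds it."""
    if word in PRIMITIVES:
        return list(PRIMITIVES[word])
    return [cmd for prim in HIGHER_ORDER[word] for cmd in ground(prim)]
```

The point of the sketch is the recursion: a higher-order word bottoms out in the same sensorimotor commands that ground the basic primitives.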
In the second experiment, the architecture used for carrying out the first study
was reimplemented employing recurrent artificial neural networks that enabled the
temporal specification of the action primitives to be executed by the robot. This made it possible to increase the number of action combinations that can be taught to the robot for the generation of more complex movements.
For the third experiment, a model based on recurrent neural networks that integrated
multi-modal inputs (i.e. language, vision and proprioception) was implemented for
the grounding of abstract action words (e.g. USE, MAKE). Abstract representations
of actions ("one-hot" encoding) used in the other two experiments were replaced with the joint values recorded from the iCub robot's sensors.
Experimental results showed that motor primitives have different activation patterns depending on the action sequence in which they are embedded. Furthermore, the simulations suggested that the acquisition of concepts related to abstract action words requires the reactivation of internal representations similar to those activated during the acquisition of the basic concepts, directly grounded in perceptual and sensorimotor knowledge, which are contained in the hierarchical structure of the words used to ground the abstract action words.
This study was financed by the EU project RobotDoC (235065) under the Seventh Framework Programme (FP7), Marie Curie Actions Initial Training Network
- …