Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks
A major challenge in realizing intelligent robots is to supply them with
cognitive abilities that allow ordinary users to program them easily and
intuitively. One such approach is to teach work tasks by interactive
demonstration. To make this effective and convenient for the user, the machine
must be capable of establishing a common focus of attention and be able to use
and integrate spoken instructions, visual perceptions, and non-verbal cues
such as gestural commands. We report progress in building a hybrid
architecture that combines statistical methods, neural networks, and finite
state machines into an integrated system for instructing grasping tasks by
man-machine interaction. The system combines the GRAVIS robot for visual
attention and gestural instruction with an intelligent interface for speech
recognition and linguistic interpretation, and a modality fusion module to
allow multi-modal, task-oriented man-machine communication with respect to
dexterous robot manipulation of objects. Comment: 7 pages, 8 figures
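The finite-state backbone of such an architecture can be illustrated with a minimal sketch. All state and event names below are hypothetical placeholders, not taken from the paper: the idea is only that fused speech and gesture events drive transitions toward a grasp action.

```python
# Hypothetical sketch of a finite state machine for multi-modal instruction
# fusion. State and event names are illustrative, not from the GRAVIS system.
TRANSITIONS = {
    ("idle", "speech:grasp"): "await_object",     # spoken command starts a task
    ("await_object", "speech:object"): "grasp",   # object identified verbally
    ("await_object", "gesture:point"): "grasp",   # or indicated by pointing
}

def step(state, event):
    """Advance the FSM; events with no defined transition leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

In this sketch, speech and gesture are interchangeable routes to the same transition, which is one simple way a fusion module can treat redundant modalities.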
Artificial consciousness and the consciousness-attention dissociation
Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems; these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.
Visual Attention and Distributed Processing of Visual Information for the Control of Humanoid Robots
Modeling Bottom-Up Visual Attention Using Dihedral Group D4
Published version. Source at http://dx.doi.org/10.3390/sym8080079 In this paper, we first briefly describe the dihedral group D4 that serves as the basis for
calculating saliency in our proposed model. Second, our saliency model makes two major changes to
a recent state-of-the-art model known as group-based asymmetry. First, based on the properties of
the dihedral group D4, we simplify the asymmetry calculations associated with the measurement
of saliency. This results in an algorithm that reduces the number of calculations by at least half,
making it the fastest among the six best algorithms used in this research article. Second, in
order to maximize the information across different chromatic and multi-resolution features, the color
image space is de-correlated. We evaluate our algorithm against 10 state-of-the-art saliency models.
Our results show that by using optimal parameters for a given dataset, our proposed model can
outperform the best saliency algorithm in the literature. However, as the differences among the (few)
best saliency models are small, we suggest that our proposed model is among the
best and the fastest among the best. Finally, as future work, we suggest that our proposed
approach to saliency can be extended to include three-dimensional image data.
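The core idea of group-based asymmetry can be sketched as follows. This is a minimal illustration, not the paper's optimized algorithm: it scores each square patch by how much it differs from its own images under the eight elements of D4 (four rotations and four reflections), so that symmetric patches score low and asymmetric ones high. Patch size and the tiling scheme are assumptions for illustration.

```python
import numpy as np

def d4_transforms(patch):
    """Return the 8 elements of the dihedral group D4 applied to a square patch:
    rotations by 0/90/180/270 degrees and their horizontal reflections."""
    rots = [np.rot90(patch, k) for k in range(4)]
    return rots + [np.fliplr(r) for r in rots]

def asymmetry(patch):
    """Group-based asymmetry score: total absolute difference between the patch
    and its non-identity D4 transforms. Zero for a fully D4-symmetric patch."""
    transforms = d4_transforms(patch)
    return sum(np.abs(patch - g).sum() for g in transforms[1:])

def saliency_map(image, k=8):
    """Tile a grayscale image into k x k patches and score each by D4 asymmetry.
    (Illustrative tiling; the actual model also de-correlates color channels.)"""
    h, w = image.shape[0] // k, image.shape[1] // k
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = asymmetry(image[i*k:(i+1)*k, j*k:(j+1)*k].astype(float))
    return out
```

The simplification claimed in the abstract would reduce the number of per-patch comparisons; the sketch above uses the naive all-transforms form for clarity.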