Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks
A major challenge for the realization of intelligent robots is to supply them
with cognitive abilities that allow ordinary users to program them easily and
intuitively. One way of such programming is teaching work tasks by interactive
demonstration. To make this effective and convenient for the user, the machine
must be capable of establishing a common focus of attention and able to use
and integrate spoken instructions, visual perceptions, and non-verbal cues
such as gestural commands. We report progress in building a hybrid
architecture that combines statistical methods, neural networks, and finite
state machines into an integrated system for instructing grasping tasks by
man-machine interaction. The system combines the GRAVIS robot for visual
attention and gestural instruction with an intelligent interface for speech
recognition and linguistic interpretation, and a modality fusion module to
allow multi-modal, task-oriented man-machine communication for dextrous robot
manipulation of objects.
Comment: 7 pages, 8 figures
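The role of a finite state machine in such a fusion architecture can be sketched as follows. This is a minimal illustration of the idea only, not the GRAVIS system's actual code; the class and method names are hypothetical. The machine waits until both modalities have contributed to a complete instruction before acting:

```python
# Hypothetical sketch: a finite-state machine that fuses a spoken verb with a
# pointed-at target before triggering a grasp action. Not the paper's code.
class FusionFSM:
    def __init__(self):
        self.state = "idle"
        self.verb = None      # from the speech/linguistic module, e.g. "grasp"
        self.target = None    # from the gesture/attention module, e.g. (x, y)

    def on_speech(self, verb):
        self.verb = verb
        return self._step()

    def on_gesture(self, target):
        self.target = target
        return self._step()

    def _step(self):
        # Act only once both modalities yield a complete instruction.
        if self.verb is not None and self.target is not None:
            action = (self.verb, self.target)
            self.verb = self.target = None
            self.state = "idle"
            return action
        self.state = "waiting"
        return None

fsm = FusionFSM()
assert fsm.on_speech("grasp") is None                       # speech alone: wait
assert fsm.on_gesture((0.3, 0.1)) == ("grasp", (0.3, 0.1))  # fused instruction
```

The point of the FSM layer is that each modality can arrive asynchronously and in either order; the machine simply refuses to act until the instruction is unambiguous.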
Planning is Just a Way of Avoiding Figuring Out What To Do Next
The idea of planning and plan execution is just an intuition-based decomposition. There is no reason it has to be that way. Most likely, in the long term, real empirical evidence from systems we know to be built that way (from designing them like that) will determine whether it is a very good idea or not. Any particular planner is simply an abstraction barrier. Below that level we get a choice of whether to slot in another planner or to place a program which does the right thing. Why stop there? Maybe we can go up the hierarchy and eliminate the planners there too. To do this we must move from a state-based way of reasoning to a process-based way of acting.
MIT Artificial Intelligence Laboratory
Motion-induced sound level control for socially-aware robot navigation
With the growing presence of robots in human-populated environments, it
becomes necessary to render the relation between robots and humans natural
rather than invasive. For that purpose, robots need to ensure that the
acoustic noise induced by their motion does not disturb people in the same
environment. This dissertation proposes a method that allows a robot to learn
how to control the amount of noise it produces, taking into account the
environmental context and the robot's mechanical characteristics. The robot
performs its task while adapting its speed so that it produces less acoustic
noise than the environment does, and thus does not disturb nearby people.
Before the robot can execute a task in a given environment, it needs to learn
how much noise it induces while moving at different speeds in that
environment. For that, a microphone was installed on a wheeled robot to gather
information about its acoustic noise. To validate the proposed solution, tests
in different environments with different background sounds were performed.
After that, a PIR sensor was installed on the robot to verify the robot's
ability to use the motion controller when someone enters the sensor's field of
view. This second test demonstrated the possibility of using the proposed
solution in other systems.
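The speed-adaptation idea above can be sketched as follows. This is a hypothetical illustration, not the dissertation's implementation: the learned noise model is reduced to a toy lookup table from speed to measured self-noise, and the function name is invented.

```python
# Hypothetical sketch of the speed-selection idea: from a learned mapping of
# speed -> induced noise (dB), pick the fastest speed whose self-noise stays
# below the measured ambient level, so the robot is masked by the environment.
def pick_speed(noise_model, ambient_db):
    """noise_model: dict mapping speed (m/s) to measured self-noise (dB)."""
    feasible = [v for v, db in noise_model.items() if db < ambient_db]
    # If even the slowest speed is louder than the environment, crawl anyway.
    return max(feasible) if feasible else min(noise_model)

model = {0.2: 38.0, 0.5: 45.0, 0.8: 52.0, 1.1: 60.0}
assert pick_speed(model, ambient_db=55.0) == 0.8  # noisy room: move faster
assert pick_speed(model, ambient_db=40.0) == 0.2  # quiet room: slow down
```

In the dissertation's setting the table would be filled in by the learning phase, by driving at each speed in the target environment and recording the microphone readings.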
Deep neural network approach in human-like redundancy optimization for anthropomorphic manipulators
© 2013 IEEE. Human-like behavior has emerged in robotics as a way of improving the quality of Human-Robot Interaction (HRI). For imitating human-like behavior, kinematic mapping between a human arm and a robot manipulator is one of the popular solutions. To fulfill this requirement, a reconstruction method called swivel motion was adopted to achieve human-like imitation. This approach models the regression relationship between robot pose and swivel motion angle, and then reaches the human-like swivel motion using the redundant degrees of freedom of the manipulator. This characteristic holds for most redundant anthropomorphic robots. Although artificial neural network (ANN) based approaches show moderate robustness, their predictive performance is limited. In this paper, we propose a novel deep convolutional neural network (DCNN) structure for enhancing the reconstruction and reducing online prediction time. Finally, we utilized the trained DCNN model for redundancy control of a 7-DoF anthropomorphic robot arm (LWR4+, KUKA, Germany) for validation. A demonstration is presented to show the human-like behavior of the anthropomorphic manipulator. The proposed approach can also be applied to control other anthropomorphic robot manipulators in industrial or biomedical engineering applications.
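The pose-to-swivel-angle regression at the heart of this approach can be illustrated with a deliberately simple model on synthetic data. The paper trains a DCNN; the linear ridge regression below only shows the mapping being learned, and all data and names here are invented for illustration:

```python
# Toy sketch (not the paper's DCNN): regress a swivel angle from pose features
# with ridge regression on synthetic, noiseless data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 6))          # toy pose features (xyz + rpy)
w_true = np.array([0.5, -0.2, 0.1, 0.3, 0.0, -0.4])
y = X @ w_true                                 # synthetic swivel angles (rad)

lam = 1e-3                                     # ridge regularization strength
w = np.linalg.solve(X.T @ X + lam * np.eye(6), X.T @ y)
pred = X @ w
assert np.max(np.abs(pred - y)) < 1e-2         # near-exact on this toy data
```

A real arm's pose-to-swivel mapping is nonlinear, which is why the paper reaches for a deep network; the sketch only fixes the input/output contract of the learned model.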
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
Cooperative Avoidance Control-based Interval Fuzzy Kohonen Networks Algorithm in Simple Swarm Robots
A novel technique for controlling a swarm robot's movement is presented and analyzed in this paper. It allows a group of robots to move as a single entity, performing functions such as obstacle avoidance at the group level. The control strategy enhances the mobile robots' performance, whereby their forthcoming decisions are influenced by previous experiences during navigation in addition to the current range inputs. An Interval Fuzzy-Kohonen Network (IFKN) algorithm is utilized in this strategy. By employing a small number of rules, the IFKN algorithm can be adapted to swarm reactive control. The control strategy provides a much faster response to expected events than the Fuzzy Kohonen Network (FKN) algorithm. The effectiveness of the proposed technique is also demonstrated in a series of practical tests on our experimental setup, using five low-cost robots with limited sensing abilities and low computational effort on each single robot in the swarm. The results show that swarm robots based on the proposed technique have the ability to perform cooperative behavior, produce minimal collisions, and are capable of navigating around square-shaped obstacles.
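The nearest-prototype flavor of such rule-based reactive control can be sketched as follows. This is a hypothetical simplification of the Fuzzy-Kohonen matching idea: the current range-sensor pattern is matched to the closest learned prototype, which carries a steering command. The rules, ranges, and names are invented for illustration:

```python
# Hedged sketch of prototype-matching reactive control (not the paper's IFKN):
# pick the steering command of the prototype closest to the sensor reading.
def fkn_steer(ranges, prototypes):
    """prototypes: list of (sensor_pattern, command) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, command = min(prototypes, key=lambda p: dist(ranges, p[0]))
    return command

rules = [
    ((1.0, 1.0, 1.0), "forward"),     # clear ahead
    ((0.2, 1.0, 1.0), "turn_right"),  # obstacle on the left
    ((1.0, 1.0, 0.2), "turn_left"),   # obstacle on the right
]
assert fkn_steer((0.25, 0.9, 1.0), rules) == "turn_right"
```

The small rule table is what keeps the per-robot computational cost low; the interval extension in the paper additionally handles uncertainty in the sensor readings, which this sketch omits.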