Simulation and Framework for the Humanoid Robot TigerBot
Walking humanoid robotics is a developing field, and different humanoid robots allow for different kinds of testing. TigerBot is a new full-scale humanoid robot with seven-degree-of-freedom legs whose specifications make it a suitable platform for humanoid robotics research. TigerBot currently has encoders on each joint, allowing position control; its sensors and joints connect to Teensy microcontrollers and to an ODroid XU4 single-board computer that serves as the central control unit. The components communicate through the Robot Operating System (ROS), so the user can control TigerBot with ROS. A simulation setup is important so that a user can test TigerBot's capabilities on a model before using the real robot. A working walking gait in simulation serves as a test of the simulator, demonstrates TigerBot's capability to walk, and opens further development of other walking gaits. A model of TigerBot was set up in the Gazebo simulator, which allowed different walking gaits to be tested. The gaits were generated following the linear inverted pendulum model and the basic zero-moment point (ZMP) concept: center-of-mass trajectories are converted to joint angles through inverse kinematics. In simulation, while the robot follows the predetermined joint angles, a proportional-integral controller keeps the model upright by modifying the flex joint angle of the ankles. The real robot can also run the gaits while suspended in the air. The model has shown the walking gait based on the ZMP concept to be stable, if slow, and the actual robot has been shown to air-walk following the gait. The simulation and the framework on the robot can be used to continue work on this walking gait, or they can be expanded for different methods and applications such as navigation, computer vision, and walking on uneven terrain with disturbances.
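The gait-generation step described above can be sketched with the linear inverted pendulum model's analytic solution. This is an illustrative sketch, not the paper's code: the function name and the parameter values (0.8 m CoM height, 0.4 s support phase) are assumptions, and the resulting CoM samples would then go through inverse kinematics to produce joint angles.

```python
import math

def lipm_com_trajectory(x0, v0, p_zmp, z_c=0.8, g=9.81, duration=0.4, dt=0.01):
    """Analytic CoM trajectory of the linear inverted pendulum model (LIPM)
    for one support phase with the ZMP held constant at p_zmp:
        x(t) = p + (x0 - p)*cosh(t/Tc) + Tc*v0*sinh(t/Tc),  Tc = sqrt(z_c/g)
    Returns (t, com_position, com_velocity) samples for inverse kinematics."""
    Tc = math.sqrt(z_c / g)
    n = int(round(duration / dt)) + 1
    traj = []
    for i in range(n):
        t = i * dt
        x = p_zmp + (x0 - p_zmp) * math.cosh(t / Tc) + Tc * v0 * math.sinh(t / Tc)
        v = (x0 - p_zmp) / Tc * math.sinh(t / Tc) + v0 * math.cosh(t / Tc)
        traj.append((t, x, v))
    return traj

# CoM starts 5 cm behind the stance foot (the ZMP) moving forward at 0.2 m/s.
traj = lipm_com_trajectory(x0=-0.05, v0=0.2, p_zmp=0.0)
```

Chaining such phases, with the ZMP stepping from footprint to footprint, yields the slow but stable walking pattern the abstract describes.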
Adaptive biped locomotion from a single demonstration using movement primitives
Doctoral dissertation in Electrical Engineering.
This work addresses the problem of learning to imitate human locomotion actions
through low-level trajectories encoded with motion primitives and generalizing
them to new situations from a single demonstration. In this line of thought, the
main objectives of this work are twofold: The first is to analyze, extract and
encode human demonstrations taken from motion capture data in order to model
biped locomotion tasks. However, transferring motion skills from humans to
robots is not limited to simple reproduction; it requires evaluating their
ability to adapt to new situations, as well as to deal with unexpected
disturbances. Therefore, the second objective is to develop and evaluate a
control framework for action shaping such that the single-demonstration can be
modulated to varying situations, taking into account the dynamics of the robot
and its environment.
The idea behind the approach is to address the problem of generalization from
a single-demonstration by combining two basic structures. The first structure is
a pattern generator system consisting of movement primitives learned and
modelled by dynamical systems (DS). This encoding approach possesses
desirable properties that make it well-suited for trajectory generation, namely
the possibility to change parameters online, such as the amplitude and the
frequency of the limit cycle, and an intrinsic robustness against small
perturbations. The second structure, which is embedded in the previous one,
consists of coupled phase oscillators that organize actions into functional
coordinated units. The changing contact conditions plus the associated impacts
with the ground lead to models with multiple phases. Instead of forcing the robot’s
motion into a predefined fixed timing, the proposed pattern generator explores
transitions between phases that emerge from the interaction of the robot system
with the environment, triggered by sensor-driven events. The proposed approach
is tested in a dynamics simulation framework and several experiments are
conducted to validate the methods and to assess the performance of a humanoid
robot.
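The second structure above, coupled phase oscillators that lock limbs into coordinated units, can be sketched in a few lines. This is a minimal illustration under assumed names and gains, not the thesis's controller: two oscillators are coupled toward antiphase, as a left/right leg pair would be, and amplitude and frequency remain free parameters that can be changed online.

```python
import math

def coupled_phase_oscillators(omega=2 * math.pi, coupling=5.0, amp=0.3,
                              steps=2000, dt=0.001):
    """Two phase oscillators coupled so that they lock in antiphase.
    Each oscillator obeys
        phi_i' = omega + K * sin(phi_j - phi_i - pi)
    and the joint setpoint for leg i is q_i = amp * sin(phi_i).
    The antiphase state (phase difference pi) is the stable fixed point
    of the phase-difference dynamics."""
    phi = [0.0, 0.5]  # start away from antiphase on purpose
    for _ in range(steps):
        d0 = omega + coupling * math.sin(phi[1] - phi[0] - math.pi)
        d1 = omega + coupling * math.sin(phi[0] - phi[1] - math.pi)
        phi[0] += d0 * dt
        phi[1] += d1 * dt
    phase_diff = (phi[1] - phi[0]) % (2 * math.pi)
    joints = [amp * math.sin(p) for p in phi]
    return phase_diff, joints

phase_diff, joints = coupled_phase_oscillators()
```

In a sensor-driven version, phase transitions would be triggered by events such as foot contact rather than by elapsed time, which is the key point of the proposed pattern generator.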
Machine Vision-based Obstacle Avoidance for Mobile Robot
Obstacle avoidance is an essential ability for mobile robots, especially humanoid robots, to operate in their environment. This ability is based on recognizing the colours of the obstacle and of the field, as well as on performing avoidance movements when the robot detects an obstacle in its path. This research develops a system that detects obstacle objects and the field using a colour range in HSV format and extracts the edges of obstacle objects with the findContours method after a threshold filter. The filter results are then processed with the boundingRect method to extract the coordinates of the detected object. In testing, obstacle-colour detection with OpenCV succeeded 100% of the time; for movement based on the object's colour image, the robot performed an edging motion past the red obstacle in 80% of trials when the contour area exceeded 12,500 pixels, and moved forward toward the obstacle in 70% of trials when the contour area was below 12,500 pixels.
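The pipeline described is: HSV in-range mask, contour extraction, bounding rectangle, then a contour-area decision. Below is a dependency-free sketch of that logic; the paper itself uses OpenCV's findContours and boundingRect, so the helper names here are hypothetical stand-ins, while the 12,500-pixel threshold is taken from the abstract.

```python
def in_hsv_range(pixel, lo, hi):
    """True if an (H, S, V) pixel lies within [lo, hi] on every channel
    (OpenCV-style hue range 0-179)."""
    return all(lo[c] <= pixel[c] <= hi[c] for c in range(3))

def bounding_rect(mask):
    """Bounding rectangle (x, y, w, h) of the True pixels in a 2-D mask,
    mirroring what cv2.boundingRect gives for a detected contour."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        return None
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1

def avoidance_command(contour_area, threshold=12500):
    """Decision rule from the paper: a large contour area means the obstacle
    is close, so the robot edges around it; otherwise it keeps approaching."""
    return "edge_around_obstacle" if contour_area > threshold else "move_forward"

# Tiny 4x4 mask with an obstacle blob in the lower-right corner.
mask = [[False, False, False, False],
        [False, False, False, False],
        [False, False, True,  True],
        [False, False, True,  True]]
rect = bounding_rect(mask)
```

The contour-area threshold effectively acts as a crude distance estimate: a nearer obstacle of a given size projects to a larger image region.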
Climbing and Walking Robots
Nowadays robotics is one of the most dynamic fields of scientific research. The shift of robotics research from manufacturing to service applications is clear. Over the last decades, interest in studying climbing and walking robots has increased. This growing interest spans many areas, the most important of which are mechanics, electronics, medical engineering, cybernetics, controls, and computers. Today's climbing and walking robots combine manipulative, perceptive, communicative, and cognitive abilities, and they are capable of performing many tasks in industrial and non-industrial environments. Surveillance, planetary exploration, emergency rescue operations, reconnaissance, petrochemical applications, construction, entertainment, personal services, intervention in severe environments, transportation, and medicine are some examples from the very diverse application fields of climbing and walking robots. Given the great progress in this area of robotics, it is anticipated that the next generation of climbing and walking robots will enhance lives and change the way humans work, think, and make decisions. This book presents the state-of-the-art achievements, recent developments, applications, and future challenges of climbing and walking robots, presented in 24 chapters by authors from throughout the world. The book serves as a reference, especially for researchers interested in mobile robots, and is also useful for industrial engineers and graduate students in advanced study.
Gait-Behavior Optimization Considering Arm Swing and Toe Mechanisms for Biped Robot on Rough Road
Shibaura Institute of Technology, 2019.
Learning Interaction Primitives for Biomechanical Prediction
This dissertation is focused on developing an algorithm to provide current state estimation and future state prediction for biomechanical features of human walking. The goal is to develop a system capable of evaluating the current action a subject is taking while walking and then using it to predict the future states of biomechanical features.
This work focuses on the exploration and analysis of Interaction Primitives (Amor et al., 2014) and their relevance to biomechanical prediction for human walking. Built on the framework of Probabilistic Movement Primitives, Interaction Primitives utilize an EKF SLAM algorithm to localize and map a distribution over the weights of a set of basis functions. The prediction properties of Bayesian Interaction Primitives were used to predict real-time foot forces from 9-degree-of-freedom IMUs mounted on a subject's tibias. This method shows that real-time human biomechanical features can be predicted, with a promising link to real-time control applications. Master's thesis, Electrical Engineering.
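The core idea of estimating a distribution over basis-function weights from streaming observations can be illustrated with a much-simplified linear Kalman update. The thesis itself uses an EKF SLAM formulation over Probabilistic Movement Primitive weights; everything below, including the function names, the Gaussian basis, and the parameter values, is an illustrative assumption rather than the author's implementation.

```python
import math

def gaussian_basis(t, centers, width):
    """Evaluate Gaussian basis functions at phase t."""
    return [math.exp(-(t - c) ** 2 / (2 * width ** 2)) for c in centers]

def kalman_update(w, P, phi, y, R=0.01):
    """One scalar-measurement Kalman update on the weight estimate w with
    covariance P, for the linear observation model y ~ phi . w + noise(R)."""
    n = len(w)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    S = sum(phi[i] * Pphi[i] for i in range(n)) + R   # innovation variance
    K = [Pphi[i] / S for i in range(n)]               # Kalman gain
    resid = y - sum(phi[i] * w[i] for i in range(n))  # innovation
    w = [w[i] + K[i] * resid for i in range(n)]
    P = [[P[i][j] - K[i] * Pphi[j] for j in range(n)] for i in range(n)]
    return w, P

# Estimate weights online for a toy signal y(t) = sin(2*pi*t) on one gait cycle.
centers = [0.0, 0.25, 0.5, 0.75, 1.0]
w = [0.0] * 5
P = [[100.0 if i == j else 0.0 for j in range(5)] for i in range(5)]
for k in range(101):
    t = k / 100
    w, P = kalman_update(w, P, gaussian_basis(t, centers, 0.125),
                         math.sin(2 * math.pi * t))
```

Once the weights are localized, evaluating the basis expansion at future phase values yields the predicted biomechanical signal, which is the prediction step the abstract refers to.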