An algorithm to solve the inverse kinematics of a Stewart platform
Parallel mechanical structures in motion are rigid, fast, and accurate. Among parallel mobile systems, the best known and most widely used is the Stewart platform, which is also the oldest, combining speed, rigidity, and accuracy. The paper presents the main elements of Stewart platforms. When an actuating (motor) element consists of two parts in relative motion, from the point of view of the drive train and especially in dynamic calculations, it is more convenient to represent that element as a single moving item. The paper presents an exact, original analytical-geometry method for determining the kinematic and dynamic parameters of a parallel mobile structure. Compared with other known methods, the presented method has the great advantage of being an exact analytical calculation rather than an iterative approximation.
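The inverse kinematics of a Stewart platform is indeed well suited to exact closed-form computation: given the desired platform pose, each leg length follows directly. A minimal sketch (the joint layouts and frame conventions below are illustrative assumptions, not the paper's specific geometry):

```python
import numpy as np

def stewart_ik(base_pts, plat_pts, pos, rpy):
    """Closed-form inverse kinematics of a Stewart platform.

    base_pts, plat_pts: (6, 3) arrays of universal-joint coordinates,
        expressed in the base frame and the platform frame respectively.
    pos: (3,) desired position of the platform origin in the base frame.
    rpy: (roll, pitch, yaw) of the platform in radians.
    Returns the six required leg lengths.
    """
    r, p, y = rpy
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y),  np.cos(y), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx  # platform orientation in the base frame
    # Each leg vector runs from its base joint to the transformed platform joint.
    legs = (R @ plat_pts.T).T + pos - base_pts
    return np.linalg.norm(legs, axis=1)
```

Because every leg length is an independent norm, the solution is exact and non-iterative, which mirrors the analytical (rather than iterative-approximate) character the abstract emphasizes.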
Learning body models: from humans to humanoids
Humans and animals excel in combining information from multiple sensory
modalities, controlling their complex bodies, adapting to growth, failures, or
using tools. These capabilities are also highly desirable in robots and are
displayed by machines to some extent, yet artificial creatures still lag
behind. The key foundation is an internal representation of the body that the
agent - human, animal, or robot - has developed. The mechanisms of operation of
body models in the brain are largely unknown and even less is known about how
they are constructed from experience after birth. In collaboration with
developmental psychologists, we conducted targeted experiments to understand
how infants acquire first "sensorimotor body knowledge". These experiments
inform our work in which we construct embodied computational models on humanoid
robots that address the mechanisms behind learning, adaptation, and operation
of multimodal body representations. At the same time, we assess which of the
features of the "body in the brain" should be transferred to robots to give
rise to more adaptive and resilient, self-calibrating machines. We extend
traditional robot kinematic calibration focusing on self-contained approaches
where no external metrology is needed: self-contact and self-observation.
We present a problem formulation that allows several ways of closing the
kinematic chain to be combined simultaneously, along with a calibration toolbox and
experimental validation on several robot platforms. Finally, next to models of
the body itself, we study peripersonal space - the space immediately
surrounding the body. Again, embodied computational models are developed and
subsequently, the possibility of turning these biologically inspired
representations into safe human-robot collaboration is studied.
Comment: 34 pages, 5 figures. Habilitation thesis, Faculty of Electrical
Engineering, Czech Technical University in Prague (2021).
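Self-contained kinematic calibration of the kind described above reduces to a nonlinear least-squares fit of kinematic parameters to closure or self-observation constraints. A minimal sketch using a hypothetical planar 2-link arm and self-observed end-effector positions, standing in for the thesis's full multi-chain formulation:

```python
import numpy as np
from scipy.optimize import least_squares

def fk(lengths, q):
    """Forward kinematics of a planar 2-link arm (a toy stand-in for a
    full humanoid kinematic model). q: (N, 2) joint angles in radians."""
    l1, l2 = lengths
    x = l1 * np.cos(q[:, 0]) + l2 * np.cos(q[:, 0] + q[:, 1])
    y = l1 * np.sin(q[:, 0]) + l2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# Simulated "self-observation": end-effector positions measured at
# 50 random joint configurations, generated from the (unknown) true links.
rng = np.random.default_rng(0)
q = rng.uniform(-np.pi, np.pi, size=(50, 2))
true_lengths = np.array([0.30, 0.25])
measured = fk(true_lengths, q)

# Calibration: minimise the residual between the model's predicted
# positions and the observed ones, starting from a rough initial guess.
res = least_squares(lambda p: (fk(p, q) - measured).ravel(), x0=[0.25, 0.20])
# res.x recovers approximately the true link lengths.
```

Self-contact constraints fit the same template: instead of an observed position, the residual encodes that two predicted body points coincide, and several such constraint types can simply be stacked into one residual vector.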
Using humanoid robots to study human behavior
Our understanding of human behavior advances as our humanoid robotics work progresses, and vice versa. This team's work focuses on trajectory formation and planning, learning from demonstration, oculomotor control, and interactive behaviors. They program robotic behavior based on how we humans "program" behavior in, or train, each other.
Tactile Perception And Visuotactile Integration For Robotic Exploration
As the close perceptual sibling of vision, the sense of touch has historically received less attention than it deserves in both human psychology and robotics. In robotics, this may be attributed to at least two reasons. First, touch suffers from a vicious cycle of immature sensor technology: industry demand stays low, so there is little incentive to make the sensors developed in research labs easy to manufacture and marketable. Second, the situation stems from a fear of making contact with the environment, which is avoided wherever possible so that visually perceived states do not change before a carefully estimated, ballistically executed physical interaction. Fortunately, the latter viewpoint is starting to change. Work in interactive perception and contact-rich manipulation is on the rise, and for good reason the manipulation and locomotion communities are turning their attention toward deliberate physical interaction with the environment prior to, during, and after a task.
We approach the problem of perception prior to manipulation, using the sense of touch, for the purpose of understanding the surroundings of an autonomous robot. The overwhelming majority of work in perception for manipulation is based on vision. While vision is a fast and global modality, it is insufficient as the sole modality, especially in environments where the ambient light or the objects therein do not lend themselves to vision: darkness, smoky or dusty rooms in search and rescue, underwater scenes, transparent and reflective objects, or retrieving items inside a bag. Even in normal lighting conditions, during a manipulation task the target object and fingers are usually occluded from view by the gripper. Moreover, vision-based grasp planners, typically trained in simulation, often make errors that cannot be foreseen until contact. As a step towards addressing these problems, we first present a global shape-based feature descriptor for object recognition using non-prehensile tactile probing alone. Then, we investigate making the tactile modality, local and slow by nature, more efficient for the task by predicting the most cost-effective moves using active exploration. To combine the local and physical advantages of touch with the fast and global advantages of vision, we propose and evaluate a learning-based method for visuotactile integration for grasping.
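The abstract does not specify the global tactile descriptor itself. As an illustration only, a toy translation-invariant descriptor over accumulated contact points with nearest-neighbour recognition; the histogram scheme and all names here are assumptions, not the thesis's method:

```python
import numpy as np

def tactile_descriptor(contacts, bins=8):
    """Toy global shape descriptor: a normalised histogram of pairwise
    distances between tactile contact points. Invariant to translation
    and rotation, and roughly insensitive to sampling density."""
    d = np.linalg.norm(contacts[:, None, :] - contacts[None, :, :], axis=-1)
    d = d[np.triu_indices(len(contacts), k=1)]  # unique pairs only
    hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max() + 1e-9))
    return hist / hist.sum()

def recognise(query, library):
    """Nearest-neighbour match of a query descriptor against known objects."""
    names = list(library)
    dists = [np.linalg.norm(query - library[n]) for n in names]
    return names[int(np.argmin(dists))]
```

Active exploration would then choose the next probe location expected to disambiguate the descriptor fastest, rather than probing exhaustively.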