The STRANDS project: long-term autonomy in everyday environments
Thanks to the efforts of the robotics and autonomous systems community, the applications and capacities of robots are ever increasing. There is growing demand from end users for autonomous service robots that can operate in real environments for extended periods. In the Spatiotemporal Representations and Activities for Cognitive Control in Long-Term Scenarios (STRANDS) project (http://strandsproject.eu), we are tackling this demand head-on by integrating state-of-the-art artificial intelligence and robotics research into mobile service robots and deploying these systems for long-term installations in security and care environments. Our robots have been operational for a combined duration of 104 days over four deployments, autonomously performing end-user-defined tasks and traversing 116 km in the process. In this article, we describe the approach we used to enable long-term autonomous operation in everyday environments and how our robots are able to use their long run times to improve their own performance.
Approach to grasp with Romeo robot using RTM
Romeo, a humanoid robot whose main goal is to help people with disabilities, needs to be capable of grasping objects. This need motivated a first approach to grasping with Romeo. Specifically, the aim is to grasp an object that has been recognised by the RTM software (Recognition, Tracking and Modelling of Objects) using either one of Romeo's cameras or an external one. The approach is integrated into the ROS framework as an independent package named romeo_grasper, whose code is freely available to be shared or improved.
The general pipeline starts with the RTM software obtaining the pose of the object in the camera reference frame; the object must have been previously modelled, or at least be present in the database. This pose is then transformed into the robot reference frame and sent to MoveIt, which, combined with the RViz visualisation tool and several ROS packages for Romeo, moves the robot's arm to an optimal position for the grasp. Currently, the IK solver from the KDL library is used; an attempt to implement the IKFast solver for Romeo is also described, although it was unsuccessful.
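The frame change at the heart of this pipeline, taking the object pose reported by RTM in the camera frame and re-expressing it in the robot's reference frame, amounts to applying a homogeneous transform. A minimal sketch follows; the transform values, function names, and camera mounting are illustrative assumptions, not part of romeo_grasper (in practice ROS would typically delegate this step to tf).

```python
import math

def make_transform(rotation_z_rad, translation):
    """Build a 4x4 homogeneous transform: a rotation about the Z axis
    followed by a translation. Plain Python lists, no external libraries."""
    c, s = math.cos(rotation_z_rad), math.sin(rotation_z_rad)
    tx, ty, tz = translation
    return [
        [c,   -s,  0.0, tx],
        [s,    c,  0.0, ty],
        [0.0, 0.0, 1.0, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform_point(T, point):
    """Apply homogeneous transform T to a 3D point, returning [x, y, z]."""
    x, y, z = point
    p = [x, y, z, 1.0]
    return [sum(T[i][j] * p[j] for j in range(4)) for i in range(3)]

# Hypothetical extrinsics: camera 0.1 m above the robot base origin,
# rotated 90 degrees about Z relative to the base frame.
T_base_camera = make_transform(math.pi / 2, (0.0, 0.0, 0.1))

# Object position reported by RTM in the camera frame (illustrative values).
object_in_camera = (0.3, 0.0, 0.5)
object_in_base = transform_point(T_base_camera, object_in_camera)
```

The resulting base-frame coordinates are what would be handed to MoveIt as the target for the arm motion.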
Finally, all the systems were made to work together, but the grasp suffers from some imprecision, meaning that it sometimes cannot be accomplished. However, the experiments revealed where this inaccuracy comes from, and several ways to reduce it are proposed. Furthermore, guidelines are set out to lead Romeo towards achieving the grasp using machine learning, allowing it to accomplish its goal: helping those who need it.
Semantic models of scenes and objects for service and industrial robotics
What may seem straightforward for the human perception system is still challenging for robots. Automatically segmenting the elements of highest relevance or salience, i.e. the semantics, is non-trivial given the high level of variability in the world and the limits of vision sensors. This is especially true when multiple ambiguous sources of information are available, as is the case with moving robots. This thesis leverages the availability of contextual cues and multiple points of view to make the segmentation task easier. Four robotic applications are presented, two designed for service robotics and two for an industrial context. Semantic models of indoor environments are built by enriching geometric reconstructions with semantic information about objects, structural elements, and humans. Our approach exploits the importance of context and the availability of multiple sources of information and multiple viewpoints, showing through extensive experiments on several datasets that these are all crucial elements for boosting state-of-the-art performance.
Furthermore, moving to applications in which robots analyse object surfaces rather than their surroundings, semantic models of Carbon Fiber Reinforced Polymers are built by augmenting geometric models with accurate measurements of superficial fiber orientations and of inner defects invisible to the human eye. We succeeded in reaching industrial-grade accuracy, making these models useful for autonomous quality inspection and process optimization. In all applications, special attention is paid to fast methods suitable for real robots, such as the two prototypes presented in this thesis.