268 research outputs found
Real-time Batched Distance Computation for Time-Optimal Safe Path Tracking
In human-robot collaboration, there is a trade-off between the speed of
collaborative robots and the safety of human workers. In our
previous paper, we introduced a time-optimal path tracking algorithm designed
to maximize speed while ensuring safety for human workers. This algorithm runs
in real time and provides, at every control cycle, the fastest control input
that remains safe with respect to ISO standards. However, true optimality has not been achieved
due to inaccurate distance computation resulting from conservative model
simplification. To attain true optimality, we require a method that can compute
distances (1) at many robot configurations along a trajectory, (2) in
real time for online robot control, and (3) as precisely as possible for optimal
control. In this paper, we propose a batched, fast, and precise distance
checking method based on precomputed link-local SDFs. Our method can check
distances for 500 waypoints along a trajectory in less than 1 millisecond
using a GPU at runtime, making it well suited for time-critical robotic control.
Additionally, we propose a neural approximation that accelerates
preprocessing by a factor of two. Finally, we experimentally demonstrate that our
method allows a 6-DoF robot to start moving earlier than a geometric-primitives-based
distance checker allows in a dynamic, collaborative environment.
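The core operation such a method relies on, querying precomputed link-local SDF
grids for a batch of points, can be sketched as follows (a minimal NumPy
illustration under our own assumptions, not the paper's implementation: a real
system would transform obstacle points into each link frame, interpolate between
voxels, and run the lookup on the GPU):

```python
import numpy as np

def batched_sdf_distance(points, sdf_grid, origin, voxel_size):
    """Look up distances for a batch of points in a precomputed SDF grid.

    points:     (N, 3) obstacle points expressed in the link-local frame
    sdf_grid:   (X, Y, Z) precomputed signed-distance voxel grid for one link
    origin:     (3,) position of the grid's corner voxel
    voxel_size: edge length of one voxel
    """
    # Nearest-voxel lookup (a real implementation would interpolate)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(sdf_grid.shape) - 1)  # clamp to grid bounds
    return sdf_grid[idx[:, 0], idx[:, 1], idx[:, 2]]
```

Because the lookup is a single vectorised gather, the same pattern maps directly
onto a GPU kernel, which is what makes millisecond-scale batched checking
plausible.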
A 6-DOF haptic manipulation system to verify assembly procedures on CAD models
During the design phase of products and before going into production, it is
necessary to verify the presence of mechanical play, tolerances, and
encumbrances on production mockups. This work introduces a multi-modal system
that allows verifying assembly procedures of products in Virtual Reality
starting directly from CAD models, thus reducing costs and speeding up
the assessment phase of product design. For this purpose, the design of a novel
6-DOF haptic device is presented. The system's performance has
been validated in a demonstration scenario employing state-of-the-art
volumetric rendering of interaction forces together with a stereoscopic
visualization setup.
Industrial Robot Collision Handling in Harsh Environments
The focus in this thesis is on robot collision handling systems, mainly collision detection
and collision avoidance for industrial robots operating in harsh environments
(e.g. potentially explosive atmospheres found in the oil and gas sector). Collision
detection should prevent the robot from colliding and thereby avoid a potential
accident. Collision avoidance builds on the concept of collision detection and aims
to enable the robot to find a collision-free path circumventing the obstacle and
leading to the goal position.
The work has been done in collaboration with ABB Process Automation Division
with focus on applications in oil and gas. One of the challenges in this work
has been to contribute to safer use of industrial robots in potentially explosive environments.
One of the main ideas is that a robot should be able to work together
with a human as a robotic co-worker, for instance on an oil rig. The robot should
then perform heavy lifting and precision tasks, while the operator controls the steps
of the operation, typically through a hand-held interface. In such situations, when
the human works alongside the robot in potentially explosive environments, it
is important that the robot has a way of handling collisions.
The work in this thesis presents solutions for collision detection in Papers A, B,
and C; thereafter, solutions for collision avoidance are presented in Papers D and E.
Paper A approaches the problem of collision detection by comparing an expert system
and a hidden Markov model (HMM) approach. An industrial robot equipped with a
laser scanner is used to gather environment data at an arbitrary set of points in the work
cell. The two methods are used to detect obstacles within the work cell and show
different strengths. The expert system shows an advantage in algorithm
performance, while the HMM method shows its strength in the ease of learning models
of the environment. Paper B builds upon Paper A by incorporating a CAD model
of the environment. The CAD model allows for a very fast setup of the expert
system, since no manual map creation is needed. The HMM can be trained based
on the CAD model, which addresses the previous dependency on real sensor data
for training purposes.
Paper C compares two different world-model representation techniques, namely
octrees and point clouds using both a graphics processing unit (GPU) and a central
processing unit (CPU). The GPU showed its strength for uncompressed point clouds
and high resolution point cloud models. However, if the resolution gets low enough,
the CPU starts to outperform the GPU. This shows that parallel problems containing
large data sets are suitable for GPU processing, but smaller parallel problems are
still handled better by the CPU.
In paper D, real-time collision avoidance is studied for a lightweight industrial
robot using a development platform controller. A Microsoft Kinect sensor is used
for capturing 3D depth data of the environment. The environment data is used
together with an artificial potential fields method for generating virtual forces used
for obstacle avoidance. The forces are projected onto the end-effector, preventing
collision with the environment while moving towards the goal. Forces are also
projected onto the elbow of the 7-degree-of-freedom robot, which allows for null-space
movement. The algorithms for manipulating the sensor data and calculating
virtual forces were developed for the GPU; this resulted in fast algorithms and is the
enabling factor for real-time collision avoidance.
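The artificial-potential-fields step described above can be sketched roughly as
follows (a minimal NumPy version of the classic attractive/repulsive
formulation; the function name, gains, and influence distance are illustrative
assumptions, not the thesis's actual GPU implementation):

```python
import numpy as np

def potential_field_force(ee_pos, goal, obstacle_points,
                          k_att=1.0, k_rep=1.0, d0=0.5):
    """Virtual force from an artificial-potential-fields scheme.

    The attractive term pulls the end-effector toward the goal; the
    repulsive term pushes it away from the nearest obstacle point once
    it is closer than the influence distance d0.
    """
    f_att = k_att * (goal - ee_pos)        # attractive force toward goal
    diffs = ee_pos - obstacle_points       # vectors pointing away from obstacles
    dists = np.linalg.norm(diffs, axis=1)
    i = np.argmin(dists)                   # nearest obstacle point
    d = dists[i]
    if 1e-9 < d < d0:
        # Classic repulsive gradient, directed away from the obstacle
        f_rep = k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diffs[i] / d)
    else:
        f_rep = np.zeros(3)
    return f_att + f_rep
```

In the thesis's setup, a force of this kind is applied at the end-effector and a
second one at the elbow, the latter steering only the null-space motion of the
redundant arm.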
Finally, paper E builds on the work in paper D by providing a framework for
using the algorithms on a standard industrial controller and robot with minimal
modifications. Furthermore, algorithms were developed specifically for the robot controller
to handle reactive movement. In addition, a full collision avoidance system
for an end-user application, which is very simple to implement, is presented.
The work described in this thesis presents solutions for collision detection and collision
avoidance for safer use of robots. The work is also a step towards making
businesses more competitive by enabling easy integration of collision handling for
industrial robots.
Motion planning in dynamic environments using context-aware human trajectory prediction
Over the years, the separate fields of motion planning, mapping, and human trajectory
prediction have advanced considerably. However, the literature is still sparse in
providing practical frameworks that enable mobile manipulators to perform whole-body
movements and account for the predicted motion of moving obstacles. Previous
optimisation-based motion planning approaches that use distance fields have suffered
from the high computational cost required to update the environment representation. We
demonstrate that GPU-accelerated predicted composite distance fields significantly
reduce the computation time compared to calculating distance fields from scratch. We
integrate this technique with a complete motion planning and perception framework that
accounts for the predicted motion of humans in dynamic environments, enabling reactive
and pre-emptive motion planning. To achieve this, we propose and implement a novel
human trajectory prediction method that combines intention recognition with trajectory
optimisation-based motion planning. We validate the resulting framework on a real-world
Toyota Human Support Robot (HSR) using live RGB-D sensor data from the onboard camera.
In addition to providing analysis on a publicly available dataset, we release the
Oxford Indoor Human Motion (Oxford-IHM) dataset and demonstrate state-of-the-art
performance in human trajectory prediction. The Oxford-IHM dataset is a human
trajectory prediction dataset in which people walk between regions of interest in an
indoor environment, observed by both static and robot-mounted RGB-D cameras while
tracked with a motion-capture system.
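The composite-distance-field idea, building a scene field as the voxel-wise
minimum of precomputed per-object fields placed at their predicted positions,
can be sketched as follows (a simplified CPU illustration with assumed names;
the actual framework is GPU-accelerated and also handles distances beyond each
object's local grid):

```python
import numpy as np

def composite_distance_field(object_fields, object_offsets, shape):
    """Compose a scene distance field from per-object distance fields.

    object_fields:  list of small (x, y, z) distance grids, one per object
    object_offsets: list of integer (i, j, k) placements in the scene grid,
                    e.g. taken from a predicted human trajectory
    shape:          shape of the full scene grid
    """
    scene = np.full(shape, np.inf)
    for field, (i, j, k) in zip(object_fields, object_offsets):
        x, y, z = field.shape
        region = scene[i:i + x, j:j + y, k:k + z]
        np.minimum(region, field, out=region)  # in-place voxel-wise minimum
    return scene
```

Because only the cheap minimum is recomputed when an object moves, updating the
scene field for each predicted human pose avoids rebuilding the distance field
from scratch.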
3D Sensor Placement and Embedded Processing for People Detection in an Industrial Environment
Papers I, II, and III are extracted from the dissertation and uploaded as separate
documents to meet post-publication requirements for self-archiving of IEEE conference
papers. At a time when autonomy is being introduced in more and more areas, computer
vision plays a very important role. In an industrial environment, the ability to create
a real-time virtual version of a volume of interest provides a broad range of
possibilities, including safety-related systems such as vision-based anti-collision and
personnel tracking. In an offshore environment, where such systems are not common, the
task is challenging due to rough weather and environmental conditions, but the result
of introducing such safety systems could potentially be lifesaving, as personnel work
close to heavy, huge, and often poorly instrumented moving machinery and equipment.
This thesis presents research on important topics related to enabling computer vision
systems in industrial and offshore environments, including a review of the most
important technologies and methods. A prototype 3D sensor package is developed,
consisting of different sensors and a powerful embedded computer. This, together with a
novel, highly scalable point cloud compression and sensor fusion scheme, makes it
possible to create a real-time 3D map of an industrial area. The question of where to
place the sensor packages in an environment where occlusions are present is also
investigated. The result is a set of algorithms for automatic sensor placement
optimisation, where the goal is to place sensors such that the covered volume of
interest is maximised, with as few occluded zones as possible. The method also includes
redundancy constraints, where important sub-volumes can be defined to be viewed by more
than one sensor. Lastly, a people detection scheme using a merged point cloud from six
different sensor packages as input is developed.
Using a combination of point cloud clustering, flattening, and convolutional neural
networks, the system successfully detects multiple people in an outdoor industrial
environment, providing real-time 3D positions. The sensor packages and methods are
tested and verified at the Industrial Robotics Lab at the University of Agder, and the
people detection method is also tested in a relevant outdoor industrial testing
facility. The experiments and results are presented in the papers attached to this
thesis.
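The flattening step, projecting a 3D point cluster to a fixed-size top-down
image that a 2D CNN can consume, might look roughly like this (an illustrative
sketch; the resolution, image size, and max-height encoding are our assumptions,
not the thesis's exact pipeline):

```python
import numpy as np

def flatten_cluster(points, resolution=0.05, size=64):
    """Project a 3D point cluster to a fixed-size top-down 2D image.

    points: (N, 3) points of one cluster as (x, y, z)
    Returns a (size, size) image, centred on the cluster, whose pixel
    values are the maximum height observed in each ground cell.
    """
    centre = points[:, :2].mean(axis=0)
    # Ground-plane cell indices, shifted so the cluster centre maps
    # to the middle of the image
    ij = np.floor((points[:, :2] - centre) / resolution).astype(int) + size // 2
    keep = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
    img = np.zeros((size, size))
    for (i, j), z in zip(ij[keep], points[keep, 2]):
        img[i, j] = max(img[i, j], z)  # keep the tallest point per cell
    return img
```

An image of this kind can then be classified as person/non-person by a standard
2D convolutional network, sidestepping the need for 3D convolutions over the
raw point cloud.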