7 research outputs found

    Vision Based Object Recognition and Localisation by a Wireless Connected Distributed Robotic System

    Get PDF
    Object recognition and localisation are important processes in computer vision and robotics. Advances in computer vision have produced many object recognition techniques, but most are computationally intensive and require robots with powerful processing systems. For small robots, these techniques are not applicable because of execution-time constraints. In this study, an optimised implementation of a SURF-based recognition technique is presented. Suitable image pre-processing techniques were developed which reduced the recognition time on small robots with limited processing resources from 39 seconds to 780 milliseconds. This recognition technique was adopted by a team of small robots trained in advance to search for objects of interest in the environment. For the localisation of the robots and objects, a new template designed for passive-marker-based tracking was introduced. These markers were placed on top of each robot and tracked by two ceiling-mounted cameras. The information from both sources, that is, the ceiling-mounted cameras and the team of robots, was used collectively to localise the objects in the environment. The objects were localised with an error ranging from 2.8 cm to 5.2 cm from their actual positions in a test arena measuring 150 x 163 cm.
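    The pipeline described above, aggressive image pre-processing followed by SURF feature matching, can be sketched with OpenCV. The following is a minimal illustration rather than the author's implementation: SURF ships in the opencv-contrib package (cv2.xfeatures2d) and may be absent from builds that exclude non-free algorithms, and the resize width, Hessian threshold, ratio, and match count below are assumed values.

        # Minimal sketch of a SURF-based recognition pipeline with downscaling
        # pre-processing, loosely following the approach described above.
        import cv2

        def preprocess(image, max_width=320):
            """Grayscale and downscale the frame to cut feature-extraction time."""
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            scale = max_width / gray.shape[1]
            if scale < 1.0:
                gray = cv2.resize(gray, None, fx=scale, fy=scale,
                                  interpolation=cv2.INTER_AREA)
            return gray

        def recognise(query_img, trained_img, min_matches=10):
            """Return True if enough distinctive SURF matches link the images."""
            surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
            kp1, des1 = surf.detectAndCompute(preprocess(query_img), None)
            kp2, des2 = surf.detectAndCompute(preprocess(trained_img), None)
            if des1 is None or des2 is None:
                return False
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            good = []
            for pair in matcher.knnMatch(des1, des2, k=2):
                # Lowe's ratio test keeps only clearly distinctive matches.
                if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
                    good.append(pair[0])
            return len(good) >= min_matches

    Downscaling before detection is where most of such a speed-up would come from, since feature-extraction cost grows with image area.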

    Development and evaluation of vision processing algorithms in multi-robotic systems.

    Get PDF
    The trend in swarm robotics research is shifting towards the design of more complicated systems in which the robots can assemble into a robotic organism. In such systems, a single robot has very limited memory and processing resources, but the complete system is rich in these resources. Vision sensors provide rich awareness of the surroundings, but vision algorithms require intensive processing; vision processing tasks are therefore the best candidates for distributed processing in such systems. To perform distributed vision processing, a number of scenarios are considered in both the swarm and the robotic organism forms. In the swarm form, the robots use a low-bandwidth wireless communication medium, so only simple visual features can be exchanged between robots. This is addressed in a swarm-mode scenario where novel distance-vector features are exchanged within a swarm of robots to generate a precise environmental map, which facilitates robot navigation in the environment. If features must encode high-density information, sharing them over the limited-bandwidth wireless channel is not possible. Methods were therefore devised which process such features onboard and then share only the outcome, so that vision processing is still performed in a distributed fashion. This is shown in another swarm-mode scenario in which a number of optimisation stages are followed and novel image pre-processing techniques are developed that enable the robots to perform onboard object recognition and then share the outcome, in terms of object identity and its distance from the robot, to localise the objects. In the robotic organism, a reliable communication medium facilitates distributed vision processing, and this is presented in two scenarios. In the first, the robotic organism detects objects in the environment in a distributed fashion; but to gain detailed awareness of its surroundings, the organism needs to learn these objects. This leads to the second scenario, which presents a modular approach to object classification and recognition. This approach provides a mechanism to learn newly detected objects and also ensures a faster response in object recognition. Using the modular approach, it is also demonstrated that the collective use of 4 distributed processing resources in a robotic organism can provide 5 times the performance of an individual robot module. The overall performance was comparable to an individual, less flexible robot (e.g., Pioneer 3-AT) with significantly higher processing capability.
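    As a concrete illustration of the final localisation step, the sketch below shows how two robots' shared outcomes, each an object identity plus a measured distance, can be fused with the robots' known positions by intersecting range circles. The function name and coordinate conventions are assumptions for illustration, not the thesis code.

        # Hypothetical fusion of shared (object_id, distance) reports: with two
        # robots at known positions, the object lies where their range circles
        # intersect (two candidates; a third report or the map disambiguates).
        import math

        def localise_from_ranges(p1, r1, p2, r2):
            """Intersect two range circles; returns the candidate positions."""
            (x1, y1), (x2, y2) = p1, p2
            d = math.hypot(x2 - x1, y2 - y1)
            if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
                return []  # reports are inconsistent, no intersection
            a = (r1**2 - r2**2 + d**2) / (2 * d)   # p1 to chord midpoint
            h = math.sqrt(max(r1**2 - a**2, 0.0))  # half chord length
            mx = x1 + a * (x2 - x1) / d
            my = y1 + a * (y2 - y1) / d
            offx = h * (y2 - y1) / d
            offy = h * (x2 - x1) / d
            return [(mx + offx, my - offy), (mx - offx, my + offy)]

        # Example: robots at (0, 0) and (100, 0) cm both report "box_1".
        print(localise_from_ranges((0, 0), 60.0, (100, 0), 80.0))
        # -> [(36.0, -48.0), (36.0, 48.0)]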

    Behavior Flexibility for Autonomous Unmanned Aerial Systems

    Get PDF
    Autonomous unmanned aerial systems (UAS) could supplement and eventually subsume a substantial portion of the mission set currently executed by remote pilots, making UAS more robust, responsive, and numerous than teleoperation alone permits. Unfortunately, the development of robust autonomous systems is difficult, costly, and time-consuming. Furthermore, the resulting systems often reuse few proven software components and offer limited adaptability to new tasks. This work presents a development platform for UAS which promotes behavioral flexibility. The platform incorporates the Unified Behavior Framework (UBF; a modular, extensible autonomy framework), the Robot Operating System (ROS; a robotic software framework), and PX4 (an open-source flight controller). Simulations of UBF agents identified a combination of reactive robotic control strategies effective for small-scale navigation tasks by a UAS in the presence of obstacles. Finally, flight tests provide a partial validation of the simulated results. The development platform presented in this work offers robust and responsive behavioral flexibility for UAS agents in simulation and reality. This work lays the foundation for further development of a unified autonomous UAS platform supporting advanced planning algorithms and inter-agent communication by providing a behavior-flexible framework in which to implement, execute, extend, and reuse behaviors.
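    The core UBF idea is that independent behaviors each recommend an action and an arbiter selects among (or fuses) the recommendations. The sketch below illustrates that pattern in Python; the class and method names are illustrative assumptions, not the framework's actual API.

        # UBF-style arbitration sketch: each behavior returns an action with a
        # vote, and a winner-take-all arbiter forwards the strongest one.
        from dataclasses import dataclass

        @dataclass
        class Action:
            forward: float  # commanded forward velocity, m/s
            yaw: float      # commanded yaw rate, rad/s
            vote: float     # behavior's confidence in this recommendation

        class AvoidObstacle:
            def gen_action(self, state):
                near = state["obstacle_range"] < 2.0
                # Full vote while an obstacle is close; silent otherwise.
                return Action(0.0, 0.8, 1.0) if near else Action(0.0, 0.0, 0.0)

        class GoToWaypoint:
            def gen_action(self, state):
                # Steer proportionally toward the waypoint at a modest vote.
                return Action(1.0, 0.5 * state["bearing_error"], 0.5)

        class WinnerTakeAllArbiter:
            def __init__(self, behaviors):
                self.behaviors = behaviors

            def gen_action(self, state):
                return max((b.gen_action(state) for b in self.behaviors),
                           key=lambda a: a.vote)

        arbiter = WinnerTakeAllArbiter([AvoidObstacle(), GoToWaypoint()])
        print(arbiter.gen_action({"obstacle_range": 1.2, "bearing_error": 0.3}))
        # Avoidance wins while the obstacle is inside 2 m.

    Swapping the arbiter (e.g., for weighted fusion) or adding behaviors requires no change to existing behaviors, which is the reuse and flexibility the abstract emphasizes.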

    Safe navigation and motion coordination control strategies for unmanned aerial vehicles

    Full text link
    Unmanned aerial vehicles (UAVs) have become very popular for many military and civilian applications, including agriculture, construction, mining, and environmental monitoring. A desirable feature for UAVs is the ability to navigate and perform tasks autonomously with minimal human interaction. This is a very challenging problem due to several factors such as the high complexity of UAV applications, operation in harsh environments, limited payload and onboard computing power, and highly nonlinear dynamics. Therefore, more research is still needed towards developing advanced, reliable control strategies for UAVs to enable safe navigation in unknown and dynamic environments. This problem is even more challenging for multi-UAV systems, where it is more efficient to utilize information shared among the networked vehicles. Therefore, the work presented in this thesis contributes towards the state of the art in UAV control for safe autonomous navigation and motion coordination of multi-UAV systems. The first part of this thesis deals with single-UAV systems. Initially, a hybrid navigation framework is developed for autonomous mobile robots using a general 2D nonholonomic unicycle model that can be applied to different types of UAVs, ground vehicles and underwater vehicles, considering only lateral motion. Then, the more complex problem of three-dimensional (3D) collision-free navigation in unknown/dynamic environments is addressed. To that end, advanced 3D reactive control strategies are developed adopting the sense-and-avoid paradigm to produce quick reactions around obstacles. A special case of navigation in 3D unknown confined environments (i.e. tunnel-like environments) is also addressed. General 3D kinematic models are considered in the design, which makes these methods applicable to different UAV types in addition to underwater vehicles. Moreover, different implementation methods for these strategies with quadrotor-type UAVs are investigated, considering UAV dynamics in the control design. Practical experiments and simulations were carried out to analyze the performance of the developed methods. The second part of this thesis addresses safe navigation for multi-UAV systems. Distributed motion coordination methods for flocking and 3D area coverage are developed. These methods scale well computationally to large-scale systems. Simulations were performed to verify the performance of these methods on systems of different sizes.
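    The 2D nonholonomic unicycle model named above is the standard kinematic model x' = v cos(theta), y' = v sin(theta), theta' = omega. A minimal forward-Euler sketch (variable names and step size are assumptions for illustration):

        # One Euler step of the unicycle kinematics used for planar vehicles:
        # state (x, y, theta), controls v (speed) and omega (turn rate).
        import math

        def unicycle_step(x, y, theta, v, omega, dt):
            return (x + v * math.cos(theta) * dt,
                    y + v * math.sin(theta) * dt,
                    theta + omega * dt)

        # Example: one second at 1 m/s while turning at pi/2 rad/s traces a
        # quarter circle, ending with heading ~pi/2.
        x, y, th = 0.0, 0.0, 0.0
        for _ in range(100):
            x, y, th = unicycle_step(x, y, th, v=1.0, omega=math.pi / 2, dt=0.01)
        print(round(x, 3), round(y, 3), round(th, 3))

    The nonholonomic constraint is visible in the equations: the vehicle can only move along its current heading, which is why reactive strategies steer through omega rather than commanding arbitrary displacements.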

    Ultrasonic sensor platforms for non-destructive evaluation

    Get PDF
    Robotic vehicles are receiving increasing attention for use in Non-Destructive Evaluation (NDE) due to their attractiveness in terms of cost, safety, and their accessibility to areas where manual inspection is not practical. A reconfigurable Lamb wave scanner using autonomous robotic platforms is presented. The scanner is built from a fleet of wireless miniature robotic vehicles, each with a non-contact ultrasonic payload capable of generating the A0 Lamb wave mode in plate specimens. An embedded Kalman filter gives the robots a positional accuracy of 10 mm. A computer simulator, to facilitate the design and assessment of the reconfigurable scanner, is also presented. Transducer behaviour has been simulated using a Linear Systems (LS) approximation, with wave propagation in the structure modelled using the Local Interaction Simulation Approach (LISA). The integration of the LS and LISA approaches was validated for use in Lamb wave scanning by comparison with both analytical techniques and more computationally intensive commercial finite element/difference codes. Starting with fundamental dispersion data, the work goes on to describe the simulation of wave propagation and the subsequent interaction with artificial defects and plate boundaries. The computer simulator was used to evaluate several imaging techniques, including local inspection of the area under the robot and an extended method that emits an ultrasonic wave and listens for echoes (B-scan). These algorithms were implemented on the robotic platform and experimental results are presented. The Synthetic Aperture Focusing Technique (SAFT) was evaluated as a means of improving the fidelity of B-scan data. It was found that SAFT is only effective for transducers with reasonably wide beam divergence, necessitating small transducers approximately 5 mm wide. Finally, an algorithm for robot localisation relative to plate sections was proposed and experimentally validated.
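    SAFT is, at its core, delay-and-sum focusing over the scan aperture: each image pixel accumulates the A-scan samples whose round-trip delay matches that pixel's distance from each scan position. The sketch below is an illustrative implementation under assumed names and sampling parameters, not the thesis code.

        # Delay-and-sum SAFT over B-scan data.
        # bscan: (n_positions, n_samples) array of A-scans
        # positions: lateral transducer positions (same units as depths)
        # c: wave speed in the plate; fs: sampling rate of the A-scans
        import numpy as np

        def saft(bscan, positions, depths, c, fs):
            n_pos, n_samp = bscan.shape
            image = np.zeros((n_pos, len(depths)))
            for i, x in enumerate(positions):           # pixel lateral position
                for j, z in enumerate(depths):          # pixel depth
                    acc = 0.0
                    for k, xk in enumerate(positions):  # each scan position
                        r = np.hypot(x - xk, z)         # one-way path length
                        s = int(round(2 * r / c * fs))  # round-trip delay, samples
                        if s < n_samp:
                            acc += bscan[k, s]
                    image[i, j] = acc
            return image

    The wide-beam requirement reported above follows directly from this sum: a flaw must be insonified from many scan positions for the coherent summation to sharpen it, which is why small (roughly 5 mm) transducers with wide divergence are needed.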

    Vision Based Obstacle Avoidance and Odometry for Swarms of Small Size Robots

    No full text