
    Guidance of Autonomous Amphibious Vehicles for Flood Rescue Support

    We develop a path-planning algorithm to guide autonomous amphibious vehicles (AAVs) for flood rescue support missions. Specifically, we develop an algorithm to control multiple AAVs to reach/rescue multiple victims (also called targets) in a 2D flood scenario, where the flood water flows across the scene and the targets move (drifted by the flood water) along the flood stream. A target is said to be rescued if an AAV lies within a circular region of a certain radius around the target. The goal is to control the AAVs such that each target gets rescued while optimizing a certain performance objective. The algorithm design is based on the theory of partially observable Markov decision processes (POMDPs). In practice, POMDP problems are hard to solve exactly, so we use an approximation method called nominal belief-state optimization (NBO). We compare the performance of the NBO approach with a greedy approach.
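
    The abstract defines the rescue condition (an AAV lying within a fixed radius of a drifting target) and mentions a greedy baseline against which NBO is compared. The minimal sketch below illustrates only that greedy baseline under assumed values: the rescue radius, speed, time step, flow field and all names are illustrative, not taken from the paper, and the NBO/POMDP controller itself is not shown.

    import numpy as np

    RESCUE_RADIUS = 5.0   # "a certain radius" in the paper; value assumed here
    DT = 1.0              # time step (s), assumed
    AAV_SPEED = 3.0       # maximum AAV displacement per step, assumed

    def flow(pos):
        """Toy uniform flood flow field (assumption; the paper does not specify one)."""
        return np.array([1.0, 0.2])

    def greedy_step(aavs, targets, rescued):
        """One step of the greedy baseline: each AAV heads for its nearest unrescued target."""
        for i, p in enumerate(aavs):
            pending = [j for j in range(len(targets)) if not rescued[j]]
            if not pending:
                break
            j = min(pending, key=lambda k: np.linalg.norm(targets[k] - p))
            d = targets[j] - p
            dist = np.linalg.norm(d)
            step = d if dist <= AAV_SPEED * DT else AAV_SPEED * DT * d / dist
            aavs[i] = p + step
        for j, t in enumerate(targets):
            if not rescued[j]:
                targets[j] = t + flow(t) * DT   # targets drift with the flood stream
                # a target counts as rescued once any AAV is inside the rescue radius
                if any(np.linalg.norm(targets[j] - a) <= RESCUE_RADIUS for a in aavs):
                    rescued[j] = True
        return aavs, targets, rescued

    aavs = [np.array([0.0, 0.0]), np.array([0.0, 20.0])]
    targets = [np.array([30.0, 5.0]), np.array([40.0, 25.0])]
    rescued = [False, False]
    for step in range(200):
        aavs, targets, rescued = greedy_step(aavs, targets, rescued)
        if all(rescued):
            print(f"all targets rescued after {step + 1} steps")
            break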

    Liquid slosh control by implementing model-free PID controller with derivative filter based on PSO

    Conventionally, liquid slosh systems are controlled using model-based techniques, which are challenging to implement in practice because of the chaotic motion of the fluid in the container. The aim of this article is to develop a particle swarm optimization (PSO) based tuning technique for the parameters of a model-free PID controller with derivative filter (PIDF) for a liquid slosh suppression system. The PSO algorithm finds the optimal PIDF parameter values based on fitness functions, namely the sum squared error (SSE) and sum absolute error (SAE) of the cart position and liquid slosh angle responses. A model of liquid slosh under lateral movement is considered to justify the design of the control scheme. The PSO tuning method is compared with a heuristic tuning method to show the effectiveness of the proposed approach. The performance of the proposed tuning method is evaluated based on the ability of the tank to track the input in horizontal motion and on the reduction of the liquid slosh level in the time domain. The simulation results show that the suggested tuning method reduces the liquid slosh level while producing fast input tracking of the tank, without precisely modelling the chaotic motion of the fluid.
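
    As a rough illustration of the tuning idea described above, the sketch below runs a minimal global-best PSO over the four PIDF parameters (Kp, Ki, Kd and the derivative-filter coefficient N) and minimises a sum-squared-error fitness. The plant here is only a unit-mass cart stand-in, and the search ranges and PSO hyper-parameters are assumptions; the article's actual fitness also penalises the slosh angle.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_pidf(gains, n_steps=300, dt=0.01):
        """Closed-loop SSE of the cart position for a PIDF controller on a toy
        double-integrator cart (illustrative stand-in for the slosh model)."""
        kp, ki, kd, n_filt = gains
        pos = vel = integ = d_state = 0.0
        prev_err = None
        sse, setpoint = 0.0, 1.0
        for _ in range(n_steps):
            err = setpoint - pos
            integ += err * dt
            de = 0.0 if prev_err is None else (err - prev_err) / dt
            d_state += n_filt * (de - d_state) * dt   # first-order derivative filter
            u = kp * err + ki * integ + kd * d_state
            prev_err = err
            vel += u * dt      # unit-mass cart, force input (assumed dynamics)
            pos += vel * dt
            sse += err ** 2
        return sse

    def pso(fitness, bounds, n_particles=20, n_iter=40, w=0.7, c1=1.5, c2=1.5):
        """Minimal global-best PSO; hyper-parameters are common defaults, not from the article."""
        lo = np.array([b[0] for b in bounds], dtype=float)
        hi = np.array([b[1] for b in bounds], dtype=float)
        x = lo + rng.random((n_particles, len(bounds))) * (hi - lo)
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, len(bounds)))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([fitness(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    bounds = [(0, 50), (0, 10), (0, 10), (1, 100)]   # (Kp, Ki, Kd, N) ranges, assumed
    best_gains, best_sse = pso(simulate_pidf, bounds)
    print("best PIDF gains:", best_gains, "SSE:", best_sse)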

    Liquid Slosh Suppression Hardware-in-the-Loop System by Implementing PID Model-Free Controller

    Traditionally, model-based controllers are hard to implement for container systems holding liquid because of the disordered behavior of liquid slosh. The purpose of this article is to develop a liquid slosh suppression hardware-in-the-loop (HIL) system by implementing a model-free PID controller. The system consists of a DC motor that actuates the liquid container/tank to a prescribed location along the horizontal axis while minimizing the liquid slosh. The feedback signal from the encoder is used to implement the model-free PID controller. The experimental work is carried out using LabVIEW, interfaced with the hardware via a data acquisition card. The performance of the liquid slosh suppression HIL system is evaluated based on the ability of the tank to track the input in horizontal motion and on the reduction of the liquid slosh level. The experimental results show that the suggested model-free PID controller reduces the liquid slosh level while producing fast input tracking of the tank.
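
    For readers unfamiliar with the structure of such a loop, the sketch below shows a model-free PID position loop of the kind described: the controller acts only on the measured encoder error, with no slosh model. The read_encoder and write_motor_voltage functions are hypothetical stand-ins for the LabVIEW/DAQ interface, and the gains, loop rate and saturation limit are illustrative assumptions, not the values used in the article.

    import time

    # Hypothetical hardware I/O stubs; in the article this role is played by
    # LabVIEW and the data acquisition card, not by these Python functions.
    def read_encoder():
        """Return the measured tank/cart position (stub)."""
        return 0.0

    def write_motor_voltage(u):
        """Send the control voltage to the DC motor driver (stub)."""
        pass

    KP, KI, KD = 2.0, 0.5, 0.1   # illustrative gains, not the tuned values from the study
    DT = 0.01                    # 100 Hz loop rate, assumed
    U_MAX = 10.0                 # actuator saturation (V), assumed

    def pid_loop(setpoint, duration=5.0):
        """Model-free PID loop: only the measured position error is used."""
        integ, prev_err = 0.0, 0.0
        t_end = time.time() + duration
        while time.time() < t_end:
            err = setpoint - read_encoder()
            integ += err * DT
            deriv = (err - prev_err) / DT
            u = KP * err + KI * integ + KD * deriv
            u = max(-U_MAX, min(U_MAX, u))   # clamp to the drive limits
            write_motor_voltage(u)
            prev_err = err
            time.sleep(DT)

    pid_loop(setpoint=0.2)   # drive the tank to 0.2 m along the horizontal axis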

    Active compliance control strategies for multifingered robot hand

    Safety has to be enhanced when a robot hand grasps objects of different shapes, sizes and stiffness. The inability to control the grasping force and finger stiffness can lead to an unsafe grasping environment. Although much research has been conducted to resolve grasping issues, particularly for objects of different shape, size and stiffness, grasping control still requires further improvement. Hence, the primary aim of this work is to assess and improve the safety of the robot hand. One method that allows safe grasping is active compliance control via force and impedance control. The force control is implemented with a proportional–integral–derivative (PID) controller, while the impedance control employs an integral sliding-mode controller (ISMC) and an adaptive controller. A series of experiments and simulations is used to demonstrate the fundamental principles of robot grasping. Objects of different shape, size and stiffness are tested using a 3-Finger Adaptive Robot Gripper. The work introduces the Modbus remote terminal unit (RTU) protocol, a low-cost force sensor and the Arduino IO Package for a real-time hardware setup. It is found that force control via the PID controller can maintain the grasped object at certain positions, depending on the desired grasping force (i.e., 1 N and 8 N). Meanwhile, impedance control via the ISMC and the adaptive controller yields multiple stiffness levels for the robot fingers and is able to reduce collisions between the fingers and the object. The adaptive controller produces better impedance control results than the ISMC, with a 33% efficiency improvement. This work lays important foundations for long-term related research, particularly in the field of active compliance control, which can benefit human–robot interaction (HRI).
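
    To make the idea of multiple finger stiffness levels concrete, the sketch below implements the standard impedance law F = K(x_d - x) + B(xd_d - xd), where the chosen stiffness K decides how compliant a finger feels on contact. This is a generic illustration of impedance control, not the ISMC or adaptive controller developed in the work, and all gains and measurements are assumed values.

    class ImpedanceController:
        """Generic impedance law: commanded force from position/velocity errors."""
        def __init__(self, stiffness, damping):
            self.k = stiffness   # N/m, sets how "soft" the finger behaves
            self.b = damping     # N*s/m

        def command(self, x_des, x, xd_des, xd):
            return self.k * (x_des - x) + self.b * (xd_des - xd)

    # Two stiffness presets: compliant for fragile objects, stiff for rigid ones
    # (values are illustrative only).
    soft = ImpedanceController(stiffness=50.0, damping=5.0)
    stiff = ImpedanceController(stiffness=500.0, damping=20.0)

    x_des, x = 0.030, 0.028    # desired vs. measured fingertip position (m)
    xd_des, xd = 0.0, 0.001    # desired vs. measured fingertip velocity (m/s)
    print("soft finger force  (N):", soft.command(x_des, x, xd_des, xd))
    print("stiff finger force (N):", stiff.command(x_des, x, xd_des, xd))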

    Vision based automated formation for multi robot cooperation

    In a multi robot system, robots are required to cooperate with each other, and therefore should be able to make their own decisions based on multiple sensor inputs, not only from the robot itself but also from nearby robots. The task of carrying oversized objects of different shapes poses a challenge in selecting an appropriate group formation. Hence, the main objective of this project is to establish an algorithm that enables a multi robot system to carry a large load by automatically selecting the group formation required to execute the task successfully. First, a robot needs to identify the shape of the object (oversized bar, rectangular, square or circular shapes). Then, the robots form a suitable formation to carry the object. There are two main problems in this project. The first is the capability of the robot to identify the shape of the object, because the object’s image will be slightly skewed from the actual shape due to the slanting angle of the camera used to detect the shapes of the objects. The second challenge is maintaining the formation of the robots, while carrying the load on top of the robots, to a specified destination. A multi robot system built in-house is used in the experiments to investigate the performance of the proposed algorithm. The algorithms implemented in this project are a leader-follower strategy and a behaviour-based strategy. One of the robots operates as the command giver, or leader, to the other robots. The algorithm consists of communication strategies and autonomous decision-making capability. The robots communicate with each other using Xbee wireless modules and extract the behaviour of the other robots. Sensors placed around the bodies of the robots are used to detect their relative distances and hence to maintain their formation, preventing the load from falling. All decisions are made by the robots autonomously via the onboard controllers. The multi robot system is shown to be able to autonomously determine the shape of the different oversized objects and change into formations capable of transporting large objects to a specified destination point autonomously, with no outside intervention.
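
    The two ingredients described above, shape-dependent formation selection and leader-follower distance keeping, can be sketched roughly as follows. The shape-to-formation table and the proportional rule are illustrative assumptions, not the actual decision logic or gains used in the project.

    import math

    # Hypothetical mapping from detected object shape to group formation.
    FORMATIONS = {
        "bar":       "line",      # followers spread along the long axis
        "rectangle": "corners",   # one robot under each corner
        "square":    "corners",
        "circle":    "triangle",  # three robots spaced 120 degrees apart
    }

    def select_formation(shape):
        return FORMATIONS.get(shape, "triangle")

    def follower_velocity(own_pos, leader_pos, desired_gap, k=0.8):
        """Proportional leader-follower rule: drive the range error to zero so the
        carried load stays supported; own_pos/leader_pos stand in for readings from
        the proximity sensors placed around the robot body."""
        dx = leader_pos[0] - own_pos[0]
        dy = leader_pos[1] - own_pos[1]
        dist = math.hypot(dx, dy)
        speed = k * (dist - desired_gap)   # positive: close the gap, negative: back off
        heading = math.atan2(dy, dx)
        return speed * math.cos(heading), speed * math.sin(heading)

    print(select_formation("rectangle"))
    print(follower_velocity(own_pos=(0.0, 0.0), leader_pos=(0.5, 0.2), desired_gap=0.4))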

    Bayesian & AI driven Embedded Perception and Decision-making. Application to Autonomous Navigation in Complex, Dynamic, Uncertain and Human-populated Environments. Synoptic of Research Activity, Period 2004-20 and beyond

    Robust perception and decision-making for safe navigation in open and dynamic environments populated by human beings is an open and challenging scientific problem. Traditional approaches do not provide adequate solutions for these problems, mainly because these environments are partially unknown, open and subject to strong constraints (in particular high dynamicity and uncertainty). This means that the proposed solutions have to simultaneously take into account characteristics such as real-time processing, temporary occlusions or false detections, dynamic changes in the scene, prediction of the future dynamic behaviors of the surrounding moving entities, continuous assessment of the collision risk, and decision-making for safe navigation. This research report presents how we have addressed this problem over the last two decades, as well as an outline of our Bayesian & AI approach for solving the embedded perception and decision-making problems.

    A Survey on FPGA-Based Sensor Systems: Towards Intelligent and Reconfigurable Low-Power Sensors for Computer Vision, Control and Signal Processing

    The current trend in the evolution of sensor systems seeks ways to provide more accuracy and resolution, while at the same time decreasing size and power consumption. The use of Field Programmable Gate Arrays (FPGAs) provides specific reprogrammable hardware technology that can be properly exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfigurability at very low power consumption. For highly demanding tasks, FPGAs have been favored due to the high efficiency provided by their architectural flexibility (parallelism, on-chip memory, etc.), reconfigurability and superb performance in the development of algorithms. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable and lower power consumption sensors is being developed in Spain based on FPGAs. In this paper, a review of these developments is presented, describing the FPGA technologies employed by the different research groups and providing an overview of future research within this field. The research leading to these results has received funding from the Spanish Government and European FEDER funds (DPI2012-32390), the Valencia Regional Government (PROMETEO/2013/085) and the University of Alicante (GRE12-17).

    Design and Development of Sensor Integrated Robotic Hand

    Most automated systems that use robots as agents employ only a few sensors, according to need. However, there are situations where the tasks carried out by the end-effector, or by the robot hand, need multiple sensors. To make the best use of these sensors and to behave autonomously, the hand requires a set of appropriate sensors integrated in a proper manner. The present research work aims at developing a sensor integrated robot hand that can collect information related to the assigned tasks, assimilate it correctly and then act as appropriate. The development process involves selecting sensors of the right types and specifications, locating them at proper places in the hand, checking their functionality individually and calibrating them for the envisaged process. Since the sensors need to be integrated so that they perform in the desired manner collectively, an integration platform is created using an NI PXIe-1082. A set of algorithms is developed for achieving the integrated model. The entire process is first modelled and simulated offline for possible modification, to ensure that all the sensors contribute towards the autonomy of the hand for the desired activity. This work also involves the design of a two-fingered gripper, made in such a way that it is capable of carrying out the desired tasks and can accommodate all the sensors within its fold. The developed sensor integrated hand has been put to work and its performance has been tested. This hand can be very useful for part assembly work in industries for parts of any shape, within a limit on part size. The broad aim is to design, model, simulate and develop an advanced robotic hand. Sensors for contact, pressure, force, torque, position and surface profile/shape, using suitable sensing elements, are to be introduced into the robot hand. The hand is a complex structure with a large number of degrees of freedom and has multiple sensing capabilities, apart from the associated sensing assistance from other organs. The present work is envisaged to add multiple sensors to a two-fingered robotic hand having motion capabilities and constraints similar to the human hand. Although there has been a good amount of research and development in this field during the last two decades, a lot remains to be explored and achieved. The objective of the proposed work is to design, simulate and develop a sensor integrated robotic hand. Its potential applications lie in industrial environments and in the healthcare field: industrial applications include electronic assembly tasks and lighter inspection tasks, while healthcare applications could be in the areas of rehabilitation and assistive techniques. The work also aims to establish the requirements of the robotic hand for the target application areas and to identify the suitable kinds and models of sensors that can be integrated into the hand control system. The functioning of the motors in the robotic hand and the integration of appropriate sensors for the desired motion are explained for the control of the various elements of the hand. Additional sensors, capable of collecting external information and information about the object to be manipulated, are explored. Processes are designed using various software and hardware tools, such as MATLAB for mathematical computation, the OpenCV library and the LabVIEW 2013 DAQ system as applicable, validated theoretically and finally implemented to develop an intelligent robotic hand. The multiple smart sensors are installed on a standard six degree-of-freedom KAWASAKI RS06L articulated industrial manipulator, fitted with either the two-finger pneumatic SCHUNK robotic hand or the designed prototype, and the robot control programs are integrated in a manner that allows easy application of grasping in an industrial pick-and-place operation where the characteristics of the object vary or are unknown. The effectiveness of the recommended structure is demonstrated through experiments involving calibration of the sensors and the manipulator. The dissertation concludes with a summary of the contributions and the scope of further work.
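
    As a minimal illustration of how such a sensor integrated hand might handle objects whose characteristics vary or are unknown, the sketch below closes a two-finger gripper until a fingertip force sensor reports firm contact. The I/O functions, thresholds and step sizes are hypothetical stand-ins; the actual system uses the NI PXIe-1082 platform with LabVIEW, MATLAB and OpenCV as described above.

    FORCE_TARGET = 2.0    # N, illustrative grasp force threshold
    STEP = 0.5            # mm closed per increment, assumed
    MAX_CLOSE = 60.0      # mm of available finger travel, assumed

    def read_finger_force():
        """Stub for the fingertip force sensor reading (N)."""
        return 0.0

    def close_gripper_by(mm):
        """Stub for commanding the gripper to close by a small increment."""
        pass

    def sensor_guided_grasp():
        """Close until firm contact, so the same routine suits objects of unknown size."""
        closed = 0.0
        while closed < MAX_CLOSE:
            if read_finger_force() >= FORCE_TARGET:
                return True        # firm contact reached; object is held
            close_gripper_by(STEP)
            closed += STEP
        return False               # travel exhausted without reaching the force target

    print("grasp succeeded:", sensor_guided_grasp())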