13 research outputs found

    Fast and Accurate, Convolutional Neural Network Based Approach for Object Detection from UAV

    Unmanned Aerial Vehicles (UAVs) have intrigued people from all walks of life because of their pervasive computing capabilities. A UAV equipped with vision techniques can establish autonomous navigation control for itself. Object detection from UAVs can also broaden drone utilization, providing ubiquitous surveillance and monitoring services for military operations, urban administration and agriculture management. As data-driven technologies have evolved, machine learning algorithms, especially deep learning approaches, have been intensively utilized to solve traditional computer vision research problems. Modern Convolutional Neural Network (CNN) based object detectors can be divided into two major categories: one-stage and two-stage detectors. In this study, we utilize representative CNN-based object detectors to execute the computer vision task over the Stanford Drone Dataset (SDD). State-of-the-art performance has been achieved using a RetinaNet-based focal loss dense detector for fast and accurate object detection from UAVs. Comment: arXiv admin note: substantial text overlap with arXiv:1803.0111
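The focal loss that RetinaNet's dense detector relies on can be illustrated with a small sketch (a hedged illustration; the function name and the parameter defaults below are common conventions, not details taken from this paper):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss, FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p is the predicted probability of the positive class and y the label
    in {0, 1}. The (1 - p_t)**gamma factor shrinks the loss of easy,
    well-classified examples so training focuses on hard ones, which is
    what lets a one-stage dense detector cope with the extreme
    foreground/background class imbalance.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With gamma = 0 and alpha_t = 1 this reduces to ordinary cross-entropy; increasing gamma down-weights well-classified examples more aggressively.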

    Providing resilience to UAV swarms following planned missions

    As we experience unprecedented growth in the field of Unmanned Aerial Vehicles (UAVs), more and more applications keep arising due to the combination of low cost and flexibility provided by these flying devices, especially those of the multirotor type. Within this field, solutions where several UAVs team up to create a swarm are gaining momentum, as they make it possible to perform more sophisticated tasks, or to accelerate task execution compared to the single-UAV alternative. However, advanced solutions based on UAV swarms still lack significant advancements and validation in real environments to facilitate their adoption and deployment. In this paper we take a step in this direction by proposing a solution that improves the resilience of swarm flights, focusing on handling the loss of the swarm leader, which is typically the most critical condition to be faced. Experiments using our UAV emulation tool (ArduSim) demonstrate the correctness of the protocol under adverse circumstances, and highlight that swarm members are able to seamlessly switch to an alternative leader when necessary, introducing a negligible delay in the process in most cases, while keeping this delay within a few seconds even in worst-case conditions.
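The leader-replacement behavior this abstract describes can be sketched in its simplest form (a minimal illustration under assumed mechanics; the function, the heartbeat-based failure detection and the pre-agreed priority list are hypothetical, not details from the paper):

```python
def elect_new_leader(priority_list, alive):
    """Pick the next leader after the current one is lost.

    Assumes every swarm member shares the same ordered list of candidate
    leaders (e.g. fixed at mission-planning time) and a set of UAVs it
    still considers alive, e.g. from recent heartbeat messages. Because
    all members scan the same list, they converge on the same new leader
    without any extra negotiation.
    """
    for uav in priority_list:
        if uav in alive:
            return uav
    return None  # no candidate left: e.g. abort the mission and land
```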

    Hiding Leader's Identity in Leader-Follower Navigation through Multi-Agent Reinforcement Learning

    Leader-follower navigation is a popular class of multi-robot algorithms in which a leader robot leads the follower robots in a team. The leader has specialized capabilities or mission-critical information (e.g. the goal location) that the followers lack, which makes the leader crucial to the mission's success. However, this also makes the leader a vulnerability: an external adversary who wishes to sabotage the robot team's mission can simply harm the leader, and the whole team's mission would be compromised. Since robot motion generated by traditional leader-follower navigation algorithms can reveal the identity of the leader, we propose a defense mechanism that hides the leader's identity by ensuring the leader moves in a way that behaviorally camouflages it with the followers, making it difficult for an adversary to identify the leader. To achieve this, we combine Multi-Agent Reinforcement Learning, Graph Neural Networks and adversarial training. Our approach enables the multi-robot team to optimize primary task performance while keeping the leader's motion similar to the followers' motion, behaviorally camouflaging it with them. Our algorithm outperforms existing work that tries to hide the leader's identity in a multi-robot team by tuning traditional leader-follower control parameters with classical genetic algorithms. We also evaluated human performance in inferring the leader's identity and found that humans had lower accuracy when the robot team used our proposed navigation algorithm.
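One way to picture the adversarial training objective is as reward shaping: the team is rewarded for the primary task and penalized by how confidently the co-trained adversary identifies the leader. The sketch below is a loose illustration of that trade-off; the function name, the linear form and the beta weight are all assumptions, not the paper's formulation:

```python
def camouflage_reward(task_reward, adversary_confidence, beta=0.5):
    """Shaped reward for the multi-robot team: keep the primary task
    reward high while penalizing the adversary's confidence (a value in
    [0, 1] from an adversary network trained alongside the team) in
    identifying the leader. Larger beta trades task performance for
    better behavioral camouflage.
    """
    return task_reward - beta * adversary_confidence
```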

    Local ant system for allocating robot swarms to time-constrained tasks

    We propose a novel application of the Ant Colony Optimization algorithm to efficiently allocate a swarm of homogeneous robots to a set of tasks that must be accomplished by specific deadlines. We exploit local communication between robots to periodically evaluate the quality of the allocation solutions, and agents independently select among the high-quality alternatives. The evaluation uses pheromone trails to favor allocations that minimize the execution time of the tasks. Our approach is validated in both static and dynamic environments (i.e. where task availability changes over time) using different sets of physics-based simulations. (C) 2018 Elsevier B.V. All rights reserved.
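The pheromone mechanism described above can be sketched as follows (a hedged illustration of the standard ACO update; the evaporation rate, the deposit rule and the function names are assumptions, not the paper's exact formulation):

```python
import random

def update_pheromone(tau, exec_time, rho=0.1, q=1.0):
    """ACO-style update: evaporate a fraction rho of the existing
    pheromone, then deposit an amount inversely proportional to the
    measured task-execution time, so allocations that finish tasks
    faster accumulate more pheromone over successive evaluations."""
    return (1.0 - rho) * tau + q / exec_time

def choose_task(tasks, tau, rng=random):
    """Each robot independently samples a task with probability
    proportional to its pheromone level."""
    weights = [tau[t] for t in tasks]
    return rng.choices(tasks, weights=weights, k=1)[0]
```

Because selection is stochastic rather than greedy, the swarm keeps exploring alternative allocations while still concentrating robots on the historically fastest ones.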

    Cooperative heterogeneous robots for autonomous insects trap monitoring system in a precision agriculture scenario

    The recent advances in precision agriculture are due to the emergence of modern robotics systems. For instance, unmanned aerial systems (UASs) offer new possibilities for advancing solutions to existing problems in this area, because these platforms can perform activities at varying levels of complexity. Therefore, this research presents a multiple-cooperative-robot solution combining UAS and unmanned ground vehicle (UGV) systems for the joint inspection of olive grove insect traps. This work evaluated UAS and UGV vision-based navigation, using a yellow fly trap fixed in the trees to provide visual position data through You Only Look Once (YOLO) algorithms. The experimental setup evaluated the fuzzy control algorithm applied to the UAS to make it reach the trap efficiently. Experimental tests were conducted in a realistic simulation environment using the Robot Operating System (ROS) and CoppeliaSim platforms to verify the methodology's performance, and all tests considered specific real-world environmental conditions. A search-and-landing algorithm based on augmented reality tag (AR-tag) visual processing was evaluated to allow the UAS to return to and land on the UGV base. The outcomes obtained in this work demonstrate the robustness and feasibility of the multiple-cooperative-robot architecture for UGVs and UASs applied in the olive inspection scenario.
    The authors would like to thank the Foundation for Science and Technology (FCT, Portugal) for financial support through national funds FCT/MCTES (PIDDAC) to CeDRI (UIDB/05757/2020 and UIDP/05757/2020) and SusTEC (LA/P/0007/2021). In addition, the authors would like to thank the following Brazilian agencies: CEFET-RJ, CAPES, CNPq, and FAPERJ. The authors also want to thank the Research Centre in Digitalization and Intelligent Robotics (CeDRI), Instituto Politécnico de Bragança (IPB) - Campus de Santa Apolónia, Portugal, Laboratório Associado para a Sustentabilidade e Tecnologia em Regiões de Montanha (SusTEC), Portugal, INESC Technology and Science - Porto, Portugal, and Universidade de Trás-os-Montes e Alto Douro - Vila Real, Portugal. This work was carried out under the project "OleaChain: Competências para a sustentabilidade e inovação da cadeia de valor do olival tradicional no Norte Interior de Portugal" (NORTE-06-3559-FSE-000188), an operation used to hire highly qualified human resources, funded by NORTE 2020 through the European Social Fund (ESF).
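As a rough illustration of the kind of vision-based control involved, the sketch below maps a normalized horizontal image error of the detected trap to a yaw-rate command using a tiny Mamdani-style fuzzy rule base (all names, membership functions and rule centroids here are assumptions for illustration, not the controller from the paper):

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_yaw_command(err):
    """Map a normalized image error in [-1, 1] (trap left/right of the
    image center) to a yaw-rate command. Three rules (turn left, hold,
    turn right) are defuzzified by a weighted average of their output
    centroids at -1, 0 and 1."""
    mu_left = tri(err, -2.0, -1.0, 0.0)
    mu_zero = tri(err, -1.0, 0.0, 1.0)
    mu_right = tri(err, 0.0, 1.0, 2.0)
    num = -1.0 * mu_left + 0.0 * mu_zero + 1.0 * mu_right
    den = mu_left + mu_zero + mu_right
    return num / den if den else 0.0
```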

    White paper - Agricultural Robotics: The Future of Robotic Agriculture

    Get PDF
    Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over £108bn p.a., with 3.9m employees in a truly international industry, and exports £20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions and the demographics of an aging global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a Wave 2 Industrial Challenge Fund Investment ("Transforming Food Production: from Farm to Fork"). RAS and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, here we review the state of the art of the application of RAS in Agri-Food production and explore research and innovation needs to ensure novel advanced robotic and autonomous systems reach their full potential and deliver the necessary impacts. The opportunities for RAS include: the development of field robots that can assist workers by carrying weights and conducting agricultural operations such as crop and animal sensing, weeding and drilling; the integration of autonomous system technologies into existing farm operational equipment such as tractors; robotic systems to harvest crops and conduct complex dextrous operations; the use of collaborative and "human in the loop" robotic applications to augment worker productivity; and advanced robotic applications, including the use of soft robotics, to drive productivity beyond the farm gate into the factory and retail environment. RAS technology has the potential to transform food production, and the UK has the potential to establish global leadership within the domain. However, there are particular barriers to overcome to secure this vision:
    1. The UK RAS community with an interest in Agri-Food is small and highly dispersed. There is an urgent need to defragment and then expand the community.
    2. The UK RAS community has no specific training paths or Centres for Doctoral Training to provide trained human resource capacity within Agri-Food.
    3. While there has been substantial government investment in translational activities at high Technology Readiness Levels (TRLs), there is insufficient ongoing basic research in Agri-Food RAS at low TRLs to underpin onward innovation delivery for industry.
    4. There is a concern that RAS for Agri-Food is not realising its full potential, as the projects being commissioned currently are too few and too small-scale. RAS challenges often involve the complex integration of multiple discrete technologies (e.g. navigation, safe operation, multimodal sensing, automated perception, grasping and manipulation). There is a need to further develop these discrete technologies but also to deliver large-scale industrial applications that resolve integration and interoperability issues. The UK community needs to undertake a few well-chosen large-scale and collaborative "moon shot" projects.
    5. The successful delivery of RAS projects within Agri-Food requires close collaboration within the RAS community and with academic and industry practitioners. For example, the breeding of crops with novel phenotypes, such as fruits which are easy for robots to see and pick, may simplify and accelerate the application of RAS technologies. Therefore, there is an urgent need to seek new ways to create RAS and Agri-Food domain networks that can work collaboratively to address key challenges. This is especially important for Agri-Food, since success in the sector requires highly complex cross-disciplinary activity. Furthermore, within UKRI most of the Research Councils (EPSRC, BBSRC, NERC, STFC, ESRC and MRC) and Innovate UK directly fund work in Agri-Food, but as yet there is no coordinated and integrated Agri-Food research policy per se.
    Our vision is a new generation of smart, flexible, robust, compliant, interconnected robotic systems working seamlessly alongside their human co-workers in farms and food factories. Teams of multi-modal, interoperable robotic systems will self-organise and coordinate their activities with the "human in the loop". Electric farm and factory robots with interchangeable tools, including low-tillage solutions, novel soft robotic grasping technologies and sensors, will support the sustainable intensification of agriculture, drive manufacturing productivity and underpin future food security. To deliver this vision, the research and innovation needs include the development of robust robotic platforms suited to agricultural environments, and improved capabilities for sensing and perception, planning and coordination, manipulation and grasping, learning and adaptation, interoperability between robots and existing machinery, and human-robot collaboration, including the key issues of safety and user acceptance. Technology adoption is likely to occur in measured steps. Most farmers and food producers will need technologies that can be introduced gradually, alongside and within their existing production systems. Thus, for the foreseeable future, humans and robots will frequently operate collaboratively to perform tasks, and that collaboration must be safe. There will be a transition period in which humans and robots work together as first simple and then more complex parts of work are conducted by robots, driving productivity and enabling human jobs to move up the value chain.

    Evolutionary strategies in swarm robotics controllers

    Nowadays, Unmanned Vehicles (UVs) are widespread around the world. Most of these vehicles require a great level of human control, and mission success relies on this dependency. Therefore, it is important to use machine learning techniques to train the robotic controllers and automate the control, making the process more efficient. Evolutionary strategies may be the key to robust and adaptive learning in robotic systems. Many studies involving UV systems and evolutionary strategies have been conducted in recent years; however, there are still research gaps that need to be addressed, such as the reality gap. The reality gap occurs when controllers trained in simulated environments fail to transfer to real robots. This work proposes an approach for solving robotic tasks using realistic simulation, with evolutionary strategies used to train controllers. The chosen setup is easily scalable to multi-robot systems or robot swarms. In this thesis, the simulation architecture and setup are presented, including the drone simulation model and software. The drone model chosen for the simulations is available in the real world and widely used, as are the software and flight control unit. This makes the transition to reality smoother and easier. Controllers based on behavior trees were evolved using a purpose-built evolutionary algorithm, and several experiments were conducted. Results demonstrated that it is possible to evolve a robotic controller in realistic simulation environments, using a simulated drone model that exists in the real world, together with the same flight control unit and operating system generally used in real-world experiments.
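The evolutionary loop behind such controller training can be sketched in its simplest form, a (1+1) evolution strategy over a real-valued parameter vector (a generic textbook sketch, not the thesis's algorithm, which evolves behavior trees; all names and defaults are assumptions):

```python
import random

def one_plus_one_es(fitness, x, sigma=0.1, generations=200, seed=0):
    """Minimal (1+1) evolution strategy: mutate the controller
    parameters with Gaussian noise and keep the mutant only if it is
    at least as fit as the parent, so fitness never decreases."""
    rng = random.Random(seed)
    best = fitness(x)
    for _ in range(generations):
        candidate = [xi + rng.gauss(0.0, sigma) for xi in x]
        f = fitness(candidate)
        if f >= best:
            x, best = candidate, f
    return x, best
```

In the thesis's setting, `fitness` would be the score of a full simulated flight rather than a cheap analytic function, which is why realistic simulators matter so much for this approach.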

    Synthesis of formation control for an aquatic swarm robotics system

    Formations are the spatial organization of objects or entities according to some predefined pattern. They can be found in nature in social animals, such as fish schools and insect colonies, where spontaneous organization into emergent structures takes place. Formations have a multitude of applications, such as in military and law enforcement scenarios, where they are used to increase operational performance. The concept is even present in collective sports such as football, which uses formations as a strategy to increase team efficiency. Swarm robotics is an approach to the study of multi-robot systems composed of a large number of simple units, inspired by self-organization in animal societies. Such systems have the potential to conduct tasks too demanding for a single robot operating alone. When applied to the coordination of this type of system, formations allow for coordinated motion and enable a swarm robotics system (SRS) to increase its sensing efficiency as a whole. In this dissertation, we present a virtual structure formation control synthesis for a multi-robot system. Control is synthesized through the use of evolutionary robotics, from which the desired collective behavior emerges while displaying key features such as fault tolerance and robustness. Initial experiments on formation control synthesis were conducted in a simulation environment. We later developed an inexpensive aquatic robotic platform in order to conduct experiments in real-world conditions. Our results demonstrate that it is possible to synthesize formation control for a multi-robot system using evolutionary robotics. The developed robotic platform was subsequently used in several scientific studies.
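The virtual structure idea can be illustrated with a short sketch: each robot holds a fixed offset in the structure's frame, and its world-frame target is obtained by rotating and translating that offset with the structure's pose (the 2-D formulation and all names are assumptions for illustration, not the dissertation's implementation):

```python
import math

def virtual_structure_targets(center, heading, offsets):
    """Map each robot's fixed (x, y) offset in the virtual structure
    frame to a world-frame target position, given the structure's
    center position and heading in radians. Moving or rotating the
    virtual structure then moves the whole formation rigidly."""
    cx, cy = center
    c, s = math.cos(heading), math.sin(heading)
    return [(cx + c * ox - s * oy, cy + s * ox + c * oy) for ox, oy in offsets]
```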
