
    Camera communications for platooning applications (Comunicações com câmara para aplicações de platooning)

    Get PDF
    Platooning is a technology that corresponds to all the coordinated movements of a collection of vehicles, or, in the case of mobile robotics, to all the coordinated movements of a collection of mobile robots. It brings several advantages to driving, such as, improved safety, accurate speed control, lower CO2 emission rates, and higher energy efficiency. This dissertation describes the development of a laboratory scale demonstrator of platooning based on optical camera communications, using two generic wheel steered robots. For this purpose, one of the robots is equipped with a Light Emitting Diode (LED) matrix and the other with a camera. The LED matrix acts as an Optical Camera Communication (OCC) transmitter, providing status information of the robot attitude. The camera acts as both image acquisition and as an OCC receiver. The gathered information is processed using the algorithm You Only Look Once (YOLO) to infer the robot motion. The YOLO object detector continuously checks the movement of the robot in front. Performance evaluation of 5 different YOLO models (YOLOv3, YOLOv3-tiny, YOLOv4, YOLOv4-tiny, YOLOv4-tiny-3l) was conducted to assess which model works best for this project. The outcomes demonstrate that YOLOv4-tiny surpasses the other models in terms of timing, making it the ideal choice for real-time performance. Object detection using YOLOv4-tiny was performed on the computer. This was chosen since it has a processing speed of 3.09 fps as opposed to the Raspberry Pi’s 0.2 fps.O platooning é uma tecnologia que corresponde a todas as movimentações coordenadas de um conjunto de veículos, ou, no caso da robótica movel, a todas as movimentações coordenadas de um conjunto de robots móveis. Traz várias vantagens para a condução, tais como, maior segurança, um controlo preciso da velocidade, menores taxas de emissão de CO2 e maior eficiência energética. 
Esta dissertação descreve o desenvolvimento de um demonstrador de platooning em escala laboratorial baseado em comunicações com câmera, usando dois robôs móveis genéricos. Para este propósito, um dos robôs é equipado com uma matriz de Light Emitting Diodes (LEDs) e o outro é equipado com uma câmera. A matriz de LEDs funciona como transmissor, fornecendo informações de estado do robô. A câmera funciona como recetor, realizando a aquisição de imagens. As informações recolhidas são processadas usando o algoritmo You Only Look Once (YOLO) de forma a prever o movimento do robô. O YOLO verifica continuamente o movimento do robô da frente. A avaliação de desempenho de 5 modelos de YOLO diferentes (YOLOv3, YOLOv3-tiny, YOLOv4, YOLOv4-tiny, YOLOv4-tiny-3l) foi realizada para identificar qual o modelo que funciona melhor no contexto deste projeto. Os resultados demonstram que o YOLOv4-tiny supera os outros modelos em termos de tempo, tornando-o a escolha ideal para desempenho em tempo real. A deteção de objetos usando YOLOv4-tiny foi realizada no computador. Esta escolhe deveuse ao facto de o computador ter uma velocidade de processamento de 3,09 fps em oposição aos 0,2 fps da Raspberry Pi.Mestrado em Engenharia Eletrónica e Telecomunicaçõe
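The choice of YOLOv4-tiny on the computer (3.09 fps) over the Raspberry Pi (0.2 fps) reduces to a per-frame latency budget. A minimal sketch of that comparison, using the fps figures stated in the abstract (function names are illustrative, not from the dissertation):

```python
def frame_interval_ms(fps: float) -> float:
    """Time spent on each frame, in milliseconds, at a given throughput."""
    return 1000.0 / fps

def meets_realtime(fps: float, max_latency_ms: float) -> bool:
    """True if each frame is processed within the latency budget."""
    return frame_interval_ms(fps) <= max_latency_ms

# Figures reported in the abstract: computer vs. Raspberry Pi
computer_fps, pi_fps = 3.09, 0.2
print(round(frame_interval_ms(computer_fps)))  # 324 (ms per frame)
print(round(frame_interval_ms(pi_fps)))        # 5000 (ms per frame)
```

At 0.2 fps the following robot would react to the lead robot's LED status only once every five seconds, which explains why detection was moved off the Raspberry Pi.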

    Machine learning techniques for robotic and autonomous inspection of mechanical systems and civil infrastructure

    Get PDF
    Machine learning and in particular deep learning techniques have demonstrated the most efficacy in training, learning, analyzing, and modelling large complex structured and unstructured datasets. These techniques have recently been commonly deployed in different industries to support robotic and autonomous system (RAS) requirements and applications ranging from planning and navigation to machine vision and robot manipulation in complex environments. This paper reviews the state of the art with regard to RAS technologies (including unmanned marine robot systems, unmanned ground robot systems, climbing and crawler robots, unmanned aerial vehicles, and space robot systems) and their application to the inspection and monitoring of mechanical systems and civil infrastructure. We explore various types of data provided by such systems and the analytical techniques being adopted to process and analyze these data. This paper provides a brief overview of machine learning and deep learning techniques, and more importantly, a classification of the literature that has reported the deployment of such techniques for RAS-based inspection and monitoring of utility pipelines, wind turbines, aircraft, power lines, pressure vessels, bridges, etc. Our research provides documented information on the use of advanced data-driven technologies in the analysis of critical assets and examines the main challenges to the application of such technologies in industry.

    Fast and Robust Detection of Fallen People from a Mobile Robot

    Full text link
    This paper deals with the problem of detecting fallen people lying on the floor by means of a mobile robot equipped with a 3D depth sensor. In the proposed algorithm, inspired by semantic segmentation techniques, the 3D scene is over-segmented into small patches. Fallen people are then detected by means of two SVM classifiers: the first labels each patch, while the second captures the spatial relations between them. This novel approach proved to be robust and fast. Indeed, thanks to the use of small patches, fallen people in real cluttered scenes with objects side by side are correctly detected. Moreover, the algorithm can be executed on a mobile robot fitted with a standard laptop, making it possible to exploit the 2D environmental map built by the robot and the multiple points of view obtained during robot navigation. Additionally, the algorithm is robust to illumination changes since it relies on depth data rather than RGB data. All the methods have been thoroughly validated on the IASLAB-RGBD Fallen Person Dataset, which is published online as a further contribution. It consists of several static and dynamic sequences with 15 different people and 2 different environments.
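The two-stage structure described above, a per-patch classifier followed by a second classifier over spatial relations, can be sketched with simple threshold rules standing in for the two trained SVMs. All names, features, and thresholds below are illustrative assumptions, not the paper's actual feature set:

```python
from dataclasses import dataclass

@dataclass
class Patch:
    cx: float                  # patch centroid x, metres
    cy: float                  # patch centroid y, metres
    height: float              # mean height above floor, metres
    person_like: bool = False  # label assigned by stage 1

def stage1_label(patch: Patch) -> bool:
    """Stand-in for the first SVM: flag low-lying, body-like patches."""
    return patch.height < 0.4  # a fallen person is close to the floor

def stage2_fallen(patches: list[Patch], max_gap: float = 0.5) -> bool:
    """Stand-in for the second SVM: a fallen body should produce
    several person-like patches that are spatially adjacent."""
    hits = [p for p in patches if p.person_like]
    if len(hits) < 3:
        return False
    # every person-like patch must have another one nearby
    return all(
        min(abs(a.cx - b.cx) + abs(a.cy - b.cy)
            for b in hits if b is not a) <= max_gap
        for a in hits
    )

# Three adjacent low patches (a lying body) plus one tall outlier
scene = [Patch(0.0, 0.0, 0.25), Patch(0.3, 0.0, 0.30),
         Patch(0.6, 0.1, 0.28), Patch(2.0, 2.0, 1.10)]
for p in scene:
    p.person_like = stage1_label(p)
print(stage2_fallen(scene))  # True
```

The key design point the paper exploits is that small patches let the second stage reason about adjacency, so bodies partially occluded by nearby objects still yield a connected cluster of positive patches.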

    Trusted Artificial Intelligence in Manufacturing

    Get PDF
    The successful deployment of AI solutions in manufacturing environments hinges on their security, safety and reliability which becomes more challenging in settings where multiple AI systems (e.g., industrial robots, robotic cells, Deep Neural Networks (DNNs)) interact as atomic systems and with humans. To guarantee the safe and reliable operation of AI systems in the shopfloor, there is a need to address many challenges in the scope of complex, heterogeneous, dynamic and unpredictable environments. Specifically, data reliability, human machine interaction, security, transparency and explainability challenges need to be addressed at the same time. Recent advances in AI research (e.g., in deep neural networks security and explainable AI (XAI) systems), coupled with novel research outcomes in the formal specification and verification of AI systems provide a sound basis for safe and reliable AI deployments in production lines. Moreover, the legal and regulatory dimension of safe and reliable AI solutions in production lines must be considered as well. To address some of the above listed challenges, fifteen European Organizations collaborate in the scope of the STAR project, a research initiative funded by the European Commission in the scope of its H2020 program (Grant Agreement Number: 956573). STAR researches, develops, and validates novel technologies that enable AI systems to acquire knowledge in order to take timely and safe decisions in dynamic and unpredictable environments. Moreover, the project researches and delivers approaches that enable AI systems to confront sophisticated adversaries and to remain robust against security attacks. This book is co-authored by the STAR consortium members and provides a review of technologies, techniques and systems for trusted, ethical, and secure AI in manufacturing. 
The different chapters of the book cover systems and technologies for industrial data reliability, responsible and transparent artificial intelligence systems, human-centred manufacturing systems such as human-centred digital twins, cyber-defence in AI systems, simulated reality systems, human-robot collaboration systems, as well as automated mobile robots for manufacturing environments. A variety of cutting-edge AI technologies are employed by these systems, including deep neural networks, reinforcement learning systems, and explainable artificial intelligence systems. Furthermore, relevant standards and applicable regulations are discussed. Beyond reviewing state-of-the-art standards and technologies, the book illustrates how the STAR research goes beyond the state of the art, towards enabling and showcasing human-centred technologies in production lines. Emphasis is put on dynamic human-in-the-loop scenarios, where ethical, transparent, and trusted AI systems co-exist with human workers. The book is made available as an open access publication, making it broadly and freely available to the AI and smart manufacturing communities.

    Supporting UAVs with Edge Computing: A Review of Opportunities and Challenges

    Full text link
    Over the last years, Unmanned Aerial Vehicles (UAVs) have seen significant advancements in sensor capabilities and computational abilities, allowing for efficient autonomous navigation and visual tracking applications. However, the demand for computationally complex tasks has increased faster than advances in battery technology. This opens up possibilities for improvement using edge computing. In edge computing, edge servers achieve lower-latency responses than traditional cloud servers through strategic geographic deployment. Furthermore, these servers can maintain superior computational performance compared to UAVs, as they are not limited by battery constraints. Combining these technologies by aiding UAVs with edge servers, research finds measurable improvements in task completion speed, energy efficiency, and reliability across multiple applications and industries. This systematic literature review aims to analyze the current state of research and to collect, select, and extract the key areas where UAV activities can be supported and improved through edge computing.
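The core trade-off this review surveys, computing on the battery-limited UAV versus shipping the task to a nearby edge server, is commonly modelled as transfer time plus remote compute time against local compute time. A hedged sketch of that comparison; the function names and the numbers are illustrative, not taken from the review:

```python
def local_latency(cycles: float, uav_hz: float) -> float:
    """Seconds to run the task on the UAV's own processor."""
    return cycles / uav_hz

def offload_latency(cycles: float, edge_hz: float, payload_bits: float,
                    uplink_bps: float, rtt_s: float) -> float:
    """Seconds to ship the input to the edge server and compute there."""
    return rtt_s + payload_bits / uplink_bps + cycles / edge_hz

def should_offload(cycles: float, uav_hz: float, edge_hz: float,
                   payload_bits: float, uplink_bps: float,
                   rtt_s: float) -> bool:
    """Offload when the remote path finishes sooner than local compute."""
    remote = offload_latency(cycles, edge_hz, payload_bits, uplink_bps, rtt_s)
    return remote < local_latency(cycles, uav_hz)

# Illustrative numbers: a 2-Gcycle vision task, 1 GHz UAV CPU,
# 20 GHz edge server, 4 Mbit frame over a 50 Mbit/s uplink, 10 ms RTT
task = dict(cycles=2e9, uav_hz=1e9, edge_hz=2e10,
            payload_bits=4e6, uplink_bps=5e7, rtt_s=0.01)
print(should_offload(**task))  # True: ~0.19 s remote vs 2.0 s local
```

A fuller model would also weigh transmission energy against compute energy, since radio use drains the UAV battery too; the review's point is that geographically close edge servers keep the `rtt_s` and transfer terms small enough for the comparison to favour offloading.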