
    Distributed Control Architecture

    This document describes the development and testing of a novel Distributed Control Architecture (DCA). The DCA developed during the study is an attempt to turn the components used to construct unmanned vehicles into a network of intelligent devices connected using standard networking protocols. The architecture exists at both a hardware and a software level and provides a communication channel between control modules, actuators and sensors. A single unified mechanism for connecting sensors and actuators to the control software reduces the technical knowledge required of platform integrators and allows control systems to be constructed rapidly in a Plug and Play manner. DCA uses standard networking hardware to connect components, removing the need for custom communication channels between individual sensors and actuators. The use of a common architecture for communication between components should make it easier for software to dynamically determine the vehicle's current capabilities and should increase the range of processing platforms that can be utilised. Implementations of the architecture currently exist for Microsoft Windows, Windows Mobile 5, Linux and Microchip dsPIC30 microcontrollers. Conceptually, DCA exposes the functionality of each networked device as objects with interfaces and associated methods. Because each object can expose multiple interfaces, future upgrades can be made without breaking existing code. In addition, the use of common interfaces should help facilitate component reuse and unit testing, and make it easier to write generic, reusable software.
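
    The abstract does not include code, but the object/interface model it describes can be illustrated with a minimal Python sketch. The interface, class, and method names below are hypothetical and are not taken from DCA; they only show how one device object can expose several interface versions and how control software might discover them at run time.

        # Hypothetical sketch of the DCA idea: a networked device exposes its
        # functionality as an object implementing one or more interfaces, so a
        # newer interface can be added without breaking clients of the old one.
        from abc import ABC, abstractmethod

        class RangeSensorV1(ABC):                      # original interface
            @abstractmethod
            def read_range_cm(self) -> float: ...

        class RangeSensorV2(ABC):                      # later interface, added alongside V1
            @abstractmethod
            def read_range_cm(self) -> float: ...
            @abstractmethod
            def set_sample_rate_hz(self, hz: int) -> None: ...

        class UltrasonicDevice(RangeSensorV1, RangeSensorV2):
            """One networked device object exposing both interface versions."""
            def __init__(self):
                self._rate_hz = 10
            def read_range_cm(self) -> float:
                return 123.4                           # placeholder for a real measurement
            def set_sample_rate_hz(self, hz: int) -> None:
                self._rate_hz = hz

        def interfaces_of(device) -> list[str]:
            """Lets control software discover a device's capabilities at run time."""
            return [cls.__name__ for cls in type(device).__mro__
                    if cls not in (object, ABC) and issubclass(cls, ABC)]

        print(interfaces_of(UltrasonicDevice()))
        # -> ['UltrasonicDevice', 'RangeSensorV1', 'RangeSensorV2']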

    Generic Patterns for Intrusion Detection Systems in Service-Oriented Automotive and Medical Architectures

    To implement new software functions, enable more flexible updates in the future, and provide cloud-based functionality, the service-oriented architecture (SOA) paradigm is increasingly being integrated into automotive electrical and electronic (E/E) architectures. In addition to the automotive industry, the medical industry is also researching SOA-based solutions to increase the interoperability of devices across vendors. The resulting service-oriented communication is no longer fully specified at design time, which affects information security measures. In this paper, we compare different SOA protocols for the automotive and medical fields. Furthermore, we explain the underlying communication patterns and derive features for the development of an SOA-based Intrusion Detection System (IDS).
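
    As a rough illustration of how such communication patterns could be turned into IDS features, the Python sketch below counts request/response and subscription messages in a traffic window. The message model, field names, and feature set are assumptions made for illustration, not the features derived in the paper.

        # Hypothetical pattern-based feature extraction over a window of SOA messages.
        from collections import Counter
        from dataclasses import dataclass

        @dataclass
        class SoaMessage:                  # assumed minimal message model
            service_id: int
            msg_type: str                  # "request", "response", "notification", "subscribe"
            src: str

        def extract_features(window: list[SoaMessage], known_services: set[int]) -> dict:
            types = Counter(m.msg_type for m in window)
            unknown = sum(1 for m in window if m.service_id not in known_services)
            requests, responses = types["request"], types["response"]
            return {
                # calls to services never seen at design time may indicate scanning
                "unknown_service_ratio": unknown / max(len(window), 1),
                # unmatched requests can indicate probing or a dropped peer
                "request_response_imbalance": abs(requests - responses) / max(requests + responses, 1),
                "subscribe_rate": types["subscribe"] / max(len(window), 1),
            }

        window = [SoaMessage(0x1234, "request", "ecu_a"),
                  SoaMessage(0x1234, "response", "ecu_b"),
                  SoaMessage(0x9999, "subscribe", "ecu_x")]
        print(extract_features(window, known_services={0x1234}))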

    Towards the simulation of cooperative perception applications by leveraging distributed sensing infrastructures

    With the rapid development of Automated Vehicles (AVs), the boundaries of their functionalities are being pushed and new challenges are being imposed. In increasingly complex and dynamic environments, it is fundamental to rely on more powerful onboard sensors and, usually, AI. However, there are limitations to this approach. As AVs are increasingly integrated into several industries, expectations regarding their ability to cooperate are growing, and vehicle-centric approaches to sensing and reasoning become hard to integrate. The proposed approach is to extend perception to the environment, i.e. outside of the vehicle, by making it smarter via the deployment of wireless sensors and actuators. This vastly improves perception capabilities in dynamic and unpredictable scenarios, often at lower cost, relying mostly on low-cost sensors and embedded devices whose value comes from large-scale deployment rather than from centralized sensing abilities. Consequently, to support the development and deployment of such cooperative actions in a seamless way, we require co-simulation frameworks that can encompass multiple perspectives of control and communications for the AVs, the wireless sensors and actuators, and other actors in the environment. In this work, we rely on ROS2 and micro-ROS as the underlying technologies for integrating several simulation tools in order to construct a framework capable of supporting the development, testing and validation of such smart, cooperative environments. This endeavor was undertaken by building upon an existing simulation framework known as AuNa. We extended its capabilities to facilitate the simulation of cooperative scenarios by incorporating external sensors placed within the environment rather than relying solely on vehicle-based sensors. Moreover, we devised a cooperative perception approach within this framework, showcasing its substantial potential and effectiveness. This will enable the demonstration of multiple cooperation scenarios and also ease the deployment phase by relying on the same software architecture.
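
    A minimal rclpy sketch of the cooperative-perception idea is given below: one node fuses object detections published by the vehicle's own sensors with detections from infrastructure sensors placed in the environment. The topic names and the use of PoseArray messages are assumptions for illustration and do not reflect the actual AuNa interfaces.

        # Sketch of infrastructure-assisted perception with ROS2 (assumed topics/types).
        import rclpy
        from rclpy.node import Node
        from geometry_msgs.msg import PoseArray

        class CooperativePerception(Node):
            def __init__(self):
                super().__init__('cooperative_perception')
                self._latest = {}          # source name -> most recent detections
                self.create_subscription(PoseArray, '/vehicle/detections',
                                         lambda msg: self._update('vehicle', msg), 10)
                self.create_subscription(PoseArray, '/infrastructure/detections',
                                         lambda msg: self._update('infrastructure', msg), 10)
                self._pub = self.create_publisher(PoseArray, '/fused/detections', 10)

            def _update(self, source: str, msg: PoseArray):
                self._latest[source] = msg
                fused = PoseArray()
                fused.header = msg.header
                # Naive fusion: concatenate poses from all sources (a real system
                # would associate and deduplicate objects across sources).
                for detections in self._latest.values():
                    fused.poses.extend(detections.poses)
                self._pub.publish(fused)

        def main():
            rclpy.init()
            rclpy.spin(CooperativePerception())

        if __name__ == '__main__':
            main()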

    Integration of ROS2 with a simulation environment

    Integrated Master's dissertation in Informatics Engineering. Currently, the University of Minho owns a driving simulator, from now on referred to as the Driving Simulator Mockup 2-Wheeler (DSM-2W), which mimics a real driving environment for motorcycles. This simulator can reproduce diverse driving scenarios, such as driving on different roads and in different traffic and weather conditions, and is mostly used to test how the driver reacts to stimuli from subsystems under test in a particular scenario. The simulator has several components, namely the Mock-up, which physically represents the motorcycle, the software responsible for the simulation environment, which is also projected on a screen, called SILAB [1], as well as several other subsystems and their respective software, which together form a complex distributed system. SILAB creates realistic graphic environments, has different models to control the behavior of other drivers and pedestrians, generates 3D sounds, and facilitates the personalization of the simulation scenario. Robot Operating System 2 (ROS2) [2] provides a set of tools and software libraries that facilitate the development of robot systems and applications. With the increasing reliance on software, sensors, and actuators in the automotive domain, it makes sense to view cars [3] and motorcycles as robots. Therefore, it also makes sense to use ROS2 in the simulation domain to solve the problems at hand. This dissertation describes how ROS2, a well-known and accepted middleware for robotic applications, can also play a role in these contexts by acting as a universal interface between motorcycle simulators and external subsystems, thereby significantly improving the system's expansibility and those subsystems' portability and reusability.
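
    The "universal interface" role described above can be sketched as a small ROS2 bridge node: it periodically reads state from the simulator and republishes it on a topic that any subsystem under test can subscribe to. Since SILAB's API is not described in the abstract, the simulator call is stubbed and the node, topic, and message choices are illustrative assumptions.

        # Sketch of a simulator-to-ROS2 bridge (hypothetical names and topics).
        import rclpy
        from rclpy.node import Node
        from std_msgs.msg import Float32

        def read_speed_from_simulator() -> float:
            return 12.3                     # stand-in for a real SILAB query

        class SimulatorBridge(Node):
            def __init__(self):
                super().__init__('silab_bridge')
                self._pub = self.create_publisher(Float32, '/dsm2w/vehicle_speed', 10)
                self.create_timer(0.05, self._tick)       # 20 Hz publishing loop

            def _tick(self):
                msg = Float32()
                msg.data = read_speed_from_simulator()
                self._pub.publish(msg)

        def main():
            rclpy.init()
            rclpy.spin(SimulatorBridge())

        if __name__ == '__main__':
            main()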

    Axon: A Middleware for Robotics

    The area of multi-robot systems and frameworks has become, in recent years, a hot research area in the field of robotics. This is attributed to the great advances made in robotic hardware and software and to the diversity of robotic systems. The need to integrate different heterogeneous robotic components and systems has led to the birth of robotic middleware. A robotic middleware is an intricate piece of software that masks the heterogeneity of underlying components and provides high-level interfaces that enable developers to make efficient use of those components. A large number of robotic middleware packages exist today, each with its own design methodologies and complexities. Up to this moment, however, there exists no unified standard for robotic middleware. Moreover, much of the middleware in use today deals with low-level and hardware aspects. This adds unnecessary complexity to research involving robotic behavior, inter-robot collaboration, and other high-level experiments that do not require prior knowledge of low-level details. In addition, the notion of structured, lightweight data transfer between robots is not emphasized in existing work. This dissertation tackles the robotic middleware problem from a different perspective. The aim of this work is to develop a robust middleware that is able to handle multiple robots and clients within a laboratory environment. In the proposed middleware, a high-level representation of robots in an environment is introduced. This work also introduces the notion of structured and efficient data exchange as an important issue in robotic middleware research. The middleware has been designed and developed using rigorous methodologies and leading-edge technologies. Moreover, the middleware's ability to integrate different types of robots in a seamless manner, as well as its ability to accommodate multiple robots and clients, has been tested and evaluated.
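
    The two ideas the abstract emphasises, a high-level representation of robots and structured, lightweight data exchange, might look roughly like the Python sketch below. Axon's real API is not shown in the abstract, so every class and field name here is an assumption.

        # Hypothetical illustration: a client-side robot proxy exchanging compact,
        # structured command messages with the middleware.
        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class RobotCommand:                # structured, lightweight message sent over the wire
            robot_id: str
            action: str                    # e.g. "move_to"
            params: dict

        class RobotProxy:
            """High-level stand-in for a robot managed by the middleware."""
            def __init__(self, robot_id: str, send):
                self._id, self._send = robot_id, send

            def move_to(self, x: float, y: float):
                # Clients issue high-level commands; low-level drive details stay server-side.
                self._send(json.dumps(asdict(RobotCommand(self._id, "move_to", {"x": x, "y": y}))))

        # Usage example: "send" is just print here instead of a real network call.
        RobotProxy("lab_bot_1", send=print).move_to(1.0, 2.5)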

    A Robot Operating System (ROS) based humanoid robot control

    This thesis presents the adaptation techniques required to enhance the capability of a commercially available robot, namely the Robotis Bioloid Premium Humanoid Robot (BPHR). A BeagleBone Black (BBB), a multifunctional decision-making and implementing (intelligence-providing) component, is used in this research. Robot Operating System (ROS) and its libraries, as well as Python scripts and their libraries, have been developed and incorporated into the BBB. This fortified BBB is then transplanted into the structure of the Robotis Bioloid humanoid robot, after removing the latter's original decision-making and implementing component (controller). Thus, this study revitalizes the Bioloid humanoid robot by converting it into a humanoid robot with multiple features that can be inherited using ROS. This is a first-of-its-kind approach wherein ROS is used as the development framework in conjunction with the main BBB controller, and software built on Python libraries is used to integrate robotic functions. A full ROS computation graph is developed, and a high-level Application Programming Interface (API) usable by software utilizing ROS services is also developed. In this revised two-legged humanoid robot, a USB2Dynamixel connector is used to operate the Dynamixel AX-12A actuators through the Wi-Fi interface of the fortified BBB. An accelerometer sensor supports balancing of the robot and periodically updates data to the BBB. An infrared (IR) sensor is used to detect obstacles. A dynamic model is used to actuate the motors mounted on the robot's legs, producing a swing-stance cycle of the legs for stable forward movement of the robot. The maximum walking speed of the robot is 0.5 feet/second; beyond this limit the robot becomes unstable. The angle at which the robot may lean is governed by feedback from the accelerometer sensor and is limited to 20 degrees. If the robot tilts beyond this threshold, it returns to its standstill position and stops further movement. When the robot moves forward, the IR sensor senses obstacles in front of the robot; if an obstacle is detected within 35 cm, the robot stops moving further. The implementation of ROS on top of the BBB (by replacing the CM530 controller with the BBB) and the use of feedback from the accelerometer and IR sensor to control the two-legged robot's movement are the novelties of this work.
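
    The feedback rules stated in the abstract (a 0.5 ft/s speed cap, a 20-degree lean limit, and a 35 cm obstacle threshold) can be summarised in a short Python sketch. This is not the thesis code, only a hedged restatement of the described behaviour with illustrative function names.

        # Hedged restatement of the control rules described in the abstract.
        MAX_SPEED_FT_S = 0.5       # robot becomes unstable above this speed
        MAX_LEAN_DEG = 20.0        # accelerometer-governed lean limit
        MIN_OBSTACLE_CM = 35.0     # IR obstacle-stop threshold

        def control_step(requested_speed: float, lean_deg: float, obstacle_cm: float) -> dict:
            if abs(lean_deg) > MAX_LEAN_DEG:
                return {"speed": 0.0, "action": "return_to_standstill"}
            if obstacle_cm < MIN_OBSTACLE_CM:
                return {"speed": 0.0, "action": "stop_obstacle"}
            return {"speed": min(requested_speed, MAX_SPEED_FT_S), "action": "walk"}

        print(control_step(0.7, lean_deg=5.0, obstacle_cm=80.0))   # capped to 0.5 ft/s
        print(control_step(0.4, lean_deg=25.0, obstacle_cm=80.0))  # too much lean -> recover
        print(control_step(0.4, lean_deg=5.0, obstacle_cm=20.0))   # obstacle -> stop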