998 research outputs found

    ARTIFICIAL INTELLIGENT REMOTE CONTROL CAR (AI MOBILE) (A.K.A. REMOTE CONTROL CAR USING MATLAB)

    The primary objective of this project is to construct a working prototype of a remote controlled car (a.k.a. AI Mobile) that can be controlled from a MATLAB Graphical User Interface (GUI). It involves several electrical engineering areas: microcontrollers, wireless communication, and MATLAB. A secondary objective is to implement artificial intelligence (AI) in a robot so that it performs tasks intelligently and autonomously. The car will be enhanced with systems such as obstacle detection sensors, a wireless camera, a wireless microphone, and speed alteration. These systems will be combined by the PIC microcontroller and controlled from the remote computer with the aid of the MS Visual Basic GUI. With these artificial intelligence systems, the execution of many human-in-the-loop manipulation tasks, which previously depended directly on the operator's skill, can be improved to: (i) permit easy and rapid incorporation of local sensory information to augment performance, and (ii) provide variable performance (precision- and power-) assist for output motions and forces. Such AI systems have enormous potential both to reduce operator error and to permit the integration of greater autonomy into human-robot interactions, which will eventually enhance security, safety, and performance. The AI systems are built on an existing platform modified from a remote controlled car. The processor used to coordinate the AI Mobile is the PIC16F84A microcontroller. The independent subsystems for controlling the AI Mobile via Microsoft Visual Basic include the serial communication interface, switching circuit, microcontroller, RF transmitter and receiver, and the Visual Basic program.
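    The control chain described above (GUI → serial port → RF link → PIC microcontroller → switching circuit) ultimately reduces to sending small command codes to the microcontroller. As an illustration only — the project's actual wire protocol is not given in the abstract, and the function name and nibble layout below are hypothetical — a single command byte could pack direction and speed like this:

```python
def encode_command(direction: str, speed: int) -> int:
    """Pack a drive command into one byte: high nibble = direction code, low nibble = speed (0-15).
    Hypothetical layout for illustration; not the project's actual protocol."""
    directions = {"stop": 0x0, "forward": 0x1, "reverse": 0x2, "left": 0x3, "right": 0x4}
    if direction not in directions:
        raise ValueError(f"unknown direction: {direction}")
    if not 0 <= speed <= 15:
        raise ValueError("speed must fit in one nibble (0-15)")
    return (directions[direction] << 4) | speed
```

A byte-sized command keeps the serial and RF links simple: the PIC can decode direction and speed with one shift and one mask.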

    An Underwater Sensor Network with DBMS Concept

    This paper presents a technique for sending and receiving messages underwater. There are several ways to implement such communication, but the most common uses hydrophones. Underwater communication is difficult due to factors like multi-path propagation, time variations of the channel, small available bandwidth, and strong signal attenuation, especially over long ranges. Underwater communication also offers low data rates compared to terrestrial communication, since it uses acoustic waves instead of electromagnetic waves. We present a novel platform for underwater sensor networks to be used for long-term monitoring of coral reefs and fisheries. The sensor network consists of static and mobile underwater sensor nodes. The nodes communicate point-to-point using a novel high-speed optical communication system integrated into the TinyOS stack, and they broadcast using an acoustic protocol integrated in the TinyOS stack. The nodes have a variety of sensing capabilities, including cameras, water temperature, and pressure. The mobile nodes can locate and hover above the static nodes for data muling, and they can perform network maintenance functions such as deployment, relocation, and recovery. In this paper we describe the hardware and software architecture of this underwater sensor network. We then describe the optical and acoustic networking protocols and present experimental networking and data collected in a pool, in rivers, and in the ocean. Finally, we describe our experiments with mobility for data muling in this network. Keywords: mobile sensor networks, underwater networks, data muling.
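    The latency penalty of acoustics is easy to quantify: sound travels at roughly 1500 m/s in seawater, versus on the order of 2×10⁸ m/s for light in water, which is why the platform above pairs short-range optics with long-range acoustics. A back-of-the-envelope sketch (the sound speed is a nominal value; the real value varies with temperature, salinity, and depth):

```python
SOUND_SPEED_SEAWATER = 1500.0  # m/s, nominal; varies with temperature, salinity, depth
LIGHT_SPEED_WATER = 2.25e8     # m/s, approximate (c divided by water's refractive index ~1.33)

def one_way_delay(distance_m: float, speed: float) -> float:
    """Propagation delay of a signal over a given range."""
    return distance_m / speed

# An acoustic packet sent over 3 km takes about 2 s to arrive,
# while an optical burst over a 10 m point-to-point link is effectively instantaneous.
```

This is why acoustic links are reserved for low-rate broadcast while bulk data transfer happens optically at close range.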

    Generic Project Plan for a Mobile Robotics System

    This thesis discusses mobile land robots for robotic competitions. The topics covered are robotic systems, mobile land robots, robot competitions, and examples of robot designs. Question-and-answer sections are included to help the reader understand the requirements for building a robot. The examples include three teams that participated in different robotic competitions, providing context for such competitions. The thesis is divided into five chapters. The first and second chapters explain the different kinds of robotic systems and the opportunities they present. The focus is on mobile land robots, which are explained in the third chapter. The aim of the thesis is to guide those who want to design, build, and compete in mobile robot competitions. To that end, information from various resources has been gathered and given the form of a thesis to guide individuals or groups through robotic competitions.

    The walking robot project

    A walking robot was designed, analyzed, and tested as an intelligent, mobile, and terrain-adaptive system. The robot's design was an application of existing technologies. The design of the six legs modified and combined well-understood mechanisms and was optimized for performance, flexibility, and simplicity. The body design incorporated two tripods for walking stability and ease of turning. The electrical hardware design used modularity and distributed processing to drive the motors. The software design used feedback to coordinate the system and simple keystrokes to give commands. The walking machine can be easily adapted to hostile environments such as high radiation zones and alien terrain. The primary goal of the leg design was to create a leg capable of supporting the robot's body and electrical hardware while walking or performing desired tasks, namely those required for planetary exploration. The leg designers' intent was to study the maximum flexibility and maneuverability achievable by the simplest and lightest leg design. The main constraints for the leg design were leg kinematics, ease of assembly, degrees of freedom, number of motors, overall size, and weight.

    Indoor Geo-location And Tracking Of Mobile Autonomous Robot

    The field of robotics has been one of fascination right from the days of the Terminator. Even though we still do not have robots that can truly replicate human action and intelligence, progress is being made in the right direction. Robotic applications range from defense to civilian uses such as public safety and fire fighting. With the increase in urban warfare, tracking robots inside buildings and in cities forms a very important application. The numerous applications range from munitions tracking to replacing soldiers in gathering reconnaissance information. Fire fighters use robots to survey an affected area. Tracking robots has traditionally been limited to the local area under consideration; decision making is inhibited by limited local knowledge, and approximations have to be made. Effective decision making would involve tracking the robot in earth co-ordinates such as latitude and longitude. The GPS signal provides sufficient and reliable data for such decision making. The main drawback of GPS is that it is unavailable indoors, and the signal is attenuated outdoors as well. Indoor geolocation forms the basis of tracking robots inside buildings and other places where GPS signals are unavailable. Indoor geolocation has traditionally been the domain of wireless networks, using techniques such as low-frequency RF signals and ultra-wideband antennas. In this thesis we propose a novel method for achieving geolocation and enabling tracking. Geolocation and tracking are achieved by a combination of a gyroscope and encoders, together referred to as the Inertial Navigation System (INS). Gyroscopes have been widely used in aerospace applications for stabilizing aircraft. In our case we use the gyroscope to determine the heading of the robot. Further, commands can be sent to the robot when it is off balance or off track. Sensors are inherently error prone; hence the process of geolocation is complicated and limited by the imperfect mathematical modeling of input noise.
    We make use of the Kalman Filter for processing erroneous sensor data, as it provides a robust and stable algorithm. The error characteristics of the sensors are input to the Kalman Filter and filtered data is obtained. We have performed a large set of experiments, both indoors and outdoors, to test the reliability of the system. Outdoors, we used the GPS signal to aid the INS measurements. Indoors, we utilize the last known position and extrapolate to obtain the GPS co-ordinates.
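    The gyroscope/GPS fusion described above can be sketched as a one-dimensional Kalman filter on the heading angle: the gyro rate drives the prediction step, and a GPS-derived heading (available outdoors) drives the update step. This is a minimal sketch only — the thesis's actual state vector and noise models are not given in the abstract, so the class name and the parameters below are illustrative:

```python
class HeadingKalman1D:
    """Scalar Kalman filter on heading: gyro integration predicts, GPS heading corrects."""

    def __init__(self, heading=0.0, var=1.0, gyro_var=0.01, meas_var=0.5):
        self.x = heading      # heading estimate (rad)
        self.p = var          # estimate variance
        self.q = gyro_var     # process noise per second (gyro drift)
        self.r = meas_var     # GPS-heading measurement noise

    def predict(self, gyro_rate, dt):
        """Dead-reckon the heading from the gyro; uncertainty grows."""
        self.x += gyro_rate * dt
        self.p += self.q * dt
        return self.x

    def update(self, measured_heading):
        """Blend in a GPS-derived heading; uncertainty shrinks."""
        k = self.p / (self.p + self.r)            # Kalman gain
        self.x += k * (measured_heading - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Indoors, where no GPS fix is available, only `predict` runs, which is exactly the drift-prone dead reckoning the thesis extrapolates from the last known position.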

    Pure-Pursuit reactive Path Tracking for Nonholonomic Mobile Robots with a 2D Laser Scanner

    Due to its simplicity and efficiency, the pure-pursuit path tracking method has been widely employed for planned navigation of nonholonomic ground vehicles. In this paper, we investigate the application of this technique to reactive tracking of paths that are implicitly defined by perceived environmental features. Goal points are obtained through an efficient interpretation of range data from an onboard 2D laser scanner to follow persons, corridors, and walls. Moreover, this formulation allows a robotic mission to be composed of a combination of different types of path segments. These techniques have been successfully tested on the tracked mobile robot Auriga-α in an indoor environment.
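    The core of pure pursuit is a one-line geometric rule: given a goal point at (x, y) in the robot frame (x forward, y lateral), the commanded arc curvature is γ = 2y / (x² + y²) — the circle tangent to the robot's current heading that passes through the goal. A minimal sketch of that rule (the paper's novel part, extracting goal points from the laser scan, is not reproduced here):

```python
def pure_pursuit_curvature(gx: float, gy: float) -> float:
    """Curvature of the arc from the robot to a goal point (gx forward, gy lateral, robot frame).
    Positive curvature means turning toward positive y."""
    d2 = gx * gx + gy * gy  # squared lookahead distance
    if d2 == 0.0:
        raise ValueError("goal point coincides with the robot")
    return 2.0 * gy / d2
```

A goal straight ahead yields zero curvature (drive straight); a purely lateral goal at distance d yields curvature 2/d, i.e. a half-circle of radius d/2 back to the goal.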

    A Multi-Sensor Fusion-Based Underwater SLAM System

    This dissertation addresses the problem of real-time Simultaneous Localization and Mapping (SLAM) in challenging environments. SLAM is one of the key enabling technologies that allow autonomous robots to navigate unknown environments by processing information on their on-board computational units. In particular, we study the exploration of challenging GPS-denied underwater environments to enable a wide range of robotic applications, including historical studies, health monitoring of coral reefs, and inspection of underwater infrastructure, e.g., bridges, hydroelectric dams, water supply systems, and oil rigs. Mapping underwater structures is important in several fields, such as marine archaeology, Search and Rescue (SaR), resource management, hydrogeology, and speleology. However, due to the highly unstructured nature of such environments, navigation by human divers can be extremely dangerous, tedious, and labor intensive. Hence, an underwater robot is an excellent fit to build the map of the environment while simultaneously localizing itself in it. The main contribution of this dissertation is the design and development of a real-time robust SLAM algorithm for small- and large-scale underwater environments. SVIn, a novel tightly-coupled keyframe-based non-linear optimization framework fusing sonar, visual, inertial, and water-depth information with robust initialization, loop-closing, and relocalization capabilities, is presented. Introducing acoustic range information to aid the visual data shows improved reconstruction and localization. The availability of depth information from water pressure enables a robust initialization, refines the scale factor, and helps reduce drift in the tightly-coupled integration.
    The complementary characteristics of these sensing modalities provide accurate and robust localization in unstructured environments with low visibility and few visual features, making them the ideal choice for underwater navigation. The proposed system has been successfully tested and validated on both benchmark datasets and numerous real-world scenarios. It has also been used for planning for an underwater robot in the presence of obstacles. Experimental results on datasets collected with a custom-made underwater sensor suite and an autonomous underwater vehicle (AUV), Aqua2, in challenging underwater environments with poor visibility demonstrate performance never achieved before in terms of accuracy and robustness. To aid the sparse reconstruction, a contour-based reconstruction approach has been developed that utilizes the well-defined edges between the well-lit area and darkness. In particular, low lighting conditions, or even the complete absence of natural light inside caves, result in strong lighting variations, e.g., the cone of the artificial video light intersecting underwater structures and the shadow contours. The proposed method utilizes these contours to provide additional features, resulting in a denser 3D point cloud than the usual point clouds from a visual odometry system. Experimental results in an underwater cave demonstrate the performance of our system. This enables more robust navigation of autonomous underwater vehicles, which can use the denser 3D point cloud to detect obstacles and achieve higher-resolution reconstructions.
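    The water-depth measurement used above for initialization and scale refinement comes from a simple hydrostatic relation, depth = (P − P_surface) / (ρ g). A sketch with nominal seawater constants (SVIn's actual pressure-sensor calibration is not given in the abstract, so the constants here are illustrative defaults):

```python
def depth_from_pressure(p_pa: float,
                        p_surface_pa: float = 101325.0,  # standard atmosphere at the surface
                        rho: float = 1025.0,             # seawater density, kg/m^3 (nominal)
                        g: float = 9.81) -> float:       # gravitational acceleration, m/s^2
    """Hydrostatic depth in meters from absolute pressure in pascals."""
    return (p_pa - p_surface_pa) / (rho * g)
```

Because pressure gives depth in absolute metric units, it anchors the scale of the visual-inertial estimate, which is otherwise only determined up to the quality of the inertial integration.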

    Mobile robot tank with GPU assistance

    Robotic research has been costly and tremendously time consuming, due to the cost of sensors, motors, computational units, physical construction, and fabrication. Many man-hours are also clocked in for algorithm design, software development, debugging, and optimizing. Robotic vision input usually consists of two or more color cameras used to construct a 3D virtual space. Additional cameras can be added to enrich the detail of the virtual 3D environment; however, the computational complexity increases with each camera added. This is due not only to the additional processing power required to run the software, but also to the complexity of stitching multiple cameras together to form a single sensor. Another method of creating the 3D virtual space is the use of range-finder sensors. These sensors are usually relatively expensive and still require complex algorithms for calibration and for correlation to real-life distances. Sensing of the robot's position can be facilitated by the addition of accelerometer and gyroscope sensors. A significant aspect of robotic design is robot interaction. One type of interaction is verbal exchange, which requires an audio receiver and transmitter on the robot. To achieve acceptable recognition, different types of audio receivers may be implemented, and many receivers may need to be deployed. Depending on the environment, noise-cancellation hardware and software may be needed to enhance verbal interaction performance. In this thesis different audio sensors are evaluated and implemented with the Microsoft Speech Platform. Any robotic research requires a proper control algorithm and logic process, and the majority of these control algorithms rely heavily on mathematics. High-performance computational processing power is needed to process all the raw data in real time.
    However, high-performance computation proportionally consumes more energy, so to conserve battery life on the robot, many robotic researchers implement remote computation by processing the raw data remotely. This implementation has one major drawback: transmitting raw data remotely requires a robust communication infrastructure. Without it, the robot will suffer failures due to the sheer amount of raw data being communicated. I propose a solution to the computation problem by harvesting the computational power of the General Purpose Graphics Processing Unit (GPU) for complex mathematical raw-data processing. This approach uses many-core parallelism for multithreaded real-time computation, with a minimum of 10x the computational FLOPS of a traditional Central Processing Unit (CPU). By shifting the computation to the GPU, the entire computation is done locally on the robot itself, eliminating the need for any external communication system for remote data processing. While the GPU performs image processing for the robot, the CPU can dedicate all of its processing power to the robot's other functions. Computer vision has been an interesting topic for a while; it utilizes complex mathematical techniques and algorithms in an attempt to achieve image processing in real time. The Open Source Computer Vision Library (OpenCV), developed by Intel, is a library of pre-programmed image processing functions for several different languages, and it greatly reduces development time. Microsoft Kinect for XBOX packages all the sensors mentioned above in a single convenient unit with a first-party SDK for software development. I perform the robotic implementation using Microsoft Kinect for XBOX as the primary sensor, OpenCV for image processing functions, and an NVidia GPU to off-load complex mathematical raw-data processing for the robot.
    This thesis's objective is to develop and implement a low-cost autonomous robot tank with Microsoft Kinect for XBOX as the primary sensor, a custom onboard Small Form Factor (SFF) High Performance Computer (HPC) with NVidia GPU assistance for the primary computation, and the OpenCV library for image processing functions.
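    As an illustration of the kind of per-frame work being off-loaded, the simplest obstacle test on a Kinect depth frame is a scan for the nearest valid reading (the Kinect reports depth in millimetres, with 0 marking invalid pixels). The pure-Python sketch below only illustrates the operation — the thesis runs equivalent image-processing steps through OpenCV on the GPU, and the function names here are hypothetical:

```python
def nearest_obstacle_mm(depth_frame, invalid=0):
    """Smallest valid depth reading in a frame (list of rows of millimetre values), or None."""
    valid = [d for row in depth_frame for d in row if d != invalid]
    return min(valid) if valid else None

def obstacle_ahead(depth_frame, threshold_mm=500):
    """True if anything valid in the frame is closer than the threshold."""
    nearest = nearest_obstacle_mm(depth_frame)
    return nearest is not None and nearest < threshold_mm
```

On a 640x480 frame at 30 Hz this reduction runs millions of comparisons per second, which is exactly the data-parallel shape that maps well onto the GPU.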

    Adaptive Path Planning for Depth Constrained Bathymetric Mapping with an Autonomous Surface Vessel

    This paper describes the design, implementation, and testing of a suite of algorithms to enable depth-constrained autonomous bathymetric (underwater topography) mapping by an Autonomous Surface Vessel (ASV). Given a target depth and a bounding polygon, the ASV will find and follow the intersection of the bounding polygon and the depth contour, as modeled online with a Gaussian Process (GP). This intersection, once mapped, is then used as a boundary within which a path is planned for coverage, to build a map of the bathymetry. Methods for sequential updates to GPs are described, allowing online fitting, prediction, and hyper-parameter optimisation on a small embedded PC. New algorithms are introduced for the partitioning of convex polygons to allow efficient path planning for coverage. These algorithms are tested both in simulation and in the field with a small twin-hull differential-thrust vessel built for the task. Comment: 21 pages, 9 figures, 1 table. Submitted to the Journal of Field Robotics.
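    The depth model above is GP regression: the posterior mean at a query location x* is k*ᵀ(K + σ²I)⁻¹y, where K is the kernel matrix over the sounding locations and y the measured depths. Below is a naive batch sketch of that prediction — the paper's contribution is the *sequential* update and online hyper-parameter optimisation, which this sketch deliberately omits, and the kernel parameters are illustrative:

```python
import math

def rbf(a, b, length=1.0, sigma=1.0):
    """Squared-exponential kernel between two scalar inputs."""
    return sigma ** 2 * math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (fine for a handful of points)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gp_predict(xs, ys, xq, noise=1e-4):
    """GP posterior mean at xq: k*^T (K + noise*I)^-1 y."""
    K = [[rbf(xi, xj) + (noise if i == j else 0.0) for j, xj in enumerate(xs)]
         for i, xi in enumerate(xs)]
    alpha = solve(K, list(ys))
    return sum(rbf(xq, xi) * a for xi, a in zip(xs, alpha))
```

The batch solve here is O(n³) in the number of soundings, which is precisely why sequential updates matter on a small embedded PC.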

    Software framework implementation for a robot that uses different development boards

    A software framework is a concrete or conceptual platform in which common code with generic functionality can be selectively specialized or overridden by developers or users. Frameworks take the form of libraries in which a well-defined application program interface is reusable anywhere within the software under development. A user can extend the framework but not modify its code. The purpose of a software framework is to simplify the development environment, allowing developers to dedicate their efforts to the project requirements rather than dealing with the framework's mundane, repetitive functions and libraries. A differential wheeled robot is a mobile robot whose movement is based on two separately driven wheels placed on either side of the robot body. It can thus change its direction by varying the relative rate of rotation of its wheels and hence does not require an additional steering motion. If both wheels are driven in the same direction at the same speed, the robot will go in a straight line. If both wheels turn at equal speed in opposite directions, the robot will rotate about the central point of the axis. Otherwise, depending on the speed of rotation and its direction, the center of rotation may fall anywhere on the line defined by the two contact points of the wheels. The objective of this thesis is to create a software framework for a wheeled robot such that we can change its development board without having to make many changes to the code used to control the robot. The robot is composed of: a development board, which allows the control of any electronic devices connected to it; some sensors to detect obstacles; two motors to move the robot; a motor driver to power and control the motors individually; a chassis on which to assemble the robot; and a battery to power all the electronic devices.
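    The differential-drive behaviour described above has a closed form: with wheel speeds v_l and v_r and wheel separation W, the body turns at rate ω = (v_r − v_l)/W, moves forward at v = (v_r + v_l)/2, and the center of rotation sits at signed distance R = v/ω from the midpoint of the axle. A small sketch of those formulas:

```python
import math

def diff_drive(v_left, v_right, wheel_base):
    """Return (forward speed, turn rate, signed distance from axle midpoint to the center of rotation)."""
    omega = (v_right - v_left) / wheel_base   # rad/s; positive = turning toward the left wheel side
    v = (v_right + v_left) / 2.0              # m/s along the current heading
    radius = math.inf if omega == 0.0 else v / omega
    return v, omega, radius
```

Equal speeds give an infinite radius (straight line); equal and opposite speeds give radius zero (spin in place), matching the three cases in the text.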
    The most important device of the robot is the development board, which allows every electronic device connected to it to be controlled through a program. In this project we use three development boards: the Arduino UNO rev3, the NodeMCU ESP8266 v1.0, and the Raspberry Pi 3 Model B+. The programming language used to control the devices and the boards is C++, because it can be used with all of them. Since the boards all have different external and internal designs, there are some issues we need to fix with the help of external hardware. Other important devices are the sensors used to detect objects: the HC-SR04 sensor, which uses ultrasonic waves, and the Sharp GP2Y0A41SK0F sensor, which uses infrared light. The framework also covers two types of servo motors: one that can rotate continuously, and one that only rotates about half a circle. The servo motors can be used, for instance, to rotate a range sensor. Two DC motors move the vehicle. To power these DC motors, which are controlled with a PWM signal, we connect them to a device called a motor driver, which is in turn connected to a battery. Finally, to assemble the robot we just connect all the devices to the development board and attach them to the chassis. This software framework was created with the purpose of programmatically connecting every device (any sensor, motor, or other device) to the development board, allowing a user to make only minimal code changes when they need to change the development board. All the devices the framework supports have a datasheet explaining their behavior and operation, which made it possible to develop a library to operate and control each device. Joining all these libraries together gives the framework presented here. The experimental methodology, used in two case studies, shows the features and the limitations of the framework.
    The first case study shows that the changes the user needs to make when changing boards are minimal, but because all the development boards are different, some things cannot be programmed without making the user of the framework sacrifice some of their GPIO connection choices. The second case study shows that, because of how the development boards work internally, some things cannot be programmed to work on one board exactly as they were designed to work on the others.
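    The board-swapping idea above is essentially a hardware-abstraction layer: device drivers are written once against a small board interface, and each development board supplies its own implementation of that interface. The thesis implements this in C++; the Python sketch below only illustrates the pattern, and all names (`Board`, `Led`, `MockBoard`) are hypothetical:

```python
from abc import ABC, abstractmethod

class Board(ABC):
    """Minimal board abstraction: device drivers talk to this interface, never to a concrete board."""

    @abstractmethod
    def digital_write(self, pin: int, high: bool) -> None: ...

    @abstractmethod
    def digital_read(self, pin: int) -> bool: ...

class Led:
    """Device driver written once; works unchanged on any Board implementation."""

    def __init__(self, board: Board, pin: int):
        self.board, self.pin = board, pin

    def on(self) -> None:
        self.board.digital_write(self.pin, True)

    def off(self) -> None:
        self.board.digital_write(self.pin, False)

class MockBoard(Board):
    """In-memory stand-in used here for testing; a real port would wrap the board's GPIO API."""

    def __init__(self):
        self.pins = {}

    def digital_write(self, pin: int, high: bool) -> None:
        self.pins[pin] = high

    def digital_read(self, pin: int) -> bool:
        return self.pins.get(pin, False)
```

Swapping boards then means writing one new `Board` subclass; the `Led` driver (and every other device library) is untouched, which is the "minimal code changes" property the case studies evaluate.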