3 research outputs found

    Design and Implementation of Robotank for Room Monitoring and Exploration

    Get PDF
    A robot is a mechanical device that can perform physical tasks, either autonomously or under human control. Robots have begun to be used for monitoring in areas that are narrow and/or dangerous, so such a robot must be able to carry out monitoring through a remote control system. In this study, a robotank is designed that can explore rooms under remote control. The robotank uses a track-and-wheel drive that can traverse various terrains and has dimensions of 11.8 x 10.8 x 9.1 cm. It is equipped with a camera for real-time monitoring. The robotank can move from one point to another under remote control with a maximum range of 20 meters in line-of-sight conditions and 16 meters in non-line-of-sight conditions, at an average speed of 0.84 m/s, and it can operate for 1 hour 52 minutes. It is hoped that this robotank can be used for exploration of areas or rooms that are confined and dangerous.
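    The abstract describes a tracked platform driven over a remote control link. As a rough illustration only, and not the authors' implementation, the sketch below shows the common skid-steer mixing used to turn throttle/turn commands from a remote control into left and right track speeds; the function name and value ranges are assumptions.

```python
# Minimal sketch (assumed, not from the paper): mixing remote-control
# throttle/turn commands into left/right track speeds for a skid-steer
# (tank) drive. Inputs and outputs are normalized to [-1, 1].

def mix_tank_drive(throttle: float, turn: float) -> tuple[float, float]:
    """Convert joystick inputs in [-1, 1] to left/right track commands."""
    left = throttle + turn
    right = throttle - turn
    # Normalize so neither track command exceeds full scale.
    scale = max(1.0, abs(left), abs(right))
    return left / scale, right / scale


if __name__ == "__main__":
    # Full forward with a slight right turn.
    print(mix_tank_drive(1.0, 0.2))  # -> (1.0, 0.666...)
```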

    Vision-Based Autonomous Underwater Vehicle Navigation in Poor Visibility Conditions Using a Model-Free Robust Control

    No full text
    This paper presents a vision-based navigation system for an autonomous underwater vehicle in semistructured environments with poor visibility. In terrestrial and aerial applications, the use of visual systems mounted on robotic platforms as sensor feedback for control is commonplace. However, vision-based tasks are still not widely considered for underwater robots, as the images captured in this type of environment tend to be blurred and/or color depleted. To tackle this problem, we have adapted the lαβ color space to identify features of interest in underwater images even in extreme visibility conditions. To guarantee the stability of the vehicle at all times, a model-free robust control is used. We have validated the performance of our visual navigation system in real environments, showing the feasibility of our approach.
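    The abstract mentions adapting the lαβ color space to pick out features in degraded underwater images. The paper's specific adaptation is not given here; the sketch below only shows the standard RGB-to-lαβ decorrelated color-space conversion (Ruderman-style, as popularized by Reinhard et al.'s color transfer work) that such a pipeline would typically start from. Function names are assumptions.

```python
import numpy as np

# Standard RGB -> lαβ conversion (Ruderman et al. / Reinhard et al.):
# RGB is first mapped to LMS cone space, log-compressed, then
# decorrelated into a luminance channel l and two chromatic channels α, β.

RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])

LMS2LAB = np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)]) @ \
          np.array([[1,  1,  1],
                    [1,  1, -2],
                    [1, -1,  0]])

def rgb_to_lalphabeta(img: np.ndarray) -> np.ndarray:
    """img: HxWx3 float RGB in (0, 1]; returns HxWx3 lαβ image."""
    lms = img @ RGB2LMS.T
    log_lms = np.log10(np.clip(lms, 1e-6, None))  # avoid log(0)
    return log_lms @ LMS2LAB.T
```

    Features of interest could then be segmented in the α/β chromatic channels, which tend to be more stable than raw RGB under the color attenuation described in the abstract.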

    Underwater Localization in Complex Environments

    Get PDF
    The ability of an autonomous underwater vehicle (AUV) to localize itself in a complex environment, as well as to extract relevant environmental features, is of crucial importance for successful navigation. However, this task is particularly challenging in underwater environments due to the rapid attenuation suffered by signals from global positioning systems or other radio-frequency signals, as well as dispersion and reflection, thus requiring filtering processes. A complex environment is defined here as a scenario with objects detached from the walls; for example, an object can have a certain variability of orientation, so its position is not always known. Examples of such scenarios are a harbour, a tank, or even a dam reservoir, where there are walls and within those walls an AUV may need to localize itself with respect to the other vehicles in the area and position itself relative to one of them in order to observe, analyse, or scan it. Autonomous vehicles employ many different types of sensors for localization and for perceiving their environments, and they depend on on-board computers to perform autonomous driving tasks. This dissertation addresses a concrete problem: locating a cable suspended in the water column in a known region of the sea and navigating with respect to it. Although the position of the cable in the world is well known, the dynamics of the cable do not allow knowing exactly where it is. Therefore, for the vehicle to localize itself relative to the cable so that it can be inspected, the localization has to be based on optical and acoustic sensors. This study explores the processing and analysis of optical and acoustic images, acquired with a camera and a mechanically scanned imaging sonar (MSIS) respectively, in order to extract relevant environmental features that allow the location of the vehicle to be estimated. The points of interest extracted from each of the sensors feed a position estimator, implemented as an Extended Kalman Filter (EKF), in order to estimate the position of the cable, and the filter's feedback is used to improve the point-of-interest extraction processes.
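    The abstract describes fusing camera- and MSIS-derived points of interest in an Extended Kalman Filter to estimate the cable position. The sketch below is only a minimal illustration of that fusion structure, reduced to a linear, direct-position measurement model for brevity; the state layout, noise values, and class/variable names are assumptions rather than the dissertation's actual formulation.

```python
import numpy as np

# Minimal sketch (assumed): fusing camera- and sonar-derived position fixes
# of a suspended cable into a 2D estimate with a Kalman filter update,
# the linear special case of the EKF structure described in the abstract.

class CableEKF:
    def __init__(self):
        self.x = np.zeros(2)          # cable position (x, y) in the vehicle frame
        self.P = np.eye(2) * 10.0     # initial uncertainty
        self.Q = np.eye(2) * 0.01     # process noise (slow cable drift)

    def predict(self):
        # Near-static cable: identity motion model, uncertainty grows with Q.
        self.P = self.P + self.Q

    def update(self, z: np.ndarray, R: np.ndarray):
        # Direct position measurement from either sensor (H = identity).
        H = np.eye(2)
        y = z - H @ self.x                   # innovation
        S = H @ self.P @ H.T + R             # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ H) @ self.P


ekf = CableEKF()
ekf.predict()
ekf.update(np.array([2.1, 0.4]), np.eye(2) * 0.5)  # sonar fix (noisier)
ekf.update(np.array([2.0, 0.3]), np.eye(2) * 0.1)  # camera fix (more precise)
print(ekf.x)
```

    In the dissertation's pipeline the filtered estimate is also fed back to the feature extractors; in a sketch like this, that would amount to using the predicted cable position to restrict where the image and sonar data are searched for points of interest.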