14 research outputs found

    Efficient Embedded Hardware Architecture for Stabilised Tracking Sighting System of Armoured Fighting Vehicles

    A line-of-sight stabilised sighting system capable of target tracking and video stabilisation is a prime requirement of any armoured fighting vehicle for military surveillance and weapon firing. Typically, such sighting systems have three prime electro-optical sensors: a day camera for viewing in daylight conditions, a thermal camera for night viewing, and an eye-safe laser range finder for obtaining the target range. For laser-guided missile firing, an additional laser target designator may be part of the sighting system. The sighting system provides the parameters the fire control computer needs to compute ballistic offsets for firing conventional ammunition or missiles. The system demands simultaneous interaction with electro-optical sensors, servo sensors, actuators, a multi-function display for the man-machine interface, the fire control computer, a logic controller and other sub-systems of the tank. Complex embedded electronics hardware is therefore needed to respond to such a system in real time. An efficient embedded hardware architecture for the development of this type of sighting system is presented here. The hardware has been developed around a SHARC 21369 processor and an FPGA. A performance evaluation scheme based on the developed hardware is also presented for this sighting system.

    Histogram of Oriented Gradients applied to video-based multiple-person tracking

    Multiple-person tracking in real scenes is an important topic in computer vision, given its many applications in areas such as surveillance systems, robotics, pedestrian safety and marketing, as well as the challenges inherent in identifying people in real scenes: the complexity of the scene itself, the number of people present, and the occlusions that crowding causes in the video. Several techniques address image segmentation, and person identification in particular, from different perspectives; the aim of this work is to develop a proposal based on the Histogram of Oriented Gradients (HOG) for video-based multiple-person tracking. The proposed procedure is decomposed into the following stages. Video processing: the frames that make up the video sequence are captured using the OpenCV library, so that the sequence can be acquired from any source. Candidate classification: this stage groups the description of the object of interest, which in this work is people, and the selection of candidates, using an implementation of the HOG algorithm. Tracking and association: a Kalman filter is used to determine the associations between the sequences of previously detected objects. The proposal was applied to three datasets, TownCentre (960x540 px), TownCentre (1920x1080 px) and PETS 2009, obtaining precisions of 94.47%, 90.63% and 97.30% respectively. The results obtained during the experiments validate the proposed model, making it a tool with many fields of application and an innovative proposal at the national level in the field of computer vision.
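
    The pipeline above (OpenCV capture, HOG-based person detection, Kalman-filter association) can be sketched compactly. The snippet below is a minimal illustration, not the thesis code: it uses OpenCV's stock HOG people detector and a constant-velocity Kalman filter per track with greedy nearest-neighbour association; the input file name and the 50-pixel gating threshold are assumptions.

```python
import cv2
import numpy as np

# OpenCV's stock HOG person detector (Dalal-Triggs style).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def make_kalman(x, y):
    # 4D state (x, y, vx, vy), 2D measurement (x, y), constant-velocity model.
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    kf.statePost = np.array([[x], [y], [0], [0]], np.float32)
    return kf

cap = cv2.VideoCapture("towncentre.mp4")   # hypothetical input sequence
tracks = []                                # one Kalman filter per person
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    for kf in tracks:
        kf.predict()
    # Greedy nearest-neighbour association between predictions and detections.
    for (x, y, w, h) in rects:
        cx, cy = x + w / 2, y + h / 2
        if tracks:
            d = [np.hypot(kf.statePost[0, 0] - cx, kf.statePost[1, 0] - cy)
                 for kf in tracks]
            j = int(np.argmin(d))
            if d[j] < 50:                  # assumed gating threshold (pixels)
                tracks[j].correct(np.array([[cx], [cy]], np.float32))
                continue
        tracks.append(make_kalman(cx, cy)) # unmatched detection starts a new track
```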

    Robust Stabilised Visual Tracker for Vehicle Tracking

    Visual tracking should be performed on a stabilised video. If the input video to the tracker is itself destabilised, incorrect motion vectors cause serious drift in tracking, so video stabilisation is essential before tracking. A novel algorithm is developed which handles video stabilisation and target tracking simultaneously. Target templates from the immediately previous frame are stored in positive and negative repositories, followed by affine mapping; the optimised affine parameters are then used to stabilise the video. The target of interest in the next frame is approximated using linear combinations of the previous target templates. A modified L1 minimisation method is proposed to solve the sparse representation of the target in the target template subspace. The occlusion problem is mitigated using the inherent energy of the coefficients. Accurate tracking results have been obtained on destabilised videos.
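
    The sparse-representation step can be illustrated with a small numerical sketch. The following is a minimal, assumption-laden example rather than the paper's exact solver: the vectorised candidate patch y is coded over a matrix T of stacked positive and negative templates by an L1-penalised least-squares fit solved with plain ISTA; the penalty weight and iteration count are arbitrary.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the L1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_sparse_code(T, y, lam=0.01, n_iter=200):
    """Minimise 0.5*||T c - y||^2 + lam*||c||_1 by ISTA.

    T : (d, n) matrix of vectorised positive/negative templates.
    y : (d,)  vectorised candidate patch.
    """
    L = np.linalg.norm(T, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(T.shape[1])
    for _ in range(n_iter):
        grad = T.T @ (T @ c - y)
        c = soft_threshold(c - grad / L, lam / L)
    return c

def reconstruction_error(T_pos, c_pos, y):
    # Candidates are ranked by how well the positive templates alone explain them;
    # the lowest-error candidate is taken as the target (a common L1-tracker choice).
    return np.linalg.norm(y - T_pos @ c_pos)
```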

    Real-time camera operation and tracking for the streaming of teaching activities

    The primary driving force of this work comes from the Lab's urgent need to offer students the opportunity to attend a remote event in real time from home or anywhere in the world. The main objective is to build a real-time tracker that follows the movements of the lecturer, and then a framework to drive a PTZ (Pan, Tilt and Zoom) camera from those movements: if the lecturer moves to the left, the camera turns to the left. The starting point is a project developed by Gebrehiwot, A., which involved building a real-time tracker. The problem with that tracker is that it was implemented on Ubuntu and ran a very complex CNN that required a good GPU. As Gebrehiwot, A. rightly points out at the end of his report, not everyone has an Ubuntu partition or a GPU, so we first ported the real-time tracker to Windows; using Anaconda on Windows made this work much easier. We then implemented a lightweight backbone for the tracker, allowing it to run on computers with less processing power. Once all this was done, we put the PTZ-control framework into practice: it uses the lightweight tracker to follow the lecturer, and the camera pans and tilts automatically according to these movements. We tested the framework on streaming platforms such as YouTube, showing that it can greatly improve the quality of online classes. Finally, we draw conclusions from the work done and propose future work to improve the framework.
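
    The camera-control loop described above reduces to mapping the offset of the tracked bounding box to pan and tilt commands. The sketch below is a hypothetical illustration of that logic: `send_ptz_command` is a stand-in for whatever camera protocol the real framework uses (not specified here), and the dead-zone and gain values are assumptions.

```python
def ptz_step(bbox, frame_w, frame_h, dead_zone=0.1, gain=0.5):
    """Return (pan_speed, tilt_speed) in [-gain, gain] from a tracker bounding box."""
    x, y, w, h = bbox
    # Normalised offset of the box centre from the image centre, in [-1, 1].
    dx = ((x + w / 2) - frame_w / 2) / (frame_w / 2)
    dy = ((y + h / 2) - frame_h / 2) / (frame_h / 2)
    pan_speed = gain * dx if abs(dx) > dead_zone else 0.0    # lecturer moved left/right
    tilt_speed = -gain * dy if abs(dy) > dead_zone else 0.0  # lecturer moved up/down
    return pan_speed, tilt_speed

def send_ptz_command(pan_speed, tilt_speed):
    """Hypothetical placeholder for the camera-specific control call (e.g. ONVIF or VISCA)."""
    pass
```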

    A Comprehensive Mapping and Real-World Evaluation of Multi-Object Tracking on Automated Vehicles

    Multi-Object Tracking (MOT) is a field critical to Automated Vehicle (AV) perception systems. However, it is large, complex, spans research fields, and lacks resources for integration with real sensors and implementation on AVs. Factors such as these make it difficult for new researchers and practitioners to enter the field. This thesis presents two main contributions: 1) a comprehensive mapping of the field of Multi-Object Trackers (MOTs) with a specific focus on Automated Vehicles (AVs), and 2) a real-world evaluation of an MOT developed and tuned using COTS (Commercial Off-The-Shelf) software toolsets. The first contribution aims to give a comprehensive overview of MOTs and the various MOT subfields for AVs that has not been presented as holistically in other papers. The second contribution aims to illustrate some of the benefits of using a COTS MOT toolset and some of the difficulties associated with using real-world data. This MOT performed accurate state estimation of a target vehicle through the tracking and fusion of data from a radar and a vision sensor, using a Central-Level Track Processing approach and a Global Nearest Neighbors assignment algorithm. It had a 0.44 m positional Root Mean Squared Error (RMSE) over a 40 m approach test. It is the authors' hope that this work provides an overview of the MOT field that will help new researchers and practitioners enter the field. Additionally, the authors hope that the evaluation section illustrates some of the difficulties of using real-world data and provides a good pathway for developing and deploying MOTs from software toolsets to Automated Vehicles.
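
    As a rough illustration of the assignment and evaluation steps mentioned above, the sketch below implements Global Nearest Neighbors as a single cost-matrix assignment (via SciPy's Hungarian solver) together with the positional RMSE metric; the 5 m gate is an assumption and this is not the COTS toolset's code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gnn_assign(track_positions, detections, gate=5.0):
    """Assign detections to tracks with Global Nearest Neighbors.

    track_positions : (m, 2) predicted track positions in metres.
    detections      : (n, 2) fused radar/vision detections in metres.
    Returns a list of (track_index, detection_index) pairs within the gate.
    """
    cost = np.linalg.norm(track_positions[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]

def positional_rmse(estimates, ground_truth):
    # Root Mean Squared Error of position over a test run (as in the 0.44 m figure).
    err = np.linalg.norm(np.asarray(estimates) - np.asarray(ground_truth), axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```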

    Challenges and solutions for autonomous ground robot scene understanding and navigation in unstructured outdoor environments: A review

    The capabilities of autonomous mobile robotic systems have been steadily improving due to recent advancements in computer science, engineering, and related disciplines such as cognitive science. In controlled environments, robots have achieved relatively high levels of autonomy. In more unstructured environments, however, the development of fully autonomous mobile robots remains challenging due to the complexity of understanding these environments. Many autonomous mobile robots use classical, learning-based, or hybrid approaches for navigation. More recent learning-based methods may replace the complete navigation pipeline or selected stages of the classical approach. For effective deployment, autonomous robots must understand their external environments at a sophisticated level according to their intended applications. Therefore, in addition to robot perception, scene analysis and higher-level scene understanding (e.g., traversable/non-traversable, rough or smooth terrain, etc.) are required for autonomous robot navigation in unstructured outdoor environments. This paper provides a comprehensive review and critical analysis of these methods in the context of their applications to the problems of robot perception and scene understanding in unstructured environments and the related problems of localisation, environment mapping and path planning. State-of-the-art sensor fusion methods and multimodal scene understanding approaches are also discussed and evaluated within this context. The paper concludes with an in-depth discussion regarding the current state of the autonomous ground robot navigation challenge in unstructured outdoor environments and the most promising future research directions to overcome these challenges.

    Operation of an Intelligent Space Using Heterogeneous Sensors

    Ph.D. dissertation, Department of Electrical and Computer Engineering, Seoul National University, August 2014 (advisor: Beom-Hee Lee). A new approach to multi-sensor operation is presented in an intelligent space based on heterogeneous multiple vision sensors and robots equipped with an infrared (IR) sensor. The intelligent space system is a system that exists in the task space of robots, assists the robots' missions, and can control the robots itself in particular situations. The conventional intelligent space consists solely of static cameras; multiple heterogeneous sensors and an operation technique for those sensors are required to extend the abilities of the intelligent space. First, this dissertation presents the sub-systems for each sensor group in the proposed intelligent space. The vision sensors consist of two groups, static (fixed) cameras and dynamic (pan-tilt) cameras, and each sub-system can detect and track the robots. The sub-system using static cameras localizes the robot with a high degree of accuracy; a handoff method using the world-to-pixel transformation is proposed so that the multiple static cameras can interwork. The sub-system using dynamic cameras is designed to provide various views without losing the robot from view; a handoff method using the predicted positions of the robot, the relationships among cameras, and the relationship between the robot and the camera is proposed so that the multiple dynamic cameras can interwork. The robot localizes itself using an IR sensor and IR tags; the IR sensor can localize the robot even when the illumination of the environment is low. For robust tracking, a sensor selection method is proposed that exploits the advantages of these sensors under environmental changes in the task space. For the selection method, we define an interface protocol among the sub-systems, sensor priorities, and selection criteria. The proposed method is suitable for a real-time system, since it has a lower computational cost than sensor fusion methods. The performance of each sensor group is verified through various experiments.
    In addition, multi-sensor operation using the proposed sensor selection method is experimentally verified in an environment with occlusion and low illumination.
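
    The sensor selection idea described above can be illustrated with a small decision rule. The snippet below is a speculative sketch rather than the dissertation's protocol: the sensor names, the occlusion/illumination flags and the low-light threshold are all assumed, and only the priority ordering (static cameras for accuracy, dynamic cameras for coverage, IR as the low-light fallback) follows the text.

```python
def select_sensor(static_available, dynamic_available, illumination_lux, occluded,
                  low_light_threshold=30.0):
    """Pick which sub-system provides the robot pose for this control cycle."""
    if illumination_lux < low_light_threshold:
        return "ir_sensor"          # IR tags still work when the room is dark
    if static_available and not occluded:
        return "static_cameras"     # highest localisation accuracy when unobstructed
    if dynamic_available:
        return "dynamic_cameras"    # pan-tilt cameras keep the robot in view
    return "ir_sensor"              # last-resort fallback
```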

    Vision system for automatic UAV landing

    Dissertation for the degree of Master in Electrical Engineering, Automation and Industrial Electronics branch. In this study, a vision system for the autonomous landing of an existing commercial unmanned aerial vehicle (UAV) named AR4 aboard a ship, based on a single standard RGB digital camera, is proposed. The envisaged application is ground-based automatic landing, where the vision system is located on the ship's deck and is used to estimate the UAV pose (3D position and orientation) during the landing process. Using a vision system located on the ship makes it possible to use a UAV with less processing power, decreasing its size and weight. The proposed method uses a 3D-model-based pose estimation approach that requires the 3D CAD model of the UAV. Pose is estimated using a particle filtering framework. The implemented particle filter is inspired by the evolution strategies present in genetic algorithms, avoiding sample impoverishment. Temporal filtering between frames is also implemented, using an unscented Kalman filter, in order to obtain a better pose estimate. Results show that position and angular errors are compatible with the automatic landing system requirements. The algorithm is suitable for real-time implementation on standard workstations with graphics processing units. The UAV will operate from the Portuguese Navy fast patrol boats (FPB), which implies the capability of landing on vessels of 27 m length and 5.9 m breadth, with a small and irregular 5x6 m landing zone located at the boat's stern. The implementation of a completely autonomous system is very important in real scenarios, since these ships have only a small crew and UAV pilots are not always available.
    Moreover, a vision-based system is more robust in environments where GPS jamming can occur.
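
    A bare-bones view of the particle-filter pose estimation described above is sketched below, under explicit assumptions: `render_and_score` is a hypothetical placeholder for projecting the UAV CAD model at a candidate pose and scoring it against the camera image, plain systematic resampling replaces the genetic-algorithm-inspired scheme, and the unscented Kalman smoothing between frames is omitted.

```python
import numpy as np

def render_and_score(pose, image):
    """Hypothetical: render the CAD model at `pose` and return a similarity score."""
    raise NotImplementedError

def particle_filter_step(particles, weights, image, motion_std=0.05):
    """One predict/weight/resample cycle over 6-DoF pose hypotheses.

    particles : (N, 6) array of [x, y, z, roll, pitch, yaw] hypotheses.
    """
    # Diffuse the hypotheses (simple random-walk motion model).
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Weight each hypothesis by its image likelihood.
    weights = np.array([render_and_score(p, image) for p in particles])
    weights = weights / weights.sum()
    # Systematic resampling keeps the particle set focused on likely poses.
    n = len(weights)
    positions = (np.arange(n) + np.random.rand()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)
```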