40 research outputs found

    Malicious UAV detection using integrated audio and visual features for public safety applications

    Unmanned aerial vehicles (UAVs) have become popular in surveillance, security, and remote monitoring, but they also pose serious threats to public privacy. The timely detection of a malicious drone is currently an open research issue for security-provisioning companies. The problem has recently been addressed by a plethora of schemes; however, each has limitations, such as sensitivity to extreme weather conditions or the need for huge datasets. In this paper, we propose a novel framework that combines hybrid handcrafted and deep features to detect and localize malicious drones from their sound and image information. The respective datasets include sounds and occluded images of birds, airplanes, and thunderstorms, with variations in resolution and illumination. Various kernels of the support vector machine (SVM) are applied to classify the features. Experimental results validate the improved performance of the proposed scheme compared to other related methods.
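    As an illustration of the classification stage described above, the sketch below trains SVMs with several kernels on concatenated handcrafted and deep feature vectors. The feature dimensions and random data are stand-ins for the paper's actual audio/visual datasets, which are not reproduced here:

    ```python
    # Sketch: classifying hybrid (handcrafted + deep) feature vectors with
    # several SVM kernels. All feature values are random stand-ins; the
    # dimensions (40 handcrafted, 128 deep) are illustrative assumptions.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    handcrafted = rng.normal(size=(200, 40))      # e.g. MFCC / HOG-style features
    deep = rng.normal(size=(200, 128))            # e.g. CNN embedding
    X = np.hstack([handcrafted, deep])            # hybrid feature vector
    y = rng.integers(0, 2, size=200)              # 0 = background, 1 = drone

    # Shift one class so the toy problem is separable.
    X[y == 1] += 1.0

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Compare several SVM kernels, as in the paper-style evaluation.
    for kernel in ("linear", "rbf", "poly"):
        clf = SVC(kernel=kernel).fit(X_tr, y_tr)
        print(kernel, round(clf.score(X_te, y_te), 2))
    ```

    With real data, the kernel comparison would be run on held-out recordings rather than a random split of synthetic vectors.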

    Autonomous Quadcopter Videographer

    In recent years, interest in quadcopters as a robotics platform for autonomous photography has increased, owing to their small size and mobility, which allow them to reach places that are difficult or even impossible for humans. This thesis focuses on the design of an autonomous quadcopter videographer, i.e. a quadcopter capable of capturing good footage of a specific subject. To obtain this footage, the system needs to choose appropriate vantage points and control the quadcopter. Skilled human videographers can easily spot good filming locations where the subject and its actions are clearly visible in the resulting footage, but translating this knowledge to a robot is complex. We present an autonomous system, implemented on a commercially available quadcopter, that achieves this using only monocular vision and an accelerometer. Our system has two vantage-point selection strategies: 1) a reactive approach, which moves the robot to a fixed location with respect to the human, and 2) a combination of the reactive approach and a POMDP planner that considers the target's movement intentions. We compare the behavior of these two approaches under different target movement scenarios. The results show that the POMDP planner obtains more stable footage with less quadcopter motion.
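    The reactive strategy above can be pictured as a simple geometric rule: hold a fixed offset from the subject and face them. The distances and frame conventions in this sketch are illustrative assumptions, not the thesis's actual controller:

    ```python
    # Minimal sketch of a reactive vantage-point rule: place the camera a
    # fixed distance in front of the subject, looking back at them.
    # `dist` and `height` are made-up illustrative values.
    import math

    def reactive_vantage(target_xy, target_heading, dist=3.0, height=1.5):
        """Return (x, y, z, yaw) for a camera `dist` metres in front of the
        subject, at `height` metres, facing back toward the subject."""
        tx, ty = target_xy
        x = tx + dist * math.cos(target_heading)   # offset along the heading
        y = ty + dist * math.sin(target_heading)
        yaw = target_heading + math.pi             # turn around to face the subject
        return x, y, height, yaw

    # Subject at the origin facing +x: the camera sits 3 m ahead, looking back.
    print(reactive_vantage((0.0, 0.0), 0.0))  # (3.0, 0.0, 1.5, 3.141592653589793)
    ```

    The POMDP planner replaces this fixed rule with a policy that anticipates where the subject intends to move.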

    Detection and Tracking of Moving Objects on a UAV Based on the SUED Method

    An unmanned aerial vehicle (UAV), commonly known as a drone, can be used to detect a moving object in real time. However, moving-object detection from a UAV is subject to several uncertainty constraint factors (UCF), such as the environment, the type of object, illumination, the UAV camera, and motion. One practical problem that has concerned researchers in the past few years is motion analysis: the motion of an object across frames carries a great deal of information about the pixels of moving objects, which plays an important role as an image descriptor. In this paper, the segmentation using edge-based dilation (SUED) algorithm is used to detect moving objects. The SUED algorithm combines frame differencing and segmentation to obtain optimal results. The simulation results show the performance improvement of the SUED algorithm when a combination of the wavelet transform and the Sobel operator is used for edge detection: the number of true-positive frames increased by 41, and the false-alarm rate decreased from 24% (Sobel operator alone) to 7%. The combination of these two methods also minimizes noise regions that affect the detection and tracking processes. The simulation results for tracking moving objects with a Kalman filter show that the error between the detection and tracking processes decreases.
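    A rough sketch of the SUED idea follows: frame differencing, then Sobel edge detection, then binary dilation of the edge map. The threshold and kernel sizes are assumed for illustration, and the wavelet stage the paper adds is omitted:

    ```python
    # Sketch of SUED-style moving-object segmentation: frame difference ->
    # Sobel edges -> binary dilation. Threshold and frame sizes are made up.
    import numpy as np

    SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)

    def sobel_mag(img):
        """Gradient magnitude via a naive 3x3 Sobel convolution."""
        h, w = img.shape
        mag = np.zeros((h, w))
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                patch = img[i - 1:i + 2, j - 1:j + 2]
                gx = (patch * SOBEL_X).sum()
                gy = (patch * SOBEL_X.T).sum()
                mag[i, j] = np.hypot(gx, gy)
        return mag

    def dilate(mask):
        """3x3 binary dilation of a boolean mask."""
        padded = np.pad(mask, 1)
        out = np.zeros_like(mask)
        h, w = mask.shape
        for i in range(h):
            for j in range(w):
                out[i, j] = padded[i:i + 3, j:j + 3].any()
        return out

    def sued_mask(prev, curr, thresh=30.0):
        """Frame difference, edge detection, then dilation."""
        diff = np.abs(curr.astype(float) - prev.astype(float))
        return dilate(sobel_mag(diff) > thresh)

    # A bright 4x4 "object" appears between two otherwise empty frames.
    prev = np.zeros((16, 16))
    curr = prev.copy()
    curr[5:9, 5:9] = 200
    mask = sued_mask(prev, curr)
    print(mask[5:9, 5:9].all(), mask[0, 0])  # object region flagged, corner clear
    ```

    In the paper, the resulting mask seeds a Kalman-filter tracker across subsequent frames.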

    Co-design of Architecture and Algorithms for Model-Based Mobile Robot Localization and Obstacle Detection

    This thesis proposes SoPC (System on a Programmable Chip) architectures for the efficient embedding of vision-based localization and obstacle-detection tasks in the navigational pipeline of autonomous mobile robots. The obtained results are equivalent to or better than the state of the art. For localization, an efficient hardware architecture is developed that supports EKF-SLAM's local map management with seven-dimensional landmarks in real time. For obstacle detection, a novel object-recognition method is proposed: a detection-by-identification framework based on a single detection-window scale. This framework allows adequate algorithmic precision and execution speed on embedded hardware platforms.
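    The thesis implements EKF-SLAM's map management in hardware; for orientation, here is a generic EKF correction step in plain Python, with a direct position measurement as a toy example. The state, models, and noise values are illustrative, not the thesis's seven-dimensional landmark parameterization:

    ```python
    # Generic EKF correction step (the building block EKF-SLAM repeats per
    # landmark). All numbers below are toy values for illustration.
    import numpy as np

    def ekf_update(x, P, z, h, H, R):
        """One EKF correction: state x, covariance P, measurement z,
        measurement model h with Jacobian H, measurement noise R."""
        y = z - h(x)                                # innovation
        S = H @ P @ H.T + R                         # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new

    # 2D position state observed directly (H = I): the estimate moves toward
    # the measurement and the covariance shrinks.
    x = np.array([0.0, 0.0])
    P = np.eye(2)
    z = np.array([1.0, 0.5])
    H = np.eye(2)
    R = 0.1 * np.eye(2)
    x_new, P_new = ekf_update(x, P, z, lambda s: s, H, R)
    ```

    The hardware contribution of the thesis is making this update, over many landmarks, run in real time on an FPGA rather than in software.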

    HETEROGENEOUS MULTI-SENSOR FUSION FOR 2D AND 3D POSE ESTIMATION

    Sensor fusion is a process in which data from different sensors are combined to acquire an output that cannot be obtained from any individual sensor. This dissertation first considers a 2D image-level real-world problem from the rail industry and proposes a novel sensor fusion solution, then proceeds to the more complicated 3D problem of multi-sensor fusion for UAV pose estimation. One of the most important safety-related tasks in the rail industry is the early detection of defective rolling stock components. Railway wheels and wheel bearings are two components prone to damage due to their interactions with the brakes and the railway track, which makes them a high priority when the rail industry investigates improvements to current detection processes. The main contribution of this dissertation in this area is the development of a computer vision method for automatically detecting defective wheels that can potentially replace the current manual inspection procedure. The algorithm fuses images taken by wayside thermal and vision cameras and uses the outcome for wheel-defect detection. As a byproduct, the process also includes a method for detecting hot bearings from the same images. We evaluate our algorithm using simulated and real data images from UPRR in North America, and it is shown in this dissertation that sensor fusion techniques improve the accuracy of malfunction detection. After the 2D application, the more complicated 3D application is addressed. Precise, robust, and consistent localization is an important subject in many areas, such as vision-based control, path planning, and SLAM. Each of the sensors employed to estimate the pose has its strengths and weaknesses. Sensor fusion is a known approach that combines the data measured by different sensors to achieve a more accurate or complete pose estimate and to cope with sensor outages.
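A toy illustration of the 2D idea, fusing an aligned thermal/visible image pair and flagging hot regions, is sketched below. It assumes pre-registered images normalized to [0, 1]; the weighting and threshold are made-up values, and the dissertation's actual pipeline is considerably more involved:

```python
# Toy thermal/visible fusion for hot-bearing flagging. Assumes both
# images are already registered and scaled to [0, 1]; `alpha` and the
# threshold are illustrative, not the dissertation's parameters.
import numpy as np

def detect_hot_regions(thermal, visible, hot_thresh=0.8, alpha=0.6):
    """Weighted-average fusion plus a threshold on the thermal channel."""
    fused = alpha * thermal + (1 - alpha) * visible  # combined view for inspection
    hot = thermal > hot_thresh                       # hot-bearing candidate pixels
    return fused, hot

thermal = np.zeros((8, 8))
thermal[2:4, 2:4] = 0.95          # a small hot spot (e.g. an overheating bearing)
visible = np.full((8, 8), 0.5)    # flat visible image
fused, hot = detect_hot_regions(thermal, visible)
print(int(hot.sum()))             # 4 pixels flagged
```

A real system would precede this with image registration between the two cameras, which is where much of the dissertation's 2D contribution lies.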
In this dissertation, a new approach to 3D pose estimation for a UAV in an unknown, GPS-denied environment is presented. The proposed algorithm fuses data from an IMU, a camera, and a 2D LiDAR to achieve accurate localization. Among the employed sensors, LiDAR has not received proper attention in the past, mostly because a 2D LiDAR can only provide pose estimates in its scanning plane and thus cannot obtain a full pose estimate in a 3D environment. A novel method is introduced in this research that enables a 2D LiDAR to improve the accuracy of the full 3D pose estimate acquired from an IMU and a camera. To the best of our knowledge, a 2D LiDAR has never before been employed for 3D localization without a prior map, and it is shown in this dissertation that our method can significantly improve the precision of the localization algorithm. The proposed approach is evaluated and justified by simulation and real-world experiments.
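One simple way to picture such multi-sensor fusion is inverse-variance weighting of per-component estimates, where a sensor that cannot observe a component (e.g. a 2D LiDAR outside its scan plane) is simply omitted. This is a generic textbook sketch, not the dissertation's algorithm:

```python
# Inverse-variance fusion of scalar estimates from several sensors.
# The sensor values and variances below are made-up illustrative numbers.
import numpy as np

def fuse_estimates(estimates):
    """Fuse (value, variance) pairs for one pose component. Sensors that
    cannot observe the component are left out of the list entirely."""
    w = np.array([1.0 / var for _, var in estimates])   # inverse-variance weights
    v = np.array([val for val, _ in estimates])
    fused = (w * v).sum() / w.sum()
    fused_var = 1.0 / w.sum()                           # tighter than any input
    return fused, fused_var

# x lies in the LiDAR scan plane, so IMU, camera, and LiDAR all contribute;
# the precise LiDAR estimate dominates the result.
x, var = fuse_estimates([(1.02, 0.04),    # IMU-integrated
                         (0.98, 0.01),    # camera
                         (1.00, 0.002)])  # 2D LiDAR
```

The fused variance is always smaller than the best individual sensor's, which is the basic reason adding the LiDAR helps even though it only constrains in-plane motion.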

    Visual and Camera Sensors

    This book includes 13 papers published in the Special Issue "Visual and Camera Sensors" of the journal Sensors. The goal of this Special Issue was to invite high-quality, state-of-the-art research papers dealing with challenging issues in visual and camera sensors.

    Improved terrain type classification using UAV downwash dynamic texture effect

    Autonomous navigation in an unknown, dynamic environment, combined with the classification of various terrain types, remains a significant challenge for the computer vision research community. Addressing these problems is of great interest for the development of collaborative autonomous navigation robots. For example, an Unmanned Aerial Vehicle (UAV) can be used to determine a path, while an Unmanned Surface Vehicle (USV) follows that path to reach the target destination. For the UAV to determine whether a path is valid, it must be able to identify the type of terrain it is flying over. With the help of its rotor airflow (known as the downwash effect), it becomes possible to extract advanced texture features for terrain-type classification. This dissertation presents a complete analysis of the extraction of static and dynamic texture features, proposing various algorithms and analyzing their pros and cons. A UAV equipped with a single RGB camera was used to capture images, and a multilayer neural network was used for the automatic classification of water and non-water terrain by means of the downwash effect created by the UAV rotors. The terrain-type classification results are then merged into a georeferenced dynamic map, where it is possible to distinguish between water and non-water areas in real time. To improve the algorithms' processing time, several sequential processes were converted into parallel processes and executed on the UAV's onboard GPU with the CUDA framework, achieving speedups of up to 10x. A comparison between the processing times of these two modes, sequential on the CPU and parallel on the GPU, is also presented in this dissertation. All the algorithms were developed using open-source libraries and were analyzed and validated both in simulation and in real environments.
To evaluate the robustness of the proposed algorithms, the studied terrains were tested with and without the presence of the downwash effect. It was concluded that the classifier could be improved by combining static and dynamic features, achieving an accuracy higher than 99% in the classification of water and non-water terrain.

    Efficient recognition approaches for the interaction between humans and aerial robots

    This project consists of a set of computer vision methods that serve as a baseline for HRI experiments. The methods perform visual marker detection, face detection, and object recognition. We then tested some of the methods by developing a demonstration scenario.
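    As a sketch of the visual-marker-detection component, here is a brute-force normalized cross-correlation matcher. A real system would use a proper fiducial-marker library; the checkerboard "marker" below is made up for illustration:

    ```python
    # Sketch: locate a known marker pattern by normalized cross-correlation.
    # Brute force is fine for this tiny example, far too slow for real frames.
    import numpy as np

    def match_template(image, template):
        """Return the (row, col) of the best normalized-correlation match."""
        th, tw = template.shape
        t = template - template.mean()
        best, best_pos = -np.inf, (0, 0)
        for i in range(image.shape[0] - th + 1):
            for j in range(image.shape[1] - tw + 1):
                p = image[i:i + th, j:j + tw]
                p = p - p.mean()
                denom = np.sqrt((p * p).sum() * (t * t).sum())
                score = (p * t).sum() / denom if denom > 0 else 0.0
                if score > best:
                    best, best_pos = score, (i, j)
        return best_pos

    # Plant a 4x4 checkerboard "marker" at (5, 7) in an empty frame.
    marker = np.indices((4, 4)).sum(axis=0) % 2
    frame = np.zeros((20, 20))
    frame[5:9, 7:11] = marker
    print(match_template(frame, marker))  # (5, 7)
    ```

    Fiducial systems add an identity-decoding step on top of detection, so each marker also carries a payload the robot can act on.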