6,188 research outputs found

    Towards the development of a smart flying sensor: illustration in the field of precision agriculture

    Sensing is an important element for quantifying productivity and product quality and for making decisions. Applications such as mapping, surveillance, exploration and precision agriculture require a reliable platform for remote sensing. This paper presents the first steps towards the development of a smart flying sensor based on an unmanned aerial vehicle (UAV). The concept of smart remote sensing is illustrated and its performance tested for the task of mapping the volume of grain inside a trailer during forage harvesting. Novelty lies in: (1) the development of a position-estimation method with time-delay compensation based on inertial measurement unit (IMU) sensors and image processing; (2) a method to build a 3D map using information obtained from a regular camera; and (3) the design and implementation of a path-following control algorithm using model predictive control (MPC). Experimental results on a lab-scale system validate the effectiveness of the proposed methodology.
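
The first listed contribution, position estimation with time-delay compensation, can be illustrated with a minimal sketch (not the authors' implementation): a camera-based position fix arrives with a known processing latency, and buffered IMU velocity samples are integrated over that latency to propagate the fix to the current time. The class name, buffer sizes and the constant-velocity usage example below are illustrative assumptions.

```python
import numpy as np
from collections import deque

class DelayCompensatedPosition:
    """Propagate a delayed camera position fix to 'now' using buffered IMU velocity.

    Hypothetical sketch: assumes the image pipeline reports a 2D position with a
    known processing latency, while the IMU provides velocity at a higher rate.
    """

    def __init__(self, buffer_seconds=1.0, imu_rate_hz=100):
        self.dt = 1.0 / imu_rate_hz
        self.vel_buffer = deque(maxlen=int(buffer_seconds * imu_rate_hz))  # (t, vx, vy)

    def add_imu_velocity(self, t, vx, vy):
        self.vel_buffer.append((t, vx, vy))

    def compensate(self, cam_position, cam_timestamp, now):
        """Integrate buffered velocities from cam_timestamp up to now."""
        pos = np.asarray(cam_position, dtype=float)
        for t, vx, vy in self.vel_buffer:
            if cam_timestamp <= t < now:
                pos += np.array([vx, vy]) * self.dt
        return pos

# Usage: a fix computed from an image taken 0.15 s ago is shifted by the
# displacement accumulated (according to the IMU) during those 0.15 s.
comp = DelayCompensatedPosition()
for k in range(100):
    comp.add_imu_velocity(t=k * 0.01, vx=0.5, vy=0.0)   # constant 0.5 m/s forward
fix = comp.compensate(cam_position=[2.0, 1.0], cam_timestamp=0.85, now=1.0)
print(fix)  # approximately [2.075, 1.0]
```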

    The correlation between vehicle vertical dynamics and deep learning-based visual target state estimation: a sensitivity study

    Automated vehicles will provide greater transport convenience and interconnectivity, increase mobility options for young and elderly people, and reduce traffic congestion and emissions. However, the largest obstacle to the deployment of automated vehicles on public roads is their safety evaluation and validation. Undeniably, the role of cameras and Artificial Intelligence-based (AI) vision is vital in the perception of the driving environment and road safety. Although a significant number of studies on the detection and tracking of vehicles have been conducted, none of them focused on the role of vertical vehicle dynamics. For the first time, this paper analyzes and discusses the influence of road anomalies and vehicle suspension on the performance of detecting and tracking driving objects. To this end, we conducted an extensive road field study and validated a computational tool for performing the assessment using simulations. A parametric study revealed the cases where AI-based vision underperforms and may significantly degrade the safety performance of AVs.
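
As a hedged illustration of the kind of simulation such a computational tool might run (the paper does not disclose its model), the sketch below integrates a standard two-degree-of-freedom quarter-car model over a half-sine speed bump and reports the peak vertical displacement of the sprung mass, i.e. the body motion that perturbs a vehicle-mounted camera. All parameter values are generic textbook numbers, not values from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic quarter-car parameters (illustrative, not from the paper)
ms, mu = 400.0, 40.0        # sprung / unsprung mass [kg]
ks, cs = 20000.0, 1500.0    # suspension stiffness [N/m] and damping [N s/m]
kt = 180000.0               # tire stiffness [N/m]

def road_profile(t, speed=10.0, bump_height=0.05, bump_length=1.0):
    """Half-sine speed bump reached at t = 1 s while driving at 10 m/s."""
    x = speed * (t - 1.0)
    if 0.0 <= x <= bump_length:
        return bump_height * np.sin(np.pi * x / bump_length)
    return 0.0

def quarter_car(t, y):
    zs, vs, zu, vu = y                      # body pos/vel, wheel pos/vel
    zr = road_profile(t)
    f_susp = ks * (zu - zs) + cs * (vu - vs)   # suspension force on the body
    f_tire = kt * (zr - zu)                    # tire force on the wheel
    return [vs, f_susp / ms, vu, (-f_susp + f_tire) / mu]

sol = solve_ivp(quarter_car, (0.0, 4.0), [0, 0, 0, 0], max_step=1e-3)
print(f"peak body (camera) heave: {1000 * np.max(np.abs(sol.y[0])):.1f} mm")
```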

    Hierarchical Off-Road Path Planning and Its Validation Using a Scaled Autonomous Car

    In the last few years, while a lot of research effort has been spent on autonomous vehicle navigation, primarily focused on on-road vehicles, off-road path planning still presents new challenges. Path planning for an autonomous ground vehicle over a large horizon in an unstructured environment, when high-resolution a-priori information is available, is still very much an open problem due to the computations involved. Localization and control of an autonomous vehicle, and how the control algorithms interact with the path planner, is a complex task. The first part of this research details the development of a path decision support tool for off-road applications, implementing a novel hierarchical path planning framework, and its verification in a simulation environment. To mimic real-world issues such as communication delay, sensor noise and modeling error, it was important to validate the framework in a real environment. The second part of the research discusses the development of a scaled autonomous car as part of a real experimental environment, which offers a compromise in cost and implementation complexity compared to a full-scale car. The third part of the research explains the development of a vehicle-in-loop (VIL) environment, with demo examples to illustrate the utility of such a platform. Our proposed path planning algorithm mitigates the high computational cost of finding the optimal path over a large-scale, high-resolution map. A global path planner runs on a centralized server and uses Dynamic Programming (DP) with coarse information to create an optimal cost grid (a minimal sketch of this layer follows the abstract). A local path planner based on Model Predictive Control (MPC), running on board, uses the cost map along with high-resolution information (available via various sensors as well as V2V communication) to generate the local optimal path. Such an approach ensures the MPC follows a globally optimal path while being locally optimal. A central server efficiently creates and updates route-critical information available via vehicle-to-infrastructure (V2I) communication, while using the same information to update the prescribed global cost grid. For localization of the scaled car, a three-axis inertial measurement unit (IMU), wheel encoders, a global positioning system (GPS) unit and a mono-camera are mounted. Drift in the IMU is one of the major issues we addressed in this research, besides developing a low-level controller that helped in implementing the MPC in a constrained computational environment. Using a camera and a tire-edge detection algorithm, we developed an online steering angle measurement package as well as a steering angle estimation algorithm to be used when computational resources are low. We wanted to study the impact of connectivity on a fleet of vehicles running in off-road terrain. Running an entire fleet of real vehicles is costly and time-consuming, and some scenarios are difficult to recreate in reality and require a simulation environment. We therefore developed a vehicle-in-loop (VIL) platform using a VIL simulator, a central server and the real scaled car to combine the advantages of both real and simulated environments. As a demo example of the VIL platform's utility, we simulated an animal-crossing scenario and analyzed how our obstacle avoidance algorithm performs under different conditions. In the future this will help us analyze the impact of connectivity on platoons moving in off-road terrain.
For the vehicle-in-loop environment, we used the JavaScript Object Notation (JSON) data format for information exchange over the User Datagram Protocol (UDP) to implement Vehicle-to-Vehicle (V2V) communication, and a MySQL server for Vehicle-to-Infrastructure (V2I) communication.
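
A minimal sketch of the global layer described in this abstract, under assumed details: plain value-iteration dynamic programming over a coarse traversal-cost grid produces a cost-to-go map that a local MPC planner could then descend. The 4-connected motion model, the per-cell cost convention and the example map are illustrative, not the dissertation's implementation.

```python
import numpy as np

def cost_to_go(terrain_cost, goal):
    """Value iteration on a coarse grid: V[cell] = cost of the cheapest path to goal.

    terrain_cost: 2D array of per-cell traversal costs (illustrative units).
    goal: (row, col) index of the goal cell.
    """
    V = np.full(terrain_cost.shape, np.inf)
    V[goal] = 0.0
    for _ in range(V.size):                      # enough sweeps to converge
        V_old = V.copy()
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            neighbor = np.roll(V, shift, axis=axis)
            # Block the wrap-around introduced by np.roll at the grid border
            if axis == 0:
                neighbor[0 if shift == 1 else -1, :] = np.inf
            else:
                neighbor[:, 0 if shift == 1 else -1] = np.inf
            V = np.minimum(V, neighbor + terrain_cost)
        V[goal] = 0.0
        if np.allclose(V, V_old):
            break
    return V

# Usage: a 5x5 map with an expensive strip; a local planner would descend -grad(V).
terrain = np.ones((5, 5))
terrain[2, 1:4] = 10.0                            # e.g. a muddy patch
V = cost_to_go(terrain, goal=(4, 4))
print(np.round(V, 1))
```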

    Terrain Classification from Body-mounted Cameras during Human Locomotion

    This paper presents a novel algorithm for terrain type classification based on monocular video captured from the viewpoint of human locomotion. A texture-based algorithm is developed to classify the path ahead into multiple groups that can be used to support terrain classification. Gait is taken into account in two ways. Firstly, for key frame selection, when regions with homogeneous texture characteristics are updated, the frequency variations of the textured surface are analysed and used to adaptively define filter coefficients. Secondly, it is incorporated in the parameter estimation process, where probabilities of path consistency are employed to improve terrain-type estimation. When tested with multiple classes that directly affect mobility (a hard surface, a soft surface and an unwalkable area), our proposed method outperforms existing methods by up to 16%, and also provides improved robustness. Index Terms: texture, classification, recursive filter, terrain classification
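
The path-consistency idea can be illustrated with a hedged sketch: per-patch texture features are scored by a plain nearest-centroid classifier, and the class belief is smoothed from key frame to key frame with an exponential recursive filter. The features, centroids and smoothing constant below are illustrative stand-ins, not the paper's actual descriptors or filter design.

```python
import numpy as np

CLASSES = ["hard", "soft", "unwalkable"]

def texture_features(patch):
    """Simple texture descriptors for a grayscale patch: mean, contrast, edge energy."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.std(), np.mean(gx**2 + gy**2)])

def classify(patch, centroids):
    """Nearest-centroid distances turned into a probability-like score vector."""
    d = np.linalg.norm(centroids - texture_features(patch), axis=1)
    scores = 1.0 / (d + 1e-6)
    return scores / scores.sum()

def recursive_update(prev_prob, new_prob, alpha=0.7):
    """Path-consistency prior: blend the previous belief with the new observation."""
    p = alpha * prev_prob + (1.0 - alpha) * new_prob
    return p / p.sum()

# Usage with synthetic patches and hand-picked centroids (illustrative only)
rng = np.random.default_rng(0)
centroids = np.array([[120.0, 5.0, 10.0],     # hard: bright, smooth
                      [80.0, 20.0, 200.0],    # soft: darker, textured
                      [40.0, 40.0, 800.0]])   # unwalkable: high contrast
belief = np.ones(3) / 3
for _ in range(5):                            # five consecutive key frames
    patch = rng.normal(120, 5, size=(32, 32)) # looks like a hard surface
    belief = recursive_update(belief, classify(patch, centroids))
print(CLASSES[int(np.argmax(belief))], np.round(belief, 2))
```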

    Server Based Wireless Motion Detection System Using PIR Sensor

    Detecting human movement plays a vital role in this industrial era, where it can be effectively used in industrial sectors dealing with reactors and boilers. A server-based automated system that allows only authorized persons to enter a hazardous zone prevents the system from being misled by unknown persons. In this research article, a server-based motion detection module involving a PIR sensor and GSM is proposed. It uses two modules: one for digital image processing and one for the embedded system. The PIR sensor detects any obstacle within its field. Once an obstacle is detected, the camera is switched on. The camera captures an image of the approaching subject and checks whether it is a human or an animal; if it is a human, the image is compared with the database of input images to identify an unauthorized person.
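
A hedged sketch of the control flow described above, assuming a Raspberry Pi-style GPIO pin for the PIR output and OpenCV for camera capture and face detection; the database comparison step is a hypothetical placeholder (match_against_database) because the article does not specify the recognition method, and the pin number is illustrative.

```python
import time
import cv2                      # OpenCV for camera capture and face detection
import RPi.GPIO as GPIO         # assumes a Raspberry Pi-style GPIO interface

PIR_PIN = 17                    # illustrative pin number

def match_against_database(face_img):
    """Hypothetical placeholder: compare the face crop with stored authorized faces."""
    return False                # treat everyone as unauthorized in this sketch

def main():
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(PIR_PIN, GPIO.IN)
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cam = cv2.VideoCapture(0)
    try:
        while True:
            if GPIO.input(PIR_PIN):            # motion detected by the PIR sensor
                ok, frame = cam.read()
                if not ok:
                    continue
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                faces = face_cascade.detectMultiScale(gray, 1.1, 5)
                for (x, y, w, h) in faces:     # a detected face implies a human
                    if not match_against_database(gray[y:y+h, x:x+w]):
                        print("Unauthorized person detected")  # e.g. raise a GSM alert
            time.sleep(0.2)
    finally:
        cam.release()
        GPIO.cleanup()

if __name__ == "__main__":
    main()
```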

    Collision detection for UAVs using Event Cameras

    This dissertation explores the use of event cameras for collision detection in unmanned aerial vehicles (UAVs). Traditional cameras have been widely used in UAVs for obstacle avoidance and navigation, but they suffer from high latency and low dynamic range. Event cameras, on the other hand, capture only the changes in the scene and can operate at high speeds with low latency. The goal of this research is to investigate the potential of event cameras in UAV collision detection, which is crucial for safe operation in complex and dynamic environments. The dissertation presents a review of the current state of the art in the field and evaluates a developed algorithm for event-based collision detection for UAVs. The performance of the algorithm was tested through practical experiments in which 9 sequences of events were recorded using an event camera, depicting different scenarios with stationary and moving objects as obstacles. Simultaneously, inertial measurement unit (IMU) data was collected to provide additional information about the UAV's movement. The recorded data was then processed using the proposed event-based collision detection algorithm for UAVs, which consists of four components: ego-motion compensation, normalized mean timestamp, morphological operations, and clustering. Firstly, the ego-motion component compensates for the UAV's motion by estimating its rotational movement using the IMU data. Next, the normalized mean timestamp component calculates the mean timestamp of each event and normalizes it, helping to reduce the noise in the event data and improving the accuracy of collision detection. The morphological operations component applies mathematical operations such as erosion and dilation to the event data to remove small noise and enhance the edges of objects. Finally, the last component uses a clustering method called DBSCAN to group the events, allowing for the detection of objects and estimation of their positions. This step provides the final output of the collision detection algorithm, which can be used for obstacle avoidance and navigation in UAVs. The algorithm was evaluated based on its accuracy, latency, and computational efficiency. The findings demonstrate that event-based collision detection has the potential to be an effective and efficient method for detecting collisions in UAVs, with high accuracy and low latency. These results suggest that event cameras could be beneficial for enhancing the safety and dependability of UAVs in challenging situations. Moreover, the datasets and algorithm developed in this research are made publicly available, facilitating the evaluation and enhancement of the algorithm for specific applications. This approach could encourage collaboration among researchers and enable further comparisons and investigations.
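
The four processing stages enumerated in the abstract can be summarized with a hedged sketch: events are warped to undo the IMU-estimated rotation, binned into a normalized mean-timestamp image, cleaned with morphological opening, and grouped with DBSCAN. The event format, the small-angle single-axis rotation model and all thresholds below are assumptions, not the thesis' exact implementation.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import DBSCAN

def detect_obstacles(events, omega, dt, shape=(480, 640), thresh=0.6):
    """events: array of rows (x, y, t) with t in [0, dt]; omega: (wx, wy, wz) from the IMU.

    Returns one (mean x, mean y) per detected cluster of 'recent' events.
    """
    x, y, t = events[:, 0], events[:, 1], events[:, 2]

    # 1) Ego-motion compensation: undo an in-plane rotation omega_z * t about
    #    the image centre (simplified single-axis, small-angle model).
    cx, cy = shape[1] / 2, shape[0] / 2
    ang = -omega[2] * t
    xr = cx + (x - cx) * np.cos(ang) - (y - cy) * np.sin(ang)
    yr = cy + (x - cx) * np.sin(ang) + (y - cy) * np.cos(ang)

    # 2) Normalized mean timestamp per pixel: recent events score close to 1.
    img = np.zeros(shape)
    cnt = np.zeros(shape)
    xi = np.clip(xr.astype(int), 0, shape[1] - 1)
    yi = np.clip(yr.astype(int), 0, shape[0] - 1)
    np.add.at(img, (yi, xi), t / dt)
    np.add.at(cnt, (yi, xi), 1.0)
    mean_ts = np.divide(img, cnt, out=np.zeros(shape), where=cnt > 0)

    # 3) Morphological opening removes isolated noisy pixels.
    mask = ndimage.binary_opening(mean_ts > thresh, structure=np.ones((3, 3)))

    # 4) DBSCAN groups the surviving pixels into candidate obstacles.
    pts = np.column_stack(np.nonzero(mask))
    if len(pts) == 0:
        return []
    labels = DBSCAN(eps=5, min_samples=10).fit_predict(pts)
    return [pts[labels == k].mean(axis=0)[::-1] for k in set(labels) if k != -1]
```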