    Radar/electro-optical data fusion for non-cooperative UAS sense and avoid

    This paper focuses on the hardware/software implementation and flight results of a multi-sensor obstacle detection and tracking system based on radar/electro-optical (EO) data fusion. The sensing system was installed onboard an optionally piloted very light aircraft (VLA). Test flights with a single intruder aircraft of the same class were carried out to evaluate the achievable level of situational awareness and the capability to support autonomous collision avoidance. The system architecture is presented, with special emphasis on the solutions adopted for real-time integration of sensor and navigation measurements and for high-accuracy estimation of sensor alignment. On the basis of Global Positioning System (GPS) navigation data gathered simultaneously with the multi-sensor tracking flight experiments, the potential of radar/EO fusion is compared with standalone radar tracking. Flight results demonstrate a significant improvement in collision detection performance, mostly due to the improved angular rate estimation accuracy, and confirm the effectiveness of data fusion in addressing EO detection issues. Relative sensor alignment, navigation unit performance, and cross-sensor cueing are found to be key factors in fully exploiting the potential of multi-sensor architectures.
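
    As an illustrative aside (not from the paper), a minimal sketch of why fusing an accurate EO bearing with a coarser radar bearing sharpens angular-rate estimates; the noise figures and variable names below are assumed values chosen only for the example:

        import numpy as np

        # Assumed 1-sigma azimuth accuracies (radians); illustrative values only.
        SIGMA_AZ_RADAR = np.deg2rad(1.0)   # radar azimuth noise
        SIGMA_AZ_EO = np.deg2rad(0.05)     # electro-optical azimuth noise

        def fuse_azimuth(az_radar, az_eo):
            """Variance-weighted fusion of two azimuth measurements of the same target."""
            w_r = 1.0 / SIGMA_AZ_RADAR**2
            w_e = 1.0 / SIGMA_AZ_EO**2
            return (w_r * az_radar + w_e * az_eo) / (w_r + w_e)

        def angular_rate(azimuths, dt):
            """Finite-difference estimate of the line-of-sight angular rate."""
            return np.diff(azimuths) / dt

        # Simulated encounter: constant true azimuth rate, radar-only vs fused tracks.
        rng = np.random.default_rng(0)
        dt, n = 0.1, 50
        true_az = 0.2 + 0.01 * np.arange(n) * dt
        az_radar = true_az + rng.normal(0.0, SIGMA_AZ_RADAR, n)
        az_eo = true_az + rng.normal(0.0, SIGMA_AZ_EO, n)
        az_fused = fuse_azimuth(az_radar, az_eo)

        print("radar-only angular-rate scatter:", angular_rate(az_radar, dt).std())
        print("fused angular-rate scatter:     ", angular_rate(az_fused, dt).std())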

    Automatic Adaptation of Airport Surface Surveillance to Sensor Quality

    This paper describes a novel method to enhance the airport surveillance systems currently used in Advanced Surface Movement Guidance and Control Systems (A-SMGCS). The proposed method allows for automatic calibration of the measurement models and enhanced detection of non-ideal situations, increasing the integrity of the surveillance products. It is based on the definition of a set of observables from the surveillance processing chain and on a rule-based expert system that adapts the data processing method accordingly.
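
    A minimal sketch of the general idea of rule-based adaptation driven by observables of the processing chain (the observable, the thresholds, and the noise values are hypothetical, not taken from the paper):

        import numpy as np

        def nis(innovation, S):
            """Normalised innovation squared, a typical observable of a tracking filter."""
            return float(innovation.T @ np.linalg.inv(S) @ innovation)

        def adapt_measurement_noise(R, nis_window, dof, inflate=2.0, deflate=0.9):
            """Toy rule base: inflate R when innovations are statistically inconsistent
            with the assumed model, relax it when the model looks pessimistic."""
            mean_nis = np.mean(nis_window)
            if mean_nis > 2.0 * dof:       # measurements noisier than modelled
                return R * inflate
            if mean_nis < 0.5 * dof:       # model over-estimates the noise
                return R * deflate
            return R

        R = np.diag([25.0, 25.0])          # assumed x/y measurement noise (m^2)
        innovations = [np.array([[12.0], [9.0]]) for _ in range(10)]
        window = [nis(v, R + 4.0 * np.eye(2)) for v in innovations]
        R = adapt_measurement_noise(R, window, dof=2)
        print(R)                           # inflated, since the innovations are too large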

    Robust Multi-Object Tracking: A Labeled Random Finite Set Approach

    The labeled random finite set-based generalized multi-Bernoulli filter is a tractable analytic solution to the multi-object tracking problem. The robustness of this filter depends on certain knowledge of the multi-object system being available to it. This dissertation presents techniques for robust tracking, constructed upon the labeled random finite set framework, for the case where complete information regarding the system is unavailable.

    Track-to-track association for intelligent vehicles by preserving local track geometry

    Track-to-track association (T2TA) is a challenging task in situational awareness for intelligent vehicles and surveillance systems. In this paper, the problem of track-to-track association with sensor bias (T2TASB) is considered. Traditional T2TASB algorithms only consider a statistical distance cost between local tracks from different sensors, without exploiting the geometric relationship between one track and its neighboring ones from each sensor. However, the relative geometry among neighboring local tracks is usually stable, at least for a while, and is thus helpful in improving the T2TASB. In this paper, we propose a probabilistic method, called the local track geometry preservation (LTGP) algorithm, which takes advantage of the geometry of tracks. Assuming that the local tracks of one sensor are represented by Gaussian mixture model (GMM) centroids, the corresponding local tracks of the other sensor are fitted to those of the first sensor. In this regard, a geometrical descriptor connectivity matrix is constructed to exploit the relative geometry of these tracks. The track association problem is formulated as a maximum likelihood estimation problem with a local track geometry constraint, and an expectation–maximization (EM) algorithm is developed to find the solution. Simulation results demonstrate that the proposed methods offer better performance than the state-of-the-art methods.
    The authors gratefully acknowledge the Autonomous Vision Group for providing the KITTI dataset, and thank the editors and referees for their valuable comments and suggestions. This work was supported by the Research Funds of Chongqing Science and Technology Commission, the National Natural Science Foundation of China, the Key Project of Crossing and Emerging Area of CQUPT, the Research Fund of young-backbone university teachers in Chongqing province, the Chongqing Overseas Scholars Innovation Program, the Wenfeng Talents of Chongqing University of Posts and Telecommunications, the Innovation Team Project of Chongqing Education Committee, the National Key Research and Development Program, the Research and Innovation of Chongqing Postgraduate Project, and the Lilong Innovation and Entrepreneurship Fund of Chongqing University of Posts and Telecommunications.
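
    As an illustrative aside (not the LTGP algorithm itself), a minimal sketch of the underlying GMM/EM idea: treat one sensor's tracks as mixture centroids, then jointly estimate the other sensor's bias and the soft associations rather than matching by pairwise distance alone. All names, values and the translation-only bias model are assumptions for the example:

        import numpy as np

        def em_translation_association(tracks_a, tracks_b, sigma=5.0, n_iter=30):
            """EM sketch: tracks_a act as GMM centroids; jointly estimate the translation
            bias of sensor B and the soft association (responsibility) matrix."""
            t = np.zeros(2)                                     # unknown sensor bias
            for _ in range(n_iter):
                shifted = tracks_b + t                          # E-step
                d2 = ((shifted[:, None, :] - tracks_a[None, :, :]) ** 2).sum(-1)
                resp = np.exp(-0.5 * d2 / sigma**2)
                resp /= resp.sum(axis=1, keepdims=True) + 1e-12
                target = resp @ tracks_a                        # M-step: translation that
                t = (target - tracks_b).mean(axis=0)            # best explains B's tracks
            return t, resp

        # Two sensors observe the same three targets; sensor B carries a constant bias.
        tracks_a = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 5.0]])
        bias = np.array([3.0, -2.0])
        tracks_b = tracks_a - bias + np.random.default_rng(1).normal(0.0, 0.3, (3, 2))

        t_hat, resp = em_translation_association(tracks_a, tracks_b)
        print("estimated bias:", t_hat)
        print("association matrix (rows: sensor-B tracks):")
        print(resp.round(2))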

    Fruit Detection and Tree Segmentation for Yield Mapping in Orchards

    Accurate information gathering and processing is critical for precision horticulture, as growers aim to optimise their farm management practices. An accurate inventory of the crop that details its spatial distribution along with health and maturity can help farmers efficiently target processes such as chemical and fertiliser spraying, crop thinning, harvest management, labour planning and marketing. Growers have traditionally obtained this information by using manual sampling techniques, which tend to be labour intensive, spatially sparse, expensive, inaccurate and prone to subjective biases. Recent advances in sensing and automation for field robotics allow key measurements to be made for individual plants throughout an orchard in a timely and accurate manner. Farmer-operated machines or unmanned robotic platforms can be equipped with a range of sensors to capture a detailed representation over large areas. Robust and accurate data processing techniques are therefore required to extract the high-level information needed by the grower to support precision farming.

    This thesis focuses on yield mapping in orchards using image and light detection and ranging (LiDAR) data captured with an unmanned ground vehicle (UGV). The contribution is a framework and the algorithmic components for orchard mapping and yield estimation that are applicable to different fruit types and orchard configurations. The framework includes detection of fruits in individual images and tracking of them over subsequent frames. The fruit counts are then associated with individual trees, which are segmented from image and LiDAR data, resulting in a structured spatial representation of yield.

    The first contribution of this thesis is the development of a generic and robust fruit detection algorithm. Images captured in the outdoor environment are susceptible to highly variable external factors that lead to significant appearance variations. Specifically in orchards, variability is caused by changes in illumination, target pose, tree types, etc. The proposed techniques address these issues by using state-of-the-art feature learning approaches for image classification, while investigating the utility of orchard domain knowledge for fruit detection. Detection is performed using both pixel-wise classification of images followed by instance segmentation, and bounding-box regression approaches. The experimental results illustrate the versatility of complex deep learning approaches over a multitude of fruit types.

    The second contribution of this thesis is a tree segmentation approach to detect the individual trees that serve as a standard unit for structured orchard information systems. The work focuses on trellised trees, which present unique challenges for segmentation algorithms due to their intertwined nature. LiDAR data are used to segment the trellis face and to generate proposals for individual tree trunks. Additional trunk proposals are provided by pixel-wise classification of the image data. The multi-modal observations are fine-tuned by modelling trunk locations using a hidden semi-Markov model (HSMM), within which prior knowledge of tree spacing is incorporated.

    The final component of this thesis addresses the visual occlusion of fruit within geometrically complex canopies by using a multi-view detection and tracking approach. Single-image fruit detections are tracked over a sequence of images and associated with individual trees or farm rows, with the spatial distribution of the fruit counts forming a yield map over the farm. The results show the advantage of using multi-view imagery (instead of single-view analysis) for fruit counting and yield mapping.

    This thesis includes extensive experimentation in almond, apple and mango orchards, with data captured by a UGV spanning a total of 5 hectares of farm area, over 30 km of vehicle traversal and more than 7,000 trees. The validation of the different processes is performed using manual annotations, which include fruit and tree locations in image and LiDAR data respectively. Additional evaluation of yield mapping is performed by comparison against fruit counts on trees at the farm and counts made by the growers post-harvest. The framework developed in this thesis is demonstrated to be accurate compared to ground truth at all scales of the pipeline, including fruit detection and tree mapping, leading to accurate yield estimation, per tree and per row, for the different crops. Through the multitude of field experiments conducted over multiple seasons and years, the thesis presents key practical insights necessary for the commercial development of an information gathering system in orchards.
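
    As an illustrative aside (not the thesis implementation), a minimal sketch of the multi-view counting step: detections are associated frame to frame so that each fruit is counted once rather than once per image. The greedy IoU matcher and the toy boxes below are assumptions for the example:

        def iou(a, b):
            """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, x2 - x1) * max(0, y2 - y1)
            area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
            return inter / (area(a) + area(b) - inter + 1e-9)

        def count_fruit(frames, iou_thresh=0.3):
            """Greedy frame-to-frame association: a detection that overlaps a live track
            extends it; an unmatched detection starts (and counts) a new track."""
            tracks, n_tracks = [], 0
            for detections in frames:
                new_tracks, unmatched = [], list(detections)
                for t in tracks:
                    best = max(unmatched, key=lambda d: iou(t, d), default=None)
                    if best is not None and iou(t, best) >= iou_thresh:
                        new_tracks.append(best)
                        unmatched.remove(best)
                n_tracks += len(unmatched)      # new fruit entering the field of view
                tracks = new_tracks + unmatched
            return n_tracks

        # Two frames from a camera moving along the row: the same two fruit, shifted.
        frames = [[(10, 10, 30, 30), (100, 40, 120, 60)],
                  [(14, 10, 34, 30), (104, 40, 124, 60)]]
        print("fruit counted:", count_fruit(frames))    # 2, not 4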

    Fish tracking using detection in Aquaculture: A Pilot Study

    This pilot study uses two different detection models and combines them with a tracking algorithm to track fish, a capability intended to support further fish welfare applications.

    Vision based strategies for implementing Sense and Avoid capabilities onboard Unmanned Aerial Systems

    Current research activities aim to develop fully autonomous unmanned platforms equipped with Sense and Avoid technologies, in order to gain access to the National Airspace System (NAS) and fly alongside manned airplanes. The TECVOL project fits within this framework, aiming at developing a prototype autonomous Unmanned Aerial Vehicle that performs Detect, Sense and Avoid functions by means of an integrated sensor package composed of a pulsed radar and four electro-optical cameras, two visible and two infrared. The project is carried out by the Italian Aerospace Research Center in collaboration with the Department of Aerospace Engineering of the University of Naples “Federico II”, which has been involved in developing the Obstacle Detection and IDentification system. This thesis concerns the image processing technique customized for the Sense and Avoid applications in the TECVOL project, where the EO system plays an auxiliary role to the radar, which is the main sensor. In particular, the panchromatic camera performs the aiding function of object detection, in order to increase the accuracy and data-rate performance of the radar system. The thesis describes the steps implemented to evaluate the most suitable panchromatic camera image processing technique for these applications, the test strategies adopted to study its performance, and the analysis conducted to optimize it in terms of false alarms, missed detections and detection range. Finally, test results are presented, demonstrating that the electro-optical sensor is beneficial to the overall Detect, Sense and Avoid system, improving its object detection and tracking performance.
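
    As an illustrative aside (not the TECVOL implementation), a minimal sketch of radar-cued detection in a panchromatic image, where restricting the search to a window around the cue and raising the threshold trades false alarms against missed detections; all window sizes, thresholds and pixel values are assumed for the example:

        import numpy as np

        def cued_detection(image, cue_px, window=40, k=4.0):
            """Search for a small bright target only inside a window centred on the
            radar-cued pixel; the mean + k*std threshold limits the false-alarm rate."""
            r, c = cue_px
            r0, c0 = max(0, r - window), max(0, c - window)
            patch = image[r0:r + window, c0:c + window]
            thresh = patch.mean() + k * patch.std()
            hits = np.argwhere(patch > thresh)
            if hits.size == 0:
                return None                               # missed detection
            best = hits[np.argmax(patch[hits[:, 0], hits[:, 1]])]
            return best[0] + r0, best[1] + c0             # full-image coordinates

        rng = np.random.default_rng(2)
        img = rng.normal(100.0, 5.0, (480, 640))          # synthetic panchromatic frame
        img[212, 318] = 180.0                              # faint intruder
        print(cued_detection(img, cue_px=(210, 320)))      # -> (212, 318)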

    Exploring space situational awareness using neuromorphic event-based cameras

    The orbits around Earth are a limited natural resource, one that hosts a vast range of vital space-based systems supporting commercial industry, civil organisations, and national defence. The availability of this resource is rapidly depleting due to the ever-growing presence of space debris and overcrowding, especially in the limited and highly desirable slots in geosynchronous orbit. The field of Space Situational Awareness encompasses tasks aimed at mitigating these hazards to on-orbit systems through the monitoring of satellite traffic; essential to this task is the collection of accurate and timely observation data. This thesis explores the use of a novel sensor paradigm to optically collect and process observation data in order to enhance space situational awareness tasks, which is critical to ensuring that we can continue to utilise the space environment in a sustainable way. These tasks, however, pose significant engineering challenges involving the detection and characterisation of faint, highly distant, and high-speed targets. Recent advances in neuromorphic engineering have led to the availability of high-quality neuromorphic event-based cameras that provide a promising alternative to the conventional cameras used in space imaging. These cameras offer the potential to improve the capabilities of existing space tracking systems and have been shown to detect and track satellites, or ‘Resident Space Objects’, at low data rates, high temporal resolutions, and in conditions typically unsuitable for conventional optical cameras. This thesis presents a thorough exploration of neuromorphic event-based cameras for space situational awareness tasks and establishes a rigorous foundation for event-based space imaging. The work conducted in this project demonstrates how to enable event-based space imaging systems that serve the goals of space situational awareness by providing accurate and timely information on the space domain. By developing and implementing event-based processing techniques, the asynchronous operation, high temporal resolution, and dynamic range of these novel sensors are leveraged to provide low-latency target acquisition and rapid reaction to challenging satellite tracking scenarios. The algorithms and experiments developed in this thesis study the properties and trade-offs of event-based space imaging and provide comparisons with traditional observing methods and conventional frame-based sensors. The outcomes of this thesis demonstrate the viability of event-based cameras for tracking and space imaging tasks, and thereby contribute to the growing efforts of the international space situational awareness community and to the development of event-based technology in astronomy and space science applications.
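
    As an illustrative aside (not drawn from the thesis), a minimal sketch of a common first step in event-based detection: a spatio-temporal correlation filter that keeps an event only when a neighbouring pixel fired recently, suppressing isolated sensor noise while preserving a slowly drifting target. The sensor resolution, time constant and toy event stream are assumed values:

        import numpy as np

        def denoise_events(events, dt_max=5e-3, radius=1, shape=(260, 346)):
            """Keep an event only if a neighbouring pixel fired within dt_max seconds.
            Isolated sensor noise rarely has such support; a drifting target does."""
            last_ts = np.full(shape, -np.inf)      # per-pixel timestamp memory
            kept = []
            for t, x, y, p in events:              # (time [s], column, row, polarity)
                y0, y1 = max(0, y - radius), min(shape[0], y + radius + 1)
                x0, x1 = max(0, x - radius), min(shape[1], x + radius + 1)
                if (t - last_ts[y0:y1, x0:x1]).min() < dt_max:
                    kept.append((t, x, y, p))
                last_ts[y, x] = t
            return kept

        # Toy stream: a target drifting along one row, plus two isolated noise events.
        events = [(0.000, 100, 50, 1), (0.001, 101, 50, 1), (0.002, 102, 50, 1),
                  (0.003, 103, 50, 1), (0.010, 300, 200, 1), (0.020, 12, 7, 0)]
        print(denoise_events(events))   # noise rejected; target events after the first are kept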