217 research outputs found

    Integrating Millimeter Wave Radar with a Monocular Vision Sensor for On-Road Obstacle Detection Applications

    This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. A three-level fusion strategy, inspired by the visual attention mechanism and the driver's visual awareness, is proposed so that the combined MMW radar and monocular vision system achieves better overall performance. An experimental method for radar-vision point alignment is then put forward; it is easy to operate and requires neither radar reflection intensity data nor special tools. Furthermore, a region searching approach for potential target detection is derived to reduce image processing time. An adaptive thresholding algorithm based on a new interpretation of shadows in the image is adopted for obstacle detection, and edge detection assists in determining obstacle boundaries. The proposed fusion approach is verified through real on-road vehicle/pedestrian detection experiments, and the results show that the method is simple and feasible.
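    The shadow-based adaptive thresholding step lends itself to a compact illustration. The Python sketch below shows one plausible form of the idea, assuming a radar-projected region of interest, a 5th-percentile darkness rule, and OpenCV's Canny detector; it is an illustrative sketch, not the authors' implementation.

```python
# Illustrative sketch only: shadow-based adaptive thresholding inside a
# radar-projected region of interest (ROI). ROI layout, the percentile rule
# and the Canny thresholds are assumptions, not the paper's parameters.
import cv2
import numpy as np

def detect_obstacle_in_roi(gray_image, roi):
    """Threshold the dark under-vehicle shadow inside a radar-derived ROI."""
    x, y, w, h = roi                       # assumed ROI from radar projection (pixels)
    patch = gray_image[y:y + h, x:x + w]

    # Adaptive threshold: treat the darkest few percent of the ROI as shadow.
    thr = np.percentile(patch, 5)          # assumed 5th-percentile darkness rule
    shadow_mask = (patch < thr).astype(np.uint8) * 255

    # Edge detection helps delimit the obstacle boundary above the shadow.
    edges = cv2.Canny(patch, 50, 150)      # assumed Canny thresholds
    return shadow_mask, edges

# Usage: mask, edges = detect_obstacle_in_roi(gray_frame, roi=(200, 150, 120, 80))
```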

    A unified approach to cooperative and non-cooperative sense-and-avoid

    Cooperative and non-cooperative Sense-and-Avoid (SAA) capabilities are key enablers for Unmanned Aerial Vehicles (UAVs) to safely and routinely access all classes of airspace. In this paper, state-of-the-art cooperative and non-cooperative SAA sensor/system technologies for small-to-medium size UAVs are identified and the associated multi-sensor data fusion techniques are introduced. A reference SAA system architecture is presented based on Boolean Decision Logics (BDL) for selecting and sorting non-cooperative and cooperative sensors/systems, including both passive and active Forward Looking Sensors (FLS), the Traffic Collision Avoidance System (TCAS) and Automatic Dependent Surveillance - Broadcast (ADS-B). After elaborating the SAA system processes, the key mathematical models associated with both non-cooperative and cooperative SAA functions are presented. The analytical models adopted to compute the overall uncertainty volume in the airspace surrounding an intruder are described. Based on these mathematical models, the SAA Unified Method (SUM) for cooperative and non-cooperative SAA is presented. In this unified approach, navigation and tracking errors affecting the measurements are considered and translated into unified range and bearing uncertainty descriptors, which apply to both cooperative and non-cooperative scenarios. Simulation case studies are carried out to evaluate the performance of the proposed SAA approach on a representative host platform (AEROSONDE UAV) and various intruder platforms. Results corroborate the validity of the proposed approach and demonstrate the impact of SUM towards providing a cohesive logical framework for the development of an airworthy SAA capability, which provides a pathway for manned/unmanned aircraft coexistence in all classes of airspace.
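    The unified range and bearing uncertainty descriptors can be illustrated with a small numerical sketch. The root-sum-square combination of navigation and tracking errors below is an assumption made for illustration, not necessarily the exact SUM formulation.

```python
# Minimal numerical sketch: combine navigation and tracking errors into
# unified range/bearing uncertainty descriptors. A plain root-sum-square
# combination is assumed here; the example error values are arbitrary.
import math

def unified_uncertainty(sigma_nav_range, sigma_trk_range,
                        sigma_nav_bearing, sigma_trk_bearing):
    """Return 1-sigma range [m] and bearing [rad] uncertainty descriptors."""
    sigma_range = math.hypot(sigma_nav_range, sigma_trk_range)
    sigma_bearing = math.hypot(sigma_nav_bearing, sigma_trk_bearing)
    return sigma_range, sigma_bearing

# Example: 15 m navigation error, 25 m tracking error, 0.5 deg / 1.0 deg bearing errors.
s_r, s_b = unified_uncertainty(15.0, 25.0, math.radians(0.5), math.radians(1.0))
print(f"range sigma = {s_r:.1f} m, bearing sigma = {math.degrees(s_b):.2f} deg")
```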

    Toward 3D reconstruction of outdoor scenes using an MMW radar and a monocular vision sensor

    In this paper, we introduce a geometric method for 3D reconstruction of the exterior environment using a panoramic microwave radar and a camera. We rely on the complementarity of these two sensors: the robustness to environmental conditions and depth detection ability of the radar on the one hand, and the high spatial resolution of a vision sensor on the other. Firstly, geometric modeling of each sensor and of the entire system is presented. Secondly, we address the global calibration problem, which consists of finding the exact transformation between the sensors' coordinate systems. Two implementation methods are proposed and compared, based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences. Unlike existing methods, no special configuration of the 3D points is required for calibration. This makes the methods flexible and easy to use by a non-expert operator. Finally, we present a very simple, yet robust 3D reconstruction method based on the sensors' geometry. This method enables observed features to be reconstructed in 3D from a single acquisition (static sensor), a capability not always offered by the state of the art for outdoor scene reconstruction. The proposed methods have been validated with synthetic and real data.
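    The global calibration step, which minimises a non-linear criterion over radar-to-image target correspondences, can be sketched as a reprojection-error least-squares problem. The pinhole intrinsics, the axis-angle parameterisation and the SciPy solver below are illustrative assumptions, not the paper's code.

```python
# Hedged sketch: estimate the radar-to-camera transform by minimising the
# reprojection error of radar targets in the image. Intrinsics and the
# parameterisation are assumptions chosen for illustration.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[800.0, 0.0, 320.0],        # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(params, radar_pts):
    """Map Nx3 radar-frame points into pixel coordinates with a pinhole model."""
    rvec, t = params[:3], params[3:6]      # axis-angle rotation + translation
    cam_pts = Rotation.from_rotvec(rvec).apply(radar_pts) + t
    uv = (K @ cam_pts.T).T
    return uv[:, :2] / uv[:, 2:3]

def residuals(params, radar_pts, image_pts):
    return (project(params, radar_pts) - image_pts).ravel()

def calibrate(radar_pts, image_pts):
    """radar_pts: Nx3 target positions (radar frame); image_pts: Nx2 pixel detections."""
    x0 = np.zeros(6)                       # start from identity rotation, zero translation
    return least_squares(residuals, x0, args=(radar_pts, image_pts)).x
```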

    Avionics sensor fusion for small size unmanned aircraft Sense-and-Avoid

    Cooperative and non-cooperative Sense-and-Avoid (SAA) systems are key enablers for Unmanned Aircraft (UA) to routinely access non-segregated airspace. In this paper, some state-of-the-art cooperative and non-cooperative sensor and system technologies are investigated for small size UA applications, and the associated multisensor data fusion techniques are discussed. Non-cooperative sensors, including both passive and active Forward Looking Sensors (FLS), and cooperative systems, including the Traffic Collision Avoidance System (TCAS), Automatic Dependent Surveillance - Broadcast (ADS-B) system and/or Mode C transponders, are part of the proposed SAA architecture. After introducing the SAA system processes, the key mathematical models for data fusion are presented. The Interacting Multiple Model (IMM) algorithm is used to estimate the state vector of the intruders, which is then propagated to predict future trajectories using a probabilistic model. Adopting these mathematical models, conflict detection and resolution strategies for both cooperative and non-cooperative intruders are identified. Additionally, a detailed error analysis is performed to determine the overall uncertainty volume in the airspace surrounding the intruder tracks. This is accomplished by considering both the navigation and the tracking errors affecting the measurements and translating them into unified range and bearing uncertainty descriptors, which apply to both cooperative and non-cooperative scenarios. Detailed simulation case studies are carried out to evaluate the performance of the proposed SAA approach on a representative host platform (AEROSONDE UA) and various intruder platforms, including large transport aircraft and other UA. Results show that the required safe separation distance is always maintained when the SAA process is performed from ranges in excess of 500 metres.
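    The conflict-detection stage can be illustrated with a minimal sketch that propagates the relative geometry forward in time and checks the predicted miss distance against a safe separation threshold. The constant-velocity propagation and the look-ahead horizon below are deliberate simplifications of the IMM-based prediction described above; only the 500 m figure comes from the abstract.

```python
# Minimal conflict-detection sketch: constant-velocity propagation of the
# host/intruder relative state and a miss-distance check. The state layout,
# horizon and time step are assumptions for illustration.
import numpy as np

SAFE_SEPARATION_M = 500.0

def predicted_miss_distance(p_host, v_host, p_intr, v_intr, horizon_s=60.0, dt=0.5):
    """Smallest predicted host-intruder distance over the look-ahead horizon."""
    rel_p = np.asarray(p_intr, float) - np.asarray(p_host, float)
    rel_v = np.asarray(v_intr, float) - np.asarray(v_host, float)
    times = np.arange(0.0, horizon_s + dt, dt)
    dists = np.linalg.norm(rel_p[None, :] + times[:, None] * rel_v[None, :], axis=1)
    return dists.min()

def conflict_detected(p_host, v_host, p_intr, v_intr):
    """True if the predicted miss distance violates the safe separation distance."""
    return predicted_miss_distance(p_host, v_host, p_intr, v_intr) < SAFE_SEPARATION_M
```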

    An electromagnetic imaging system for metallic object detection and classification

    Electromagnetic imaging currently plays a vital role in various disciplines, from engineering to medical applications, and is based upon the characteristics of electromagnetic fields and their interaction with the properties of materials. The detection and characterisation of metallic objects which pose a threat to safety is of great interest for public and homeland security worldwide. Inspections are conducted under the prerequisite that the person being screened is divested of all metallic objects. These inspection conditions disrupt the movement of people and create a soft target for terrorist attack. Thus, there is a need for a new generation of detection systems and information technologies which can provide enhanced characterisation and discrimination capabilities. This thesis proposes an automatic metallic object detection and classification system. Two related topics are addressed: the design and implementation of a new metallic object detection system, and the development of an appropriate signal processing algorithm to classify the targeted signatures. The new detection system uses an array of sensors in conjunction with pulsed excitation. The contributions of this research can be summarised as follows: (1) investigating the possibility of using magneto-resistance sensors for metallic object detection; (2) evaluating the proposed system by generating a database consisting of 12 real handguns and more than 20 everyday objects; (3) extracting features from the system outputs in four categories relating to the objects' shape, material composition, time-frequency signal analysis and transient pulse response; and (4) applying two classification methods to classify the objects into threats and non-threats, giving a successful classification rate of more than 92% using the feature combination and classification framework of the new system. The study concludes that the novel magnetic field imaging system and its signal outputs can be used to detect, identify and classify metallic objects. In comparison with conventional induction-based walk-through metal detectors, the magneto-resistance sensor array-based system shows great potential for object identification and discrimination. This novel system design and signal processing achievement may produce significant improvements in automatic threat object detection and classification applications.
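    The threat/non-threat classification stage built on the combined feature categories can be sketched with a standard pipeline. The SVM choice, the scaling step and the cross-validation setup below are assumptions made for illustration, not the classifiers evaluated in the thesis.

```python
# Hedged sketch of threat / non-threat classification on combined features.
# The feature grouping mirrors the four categories named in the abstract;
# everything else (classifier, scaling, fold count) is an assumption.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def combine_features(shape_f, material_f, timefreq_f, transient_f):
    """Stack the four feature categories into one row vector per object."""
    return np.hstack([shape_f, material_f, timefreq_f, transient_f])

def evaluate(X, y, folds=5):
    """Cross-validated accuracy of an RBF-SVM on the combined feature vectors."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, y, cv=folds).mean()
```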

    Civilian Target Recognition using Hierarchical Fusion

    The growth of computer vision technology has been marked by attempts to imitate human behavior to impart robustness and confidence to the decision making process of automated systems. Examples of disciplines in computer vision that have been targets of such efforts are Automatic Target Recognition (ATR) and fusion. ATR is the process of aided or unaided target detection and recognition using data from different sensors. Usually, it is synonymous with its military application of recognizing battlefield targets using imaging sensors. Fusion is the process of integrating information from different sources at the data or decision levels so as to provide a single robust decision as opposed to multiple individual results. This thesis combines these two research areas to provide improved classification accuracy in recognizing civilian targets. The results obtained reaffirm that fusion techniques tend to improve the recognition rates of ATR systems. Previous work in ATR has mainly dealt with military targets and a single level of data fusion. Expensive sensors and time-consuming algorithms are generally used to improve system performance. In this thesis, civilian target recognition, which is considered to be harder than military target recognition, is performed. Inexpensive sensors are used to keep the system cost low. In order to compensate for the reduced system ability, fusion is performed at two different levels of the ATR system: event level and sensor level. Only preliminary image processing and pattern recognition techniques have been used so as to maintain low operation times. High classification rates are obtained using data fusion techniques alone. Another contribution of this thesis is the provision of a single framework to perform all operations from target data acquisition to the final decision making. The Sensor Fusion Testbed (SFTB) designed by Northrop Grumman Systems has been used by the Night Vision & Electronic Sensors Directorate to obtain images of seven different types of civilian targets. Image segmentation is performed using background subtraction. The seven invariant moments are extracted from the segmented image and basic classification is performed using the k-Nearest Neighbor method. Cross-validation is used to provide a better idea of the classification ability of the system. Temporal fusion at the event level is performed using majority voting, and sensor level fusion is done using the Behavior-Knowledge Space (BKS) method. Two separate databases were used. The first database uses seven targets (2 cars, 2 SUVs, 2 trucks and 1 stake body light truck). Individual frame, temporal fusion and BKS fusion results are around 65%, 70% and 77% respectively. The second database has three targets (cars, SUVs and trucks) formed by combining classes from the first database. Higher classification accuracies are observed here: 75%, 90% and 95% recognition rates are obtained at the frame, event and sensor levels. It can be seen that, on average, recognition accuracy improves with increasing levels of fusion. Also, distance-based classification was performed to study the variation of system performance with the distance of the target from the cameras. The results are along expected lines and indicate the efficacy of fusion techniques for the ATR problem. Future work using more complex image processing and pattern recognition routines can further improve the classification performance of the system. The SFTB can be equipped with these algorithms and field-tested to check real-time performance.
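    The per-frame pipeline (background subtraction, seven invariant moments, k-Nearest Neighbor classification) and the temporal majority vote can be sketched compactly. The binarisation threshold and other parameters below are assumptions, not the SFTB settings.

```python
# Illustrative sketch of the per-frame pipeline and event-level majority vote.
# Thresholds and the classifier configuration are assumptions.
import cv2
import numpy as np
from collections import Counter
from sklearn.neighbors import KNeighborsClassifier

def hu_features(frame_gray, background_gray):
    """Segment the target by background subtraction and return its Hu moments."""
    diff = cv2.absdiff(frame_gray, background_gray)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)   # assumed threshold
    moments = cv2.moments(mask, binaryImage=True)
    return cv2.HuMoments(moments).ravel()                        # seven invariant moments

def classify_event(frames, background, knn: KNeighborsClassifier):
    """Temporal fusion: majority vote over the per-frame k-NN decisions."""
    votes = [knn.predict(hu_features(f, background).reshape(1, -1))[0] for f in frames]
    return Counter(votes).most_common(1)[0][0]
```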

    Adaptive Multi-sensor Perception for Driving Automation in Outdoor Contexts

    In this research, adaptive perception for driving automation is discussed, enabling a vehicle to automatically detect driveable areas and obstacles in the scene. It is especially designed for outdoor contexts where conventional perception systems, which rely on a priori knowledge of the terrain's geometric properties, appearance properties, or both, are prone to fail due to the variability of terrain properties and environmental conditions. In contrast, the proposed framework uses a self-learning approach to build a model of the ground class that is continuously adjusted online to reflect the latest ground appearance. The system also features high flexibility, as it can work with a single sensor modality or a multi-sensor combination. In the context of this research, different embodiments have been demonstrated using range data from either a radar or a stereo camera, and adopting self-supervised strategies in which monocular vision is automatically trained by radar or stereo vision. A comprehensive set of experimental results, obtained with different ground vehicles operating in the field, is presented to validate and assess the performance of the system.
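    The self-supervised ground model can be illustrated with a minimal online classifier: range data (radar or stereo) labels image samples as ground, a colour model of the ground class is updated continuously, and new pixels are tested against it. The single-Gaussian model and the Mahalanobis gate below are assumptions, not the system's actual appearance model.

```python
# Minimal sketch of an online, self-supervised ground-appearance model.
# Model form (single Gaussian), forgetting factor and gate are assumptions.
import numpy as np

class OnlineGroundModel:
    def __init__(self, alpha=0.05, threshold=9.0):
        self.alpha = alpha              # exponential forgetting factor (assumed)
        self.threshold = threshold      # squared Mahalanobis distance gate (assumed)
        self.mean = None
        self.cov = None

    def update(self, ground_pixels):
        """ground_pixels: Nx3 colour samples labelled as ground by range data."""
        mu = ground_pixels.mean(axis=0)
        cov = np.cov(ground_pixels, rowvar=False) + 1e-6 * np.eye(3)
        if self.mean is None:
            self.mean, self.cov = mu, cov
        else:                           # blend old and new statistics online
            self.mean = (1 - self.alpha) * self.mean + self.alpha * mu
            self.cov = (1 - self.alpha) * self.cov + self.alpha * cov

    def is_ground(self, pixels):
        """Boolean mask: pixels within the Mahalanobis gate of the ground model."""
        d = pixels - self.mean
        m2 = np.einsum("ij,jk,ik->i", d, np.linalg.inv(self.cov), d)
        return m2 < self.threshold
```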

    Proceedings of the Augmented VIsual Display (AVID) Research Workshop

    The papers, abstracts, and presentations in this volume come from a three-day workshop focused on sensor modeling and simulation, and on image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.