
    Civilian Target Recognition using Hierarchical Fusion

    The growth of computer vision technology has been marked by attempts to imitate human behavior in order to impart robustness and confidence to the decision-making process of automated systems. Two computer vision disciplines that have been targets of such efforts are Automatic Target Recognition (ATR) and fusion. ATR is the process of aided or unaided target detection and recognition using data from different sensors; it is usually synonymous with its military application of recognizing battlefield targets using imaging sensors. Fusion is the process of integrating information from different sources at the data or decision levels so as to provide a single robust decision in place of multiple individual results. This thesis combines these two research areas to provide improved classification accuracy in recognizing civilian targets. The results obtained reaffirm that fusion techniques tend to improve the recognition rates of ATR systems. Previous work in ATR has mainly dealt with military targets and a single level of data fusion, and expensive sensors and time-consuming algorithms are generally used to improve system performance. In this thesis, civilian target recognition, which is considered harder than military target recognition, is performed. Inexpensive sensors are used to keep the system cost low. To compensate for the reduced system ability, fusion is performed at two different levels of the ATR system: the event level and the sensor level. Only preliminary image processing and pattern recognition techniques are used, so as to maintain low operation times. High classification rates are obtained using data fusion techniques alone. Another contribution of this thesis is a single framework that performs all operations from target data acquisition to the final decision making.
The Sensor Fusion Testbed (SFTB), designed by Northrop Grumman Systems, has been used by the Night Vision & Electronic Sensors Directorate to obtain images of seven different types of civilian targets. Image segmentation is performed using background subtraction. The seven invariant moments are extracted from the segmented image, and basic classification is performed using the k-Nearest Neighbor (kNN) method. Cross-validation is used to provide a better idea of the classification ability of the system. Temporal fusion at the event level is performed using majority voting, and sensor-level fusion is done using the Behavior-Knowledge Space (BKS) method. Two separate databases were used. The first database uses seven targets (2 cars, 2 SUVs, 2 trucks, and 1 stake-body light truck); individual-frame, temporal-fusion, and BKS-fusion results are around 65%, 70%, and 77%, respectively. The second database has three targets (cars, SUVs, and trucks), formed by combining classes from the first database. Higher classification accuracies are observed here: 75%, 90%, and 95% recognition rates are obtained at the frame, event, and sensor levels. It can be seen that, on average, recognition accuracy improves with increasing levels of fusion. Distance-based classification was also performed to study the variation of system performance with the distance of the target from the cameras. The results are along expected lines and indicate the efficacy of fusion techniques for the ATR problem. Future work using more complex image processing and pattern recognition routines could further improve the classification performance of the system. The SFTB can be equipped with these algorithms and field-tested to check real-time performance.
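The event-level temporal fusion described above is a plain majority vote over per-frame classifier decisions. A minimal sketch (the function name, tie-break rule, and vehicle labels are illustrative, not taken from the thesis):

```python
from collections import Counter

def temporal_fusion(frame_labels):
    """Majority vote over the per-frame classifier outputs of one event.
    Ties are broken by whichever label appeared first in the sequence."""
    counts = Counter(frame_labels)
    best = max(counts.values())
    for label in frame_labels:          # first-seen tie-break
        if counts[label] == best:
            return label

# Example: five frames of one vehicle pass, with per-frame kNN decisions
print(temporal_fusion(["car", "SUV", "car", "truck", "car"]))  # car
```

BKS fusion at the sensor level would go one step further, looking up the joint vector of per-sensor decisions in a table learned from training data rather than simply counting votes.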

    Multi-Target Tracking in Distributed Sensor Networks using Particle PHD Filters

    Multi-target tracking is an important problem in civilian and military applications. This paper investigates multi-target tracking in distributed sensor networks. Data association, which arises particularly in multi-object scenarios, can be tackled by various solutions. We consider sequential Monte Carlo implementations of the Probability Hypothesis Density (PHD) filter based on random finite sets. This approach circumvents the data association issue by jointly estimating all targets in the region of interest. To this end, we develop the Diffusion Particle PHD Filter (D-PPHDF) as well as a centralized version, called the Multi-Sensor Particle PHD Filter (MS-PPHDF). Their performance is evaluated in terms of the Optimal Subpattern Assignment (OSPA) metric, benchmarked against a distributed extension of the Posterior Cramér-Rao Lower Bound (PCRLB), and compared to the performance of an existing distributed PHD particle filter. Furthermore, the robustness of the proposed tracking algorithms against outliers and their performance with respect to different amounts of clutter are investigated.
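The OSPA metric used for evaluation combines localization error and cardinality error into a single distance between two finite sets of target states. A brute-force sketch for small one-dimensional sets (the cutoff `c` and order `p` follow the usual OSPA definition; the implementation is illustrative, not the paper's):

```python
from itertools import permutations

def ospa(X, Y, c=10.0, p=2):
    """Optimal Subpattern Assignment distance between two finite sets of
    scalar target states. Brute-force over assignments, so small sets only."""
    if len(X) > len(Y):
        X, Y = Y, X                      # ensure |X| <= |Y|
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0
    if m == 0:
        loc = 0.0
    else:
        # best assignment of the smaller set into the larger one,
        # with per-target distances capped at the cutoff c
        loc = min(sum(min(abs(x - y), c) ** p for x, y in zip(X, perm))
                  for perm in permutations(Y, m))
    # each unmatched target is charged the cutoff c (cardinality error)
    return ((loc + (n - m) * c ** p) / n) ** (1.0 / p)

print(ospa([1.0, 5.0], [1.0, 5.0]))   # identical sets -> 0.0
print(ospa([1.0], [1.0, 5.0]))        # a missed target raises the distance
```

The cutoff `c` is what lets OSPA penalize wrong target counts and badly localized targets on the same scale.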

    Implementation of UAV Coordination Based on a Hierarchical Multi-UAV Simulation Platform

    In this paper, a hierarchical multi-UAV simulation platform, called XTDrone, is designed for UAV swarms; it is completely open-source. There are six layers in XTDrone: communication, simulator, low-level control, high-level control, coordination, and human interaction layers. XTDrone has three advantages. First, the simulation speed can be adjusted to match the computer performance, based on the lock-step mode; thus, simulations can be conducted on a workstation or on a personal laptop, for different purposes. Second, a simplified simulator is also developed, which enables quick algorithm design so that the approximate behavior of UAV swarms can be observed in advance. Third, XTDrone is based on ROS, Gazebo, and PX4, so the simulation code can be easily transplanted to embedded systems. XTDrone can support various types of multi-UAV missions, and we provide two important demos in this paper: one is a ground-station-based multi-UAV cooperative search, and the other is a distributed UAV formation flight, including consensus-based formation control, task assignment, and obstacle avoidance.
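One of the demo missions, consensus-based formation control, can be illustrated in a few lines: each vehicle repeatedly moves toward its neighbours' positions corrected by the desired formation offsets, so all offset-corrected states agree in the limit. A 1-D discrete-time sketch (the update rule, gain, and graph below are generic consensus material, not XTDrone's API):

```python
def consensus_step(positions, offsets, neighbours, eps=0.2):
    """One discrete-time consensus update: each UAV steers toward the
    offset-corrected states of its neighbours."""
    new = []
    for i, x in enumerate(positions):
        u = sum((positions[j] - offsets[j]) - (x - offsets[i])
                for j in neighbours[i])
        new.append(x + eps * u)
    return new

# Three UAVs on a line communication graph; desired offsets 0, 2, 4
# describe an evenly spaced line formation.
pos = [0.0, 0.0, 0.0]
offs = [0.0, 2.0, 4.0]
nbrs = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(200):
    pos = consensus_step(pos, offs, nbrs)
print([round(p - pos[0], 3) for p in pos])  # relative spacing -> [0.0, 2.0, 4.0]
```

The step size `eps` must be small enough for the given graph (here below 1/2, the reciprocal of the maximum degree) or the iteration diverges.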

    A Multistage Procedure of Mobile Vehicle Acoustic Identification for Single-Sensor Embedded Device

    Mobile vehicle identification has a wide range of applications for both civilian and military uses. Vehicle identification may be achieved by incorporating single- or multiple-sensor solutions and through data fusion. This paper considers a single-sensor, multistage hierarchical algorithm of acoustic signal analysis and pattern recognition for the identification of mobile vehicles in an open environment. The algorithm applies several standalone techniques to enable complex decision-making during event identification. Computationally inexpensive procedures are specifically chosen in order to provide real-time operation capability. The algorithm is tested on pre-recorded audio signals of civilian vehicles passing the measurement point and shows promising classification accuracy. Implementation on a specific embedded device is also presented, and the capability of real-time operation on this device is demonstrated.
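A multistage design of this kind typically gates an expensive classifier behind a cheap detector, so most frames cost almost nothing on the embedded device. A toy two-stage sketch (the threshold, the zero-crossing-rate feature, and the class prototypes are invented for illustration; the paper's actual stages differ):

```python
def detect_event(frame, energy_threshold=0.01):
    """Stage 1: cheap short-time energy gate -- silent frames are skipped
    before any classification work is done."""
    return sum(s * s for s in frame) / len(frame) >= energy_threshold

def classify_event(frame, centroids):
    """Stage 2 (runs only when stage 1 fires): nearest-centroid decision on a
    single scalar feature, here the zero-crossing rate."""
    zcr = sum(a * b < 0 for a, b in zip(frame, frame[1:])) / (len(frame) - 1)
    return min(centroids, key=lambda cls: abs(centroids[cls] - zcr))

centroids = {"light": 0.30, "heavy": 0.10}   # illustrative ZCR prototypes
frame = [0.5, -0.4, 0.45, -0.5, 0.4, -0.45, 0.5, -0.4]
if detect_event(frame):
    print(classify_event(frame, centroids))
```

Because the stage-1 test is a single pass over the samples, the average per-frame cost stays near constant even when the stage-2 feature set grows.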

    High Accuracy Distributed Target Detection and Classification in Sensor Networks Based on Mobile Agent Framework

    High-accuracy distributed information exploitation plays an important role in sensor networks. This dissertation describes a mobile-agent-based framework for target detection and classification in sensor networks. Specifically, we tackle the challenging problems of multiple-target detection, high-fidelity target classification, and unknown-target identification. In this dissertation, we present a progressive multiple-target detection approach to estimate the number of targets sequentially and implement it using a mobile-agent framework. To further improve the performance, we present a cluster-based distributed approach where the estimated results from different clusters are fused. Experimental results show that the distributed scheme with the Bayesian fusion method has the best performance, in the sense that it has the highest detection probability and the most stable behavior. In addition, the progressive intra-cluster estimation can reduce data transmission by 83.22% and conserve energy by 81.64% compared to the centralized scheme. For collaborative target classification, we develop a general-purpose multi-modality, multi-sensor fusion hierarchy for information integration in sensor networks. The hierarchy is composed of four levels of enabling algorithms: local signal processing, temporal fusion, multi-modality fusion, and multi-sensor fusion using a mobile-agent-based framework. The fusion hierarchy ensures fault tolerance and thus generates robust results, while also taking energy efficiency into account. Experimental results based on two field demos show consistent improvement of classification accuracy over the levels of the hierarchy. Unknown-target identification in sensor networks corresponds to the capability of detecting targets without any a priori information and of modifying the knowledge base dynamically. In this dissertation, we present a collaborative method to solve this problem among multiple sensors.
When applied to the military vehicle data set collected in a field demo, about 80% of unknown target samples can be recognized correctly, while the known-target classification accuracy stays above 95%.
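The cluster-level Bayesian fusion mentioned above can be sketched as a naive-Bayes product of independent local posteriors, renormalized into a global posterior (the function, class names, and numbers are illustrative, not from the dissertation):

```python
def bayes_fuse(local_posteriors, prior):
    """Fuse independent local class posteriors p_i(c) from several clusters
    into one global posterior via the product rule p(c) ∝ π(c) ∏ p_i(c)/π(c)."""
    fused = {}
    for c in prior:
        prod = prior[c]
        for p in local_posteriors:
            prod *= p[c] / prior[c]      # each cluster contributes a likelihood ratio
        fused[c] = prod
    total = sum(fused.values())
    return {c: v / total for c, v in fused.items()}

prior = {"target": 0.5, "clutter": 0.5}
local_posteriors = [{"target": 0.7, "clutter": 0.3},
                    {"target": 0.6, "clutter": 0.4}]
fused = bayes_fuse(local_posteriors, prior)
print(max(fused, key=fused.get))  # target
```

Two weak local decisions (0.7 and 0.6) combine into a noticeably stronger global one, which is exactly the behavior the detection-probability results above reflect.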

    Decentralized Kalman Filter Approach for Multi-Sensor Multi-Target Tracking Problems

    For air traffic control and missile defense, the accurate positions and the number of targets are the most important pieces of information needed. This thesis presents a decentralized Kalman filtering (DKF) algorithm for data fusion and state estimation problems in multi-sensor multi-target tracking systems. The problem arises when several sensors carry out surveillance over a certain area and each sensor has its own data processing system; in this situation, each system maintains a number of tracks. The DKF is used to estimate and separate the tracks that different sensors report for the targets, an ability that is essential in missile defense. The proposed technique is a two-stage approach that processes data from a multi-sensor system. In the first stage, each local processor uses its own data to compute the best local estimate using a standard Kalman filter; these estimates are then combined in parallel processing mode to form the best global estimate. In this work, two radar systems are used as sensors, with two local Kalman filters estimating the position of an aircraft; the local estimates are transmitted to a central processor, which combines this information to produce a global estimate.
The proposed model is tested on four scenarios: first, one target tracked by both sensors; second, two targets, with each sensor tracking one of them; third, two targets, with each sensor tracking both of them; and finally, three targets tracked by two sensors, with each sensor tracking two of them. The performance of the proposed technique is evaluated using measures such as the error covariance matrix, and it gave high accuracy and optimal estimation. The experimental results showed that the proposed method has the ability to separate the joint targets detected by the local sensors.
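For independent scalar tracks, the second stage — combining local Kalman estimates into a global one — reduces to inverse-variance weighting in information form. A minimal sketch (names and numbers are illustrative; this ignores cross-covariances between the local filters and the target dynamics):

```python
def fuse_tracks(estimates):
    """Information-form fusion of independent local estimates (x_i, P_i):
    the global estimate weights each local one by its inverse variance."""
    info = sum(1.0 / P for _, P in estimates)
    x = sum(x / P for x, P in estimates) / info
    return x, 1.0 / info

# Two radars report the same aircraft position with different uncertainty
local_a = (100.2, 4.0)   # (position estimate, variance)
local_b = (99.4, 1.0)
x, P = fuse_tracks([local_a, local_b])
print(round(x, 3), round(P, 3))  # the fused estimate leans toward the tighter track
```

The fused variance is always smaller than either local variance, which is the quantitative payoff of combining the two radar tracks at the central processor.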

    Real-time Accurate Runway Detection based on Airborne Multi-sensors Fusion

    Existing methods of runway detection focus mainly on image processing for remote sensing images based on computer vision techniques. However, these algorithms are too complicated and time-consuming to meet the demands of real-time airborne applications. This paper proposes a novel runway detection method based on airborne multi-sensor data fusion, which works in a coarse-to-fine hierarchical architecture. At the coarse layer, a vision projection model from the world coordinate system to the image coordinate system is built by fusing airborne navigation data and forward-looking sensor images, and a runway region of interest (ROI) is extracted from the whole image by the model. At the fine layer, EDLines, a real-time line segment detector, is applied to extract straight line segments from the ROI, and the fragmented line segments it produces are linked into two long runway lines. Finally, some unique runway features (e.g. the vanishing point and runway direction) are used to recognise the airport runway. The proposed method is tested on an image dataset provided by a flight simulation system. The experimental results show that the method has advantages in terms of speed, recognition rate and false alarm rate.
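At its core, the coarse-layer projection from world to image coordinates is a pinhole camera model. A stripped-down sketch with no camera rotation (the focal length, principal point, and coordinate convention are illustrative assumptions, not the paper's calibration):

```python
def project(point_w, cam_pos, f, cx, cy):
    """Project a world point onto the image plane of an axis-aligned pinhole
    camera: u = f*X/Z + cx, v = f*Y/Z + cy, with (X, Y, Z) camera-relative."""
    X = point_w[0] - cam_pos[0]
    Y = point_w[1] - cam_pos[1]
    Z = point_w[2] - cam_pos[2]          # depth along the optical axis
    return (f * X / Z + cx, f * Y / Z + cy)

# A runway corner 200 m ahead, 10 m right, 30 m below the aircraft
u, v = project((10.0, -30.0, 200.0), (0.0, 0.0, 0.0),
               f=1000.0, cx=640.0, cy=512.0)
print(u, v)  # (690.0, 362.0)
```

In the full method the navigation data (attitude, position) supply a rotation as well, and projecting the surveyed runway corners this way yields the ROI that the fine layer then searches with EDLines.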