    Fuzzy Logic

    Fuzzy logic is becoming an essential method for solving problems across many domains and has a tremendous impact on the design of autonomous intelligent systems. The purpose of this book is to introduce hybrid algorithms, techniques, and implementations of fuzzy logic. The book consists of thirteen chapters highlighting models and principles of fuzzy logic together with issues in its techniques and implementations. The intended readers are engineers, researchers, and graduate students interested in fuzzy logic systems.

    Civilian Target Recognition using Hierarchical Fusion

    The growth of computer vision technology has been marked by attempts to imitate human behavior so as to impart robustness and confidence to the decision-making process of automated systems. Two such disciplines in computer vision are Automatic Target Recognition (ATR) and fusion. ATR is the process of aided or unaided target detection and recognition using data from different sensors; it is usually synonymous with its military application of recognizing battlefield targets using imaging sensors. Fusion is the process of integrating information from different sources at the data or decision level so as to provide a single robust decision rather than multiple individual results. This thesis combines these two research areas to provide improved classification accuracy in recognizing civilian targets, and the results reaffirm that fusion techniques tend to improve the recognition rates of ATR systems.

    Previous work in ATR has mainly dealt with military targets and a single level of data fusion, and expensive sensors and time-consuming algorithms are generally used to improve system performance. In this thesis, civilian target recognition, which is considered harder than military target recognition, is performed. Inexpensive sensors are used to keep the system cost low, and to compensate for the reduced sensing capability, fusion is performed at two different levels of the ATR system: event level and sensor level. Only preliminary image processing and pattern recognition techniques are used so as to maintain low operation times, yet high classification rates are obtained using data fusion techniques alone. Another contribution of this thesis is a single framework for all operations, from target data acquisition to the final decision making.

    The Sensor Fusion Testbed (SFTB) designed by Northrop Grumman Systems has been used by the Night Vision & Electronic Sensors Directorate to obtain images of seven different types of civilian targets. Image segmentation is performed using background subtraction, the seven invariant moments are extracted from the segmented image, and basic classification is performed using the k-Nearest Neighbor (kNN) method. Cross-validation is used to provide a better estimate of the classification ability of the system. Temporal fusion at the event level is performed using majority voting, and sensor-level fusion is done using the Behavior-Knowledge Space (BKS) method. Two separate databases were used. The first database uses seven targets (2 cars, 2 SUVs, 2 trucks, and 1 stake-body light truck); individual-frame, temporal-fusion, and BKS-fusion results are around 65%, 70%, and 77%, respectively. The second database has three targets (cars, SUVs, and trucks) formed by combining classes from the first database; higher classification accuracies are observed here, with recognition rates of 75%, 90%, and 95% at the frame, event, and sensor levels. On average, recognition accuracy improves with increasing levels of fusion. Distance-based classification was also performed to study the variation of system performance with the distance of the target from the cameras; the results are along expected lines and indicate the efficacy of fusion techniques for the ATR problem. Future work using more complex image processing and pattern recognition routines can further improve the classification performance of the system, and the SFTB can be equipped with these algorithms and field-tested to check real-time performance.
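    As a concrete illustration of the processing chain described above, the sketch below implements a minimal frame-level pipeline (background subtraction, invariant-moment features, kNN classification) followed by majority-vote temporal fusion over the frames of one event. It assumes OpenCV and scikit-learn; the function names, threshold value, and data layout are illustrative assumptions rather than the thesis's exact SFTB implementation, and the BKS sensor-level fusion step is omitted.

```python
# Minimal sketch of the frame-level ATR pipeline: background subtraction,
# invariant (Hu) moments, kNN classification, and majority-vote temporal fusion.
# Threshold and data layout are illustrative assumptions.
import cv2
import numpy as np
from collections import Counter
from sklearn.neighbors import KNeighborsClassifier

def hu_features(frame_gray, background_gray, thresh=30):
    """Segment the target by background subtraction and return its seven Hu invariant moments."""
    diff = cv2.absdiff(frame_gray, background_gray)        # per-pixel difference from the static background
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    moments = cv2.moments(mask, binaryImage=True)          # image moments of the segmented silhouette
    hu = cv2.HuMoments(moments).flatten()                  # the seven invariant moments
    # Log-scale so the moments share a comparable dynamic range.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def classify_event(frames, background, knn):
    """Classify each frame independently, then fuse the labels by majority vote (temporal fusion)."""
    labels = [knn.predict(hu_features(f, background).reshape(1, -1))[0] for f in frames]
    return Counter(labels).most_common(1)[0][0]

# Training and use (X_train holds Hu-moment feature vectors, y_train the target classes):
# knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
# event_label = classify_event(event_frames, background_frame, knn)
```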

    Occlusion Handler Density Networks for 3D Multimodal Joint Location of Hand Pose Hypothesis

    Predicting the pose parameters during hand pose estimation (HPE) is an ill-posed challenge due to the severely self-occluded joints of the hand. Existing approaches for predicting hand pose parameters use a single-valued mapping from an input image to the final pose output, which makes it difficult to handle occlusion, especially under a multimodal pose hypothesis. This paper introduces an effective method of handling multimodal joint occlusion using the negative log-likelihood of a multimodal mixture-of-Gaussians within a hybrid hierarchical mixture density network (HHMDN). The proposed approach generates multiple feasible 3D pose hypotheses with visibility, unimodal, and multimodal distribution units to locate joint visibility. The visible features are extracted and fed into the convolutional neural network (CNN) layers of the HHMDN for feature learning. The effectiveness of the proposed method is evaluated on the ICVL, NYU, and BigHand public hand pose datasets. Empirical results show that the proposed method achieves a visibility error of 30.3 mm, lower than that of many state-of-the-art approaches that use different distributions of visible and occluded joints.
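    At the core of this formulation is the mixture-of-Gaussians negative log-likelihood that a mixture density network minimizes. The sketch below is a minimal, hypothetical PyTorch version of that objective with isotropic per-component variances; the tensor names and the single-variance simplification are assumptions for illustration, not the paper's exact HHMDN loss, which additionally couples visibility, unimodal, and multimodal units.

```python
# Minimal sketch of a mixture-of-Gaussians negative log-likelihood for a
# mixture density network (isotropic components; names are illustrative).
import math
import torch

def mog_nll(pi_logits, mu, log_sigma, target):
    """
    Negative log-likelihood of a K-component isotropic mixture-of-Gaussians.
      pi_logits: (B, K)    unnormalised mixture weights
      mu:        (B, K, D) per-component predicted joint locations
      log_sigma: (B, K)    per-component log standard deviations
      target:    (B, D)    ground-truth joint locations
    """
    D = mu.shape[-1]
    log_pi = torch.log_softmax(pi_logits, dim=-1)            # (B, K) mixture log-weights
    diff = target.unsqueeze(1) - mu                          # (B, K, D) residuals per component
    sq_dist = (diff ** 2).sum(dim=-1)                        # (B, K) squared distances
    log_prob = (-0.5 * sq_dist / torch.exp(2.0 * log_sigma)  # Gaussian log-density per component
                - D * log_sigma
                - 0.5 * D * math.log(2.0 * math.pi))
    # Marginalise over components in a numerically stable way, then average over the batch.
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()
```

    At inference time, one pose can be taken from the most probable component while the remaining components serve as alternative hypotheses for occluded joints.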