
    Methods for Multisource Data Analysis in Remote Sensing

    Methods for classifying remotely sensed data from multiple data sources are considered. Special attention is given to general methods for multisource classification, and three such approaches are considered: Dempster-Shafer theory, fuzzy set theory, and statistical multisource analysis. Statistical multisource analysis is investigated further. To apply this method successfully, it is necessary to characterize the reliability of each data source; separability measures and classification accuracy are used to measure reliability. These reliability measures are then associated with reliability factors included in the statistical multisource analysis. Experimental results are given for the application of statistical multisource analysis to multispectral scanner data, where different segments of the electromagnetic spectrum are treated as different sources. Finally, a discussion is included concerning future directions for investigating reliability measures.
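    The abstract does not give the combination rule explicitly; a minimal sketch of one common form of statistical multisource analysis, assuming log-linear pooling where each source's class likelihoods are raised to a reliability exponent (all names and numbers below are hypothetical):

    ```python
    import numpy as np

    def multisource_classify(source_probs, reliabilities):
        """Combine per-source class probabilities into one decision.

        Each source contributes its log-probabilities scaled by a
        reliability factor (higher = more trusted); the class with the
        highest pooled score wins.
        """
        log_scores = np.zeros(len(source_probs[0]))
        for p, alpha in zip(source_probs, reliabilities):
            log_scores += alpha * np.log(np.asarray(p) + 1e-12)
        return int(np.argmax(log_scores))

    # Two hypothetical spectral "sources" voting over three land-cover classes.
    visible = [0.7, 0.2, 0.1]   # treated as the more reliable source
    thermal = [0.2, 0.5, 0.3]   # noisier source, down-weighted
    label = multisource_classify([visible, thermal], reliabilities=[1.0, 0.4])
    ```

    With these numbers the reliable source dominates and class 0 is selected; setting both reliabilities equal would let the noisier source pull the decision toward class 1.
    
    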

    Fuzzy decision-making fuser (FDMF) for integrating human-machine autonomous (HMA) systems with adaptive evidence sources

    © 2017 Liu, Pal, Marathe, Wang and Lin. A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge for BCI research is human variability; factors such as fatigue, inattention, and stress vary both across individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference, incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and the associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or the autonomous agents can achieve independently. The experimental results in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activity and computer vision techniques to improve overall performance on the RSVP recognition task, demonstrating the potential benefits of integrating autonomous systems with BCI systems.
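    The FDMF itself is not specified in the abstract; a minimal sketch of one plausible uncertainty-aware fusion, assuming each agent's scores are weighted by its certainty (one minus normalized entropy) and renormalized (all names and numbers are hypothetical, not the authors' method):

    ```python
    import numpy as np

    def entropy_certainty(p):
        """Certainty weight in [0, 1]: 1 minus the normalized Shannon
        entropy of the agent's score distribution."""
        p = np.asarray(p, dtype=float)
        h = -np.sum(p * np.log(p + 1e-12))
        return 1.0 - h / np.log(len(p))

    def fuse(human, machine):
        """Weight each agent's class scores by its certainty, sum,
        and renormalize to a probability vector."""
        w_h, w_m = entropy_certainty(human), entropy_certainty(machine)
        combined = w_h * np.asarray(human) + w_m * np.asarray(machine)
        return combined / combined.sum()

    # Hypothetical target / non-target scores for one RSVP image.
    human   = [0.6, 0.4]   # hesitant BCI decision, low certainty weight
    machine = [0.9, 0.1]   # confident vision classifier, high weight
    fused = fuse(human, machine)
    ```

    Here the confident machine decision dominates the fused output; if the machine were equally uncertain, both agents would contribute roughly equally, which is the adaptive behavior the FDMF is meant to provide.
    
    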

    Vibration condition monitoring of planetary gears based on decision level data fusion using Dempster-Shafer theory of evidence

    In recent years, driven by increasing reliability requirements for industrial machines, fault diagnosis using data fusion methods has become widely applied. Among data fusion methods, decision-level fusion techniques are the foremost approach for recognizing crucial faults of mechanical systems with high confidence. Therefore, to improve the fault diagnosis accuracy of a planetary gearbox, this paper proposes a data fusion approach that exploits Support Vector Machine (SVM) and Artificial Neural Network (ANN) classifiers together with Dempster-Shafer (D-S) evidence theory for classifier fusion. The SVM and ANN classifiers are treated as fault diagnosis subsystems, and the output values of these subsystems serve as inputs to the decision-level fusion module. First, vibration signals of a planetary gearbox were captured for four different gear conditions. The signals were transformed from the time domain to the time-frequency domain using the wavelet transform. Next, statistical features of the time-frequency-domain signals were extracted and used as classifier inputs. The output of each fault diagnosis subsystem was taken as a basic probability assignment (BPA) for D-S evidence theory. Classification accuracy for the SVM and ANN subsystems was 80.5 % and 74.6 %, respectively. After applying the D-S combination rule for classifier fusion, the final fault diagnosis accuracy reached 94.8 %. The results show that the proposed D-S-based method for vibration condition monitoring of a planetary gearbox provides much better accuracy; the increase of more than 14 % demonstrates the strength of D-S theory for decision-level fault diagnosis.
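    The classifier-fusion step above relies on Dempster's rule of combination. A minimal sketch of that rule, with BPAs given as dicts over sets of hypotheses (the gear-state labels and mass values below are illustrative, not the paper's data):

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Dempster's rule: combine two basic probability assignments
        (dicts mapping frozenset-of-hypotheses -> mass).

        Masses of intersecting focal elements multiply and accumulate;
        mass assigned to conflicting (empty-intersection) pairs is
        discarded and the rest is renormalized by 1 - K.
        """
        combined, conflict = {}, 0.0
        for (b, mb), (c, mc) in product(m1.items(), m2.items()):
            a = b & c
            if a:
                combined[a] = combined.get(a, 0.0) + mb * mc
            else:
                conflict += mb * mc
        if conflict >= 1.0:
            raise ValueError("total conflict: BPAs cannot be combined")
        return {a: m / (1.0 - conflict) for a, m in combined.items()}

    # Hypothetical BPAs from two fault-diagnosis subsystems over gear states.
    svm = {frozenset({"healthy"}): 0.7, frozenset({"cracked"}): 0.2,
           frozenset({"healthy", "cracked"}): 0.1}
    ann = {frozenset({"healthy"}): 0.6, frozenset({"cracked"}): 0.3,
           frozenset({"healthy", "cracked"}): 0.1}
    fused = dempster_combine(svm, ann)
    ```

    Because both subsystems lean toward "healthy", the fused belief in "healthy" ends up higher than either individual BPA assigned it, which mirrors how the paper's fused accuracy exceeds that of either classifier alone.
    
    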

    A methodology for the selection of a paradigm of reasoning under uncertainty in expert system development

    The aim of this thesis is to develop a methodology for selecting a paradigm of reasoning under uncertainty for the expert system developer. This is important because practical guidance on how to select such a paradigm is not generally available. The thesis explores the role of uncertainty in an expert system and considers the process of reasoning under uncertainty. The possible sources of uncertainty are investigated and prove to be crucial to some aspects of the methodology. A variety of Uncertainty Management Techniques (UMTs) are considered, including numeric, symbolic, and hybrid methods; considerably more information is found in the literature on numeric methods than on the latter two. Methods that have been proposed for comparing UMTs are studied, and comparisons reported in the literature are summarised; again this concentrates on numeric methods, since more literature is available. The requirements of a methodology for selecting a UMT are considered, and a manual approach to the selection process is developed. The possibility of extending the boundaries of knowledge stored in the expert system, by including meta-data describing how uncertainty is handled, is then considered, followed by suggestions from the literature for automating the selection process. Finally, consideration is given to whether the objectives of the research have been met, and recommendations are made for the next stage in researching a methodology for the selection of a paradigm of reasoning under uncertainty in expert system development.

    Generalized Probabilistic Reasoning and Empirical Studies on Computational Efficiency and Scalability

    Expert systems are tools that can be very useful for diagnostic purposes; however, current methods of storing and reasoning with knowledge have significant limitations. One set of limitations involves how to store and manipulate uncertain knowledge, since much of the knowledge in question has some degree of uncertainty. These limitations include incomplete information, the inability to model cyclic information, and restrictions on the size and complexity of the problems that can be solved. If expert systems are ever to tackle significant real-world problems, these deficiencies must be corrected. This paper describes a new method of reasoning with uncertain knowledge that improves computational efficiency and scalability over current methods. The cornerstone of this method is incorporating and exploiting information about the structure of the knowledge representation to reduce problem size and complexity. Additionally, a new knowledge representation is discussed that further increases the capability of expert systems to model a wider variety of real-world problems. Finally, benchmarking studies of the new algorithm against the old have led to insights into the graph structure of very large knowledge bases.