    Application of expert systems in project management decision aiding

    The feasibility of developing an expert systems-based project management decision aid to enhance the performance of NASA project managers was assessed. The research effort included extensive literature reviews in the areas of project management, project management decision aiding, expert systems technology, and human-computer interface engineering. Literature reviews were augmented by focused interviews with NASA managers. Time estimation for project scheduling was identified as the target activity for decision augmentation, and a design was developed for an Integrated NASA System for Intelligent Time Estimation (INSITE). The proposed INSITE design was judged feasible with a low level of risk. A partial proof-of-concept experiment was performed and was successful. Specific conclusions drawn from the research and analyses are included. The INSITE concept is potentially applicable in any management sphere, commercial or government, where time estimation is required for project scheduling. As project scheduling is a nearly universal management activity, the range of possibilities is considerable. The INSITE concept also holds potential for enhancing other management tasks, especially in areas such as cost estimation, where estimation-by-analogy is already a proven method.
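
    Estimation-by-analogy, the technique the INSITE concept rests on, can be illustrated with a minimal sketch: retrieve the k most similar past projects and form a distance-weighted average of their recorded durations. The features and numbers below are hypothetical, not drawn from the INSITE design.

        # Minimal sketch of estimation-by-analogy for task-duration estimation.
        # Feature names and values are invented for illustration.
        import numpy as np

        def estimate_duration(new_task, past_tasks, past_durations, k=3):
            """Distance-weighted mean duration of the k most similar past tasks."""
            X = np.asarray(past_tasks, dtype=float)
            y = np.asarray(past_durations, dtype=float)
            d = np.linalg.norm(X - np.asarray(new_task, dtype=float), axis=1)
            nearest = np.argsort(d)[:k]
            w = 1.0 / (d[nearest] + 1e-9)  # closer analogues carry more weight
            return float(np.sum(w * y[nearest]) / np.sum(w))

        # Hypothetical features: (team size, complexity score, subsystem count)
        history = [(4, 2.0, 3), (6, 3.5, 5), (3, 1.0, 2), (8, 4.0, 7)]
        durations = [12.0, 30.0, 6.0, 45.0]  # weeks
        print(estimate_duration((5, 3.0, 4), history, durations))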

    AUTOMATED INTERPRETATION OF THE BACKGROUND EEG USING FUZZY LOGIC

    A new framework is described for managing uncertainty and for dealing with artefact corruption to introduce objectivity in the interpretation of the electroencephalogram (EEG). Conventionally, EEG interpretation is time consuming and subjective, and is known to show significant inter- and intra-personnel variation. A need thus exists to automate the interpretation of the EEG to provide a more consistent and efficient assessment. However, automated analysis of EEGs by computers is complicated by two major factors: the difficulty of adequately capturing in machine form the skills and subjective expertise of the experienced electroencephalographer, and the lack of a reliable means of dealing with the range of EEG artefacts (signal contamination). In this thesis, a new framework is described which introduces objectivity in two important outcomes of clinical evaluation of the EEG, namely the clinical factual report and the clinical 'conclusion', by capturing the subjective expertise of the electroencephalographer and dealing with the problem of artefact corruption. The framework is separated into two stages to assist piecewise optimisation and to cater for different requirements. The first stage, 'quantitative analysis', relies on novel digital signal processing algorithms and cluster analysis techniques to reduce data and to identify and describe background activities in the EEG. To deal with artefact corruption, an artefact removal strategy, based on new reliable techniques for artefact identification, is used to ensure that only artefact-free activities are used in the analysis. The outcome is a quantitative analysis which efficiently describes the background activity in the record and can support future clinical investigations in neurophysiology. In clinical practice, many EEG features are described by clinicians in natural-language terms such as very high, extremely irregular, or somewhat abnormal. The second stage of the framework, 'qualitative analysis', captures the subjectivity and linguistic uncertainty expressed by the clinical experts, using novel intelligent models based on fuzzy logic, to provide an analysis closely comparable to the clinical interpretation made in practice. The outcome of this stage is an EEG report with qualitative descriptions to complement the quantitative analysis. The system was evaluated using EEG records from 1 patient with Alzheimer's disease and 2 age-matched normal controls for the factual report, and 3 patients with Alzheimer's disease and 7 age-matched normal controls for the 'conclusion'. Good agreement was found between factual reports produced by the system and those produced by qualified clinicians. Further, the 'conclusion' produced by the system achieved 100% discrimination between the two subject groups. Following thorough evaluation, the system should significantly aid the process of EEG interpretation and diagnosis.
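
    The second, qualitative stage maps quantitative features onto the linguistic terms clinicians use. A minimal fuzzy-membership sketch of that idea follows; the amplitude breakpoints are invented for illustration, not the calibrated values from the thesis.

        # Sketch: map a quantitative EEG feature (amplitude in microvolts) to
        # linguistic terms via triangular fuzzy membership functions.
        def tri(x, a, b, c):
            """Triangular membership rising from a, peaking at b, falling to c."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def describe_amplitude(uv):
            terms = {
                "low": tri(uv, -10, 10, 40),
                "medium": tri(uv, 20, 50, 80),
                "very high": tri(uv, 60, 100, 140),
            }
            return max(terms, key=terms.get), terms

        # Prints the best-fitting term plus the graded membership of every term.
        print(describe_amplitude(72.0))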

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple data stream environment is explored, along with a range of techniques covering model-based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and use this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour, and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other, increasing detection rates and lowering false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
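
    As a concrete illustration of the rule-based correlation of temporally distributed events described above, here is a minimal sketch; the event kinds, window and threshold are invented, not taken from any surveyed system.

        # Sketch: a rule fires when enough related events fall within a time
        # window, inferring misuse from a pattern no single event reveals.
        from collections import deque

        WINDOW = 60.0   # seconds
        THRESHOLD = 3   # failed logins within the window before alerting

        def correlate(events):
            """events: iterable of (timestamp, user, kind); yields misuse alerts."""
            recent = {}  # user -> deque of recent failure timestamps
            for ts, user, kind in events:
                if kind == "login_failed":
                    q = recent.setdefault(user, deque())
                    q.append(ts)
                    while q and ts - q[0] > WINDOW:  # drop events outside window
                        q.popleft()
                    if len(q) >= THRESHOLD:
                        yield (ts, user, "possible password-guessing misuse")

        stream = [(0, "alice", "login_failed"), (10, "alice", "login_failed"),
                  (20, "alice", "login_failed"), (200, "bob", "login_failed")]
        print(list(correlate(stream)))  # alerts on alice only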

    CFLCA: High Performance based Heart disease Prediction System using Fuzzy Learning with Neural Networks

    Human diseases are increasing rapidly in today's generation, mainly due to lifestyle factors such as poor diet, lack of exercise, and drug and alcohol consumption. Heart disease is the most widespread, being directly or indirectly implicated in around 80% of these deaths, and within approximately the next ten years it may become the leading cause of death. For these reasons, many researchers have proposed technologies for diagnosing heart disease by analysing the plentiful medical data related to it. The field of medicine regularly receives a very wide range of medical data in the form of text, images, audio, video and signal packets. These databases contain raw data that are often inconsistent and redundant: the healthcare system is rich in stored data but poor at extracting knowledge from it. Data mining (DM) methods can help extract valuable knowledge through techniques such as clustering, regression, segmentation and classification. As collected datasets become larger and more complex, data mining and clustering algorithms (decision trees, neural networks, k-means, etc.) are applied. To improve accuracy and precision, this paper proposes the Cognitive Fuzzy Learning based Clustering Algorithm (CFLCA). CFLCA creates advanced meta-indexing for n-dimensional unstructured data. Applied to the UCI machine learning repository heart disease dataset after data enrichment and feature engineering, the proposed method attains a high accuracy and prediction rate, with high accuracy, precision and recall values for heart disease detection.
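
    CFLCA itself is not published as code; as a hedged sketch of the fuzzy clustering family it builds on, classical fuzzy c-means applied to toy heart-disease-style features (age, cholesterol) looks like this:

        # Classical fuzzy c-means: each patient gets a graded membership in
        # every cluster rather than a hard assignment. Data are synthetic.
        import numpy as np

        def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)  # memberships sum to 1 per point
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
                U = 1.0 / d ** (2.0 / (m - 1.0))
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

        X = np.array([[45, 180], [50, 200], [62, 290], [70, 310]], dtype=float)
        centers, U = fuzzy_c_means(X)
        print(centers)     # cluster prototypes
        print(U.round(2))  # graded membership of each patient in each cluster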

    Context-Specific Preference Learning of One Dimensional Quantitative Geospatial Attributes Using a Neuro-Fuzzy Approach

    Change detection is a topic of great importance for modern geospatial information systems. Digital aerial imagery provides an excellent medium for capturing geospatial information. Rapidly evolving environments and the availability of increasing amounts of diverse, multiresolutional imagery bring forward the need for frequent updates of these datasets. Analysis and queries of spatial data using potentially outdated datasets may yield invalid results. Due to measurement errors (systematic and random) and incomplete knowledge of information (uncertainty), it is ambiguous whether a change in a spatial dataset has really occurred. Therefore, we need to develop reliable, fast, and automated procedures that will effectively report, based on information from a new image, whether a change has actually occurred or is simply the result of uncertainty. This thesis introduces a novel methodology for change detection in spatial objects using aerial digital imagery. The uncertainty of the extraction is used as a quality estimate in order to determine whether change has occurred. For this goal, we develop a fuzzy-logic system to estimate uncertainty values from the results of automated object extraction using active contour models (a.k.a. snakes). The differential snakes change detection algorithm is an extension of traditional snakes that incorporates previous information (i.e., shape of object and uncertainty of extraction) as energy functionals. This process is followed by a procedure in which we examine the improvement of the uncertainty in the absence of change (versioning). Also, we introduce a post-extraction method for improving the object extraction accuracy. In addition to linear objects, in this thesis we extend differential snakes to track deformations of areal objects (e.g., lake flooding, oil spills). From the polygonal description of a spatial object we can track its trajectory and areal changes. Differential snakes can also be used as the basis for similarity indices for areal objects. These indices are based on areal moments that are invariant under general affine transformation. Experimental results of the differential snakes change detection algorithm demonstrate its performance. More specifically, we show that differential snakes minimize the false positives in change detection and track object deformations reliably.
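
    The moment-based similarity index can be sketched briefly: describe each binary region mask by its normalised central moments and compare the vectors. This toy version is invariant to translation and scale only; the thesis uses areal moments invariant under general affine transformation.

        # Sketch: normalised central moments as a shape signature for areal
        # objects (e.g. a lake boundary at two epochs). Masks are synthetic.
        import numpy as np

        def normalized_moments(mask, orders=((2, 0), (0, 2), (1, 1), (3, 0), (0, 3))):
            ys, xs = np.nonzero(mask)
            xbar, ybar = xs.mean(), ys.mean()
            m00 = float(len(xs))
            sig = [np.sum((xs - xbar) ** p * (ys - ybar) ** q) / m00 ** (1 + (p + q) / 2)
                   for p, q in orders]  # scale-normalised central moments
            return np.array(sig)

        def similarity(a, b):
            return float(np.linalg.norm(normalized_moments(a) - normalized_moments(b)))

        lake_t0 = np.zeros((50, 50), dtype=bool); lake_t0[10:30, 10:35] = True
        lake_t1 = np.zeros((50, 50), dtype=bool); lake_t1[10:34, 10:35] = True
        print(similarity(lake_t0, lake_t1))  # 0 for identical shapes; grows with deformation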

    Intelligent Anomaly Detection of Machine Tools based on Mean Shift Clustering

    For fault detection in machine tools, fixed intervention thresholds are usually necessary. In order to provide autonomous anomaly detection without the need for fixed limits, recurring patterns must be detected in the signal data. This paper presents an approach for online pattern recognition on NC code, based on mean shift clustering, that is then matched with drive signals. The intelligent fault detection system learns individual intervention thresholds based on the prevailing machining patterns. Using a self-organizing map, data captured during the machine's operation are assigned to a normal or malfunction state.
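
    A hedged sketch of the clustering step: mean shift groups signal feature vectors into recurring patterns, and samples that fall into rare clusters can be flagged as anomalous. The synthetic features below stand in for real drive signals.

        # Sketch: mean shift finds the recurring pattern without a preset
        # cluster count; a sample in a tiny cluster is treated as a fault.
        import numpy as np
        from sklearn.cluster import MeanShift

        rng = np.random.default_rng(0)
        normal = rng.normal(loc=[2.0, 5.0], scale=0.2, size=(100, 2))  # recurring pattern
        X = np.vstack([normal, [[6.0, 1.0]]])                          # one injected fault

        ms = MeanShift(bandwidth=1.0).fit(X)
        counts = np.bincount(ms.labels_)
        anomalous = np.nonzero(counts[ms.labels_] < 5)[0]  # members of rare patterns
        print(anomalous)  # flags the injected fault (index 100)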

    Quantification of uncertainty of geometallurgical variables for mine planning optimisation

    Interest in geometallurgy has increased significantly over the past 15 years or so because of the benefits it brings to mine planning and operation. Its use and integration into design, planning and operation is becoming increasingly critical, especially in the context of declining ore grades and increasing mining and processing costs. This thesis, comprising four papers, offers methodologies and methods to quantify geometallurgical uncertainty and enrich the block model with geometallurgical variables, which contribute to improved optimisation of mining operations. This enhanced block model is termed a geometallurgical block model. Bootstrapped non-linear regression models by projection pursuit were built to predict grindability indices and recovery, and to quantify model uncertainty. These models are useful for populating the geometallurgical block model with response attributes. New multi-objective optimisation formulations for block caving mining were formulated and solved by a meta-heuristic solver, focussing on maximising project revenue while minimising several risk measures. A novel clustering method, able to use both continuous and categorical attributes and to incorporate expert knowledge, was also developed for geometallurgical domaining, which characterises the deposit according to its metallurgical response. The concept of geometallurgical dilution was formulated and used for optimising production scheduling in an open-pit case study. Thesis (Ph.D.) (Research by Publication) -- University of Adelaide, School of Civil, Environmental and Mining Engineering, 201
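
    The bootstrap idea used to quantify model uncertainty can be sketched briefly: refit a regression model on resampled data and read the prediction spread from the ensemble. A gradient-boosting regressor stands in here for the thesis's projection pursuit regression, and all data are synthetic.

        # Sketch: bootstrap an ensemble of regressors to attach an uncertainty
        # interval to a block-model attribute such as a grindability index.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 10, size=(200, 3))  # stand-ins for geological attributes
        y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.5, 200)  # response

        block = np.array([[5.0, 2.0, 1.0]])  # one block to populate
        preds = []
        for _ in range(200):  # bootstrap resamples
            idx = rng.integers(0, len(X), len(X))
            model = GradientBoostingRegressor(n_estimators=50).fit(X[idx], y[idx])
            preds.append(model.predict(block)[0])

        lo, hi = np.percentile(preds, [5, 95])
        print(f"prediction {np.mean(preds):.2f}, 90% interval [{lo:.2f}, {hi:.2f}]")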

    A Survey of Data Mining Techniques for Steganalysis

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference an author-supplied abstract, a number of keywords and a classification are provided. In some cases our own comments are added. The purpose of these comments is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily country of publication) and the language of the document. After a description of the scope of the review, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification and comments.
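
    For readers unfamiliar with the formalism, a decision table maps each combination of condition outcomes to an action and can be made executable directly; the discount policy below is invented for illustration.

        # Sketch: a decision table as data plus a lookup, with a completeness
        # check in the spirit of the verification work the survey covers.
        RULES = [
            # (is_member, order_over_100) -> action
            ((True, True), "20% discount"),
            ((True, False), "10% discount"),
            ((False, True), "5% discount"),
            ((False, False), "no discount"),
        ]

        def decide(is_member, order_over_100):
            for conditions, action in RULES:
                if conditions == (is_member, order_over_100):
                    return action
            raise ValueError("incomplete decision table")  # tables should be exhaustive

        print(decide(True, False))  # -> 10% discount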