243 research outputs found

    Neuro-Fuzzy Prediction for Brain-Computer Interface Applications


    Epileptic Seizure Detection And Prediction From Electroencephalogram Using Neuro-Fuzzy Algorithms

    This dissertation presents innovative fuzzy-logic-based approaches to epileptic seizure detection and prediction from the electroencephalogram (EEG). The fuzzy rule-based algorithms were developed with the aim of improving the quality of life of epilepsy patients through intelligent methods. An adaptive fuzzy logic system was developed to detect seizure onset in a patient-specific way. Fuzzy if-then rules were designed to mimic human reasoning and to take advantage of the combination of spatial and temporal information. Fuzzy c-means clustering was used to optimize the membership functions for varying patterns in the feature domain. In addition, the adaptive neuro-fuzzy inference system (ANFIS) is applied to the efficient classification of several commonly occurring EEG artifacts. Finally, a neuro-fuzzy approach to seizure prediction using ANFIS is presented. A patient-specific ANFIS classifier, followed by postprocessing methods, was constructed to forecast seizures. Three nonlinear predictive features (similarity index, phase synchronization, and nonlinear interdependence) were used to characterize changes prior to a seizure and served as the classifier inputs. Fuzzy if-then rules were generated by the ANFIS classifier from the complex relationships in the feature space provided during training. The application of the neuro-fuzzy algorithms to epilepsy diagnosis and treatment was demonstrated on several datasets, and performance measures such as detection delay, sensitivity, and specificity were calculated and compared with results reported in the literature. The proposed algorithms have the potential to be used in diagnostic and therapeutic applications, as they can be implemented in an implantable medical device to detect a seizure, forecast a seizure, and initiate neurostimulation therapy for seizure prevention or abortion.
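
The fuzzy c-means step mentioned in the abstract can be sketched in a few lines. This is a minimal, self-contained illustration of the standard FCM update equations on 1-D data, not the dissertation's actual implementation; the sample data, cluster count, and fuzzifier m are arbitrary choices.

```python
import random

def fuzzy_c_means(points, c=2, m=2.0, iters=50, seed=0):
    """Minimal 1-D fuzzy c-means: returns cluster centres and a
    membership matrix u[i][k] = degree of point i in cluster k."""
    rng = random.Random(seed)
    # random initial memberships, each row normalised to sum to 1
    u = []
    for _ in points:
        row = [rng.random() for _ in range(c)]
        s = sum(row)
        u.append([v / s for v in row])
    centres = [0.0] * c
    for _ in range(iters):
        # centres: membership-weighted means of the data
        for k in range(c):
            num = sum((u[i][k] ** m) * x for i, x in enumerate(points))
            den = sum(u[i][k] ** m for i in range(len(points)))
            centres[k] = num / den
        # memberships: inverse-distance update from the FCM objective
        for i, x in enumerate(points):
            d = [abs(x - ck) or 1e-12 for ck in centres]  # guard zero distance
            for k in range(c):
                u[i][k] = 1.0 / sum((d[k] / d[j]) ** (2 / (m - 1)) for j in range(c))
    return centres, u

centres, u = fuzzy_c_means([1.0, 1.2, 0.8, 9.0, 9.5, 8.7], c=2)
```

On this toy data the two centres settle near the two obvious groups, and each row of the membership matrix stays normalised.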

    A comparative study among ANFIS, ANNs, and SONFIS for volatile time series

    Este artículo presenta una comparación entre ANFIS, ANNs y un Sistema de Inferencia Neuro Difuso Autoorganizado (SONFIS) para la predicción de series de tiempo. La serie del índice de acciones de Turquía (ISE) se analiza utilizando los tres métodos, se realiza un anålisis estadístico de los residuos y se discuten las ventajas / desventajas por método.This paper presents a comparison among ANFIS, ANNs, and a Self Organized Neuro Fuzzy Inference System (SONFIS) for time series prediction. The Turkish stock index (ISE) series is analyzed using the three methods, a statistical analysis of the residuals per method is performed, and the advantages/disadvantages per method are discussed

    Development of soft computing and applications in agricultural and biological engineering

    Soft computing is a set of “inexact” computing techniques that can model and analyze very complex problems for which more conventional methods have not produced cost-effective, analytical, or complete solutions. Soft computing has been extensively studied and applied over the last three decades in scientific research and engineering computing. In agricultural and biological engineering, researchers and engineers have applied fuzzy logic, artificial neural networks, genetic algorithms, decision trees, and support vector machines to study soil and water regimes related to crop growth, analyze the operation of food processing, and support decision-making in precision farming. This paper reviews the development of soft computing techniques and presents their applications in agricultural and biological engineering, especially in the soil and water context for crop management and decision support in precision agriculture. The future development and application of soft computing in agricultural and biological engineering is also discussed.
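
As a toy example of the fuzzy-logic decision support mentioned above, the sketch below encodes three hypothetical zeroth-order Sugeno rules mapping normalized soil moisture to an irrigation amount. The membership breakpoints and rule outputs are made up for illustration and are not taken from any reviewed application.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def irrigation_advice(moisture):
    """Zeroth-order Sugeno rules on moisture in [0, 1]:
    dry -> irrigate 10 mm, adequate -> 4 mm, wet -> 0 mm.
    Output is the firing-strength-weighted average of the rule outputs."""
    rules = [
        (tri(moisture, -0.1, 0.0, 0.4), 10.0),  # dry
        (tri(moisture, 0.2, 0.5, 0.8), 4.0),    # adequate
        (tri(moisture, 0.6, 1.0, 1.1), 0.0),    # wet
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Very dry soil gets the full dose, saturated soil gets none, and intermediate readings blend the overlapping rules smoothly, which is the basic appeal of fuzzy rule bases for this kind of decision support.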

    Advanced Signal Processing and Control in Anaesthesia

    This thesis comprises three major stages: classification of depth of anaesthesia (DOA); modelling a typical patient’s behaviour during a surgical procedure; and control of DOA with simultaneous administration of propofol and remifentanil. Clinical data gathered in the operating theatre was used in this project. Multiresolution wavelet analysis was used to extract meaningful features from the auditory evoked potentials (AEP). These features were classified into different DOA levels using a fuzzy relational classifier (FRC). The FRC uses fuzzy clustering and fuzzy relational composition. The FRC performed well and was able to distinguish between the DOA levels. A hybrid patient model was developed for the induction and maintenance phases of anaesthesia. An adaptive network-based fuzzy inference system was used to adapt Takagi-Sugeno-Kang (TSK) fuzzy models relating systolic arterial pressure (SAP), heart rate (HR), and the wavelet-extracted AEP features to the effect concentrations of propofol and remifentanil. The effect of surgical stimuli on SAP and HR, and the analgesic properties of remifentanil, were described by Mamdani fuzzy models constructed in cooperation with anaesthetists. The model proved to be adequate, reflecting the effect of drugs and surgical stimuli. A multivariable fuzzy controller was developed for the simultaneous administration of propofol and remifentanil. The controller is based on linguistic rules that interact with three decision tables, one of which represents a fuzzy PI controller. The infusion rates of the two drugs are determined according to the DOA level and the surgical stimulus. Remifentanil is titrated according to the required analgesia level and its synergistic interaction with propofol. The controller was able to adequately achieve and maintain the target DOA level under different conditions. Overall, it was possible to model the interaction between propofol and remifentanil, and to successfully use this model to develop a closed-loop system in anaesthesia.
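
A fuzzy PI-style rule table of the kind the controller description mentions can be sketched as follows. This is a generic illustration, not the thesis's controller: the three linguistic labels, the rule outputs, and the input scaling are all assumptions.

```python
# linguistic labels over inputs pre-scaled to [-1, 1]
def mu_neg(x): return max(0.0, min(1.0, -x))
def mu_zero(x): return max(0.0, 1.0 - abs(x))
def mu_pos(x): return max(0.0, min(1.0, x))

def fuzzy_pi_step(error, d_error):
    """One step of a fuzzy PI-like rule table: maps (error, change of
    error) to a change in infusion rate (arbitrary units).
    Each of the 3x3 rules fires with strength min(mu_e, mu_de); the
    hypothetical table entry is the average of the two label values."""
    labels = [(mu_neg, -1.0), (mu_zero, 0.0), (mu_pos, 1.0)]
    num = den = 0.0
    for mu_e, out_e in labels:
        for mu_de, out_de in labels:
            w = min(mu_e(error), mu_de(d_error))  # rule firing strength
            out = 0.5 * (out_e + out_de)          # assumed table entry
            num += w * out
            den += w
    return num / den if den else 0.0
```

A large positive error with no trend yields a moderate positive correction, a zero state yields no change, and a large negative error that is still worsening yields the full negative correction, which is the qualitative behaviour expected of a PI-like surface.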

    Neuro-fuzzy modeling of multi-field surface neuroprostheses for hand grasp

    Neuroprostheses apply electrical pulses to peripheral nerves with the aim of replacing lost motor or sensory functions, providing assistance and positively influencing the motor rehabilitation of people with motor impairments caused by neurological disorders. The complexity of the neuroanatomy of the forearm and hand, its dimensionality, the variety of non-cyclic tasks, the variability of movements between subjects, and the limited selectivity of surface neuroprostheses have resulted in the design of only a small number of neuroprostheses oriented toward basic grasps. The possibility of making stimulation more selective through multi-field electrodes, together with knowledge of the discomfort and the movements produced by applying functional electrical stimulation (FES) to the upper limb, could form the foundation for the development of more advanced grasp neuroprostheses. This thesis describes an analysis of discomfort resulting from FES of the upper limb and proposes neuro-fuzzy models of grasp neuroprostheses for both healthy subjects and people with neurological disorders. The knowledge generated about discomfort can serve as a guide for developing more comfortable upper-limb FES applications. Likewise, the models proposed in this thesis can be used to support the design and validation of advanced control systems in neuroprostheses aimed at grasp function. Tecnalia; Intelligent Control Research Group

    Intelligent data mining using artificial neural networks and genetic algorithms : techniques and applications

    Data Mining (DM) refers to the analysis of observational datasets to find relationships and to summarize the data in ways that are both understandable and useful. Many DM techniques exist. Compared with other DM techniques, Intelligent Systems (ISs) based approaches, which include Artificial Neural Networks (ANNs), fuzzy set theory, approximate reasoning, and derivative-free optimization methods such as Genetic Algorithms (GAs), are tolerant of imprecision, uncertainty, partial truth, and approximation, and provide flexible information-processing capability for handling real-life situations. This thesis is concerned with the ideas behind the design, implementation, testing, and application of a novel IS-based DM technique. The unique contribution of this thesis is the implementation of a hybrid IS DM technique (the Genetic Neural Mathematical Method, GNMM) for solving novel practical problems, the detailed description of this technique, and the illustration of several applications solved by it. GNMM consists of three steps: (1) GA-based input variable selection, (2) Multi-Layer Perceptron (MLP) modelling, and (3) mathematical-programming-based rule extraction. In the first step, GAs are used to evolve an optimal set of MLP inputs. An adaptive method based on the average fitness of successive generations is used to adjust the mutation rate, and hence the exploration/exploitation balance. In addition, GNMM uses an elite group and appearance percentages to minimize the randomness associated with GAs. In the second step, MLP modelling serves as the core DM engine for classification and prediction tasks. An Independent Component Analysis (ICA) based weight initialization algorithm is used to determine optimal weights before training begins, and the Levenberg-Marquardt (LM) algorithm is used to achieve a second-order speedup compared to conventional Back-Propagation (BP) training.
In the third step, mathematical-programming-based rule extraction is used not only to identify the premises of multivariate polynomial rules, but also to explore features of the extracted rules based on the data samples associated with each rule. The methodology can therefore provide regression rules and features not only in polyhedrons containing data instances, but also in polyhedrons without them. A total of six datasets from the environmental and medical disciplines were used as case study applications. These datasets involve the prediction of longitudinal dispersion coefficients, classification of electrocorticography (ECoG)/electroencephalogram (EEG) data, eye-bacteria Multisensor Data Fusion (MDF), and diabetes classification (denoted Data I through Data VI). GNMM was applied to all six datasets to explore its effectiveness, with a different emphasis for each: Data I and II give a detailed illustration of how GNMM works; Data III and IV show how to deal with difficult classification problems; Data V illustrates the averaging effect of GNMM; and Data VI concerns GA parameter selection and benchmarks GNMM against other IS DM techniques such as the Adaptive Neuro-Fuzzy Inference System (ANFIS), the Evolving Fuzzy Neural Network (EFuNN), Fuzzy ARTMAP, and Cartesian Genetic Programming (CGP). In addition, datasets obtained from published works (Data II & III) or public domains (Data VI), for which previous results exist in the literature, were used to benchmark GNMM’s effectiveness. As a closely integrated system, GNMM has the merit of needing little human interaction. With some predefined parameters, such as the GA’s crossover probability and the shape of the ANNs’ activation functions, GNMM is able to process raw data until human-interpretable rules have been extracted.
This is an important practical feature, as users of a DM system often have little or no need to fully understand its internal components. Through the case study applications, it has been shown that the GA-based variable selection stage is capable of filtering out irrelevant and noisy variables; improving the accuracy of the model; making the ANN structure less complex and easier to understand; and reducing the computational complexity and memory requirements. Furthermore, rule extraction ensures that the MLP training results are easily understandable and transferable.
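
The GA-based input variable selection stage can be illustrated with a toy GA over bit masks. This sketch uses plain tournament selection with a fixed mutation rate rather than GNMM's adaptive mutation scheme, and the synthetic fitness function (rewarding two "informative" variables and penalizing the rest) is invented for the example.

```python
import random

def ga_select(n_vars, fitness, pop=30, gens=40, p_mut=0.1, seed=3):
    """Toy GA for input-variable selection: individuals are bit masks
    over the candidate inputs, evolved with tournament selection,
    one-point crossover, bit-flip mutation, and two-elite survival."""
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(n_vars)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=fitness, reverse=True)
        nxt = scored[:2]  # elitism: carry the best two over unchanged
        while len(nxt) < pop:
            # two size-3 tournaments pick the parents
            a, b = (max(rng.sample(scored, 3), key=fitness) for _ in range(2))
            cut = rng.randrange(1, n_vars)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutate
            nxt.append(child)
        popn = nxt
    return max(popn, key=fitness)

# synthetic selection problem: variables 0 and 2 are informative,
# every other selected variable costs half a point
informative = {0, 2}
def fitness(mask):
    hits = sum(mask[i] for i in informative)
    noise = sum(mask[i] for i in range(len(mask)) if i not in informative)
    return hits - 0.5 * noise

best = ga_select(8, fitness)
```

The evolved mask keeps both informative variables while pruning most of the noise, which is the filtering behaviour the variable selection stage is credited with above.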

    BIM-based software for construction waste analytics using artificial intelligence hybrid models

    The construction industry generates about 30% of the total waste in the UK. Current high landfill costs and the severe environmental impact of waste reveal the need to reduce the waste generated by construction activities. Although the literature suggests that the best approach to Construction Waste (CW) management is minimisation at the design stage, current tools are not robust enough to support architects and design engineers. A review of the extant literature reveals that the key limitations of existing CW management tools are that they are not integrated with the design process and that they lack Building Information Modelling (BIM) compliance, because the tools are external to the design BIM tools used by architects and design engineers. This study therefore investigates BIM-based strategies for CW management and develops Artificial Intelligence (AI) hybrid models to predict CW at the design stage. The model was then integrated into Autodesk Revit as an add-in (BIMWaste) to provide CW analytics. Based on a critical realism paradigm, the study adopts an exploratory sequential mixed-methods design, which combines qualitative and quantitative methods in a single study. The study starts with a review of the extant literature and focus group interviews (FGIs) with industry practitioners. The transcripts of the FGIs were subjected to thematic analysis to identify prevalent themes from the quotations. The factors from the literature review and the FGIs were then combined in a questionnaire survey distributed to industry practitioners. The questionnaire responses were subjected to a rigorous statistical process to identify key strategies for a BIM-based approach to waste-efficient design coordination.
Results of the factor analysis revealed five groups of BIM strategies for CW management: (i) improved collaboration for waste management, (ii) waste-driven design process and solutions, (iii) lifecycle waste analytics, (iv) innovative technologies for waste intelligence and analytics, and (v) improved documentation for waste management. The results improve the understanding of BIM functionalities and how they could improve the effectiveness of existing CW management tools. The key strategies were then developed into a holistic BIM framework for CW management, incorporating the industrial and technological requirements for BIM-enabled waste management into an integrated system. The framework guided the development of the AI hybrid models and the BIM-based tool for CW management. An Adaptive Neuro-Fuzzy Inference System (ANFIS) model was developed for CW prediction, and mathematical models were developed for CW minimisation. Based on historical Construction Waste Records (CWR) from 117 building projects, the model development revealed that two key predictors of CW are “GFA” and “Construction Type”. The final models were then incorporated into Autodesk Revit to enable the prediction of CW from building designs. The performance of the final tool was tested using a test plan and two test cases. The results show that the tool performs well and predicts CW by waste type, element type, and building level. The study generated several implications of interest to stakeholders in the construction industry. In particular, it provides a clear direction on how CW management strategies could be integrated into a BIM platform to streamline CW analytics.
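
As a simplified stand-in for the ANFIS predictor built on the two key predictors, the sketch below blends two first-order Takagi-Sugeno-style rules ("small project" vs "large project") over GFA, with hypothetical per-m² waste rates by construction type. All rates and breakpoints are invented for illustration; the thesis's model was trained on the 117-project CWR dataset.

```python
def mu_small(gfa):
    """Membership of 'small project', fading linearly to 0 at 5000 m²
    (an assumed breakpoint, not taken from the thesis)."""
    return max(0.0, min(1.0, (5000.0 - gfa) / 5000.0))

# hypothetical waste rates (kg per m² of GFA): one value per rule,
# (small-project rate, large-project rate), differing by construction type
RATES = {"concrete_frame": (32.0, 24.0), "steel_frame": (20.0, 15.0)}

def predict_waste_kg(gfa_m2, construction_type):
    """TSK-style blend of the two rules: the effective waste rate is the
    membership-weighted mix, then scaled by GFA."""
    w_small = mu_small(gfa_m2)
    r_small, r_large = RATES[construction_type]
    rate = w_small * r_small + (1.0 - w_small) * r_large
    return rate * gfa_m2
```

A 2500 m² concrete-frame design sits halfway between the two rules, so its effective rate is the average of the two hypothetical rates; beyond the breakpoint only the large-project rule fires.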

    The foundation of capability modelling: a study of the impact and utilisation of human resources

    This research aims to find a foundation for the assessment of capabilities and to apply the concept to human resource selection. Based on a literature review spanning engineering, the human sciences, and economics, the research identifies a common ground for assessing an individual’s applied capability in a given job. A set of criteria is found to be common across disciplines and appropriate as the basis of this assessment. Applied capability is then defined in this research as the person’s impact in fulfilling job requirements together with the level of usage of their resources with regard to the identified criteria; in other words, how their available resources (abilities, skills, value sets, personal attributes, and previous performance records) can be used in completing a job. Translation of the person’s resources and the task requirements into the proposed criteria is done through a novel algorithm, and two prevalent statistical inference techniques (OLS regression and fuzzy inference) are used to estimate quantitative levels of impact and utilisation. A survey of postgraduate students was conducted to estimate their applied capabilities in a given job. Moreover, expert academics were surveyed on their views on the key applied capability assessment criteria, and on how different levels of match between job requirements and a person’s resources might affect impact levels. The results from both surveys were mathematically modelled, and the predictive ability of the conceptual and mathematical developments was compared and further contrasted with the observed data. The models were tested for robustness using experimental data; the results of both estimation methods in both surveys are close to one another, with the regression models being closer to the observations. It is believed that this research has provided sound conceptual and mathematical platforms which can satisfactorily predict an individual’s applied capability in a given job.
This research has contributed to current knowledge and practice by a) providing a comparison of capability definitions and uses across disciplines, b) defining criteria for applied capability assessment, c) developing an algorithm to capture applied capabilities, d) quantifying an existing parallel model, and finally e) estimating impact and utilisation indices using mathematical methods.
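
The OLS side of the estimation can be sketched with a one-variable least-squares fit. The match scores and impact ratings below are hypothetical stand-ins for the survey data; only the standard OLS slope and intercept formulas are shown.

```python
def ols_fit(xs, ys):
    """Ordinary least squares for y ~ b0 + b1*x (one criterion score).
    Returns (intercept, slope) from the closed-form normal equations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
         sum((x - mx) ** 2 for x in xs)
    return my - b1 * mx, b1

# hypothetical match scores (0-1) vs observed impact ratings (0-10)
match = [0.2, 0.4, 0.5, 0.7, 0.9]
impact = [2.1, 3.9, 5.2, 6.8, 9.1]
b0, b1 = ols_fit(match, impact)
```

On this made-up sample, the slope is close to 10 and the intercept close to 0, i.e. the impact rating scales almost directly with the match score; a fuzzy inference model of the same mapping is what the research compares it against.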
