Designing an Interval Type-2 Fuzzy Logic System for Handling Uncertainty Effects in Brain–Computer Interface Classification of Motor Imagery Induced EEG Patterns
One of the urgent challenges in the automated analysis and interpretation of electrical brain activity is the effective handling of the uncertainties associated with the complexity and variability of brain dynamics, reflected in the nonstationary nature of brain signals such as the electroencephalogram (EEG). This poses a severe problem for existing approaches to the classification task within brain–computer interface (BCI) systems. The recently emerged type-2 fuzzy logic (T2FL) methodology has shown remarkable potential for dealing with uncertain information given limited insight into the nature of the data-generating mechanism. The objective of this work is thus to examine the applicability of the T2FL approach to the problem of EEG pattern recognition. In particular, the focus is two-fold: i) the design methodology for an interval T2FL system (IT2FLS) that can robustly deal with inter-session as well as within-session manifestations of nonstationary spectral EEG correlates of motor imagery (MI), and ii) a comprehensive examination of the proposed fuzzy classifier in both off-line and on-line EEG classification case studies. The on-line evaluation of IT2FLS-controlled real-time neurofeedback over multiple recording sessions holds special importance for EEG-based BCI technology. In addition, a retrospective comparative analysis accounting for other popular BCI classifiers such as linear discriminant analysis (LDA), kernel Fisher discriminant (KFD) and support vector machines (SVMs), as well as a conventional type-1 FLS (T1FLS), simulated off-line on the recorded EEGs, demonstrates the enhanced potential of the proposed IT2FLS approach to robustly handle uncertainty effects in BCI classification.
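The core of the interval type-2 idea is that each membership grade is an interval rather than a single number, and the interval's width is what absorbs uncertainty in the EEG features. A minimal sketch, assuming a Gaussian primary membership function with an uncertain standard deviation (the function names and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def it2_gaussian(x, mean, sigma_lo, sigma_hi):
    """Interval type-2 Gaussian membership function: modelling the
    standard deviation as uncertain within [sigma_lo, sigma_hi]
    turns each membership grade into an interval [lower, upper]."""
    g = lambda s: np.exp(-0.5 * ((x - mean) / s) ** 2)
    return g(sigma_lo), g(sigma_hi)  # narrow Gaussian = lower bound, wide = upper

x = np.linspace(-3, 3, 7)
lo, hi = it2_gaussian(x, mean=0.0, sigma_lo=0.5, sigma_hi=1.0)
# the width hi - lo is the "footprint of uncertainty" at each x
```

In a full IT2FLS, such interval firing strengths are combined across fuzzy rules and then type-reduced (e.g. with the Karnik–Mendel procedure) before defuzzification yields a crisp class decision.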
A fast algorithm to initialize cluster centroids in fuzzy clustering applications
The goal of partitioning cluster analysis is to divide a dataset into a predetermined number of homogeneous clusters. The quality of the final clusters from a prototype-based partitioning algorithm is highly affected by the initially chosen centroids. In this paper, we propose InoFrep, a novel data-dependent initialization algorithm for improving computational efficiency and robustness in prototype-based hard and fuzzy clustering. InoFrep is a single-pass algorithm that uses the frequency-polygon data of the feature with the highest peak count in a dataset. Using the Fuzzy C-means (FCM) clustering algorithm, we empirically compare the performance of InoFrep on one synthetic and six real datasets to that of two common initialization methods: random sampling of data points and K-means++. Our results show that the InoFrep algorithm significantly reduces the number of iterations and the computing time required by the FCM algorithm. Additionally, it can be applied to large multidimensional datasets because of its short initialization time and its independence from dimensionality, since it works with only the one feature with the highest number of peaks.
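The frequency-polygon idea can be approximated with an ordinary histogram: pick the feature whose histogram has the most local maxima, then seed the centroids at data points near those peaks. A rough sketch under that reading of the abstract (the helper names and the quantile fallback are hypothetical simplifications, not the published InoFrep algorithm):

```python
import numpy as np

def peak_bins(values, bins=20):
    """Centers of local maxima of a histogram: a rough stand-in
    for the frequency-polygon peaks used by InoFrep."""
    counts, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    return np.array([centers[i] for i in range(1, bins - 1)
                     if counts[i] > counts[i - 1] and counts[i] >= counts[i + 1]])

def init_centroids(X, k):
    """Seed k centroids from the single feature whose histogram has
    the most peaks (hypothetical simplification, not the paper's exact rule)."""
    n_peaks = [len(peak_bins(X[:, j])) for j in range(X.shape[1])]
    j = int(np.argmax(n_peaks))
    peaks = peak_bins(X[:, j])
    if len(peaks) < k:  # fallback: spread seeds over quantiles of that feature
        peaks = np.quantile(X[:, j], np.linspace(0.1, 0.9, k))
    idx = [int(np.argmin(np.abs(X[:, j] - s))) for s in peaks[:k]]
    return X[idx]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
C = init_centroids(X, 2)  # k initial centroids, one per row
```

Because only one feature is scanned, the cost of this initialization grows with the number of samples but not with the number of dimensions, which is the property the abstract highlights.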
Identifying diachronic topic-based research communities by clustering shared research trajectories
Communities of academic authors are usually identified by means of standard community detection algorithms, which exploit ‘static’ relations such as co-authorship or citation networks. In contrast with these approaches, here we focus on diachronic topic-based communities, i.e., communities of people who appear to work on semantically related topics at the same time. These communities are interesting because their analysis allows us to make sense of the dynamics of the research world, e.g., the migration of researchers from one topic to another, new communities being spawned by older ones, and communities splitting, merging or ceasing to exist. To this purpose, we are interested in developing clustering methods that correctly handle the dynamic aspects of topic-based community formation, prioritizing the relationship between researchers who appear to follow the same research trajectories. We thus present a novel approach called Temporal Semantic Topic-Based Clustering (TST), which exploits a novel metric for clustering researchers according to their research trajectories, defined as distributions of semantic topics over time. The approach has been evaluated through an empirical study involving 25 experts from the Semantic Web and Human-Computer Interaction areas. The evaluation shows that TST exhibits performance comparable to that achieved by human experts.
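A research trajectory in this sense can be modelled as a matrix of per-year topic weights, and clustering then needs a distance between such matrices. A hedged sketch, using mean per-year total-variation distance as an illustrative stand-in for the paper's actual metric:

```python
import numpy as np

def trajectory_distance(A, B):
    """Distance between two research trajectories, each a
    (years x topics) matrix of per-year topic weights. Rows are
    normalised to distributions; the distance is the mean per-year
    total-variation distance (an illustrative choice, not the
    metric defined in the paper)."""
    def norm(M):
        M = np.asarray(M, dtype=float)
        return M / M.sum(axis=1, keepdims=True)
    A, B = norm(A), norm(B)
    return 0.5 * np.abs(A - B).sum(axis=1).mean()

# two hypothetical researchers over 3 years and 2 topics
a = [[1, 0], [1, 0], [0, 1]]   # switches to topic 1 in the last year
b = [[1, 0], [0, 1], [0, 1]]   # switches a year earlier
d = trajectory_distance(a, b)  # they disagree in 1 of 3 years
```

Any standard clustering algorithm can then be run over the pairwise distance matrix; the point of a trajectory-aware metric is that two researchers who make the same topic switch at the same time end up closer than two who merely share topics overall.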
A Novel Robust Algorithm for Information Security Risk Evaluation
Abstract: As computers become popular and the internet advances rapidly, information…
Dynamic non-linear system modelling using wavelet-based soft computing techniques
The sheer number of complex systems creates a need for high-level, cost-efficient modelling structures for operators and system designers. Model-based approaches offer a promising way to integrate a priori knowledge into the procedure. Soft-computing-based models in particular can be applied successfully to highly non-linear problems. A further motivation for such techniques is that, in real-world cases, often only partial, uncertain and/or inaccurate data are available.
Wavelet-based soft computing techniques are considered one of the latest trends in system identification and modelling. This thesis provides a comprehensive synopsis of the main wavelet-based approaches to modelling non-linear dynamical systems in real-world problems, together with possible twists and novelties aimed at more accurate and less complex modelling structures.
Initially, an on-line structure and parameter design is considered in an adaptive Neuro-Fuzzy (NF) scheme. The problem of redundant membership functions, and consequently redundant fuzzy rules, is circumvented by applying an adaptive structure. The growth of a particular fungus (Monascus ruber van Tieghem) is examined against several other approaches as further justification of the proposed methodology.
Extending this line of research, two Morlet Wavelet Neural Network (WNN) structures are introduced. Increasing accuracy and decreasing computational cost are the primary targets of the proposed novelties. The tools employed for these challenges are replacing the synaptic weights with Linear Combination Weights (LCW) and imposing a Hybrid Learning Algorithm (HLA) comprising Gradient Descent (GD) and Recursive Least Squares (RLS). The two models differ in structure while sharing the same HLA scheme. The second approach contains an additional multiplication layer, and its hidden layer contains several sub-WNNs, one per input dimension. The practical superiority of these extensions is demonstrated by simulation and experimental results on a real non-linear dynamic system (Listeria monocytogenes survival curves in Ultra-High Temperature (UHT) whole milk) and consolidated with a comprehensive comparison against other suggested schemes.
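The building block described above, a hidden layer of dilated and translated Morlet wavelets mixed through linear combination weights, can be sketched as follows (an illustrative single-output network, not the thesis's exact LCW/HLA formulation; all parameter values are made up):

```python
import numpy as np

def morlet(t):
    """Real-valued Morlet mother wavelet."""
    return np.cos(5.0 * t) * np.exp(-0.5 * t ** 2)

def wnn_output(x, translations, dilations, weights, bias):
    """Single-output Morlet wavelet network: each hidden unit is a
    dilated and translated copy of the mother wavelet, and the
    network output is their linear combination."""
    z = (x[:, None] - translations[None, :]) / dilations[None, :]
    return morlet(z) @ weights + bias

x = np.linspace(-1.0, 1.0, 5)
y = wnn_output(x,
               translations=np.array([-0.5, 0.0, 0.5]),
               dilations=np.array([0.5, 0.5, 0.5]),
               weights=np.array([0.2, 0.5, 0.2]),  # the linear combination weights
               bias=0.0)
```

In the hybrid learning scheme, the non-linear wavelet parameters (dilations and translations) would be updated by gradient descent while the linear combination weights admit a recursive least-squares solution, which is what makes the split into two learning rules attractive.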
At the next stage, an extended clustering-based fuzzy version of the proposed WNN schemes is presented as the ultimate structure in this thesis. The proposed Fuzzy Wavelet Neural Network (FWNN) benefits from Gaussian Mixture Model (GMM) clustering, updated by a modified Expectation-Maximization (EM) algorithm. One of the main aims of this thesis is to illustrate how the GMM-EM scheme can be used not only for extracting useful knowledge from the data by building accurate regressions, but also for the identification of complex systems.
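The GMM-EM step can be illustrated with a minimal one-dimensional mixture fitted by plain EM (a toy stand-in for the modified EM algorithm used in the thesis; the deterministic min/max initialisation is an assumption for the sketch):

```python
import numpy as np

def gmm_em_1d(x, k=2, iters=100):
    """Minimal 1-D Gaussian mixture fitted with plain EM."""
    mu = np.array([x.min(), x.max()], dtype=float)[:k]  # crude deterministic init
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means and variances
        n = r.sum(axis=0)
        pi, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.3, 200), rng.normal(2, 0.3, 200)])
pi, mu, var = gmm_em_1d(x)  # the means recover the two modes near -2 and 2
```

In the FWNN context, each fitted component can seed one fuzzy rule: the component mean and variance define a Gaussian membership function, so the number and placement of rules follow the structure of the data rather than a fixed grid.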
The structure of the FWNN is based on fuzzy rules with wavelet functions in the consequent parts. To improve the function approximation accuracy and generalisation capability of the FWNN system, an efficient hybrid learning approach is used to adjust the dilation, translation, weight and membership parameters. An Extended Kalman Filter (EKF) is employed for wavelet parameter adjustment, together with Weighted Least Squares (WLS) dedicated to fine-tuning the Linear Combination Weights. The results of a real-world application, Short-Term Load Forecasting (STLF), further reinforce the plausibility of the above technique.
Automatic sound synthesizer programming: techniques and applications
The aim of this thesis is to investigate techniques for, and applications of, automatic sound synthesizer programming. An automatic sound synthesizer programmer is a system which removes from the user the requirement to explicitly specify parameter settings for a sound synthesis algorithm. Two forms of these systems are discussed in this thesis: tone matching programmers and synthesis space explorers. A tone matching programmer takes as input a sound synthesis algorithm and a desired target sound; at its output it produces a configuration for the sound synthesis algorithm which causes it to emit a sound similar to the target. The techniques investigated for achieving this are genetic algorithms, neural networks, hill climbers and data-driven approaches. A synthesis space explorer provides a user with a representation of the space of possible sounds that a synthesizer can produce and allows them to explore this space interactively. The applications of automatic sound synthesizer programming investigated include studio tools, an autonomous musical agent and a self-reprogramming drum machine. The research employs several methodologies: the development of novel software frameworks and tools, the examination of existing software at the source code and performance levels, and user trials of the tools and software. The main contributions made are: a method for visualisation of sound synthesis space and low-dimensional control of sound synthesizers; a general-purpose framework for the deployment and testing of sound synthesis and optimisation algorithms in the SuperCollider language sclang; a comparison of a variety of optimisation techniques for sound synthesizer programming; an analysis of sound synthesizer error surfaces; a general-purpose sound synthesizer programmer compatible with industry-standard tools; and an automatic improviser which passes a loose equivalent of the Turing test for jazz musicians, i.e. being half of a man-machine duet that was rated one of the best sessions of 2009 on the BBC's 'Jazz on 3' programme.
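The tone matching loop common to these systems (synthesize with candidate parameters, compare against the target, keep improvements) can be sketched with a toy two-parameter synthesizer and a stochastic hill climber. All names, the sine "synthesizer" and the spectral error choice are illustrative stand-ins, not components from the thesis:

```python
import numpy as np

SR, N = 8000.0, 256  # sample rate and window length (illustrative)

def synth(params):
    """Toy two-parameter 'synthesizer': a sine oscillator whose
    frequency and amplitude are the parameters to be matched."""
    freq, amp = params
    t = np.arange(N) / SR
    return amp * np.sin(2 * np.pi * freq * t)

def spectral_error(a, b):
    """Compare magnitude spectra so phase differences don't dominate."""
    return float(np.mean((np.abs(np.fft.rfft(a)) - np.abs(np.fft.rfft(b))) ** 2))

def hill_climb(target, start, steps=2000, seed=0):
    """Stochastic hill climber: perturb the best-so-far parameters
    and keep a candidate whenever it matches the target better."""
    rng = np.random.default_rng(seed)
    best = np.array(start, dtype=float)
    best_err = spectral_error(synth(best), target)
    for _ in range(steps):
        cand = best + rng.normal(0.0, [5.0, 0.05])  # per-parameter step sizes
        err = spectral_error(synth(cand), target)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

target = synth([440.0, 0.8])               # the "recorded" sound to match
params, err = hill_climb(target, [420.0, 0.5])
```

Genetic algorithms replace the single climber with a population and recombination, which matters on real synthesizers whose error surfaces (analysed in the thesis) are far less well-behaved than this toy example.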
An exploration of evolutionary computation applied to frequency modulation audio synthesis parameter optimisation
With the ever-increasing complexity of sound synthesisers, there is a growing demand for automated parameter estimation and sound space navigation techniques. This thesis explores the potential of evolutionary computation to automatically map known sound qualities onto the parameters of frequency modulation synthesis. Within this exploration are original contributions in the domain of synthesis parameter estimation and, within the developed system, in evolutionary computation, in the form of the evolutionary algorithms that drive the underlying optimisation process. Based upon the requirement for the parameter estimation system to deliver multiple search space solutions, existing evolutionary algorithmic architectures are augmented to enable niching while maintaining the strengths of the original algorithms. Two novel evolutionary algorithms are proposed in which cluster analysis is used to identify and maintain species within the evolving populations. A conventional evolution strategy and a cooperative coevolution strategy are defined, with cluster-oriented operators that enable the simultaneous optimisation of multiple search space solutions at distinct optima. A test methodology is developed that enables components of the synthesis matching problem to be identified and isolated, allowing the performance of different optimisation techniques to be compared quantitatively. A system is consequently developed that evolves sound matches using conventional frequency modulation synthesis models, and the effectiveness of different evolutionary algorithms is assessed and compared in application to both static and time-varying sound matching problems. Performance of the system is then evaluated by interview with expert listeners. The thesis closes with a reflection on the algorithms and systems that have been developed, discussing possibilities for the future of automated synthesis parameter estimation techniques and how they might be employed.
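The synthesis model being matched is compact to state: in two-operator FM, a modulator oscillator drives the phase of a carrier, and the modulation index controls how energy spreads into sidebands at the carrier frequency plus or minus integer multiples of the modulator frequency (with amplitudes given by Bessel functions, per Chowning's classic formulation). A minimal sketch with illustrative parameter values:

```python
import numpy as np

def fm_tone(fc, fm, index, n=1024, sr=16000.0):
    """Two-operator FM synthesis: a modulator at frequency fm,
    scaled by the modulation index, modulates the phase of a
    carrier at fc, producing sidebands at fc +/- k*fm."""
    t = np.arange(n) / sr
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# distinct parameter settings can yield perceptually similar spectra,
# which is why multiple search space solutions (niching) are required
tone = fm_tone(200.0, 100.0, 2.0)
```

The many-to-one relationship between FM parameters and spectra is exactly what motivates the cluster-based niching algorithms: a matcher that returns a single optimum would hide the alternative settings a sound designer might prefer.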