35 research outputs found

    Neuro-Fuzzy Based Intelligent Approaches to Nonlinear System Identification and Forecasting

    Get PDF
Nearly three decades ago, nonlinear system identification consisted of several ad-hoc approaches restricted to a very limited class of systems. With the advent of soft computing methodologies such as neural networks and fuzzy logic, combined with optimization techniques, a much wider class of systems can now be handled. Complex systems may be of diverse characteristics and nature: linear or nonlinear, continuous or discrete, time varying or time invariant, static or dynamic, short term or long term, central or distributed, predictable or unpredictable, ill or well defined. Neurofuzzy hybrid modelling approaches have been developed as an ideal technique for utilising linguistic values and numerical data. This Thesis is focused on the development of advanced neurofuzzy modelling architectures and their application to real case studies. Three requirements have been identified as desirable characteristics for such a design: a model needs to have a minimum number of rules; a model needs to be generic, acting either as a Multi-Input-Single-Output (MISO) or a Multi-Input-Multi-Output (MIMO) identification model; and a model needs to have a versatile nonlinear membership function. Initially, a MIMO Adaptive Fuzzy Logic System (AFLS) model, which incorporates a prototype defuzzification scheme together with a fuzzification layer that is more efficient than that of Takagi–Sugeno–Kang (TSK) based systems, was developed for the detection of meat spoilage using Fourier transform infrared (FTIR) spectroscopy. The identification strategy involved not only the classification of beef fillet samples into their respective quality class (i.e. fresh, semi-fresh and spoiled), but also the simultaneous prediction of their associated microbiological population directly from FTIR spectra. In the case of AFLS, the number of memberships for each input variable is directly associated with the number of rules; hence, the "curse of dimensionality" problem was significantly reduced. Results confirmed the advantage of the proposed scheme against the Adaptive Neurofuzzy Inference System (ANFIS), Multilayer Perceptron (MLP) and Partial Least Squares (PLS) techniques used in the same case study. For MISO systems, the TSK-based structure has been utilized in many neurofuzzy systems, such as ANFIS. At the next stage of research, an Adaptive Fuzzy Inference Neural Network (AFINN) was developed for monitoring the spoilage of minced beef utilising multispectral imaging information. This model, which follows the TSK structure, incorporates a clustering pre-processing stage for the definition of fuzzy rules, while its final fuzzy rule base is determined by competitive learning. In this specific case study, the AFINN model was also able to predict, for the first time in the literature, the beef's temperature directly from imaging information. Results again demonstrated the superiority of the adopted model. By extending this line of research and adopting specific design concepts from the previous case studies, the Asymmetric Gaussian Fuzzy Inference Neural Network (AGFINN) architecture was developed according to the design principles above. A clustering preprocessing scheme was applied to minimise the number of fuzzy rules. AGFINN incorporates features from the AFLS concept by having the same number of rules as fuzzy memberships.
Unlike the standard symmetric Gaussian membership functions used in most architectures, AGFINN utilizes an asymmetric function acting as the input linguistic node. Since the asymmetric Gaussian membership function's variability and flexibility are higher than those of the traditional one, it can partition the input space more effectively. AGFINN can be built either as a MISO or as a MIMO system. In the MISO case, a TSK defuzzification scheme was adopted, and two different learning algorithms were implemented. AGFINN has been tested on real datasets related to electricity price forecasting for the ISO New England Power Distribution System. Its performance was compared against a number of alternative models, including ANFIS, AFLS, MLP and Wavelet Neural Network (WNN), and proved to be superior. The concept of asymmetric functions proved to be a valid hypothesis, and it can certainly find application in other architectures, such as Fuzzy Wavelet Neural Network models, by designing a suitably flexible wavelet membership function. AGFINN's MIMO characteristics also make the proposed architecture suitable for a wider range of applications and problems.
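    As a rough illustration of the asymmetric Gaussian membership idea described in this abstract (a minimal sketch only; the function and parameter names are illustrative and the exact AGFINN formulation is not reproduced here):

```python
import numpy as np

def asymmetric_gaussian(x, centre, sigma_left, sigma_right):
    """Gaussian-shaped membership with different spreads on each side of the centre,
    giving a more flexible partition of the input space than a symmetric Gaussian."""
    x = np.asarray(x, dtype=float)
    sigma = np.where(x < centre, sigma_left, sigma_right)
    return np.exp(-0.5 * ((x - centre) / sigma) ** 2)

# Illustrative use: a membership centred at 0.4 that decays faster on the right
grid = np.linspace(0.0, 1.0, 11)
mu = asymmetric_gaussian(grid, centre=0.4, sigma_left=0.25, sigma_right=0.10)
```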

    The development of in-process surface roughness prediction systems in turning operation using accelerometer

    Get PDF
Three in-process surface roughness prediction (ISRP) systems, using linear multiple regression, fuzzy logic, and fuzzy nets algorithms respectively, were developed to allow the prediction of the real-time surface roughness of a workpiece in a turning operation. The surface roughness is predicted from feed rate, spindle speed, depth of cut, and machining vibration detected and collected by an accelerometer. Two groups of data were collected for two cutters with nose radii of 0.016 and 0.031 inches, respectively. A total of 162 training data sets and 54 testing data sets for each cutter were applied to train and test the system. While the multiple-regression-based system applied the linear relationships between the independent variables and the dependent variable for the prediction, the fuzzy-logic-based and fuzzy-nets-based systems relied on fuzzy theory. The fuzzy rule banks employed in the fuzzy-logic-based system were generated from experts' experience as well as from observations made during the experiments, whereas the rule banks employed in the fuzzy-nets-based system were self-extracted from the training data by the fuzzy-nets self-learning algorithm. The predicted surface roughness values were compared with the corresponding measured values. The average prediction accuracies with the three algorithms, linear multiple regression, fuzzy logic, and fuzzy nets, were 92.78%, 89.06%, and 95.70%, respectively. The use of the accelerometer was found valuable in increasing the prediction accuracy. The fuzzy-nets-based In-process Surface Roughness Prediction System was considered the best of the three tested systems. This conclusion rests not only on the highest average prediction accuracy achieved, but also on the self-learning ability of the fuzzy nets algorithm.
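    A minimal sketch of the multiple-regression variant described above, assuming a simple linear model Ra = b0 + b1·feed + b2·speed + b3·depth + b4·vibration (the variable names and dummy numbers below are purely illustrative placeholders, not the experimental data):

```python
import numpy as np

# Dummy training rows: feed rate, spindle speed, depth of cut, accelerometer vibration
X = np.array([
    [0.010, 2500.0, 0.030, 0.12],
    [0.012, 2000.0, 0.040, 0.18],
    [0.008, 3000.0, 0.020, 0.09],
    [0.014, 1800.0, 0.050, 0.22],
    [0.011, 2200.0, 0.035, 0.15],
])
y = np.array([1.45, 1.90, 1.10, 2.30, 1.60])  # measured surface roughness (placeholder values)

# Least-squares fit of Ra = b0 + b1*feed + b2*speed + b3*depth + b4*vibration
A = np.hstack([np.ones((X.shape[0], 1)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_roughness(feed, speed, depth, vibration):
    """In-process prediction from the fitted linear model."""
    return float(coeffs @ np.array([1.0, feed, speed, depth, vibration]))
```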

    Automatic Signature Verification: The State of the Art

    Full text link

    Building a Strong Undergraduate Research Culture in African Universities

    Get PDF
Africa had a late start in the race to set up universities with strong research fundamentals. According to Mamdani [5], the first colonial universities were few and far between: Makerere in East Africa, Ibadan and Legon in West Africa. This last place in the race, compared to other continents, has had tremendous implications for the continent's development plans. For Africa, the race has been difficult, from a late start to an insurmountable litany of problems that include difficulty in equipment acquisition, lack of capacity, limited research and development resources, and lack of investment in local universities. In fact, most of these universities are very recent, with all but a few less than 50 years old. To reduce the labor costs of shipping Europeans to Africa to do mere clerical jobs, the colonial masters started training "workshops", calling them technical or business colleges. According to Mamdani, meeting colonial needs was to be achieved while avoiding the "Indian disease" in Africa -- that is, the development of an educated middle class, a group most likely to carry the virus of nationalism. Upon independence, most of these "workshops" were turned into national "universities", but with no clear role in national development. These national "universities" catered for the children of the new African political elites. Through the seventies and eighties, most African universities were still without development agendas and were still doing business as usual. Meanwhile, governments strapped for money saw no need to put more scarce resources into big white elephants. By the mid-eighties, even the UN and IMF were calling for a limit on funding for African universities. In today's African university, the traditional curiosity-driven research model has been replaced by a market-driven model dominated by a consultancy culture, according to Mamdani (Mamdani, Mail and Guardian Online). The prevailing research culture has deteriorated as intellectual life in universities has been reduced to bare-bones classroom activity, while seminars and workshops have migrated to hotels, with workshop attendance coming with transport allowances and per diems (Mamdani, Mail and Guardian Online). There is a need to remedy this situation, and that is the focus of this paper.

    Multi-feature approach for writer-independent offline signature verification

    Get PDF
Some of the fundamental problems facing handwritten signature verification are the large number of users, the large number of features, the limited number of reference signatures for training, the high intra-personal variability of the signatures, and the unavailability of forgeries as counterexamples. This research first presents a survey of offline signature verification techniques, focusing on the feature extraction and verification strategies. The goal is to present the most important advances, as well as the current challenges, in this field. Of particular interest are the techniques that allow for designing a signature verification system based on a limited amount of data. Next, a novel offline signature verification system is presented, based on multiple feature extraction techniques, dichotomy transformation and boosting feature selection. Using multiple feature extraction techniques increases the diversity of information extracted from the signature, thereby producing features that mitigate intra-personal variability, while dichotomy transformation ensures writer-independent classification, thus relieving the verification system from the burden of a large number of users. Finally, using boosting feature selection allows for a low-cost writer-independent verification system that selects features while learning. As such, the proposed system provides a practical framework to explore and learn from problems with numerous potential features. Comparison with simulation results from systems found in the literature confirms the viability of the proposed system, even when only a single reference signature is available. The proposed system provides an efficient solution to a wide range of problems (e.g. biometric authentication) with limited training samples, new training samples emerging during operations, numerous classes, and few or no counterexamples.
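    A minimal sketch of the dichotomy-transformation idea mentioned above (illustrative only; feature extraction and the boosted classifier are omitted, and all names and dimensions are hypothetical):

```python
import numpy as np

def dichotomy_transform(u, v):
    """Map a pair of signature feature vectors into the distance space |u - v|.
    In this space, pairs from the same writer form one class and pairs from
    different writers form the other, so a single two-class classifier can
    verify signatures independently of writer identity."""
    return np.abs(np.asarray(u, dtype=float) - np.asarray(v, dtype=float))

# Illustrative usage with dummy 8-dimensional feature vectors
rng = np.random.default_rng(0)
reference, questioned_same, questioned_other = rng.random((3, 8))
x_within  = dichotomy_transform(reference, questioned_same)   # labelled "genuine" pair
x_between = dichotomy_transform(reference, questioned_other)  # labelled "impostor" pair
```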

    Multi-classifier systems for off-line signature verification

    Get PDF
Handwritten signatures are behavioural biometric traits that are known to incorporate a considerable amount of intra-class variability. The Hidden Markov Model (HMM) has been successfully employed in many off-line signature verification (SV) systems due to the sequential nature and variable size of the signature data. In particular, the left-to-right topology of HMMs is well adapted to the dynamic characteristics of occidental handwriting, in which the hand movements are always from left to right. As with most generative classifiers, HMMs require a considerable amount of training data to achieve a high level of generalization performance. Unfortunately, the number of signature samples available to train an off-line SV system is very limited in practice. Moreover, only random forgeries are employed to train the system, which must in turn discriminate between genuine samples and random, simple and skilled forgeries during operations; these last two forgery types are not available during the training phase. The approaches proposed in this Thesis employ the concept of multi-classifier systems (MCS) based on HMMs to learn signatures at several levels of perception. By extracting a high number of features, a pool of diversified classifiers can be generated using random subspaces, which overcomes the problem of having a limited amount of training data. Based on the multi-hypotheses principle, a new approach for combining classifiers in the ROC space is proposed. A technique to repair concavities in ROC curves allows for overcoming the problem of having a limited number of genuine samples and, especially, for evaluating the performance of biometric systems more accurately. A second important contribution is the proposal of a hybrid generative-discriminative classification architecture. The use of HMMs as feature extractors in the generative stage, followed by Support Vector Machines (SVMs) as classifiers in the discriminative stage, allows for a better design not only of the genuine class, but also of the impostor class. Moreover, this approach provides more robust learning than a traditional HMM-based approach when a limited amount of training data is available. The last contribution of this Thesis is the proposal of two new strategies for the dynamic selection (DS) of ensembles of classifiers. Experiments performed with the PUCPR and GPDS signature databases indicate that the proposed DS strategies achieve a higher level of performance in off-line SV than other reference DS and static selection (SS) strategies from the literature.
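    A minimal sketch of the random-subspace idea used to build the diversified pool of classifiers (illustrative only; the training of the base HMM/SVM classifiers is omitted and the sizes below are hypothetical):

```python
import numpy as np

def random_subspaces(n_features, n_classifiers, subspace_size, seed=0):
    """Draw one random feature subset per base classifier (random subspace method).
    Each base classifier of the ensemble is then trained only on its own slice of
    the extracted signature features, which diversifies the pool even when few
    training signatures are available."""
    rng = np.random.default_rng(seed)
    return [rng.choice(n_features, size=subspace_size, replace=False)
            for _ in range(n_classifiers)]

# e.g. 60 base classifiers, each seeing 32 of 512 extracted features (illustrative numbers)
subspaces = random_subspaces(n_features=512, n_classifiers=60, subspace_size=32)
feature_matrix = np.random.rand(10, 512)      # dummy matrix: signatures x extracted features
first_view = feature_matrix[:, subspaces[0]]  # training view for base classifier 0
```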

    WEATHER LORE VALIDATION TOOL USING FUZZY COGNITIVE MAPS BASED ON COMPUTER VISION

    Get PDF
Published Thesis
    The creation of scientific weather forecasts is troubled by many technological challenges (Stern & Easterling, 1999), while their utilization is generally dismal. Consequently, the majority of small-scale farmers in Africa continue to consult some form of weather lore to reach various cropping decisions (Baliscan, 2001). Weather lore is a body of informal folklore (Enock, 2013) associated with the prediction of the weather and based on indigenous knowledge and human observation of the environment. As such, it tends to be more holistic and more localized to the farmers' context. However, weather lore has limitations; for instance, it is unable to offer forecasts beyond a season. Different types of weather lore exist, utilizing almost all available human senses (feel, smell, sight and hearing). Of all the types of weather lore in existence, it is the visual or observed weather lore that is mostly used by indigenous societies to come up with weather predictions. On the other hand, meteorologists continue to treat this knowledge as superstition, partly because there is no means to scientifically evaluate and validate it. The visualization and characterization of visual sky objects (such as the moon, clouds, stars, and rainbows) in forecasting weather are significant subjects of research. To realize the integration of visual weather lore in modern weather forecasting systems, there is a need to represent and scientifically substantiate this form of knowledge. This research was aimed at developing a method for verifying visual weather lore that is used by traditional communities to predict weather conditions. To realize this verification, fuzzy cognitive mapping was used to model and represent causal relationships between selected visual weather lore concepts and weather conditions. The traditional knowledge used to produce these maps was obtained through case studies of two communities (in Kenya and South Africa). These case studies were aimed at understanding the weather lore domain as well as the causal effects between meteorological and visual weather lore. In this study, common astronomical weather lore factors related to cloud physics were identified as: bright stars, dispersed clouds, dry weather, dull stars, feathery clouds, gathering clouds, grey clouds, high clouds, layered clouds, low clouds, stars, medium clouds, and rounded clouds. Relationships between the concepts were also identified and formally represented using fuzzy cognitive maps. In implementing the verification tool, machine vision was used to recognize sky objects captured by a sky camera, while pattern recognition was employed in benchmarking and scoring the objects. A wireless weather station was used to capture real-time weather parameters. The verification tool was then designed and realized in the form of a software artefact, which integrated both computer vision and fuzzy cognitive mapping for experimenting with visual weather lore and for verification using various statistical forecast skills and metrics. The tool consists of four main sub-components: (1) machine vision, which recognizes sky objects using support vector machine classifiers with shape-based feature descriptors; (2) pattern recognition, to benchmark and score objects using pixel orientations, Euclidean distance, the Canny edge detector and the grey-level co-occurrence matrix; (3) fuzzy cognitive mapping, used to represent knowledge (i.e. 
the active Hebbian learning algorithm was used to learn until convergence); and (4) a statistical computing component used for verification and forecast skill scores, including the Brier score and contingency tables for deterministic forecasts. Rigorous evaluation of the verification tool was carried out using independent real-time images (not used in the training and testing phases) from Bloemfontein, South Africa, and Voi, Kenya. The real-time images were captured using a sky camera with GPS location services. The results of the implementation were tested for the selected weather conditions (for example rain, heat, cold, and dry conditions) and found to be acceptable (the verified prediction accuracies were over 80%). The recommendation of this study is to apply the implemented method to processing tasks aimed at verifying all other types of visual weather lore. In addition, the use of the method developed requires the implementation of modules for processing and verifying other types of weather lore, such as sounds and symbols of nature. Since time immemorial, from Australia to Asia and Africa to Latin America, local communities have relied on weather lore observations to predict seasonal weather as well as its effects on their livelihoods (Alcock, 2014). This is mainly based on many years of personal experience in observing weather conditions. However, when it comes to predictions for longer lead times (i.e. over a season), weather lore is uncertain (Hornidge & Antweiler, 2012). This uncertainty has partly contributed to the current situation, in which meteorologists and other scientists continue to treat weather lore as superstition (United Nations, 2004) that is not capable of predicting weather. One of the problems in testing the confidence of weather lore in predicting weather is the wide variety of weather lore found in the details of indigenous sayings, which are tightly coupled to locality and pattern variations (Oviedo et al., 2008). This traditional knowledge is entrenched within the day-to-day socio-economic activities of the communities using it and is not globally available for comparison and validation (Huntington, Callaghan, Fox, & Krupnik, 2004). Further, this knowledge is based on local experience that lacks benchmarking techniques, so harmonizing and integrating it within science-based weather forecasting systems is a daunting task (Hornidge & Antweiler, 2012). It is partly for this reason that the question of validation of weather lore has not yet been substantially investigated. Sufficiently expanded processes of gathering weather observations, combined with comparison and validation, can produce useful information. Since forecasting weather accurately is a challenge even with the latest supercomputers (BBC News Magazine, 2013), validated weather lore can be useful if it is incorporated into modern weather prediction systems. Validation of traditional knowledge is a necessary step in building integrated knowledge-based systems. Traditional knowledge incorporated into knowledge-based systems has to be verified to enhance the systems' reliability. Weather lore knowledge exists in different forms as identified by traditional communities; hence it needs to be tied together for comparison and validation. 
The development of a weather lore validation tool that integrates a framework for acquiring weather data with methods of representing weather lore in verifiable forms can be a significant step towards validating weather lore against actual weather records from conventional weather-observing instruments. Successfully validating weather lore could create the opportunity to integrate acceptable weather lore with modern systems of weather prediction, improving actionable information for decision making that relies on seasonal weather prediction. In this study, a hybrid method is developed that combines computer vision and fuzzy cognitive mapping techniques for verifying visual weather lore. The verification tool was designed with forecasting based on mimicking visual perception and fuzzy thinking based on the cognitive knowledge of humans. The method gives meaning to humanly perceivable sky objects so that computers can understand, interpret, and approximate visual weather outcomes. Questionnaires were administered in two case study locations (KwaZulu-Natal province in South Africa, and Taita-Taveta County in Kenya) between March and July 2015. The two case studies were conducted by interviewing respondents on how visual astronomical and meteorological weather concepts cause weather outcomes, and were used to identify the causal effects of visual astronomical and meteorological objects on weather conditions. This was followed by finding variations and comparisons between the visual weather lore knowledge in the two case studies. The results from the two case studies were aggregated in terms of seasonal knowledge. The causal links between visual weather concepts were investigated using these two case studies; the results were compared and aggregated to build up common knowledge, and the joint averages of the majority of responses were determined for each set of interacting concepts. The weather lore verification tool consists of input, processing components and output. The inputs to the system are sky image scenes and actual weather observations from wireless weather sensors. The image recognition component performs three sub-tasks: detection of objects (concepts) from image scenes, extraction of detected objects, and approximation of the presence of the concepts by comparing extracted objects to ideal objects. The prediction process uses the approximated concepts generated in the recognition component to simulate scenarios using the knowledge represented in the fuzzy cognitive maps. The verification component evaluates the variation between the predictions and actual weather observations to determine prediction errors and accuracy. To evaluate the tool, daily system simulations were run to predict and record probabilities of weather outcomes (i.e. rain, heat index/hotness, dry, cold index). Weather observations were captured periodically using a wireless weather station. This process was repeated several times until there was sufficient data for the verification process. To match the range of the predicted weather outcomes, the actual weather observations (measurements) were transformed and normalized to the range [0, 1]. In the verification process, comparisons were made between the actual observations and the predicted weather outcome values by computing residuals (error values) from the observations. 
The error values and the squared errors were used to compute the Mean Squared Error (MSE) and the Root Mean Squared Error (RMSE) for each predicted weather outcome. Finally, the validity of the visual weather lore verification model was assessed using data from a different geographical location: daily sky scenes and weather parameters were acquired from Voi, Kenya, from December 2015 to January 2016. The results on the use of hybrid techniques for the verification of weather lore are expected to provide an incentive for integrating indigenous knowledge on weather with modern numerical weather prediction systems for accurate and downscaled weather forecasts.
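    A minimal sketch of the kind of fuzzy-cognitive-map simulation and residual-based verification described above (the sigmoid update rule is the standard FCM formulation, not necessarily the exact one used in the thesis; all names and values are illustrative):

```python
import numpy as np

def fcm_step(state, W, lam=1.0):
    """One fuzzy-cognitive-map iteration: each concept's new activation is a sigmoid
    of its own state plus the weighted influence of the other concepts
    (W[i, j] is the causal weight of concept i on concept j)."""
    return 1.0 / (1.0 + np.exp(-lam * (state + W.T @ state)))

def simulate_fcm(initial_state, W, max_iter=100, tol=1e-4):
    """Iterate the map from the recognized sky-object activations until convergence."""
    state = np.asarray(initial_state, dtype=float)
    for _ in range(max_iter):
        new_state = fcm_step(state, W)
        if np.max(np.abs(new_state - state)) < tol:
            return new_state
        state = new_state
    return state

def verify(predicted, observed):
    """Residual-based verification: MSE and RMSE between predicted and observed outcomes."""
    err = np.asarray(observed, dtype=float) - np.asarray(predicted, dtype=float)
    mse = float(np.mean(err ** 2))
    return mse, float(np.sqrt(mse))
```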

    Development of advanced autonomous learning algorithms for nonlinear system identification and control

    Full text link
    Identification of nonlinear dynamical systems, data stream analysis, and related tasks are usually handled by autonomous learning algorithms such as evolving fuzzy and evolving neuro-fuzzy systems (ENFSs). These are characterized by a single-pass learning mode and an open structure, features that enable them to handle effectively the fast and rapidly changing nature of data streams. The underlying bottleneck of ENFSs lies in their design principle, which involves a high number of free parameters (rule premise and rule consequent) to be adapted in the training process; this figure can even double in the case of a type-2 fuzzy system. To address this gap, a novel ENFS, namely the Parsimonious Learning Machine (PALM), is proposed in this thesis. To reduce the number of network parameters significantly, PALM utilizes a new type of fuzzy rule based on the concept of hyperplane clustering, which has no rule premise parameters. PALM is proposed in both type-1 and type-2 versions, both of which constitute fully dynamic rule-based systems; it is thus capable of automatically generating, merging, and tuning hyperplane-based fuzzy rules in a single-pass manner. Moreover, an extension of PALM, namely recurrent PALM (rPALM), is proposed, which adopts the teacher-forcing mechanism from the deep learning literature. The efficacy of both PALM and rPALM has been evaluated through numerical studies with data streams and the identification of a nonlinear unmanned aerial vehicle system. The proposed models showcase significant improvements in terms of computational complexity and the number of required parameters against several renowned ENFSs, while attaining comparable and often better predictive accuracy. The ENFSs have also been utilized to develop three autonomous intelligent controllers (AICons) in this thesis, namely the Generic (G) controller, the Parsimonious Controller (PAC), and the Reduced Parsimonious Controller (RedPAC). All these controllers start operating from scratch with an empty set of fuzzy rules, and no offline training is required; to cope with the dynamic behavior of the plant, they can add, merge or prune rules on demand. Among the three AICons, the G-controller is built by utilizing an advanced incremental learning machine, namely the Generic Evolving Neuro-Fuzzy Inference System. The integration of generalized adaptive resonance theory provides a compact structure for the G-controller; consequently, a faster evolution of the structure is witnessed, which lowers its computational cost. Another AICon, namely PAC, is rooted in PALM's architecture. Since PALM depends on user-defined thresholds to adapt its structure, these thresholds are replaced in PAC with the concept of the bias-variance trade-off. In RedPAC, the network parameters are further reduced in contrast with PALM-based PAC, with the number of consequent parameters reduced to one parameter per rule. These AICons require very little expert domain knowledge and are developed by incorporating the sliding mode control (SMC) technique. In the G-controller and RedPAC, the control law and the adaptation laws for the consequent parameters are derived from the SMC algorithm to establish a stable closed-loop system, where the stability of these controllers is guaranteed using a Lyapunov function and the uniform asymptotic convergence of the tracking error to zero is ensured through an auxiliary robustifying control term. 
For PAC, the boundedness and convergence of the closed-loop control system's tracking error and of the controller's consequent parameters are confirmed by utilizing the LaSalle-Yoshizawa theorem. The efficacy of the controllers is evaluated by observing various trajectory-tracking performances of unmanned aerial vehicles. The accuracy of these controllers is comparable to or better than that of the benchmark controllers, while the proposed controllers require significantly fewer parameters to attain similar or better tracking performance.
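    A rough sketch of the hyperplane-clustering idea behind PALM's premise-free fuzzy rules (illustrative only; this is not the thesis' exact formulation, and the distance-based firing strength and the gamma parameter below are assumptions):

```python
import numpy as np

def rule_firing_strengths(x, t, W, gamma=1.0):
    """Each rule r is a hyperplane t ≈ W[r] @ [1, x].  A sample (x, t) activates a rule
    more strongly the closer it lies to that rule's hyperplane, so no separate premise
    (membership-function) parameters are needed."""
    x_aug = np.concatenate(([1.0], np.asarray(x, dtype=float)))
    strengths = np.empty(len(W))
    for r, w in enumerate(W):
        # perpendicular distance of (x, t) to the hyperplane w @ [1, x] - t = 0
        dist = abs(w @ x_aug - t) / np.sqrt(1.0 + w[1:] @ w[1:])
        strengths[r] = np.exp(-gamma * dist)
    return strengths / strengths.sum()

def rule_consequent_output(x, W, strengths):
    """Defuzzified output: firing-strength-weighted sum of the per-rule hyperplane outputs."""
    x_aug = np.concatenate(([1.0], np.asarray(x, dtype=float)))
    return float(strengths @ (W @ x_aug))
```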

    A survey of the application of soft computing to investment and financial trading

    Get PDF

    Advances in fuzzy rule-based system for pattern classification

    Get PDF
    Ph.D. (Doctor of Philosophy)