Application of Computational Intelligence Techniques to Process Industry Problems
Over the last two decades there has been substantial progress in the computational
intelligence research field. The fruits of this research effort are powerful techniques
for pattern recognition, data mining, data modelling, etc. These techniques achieve
high performance on traditional data sets such as those in the UCI machine learning
repository. Unfortunately, such data sources usually contain clean data, free of the
outliers, missing values, feature collinearity, and similar problems common to
real-life industrial data. The presence of faulty data samples can have very harmful
effects on a model: if presented during training, they can cause sub-optimal
performance of the trained model or, in the worst case, destroy the knowledge the
model has learnt so far. For these reasons, the application of present modelling
techniques to industrial problems has developed into a research field of its own.
Based on a discussion of the properties and issues of process industry data and of
the state-of-the-art modelling techniques in the process industry, this paper
presents a novel unified approach to the development of predictive models in the
process industry.
Scalable approximate FRNN-OWA classification
Fuzzy Rough Nearest Neighbour classification with Ordered Weighted Averaging operators (FRNN-OWA) is an algorithm that classifies unseen instances according to their membership in the fuzzy upper and lower approximations of the decision classes. Previous research has shown that the use of OWA operators increases the robustness of this model. However, calculating membership in an approximation requires a nearest neighbour search. In practice, the query time complexity of exact nearest neighbour search algorithms in more than a handful of dimensions is near-linear, which limits the scalability of FRNN-OWA. Therefore, we propose approximate FRNN-OWA, a modified model that calculates upper and lower approximations of decision classes using the approximate nearest neighbours returned by Hierarchical Navigable Small Worlds (HNSW), a recent approximate nearest neighbour search algorithm with logarithmic query time complexity at constant near-100% accuracy. We demonstrate that approximate FRNN-OWA is sufficiently robust to match the classification accuracy of exact FRNN-OWA while scaling much more efficiently. We test four parameter configurations of HNSW, and evaluate their performance by measuring classification accuracy and construction and query times for samples of various sizes from three large datasets. We find that with two of the parameter configurations, approximate FRNN-OWA achieves near-identical accuracy to exact FRNN-OWA for most sample sizes, within query times that are up to several orders of magnitude faster.
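The classification rule the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the similarity relation, the additively decreasing OWA weights, and the exact neighbour selection below are assumptions (the paper's contribution is precisely to replace the exact neighbour search with HNSW).

```python
import numpy as np

def owa_weights(k):
    """Additively decreasing OWA weights w_1 >= ... >= w_k (one common
    family; an assumption here)."""
    w = np.arange(k, 0, -1, dtype=float)
    return w / w.sum()

def frnn_owa_classify(X, y, query, k=2):
    """Classify `query` by its OWA-softened membership in the upper and
    lower approximations of each decision class.  Exact neighbour search
    is used here for simplicity."""
    d = np.linalg.norm(X - query, axis=1)
    sim = 1.0 / (1.0 + d)          # a simple similarity relation (assumption)
    w = owa_weights(k)
    best_cls, best_score = None, -1.0
    for c in np.unique(y):
        # upper approximation: soft maximum of similarity to class members
        upper = np.dot(w, np.sort(sim[y == c])[::-1][:k])
        # lower approximation: soft minimum of dissimilarity to non-members
        lower = np.dot(w, np.sort(1.0 - sim[y != c])[:k])
        score = (upper + lower) / 2.0
        if score > best_score:
            best_cls, best_score = c, score
    return best_cls

X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])
print(frnn_owa_classify(X, y, np.array([0.05])))  # → 0
```

Swapping the exact distance computation for an approximate index (e.g. HNSW) changes only how the k candidate neighbours per class are retrieved; the OWA aggregation is unchanged, which is why the model tolerates approximate neighbours well.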
Enhancing Big Data Feature Selection Using a Hybrid Correlation-Based Feature Selection
This study proposes an alternative data extraction method that combines three well-known
feature selection methods for handling large and problematic datasets: correlation-based
feature selection (CFS), best first search (BFS), and the dominance-based rough set
approach (DRSA). The study aims to enhance classifier performance in decision analysis by
eliminating uncorrelated and inconsistent data values. The proposed method, named
CFS-DRSA, comprises several phases executed in sequence, with the main phases
incorporating two crucial feature extraction tasks. Data reduction comes first,
implementing the CFS method with the BFS algorithm; a data selection process then applies
DRSA to generate the optimized dataset. The study thereby aims to reduce computational
time complexity and increase classification accuracy. Several datasets with various
characteristics and volumes were used in the experimental process to evaluate the
proposed method's credibility. The method's performance was validated using standard
evaluation measures and benchmarked against other established methods such as deep
learning (DL). Overall, the proposed method proved able to help the classifier return
significant results, with an accuracy of 82.1% for the neural network (NN) classifier,
compared to 66.5% for the support vector machine (SVM) and 49.96% for DL. The one-way
analysis of variance (ANOVA) statistical result indicates that the proposed method is an
alternative extraction tool for those who have difficulty acquiring expensive big data
analysis tools and those who are new to the data analysis field.
Funding: Ministry of Higher Education under the Fundamental Research Grant Scheme
(FRGS/1/2018/ICT04/UTM/01/1); Universiti Teknologi Malaysia (UTM) under Research
University Grant Vot-20H04; Malaysia Research University Network (MRUN) Vot 4L876; SPEV
project "Smart Solutions in Ubiquitous Computing Environments", University of Hradec
Kralove, Faculty of Informatics and Management, Czech Republic (ID: 2102–2021).
Computing fuzzy rough approximations in large scale information systems
Rough set theory is a popular and powerful machine learning tool. It is especially suitable for dealing with information systems that exhibit inconsistencies, i.e., objects that have the same values for the conditional attributes but a different value for the decision attribute. In line with the emerging granular computing paradigm, rough set theory groups objects together based on the indiscernibility of their attribute values. Fuzzy rough set theory extends rough set theory to data with continuous attributes, and detects degrees of inconsistency in the data. Key to this is turning the indiscernibility relation into a gradual relation, acknowledging that objects can be similar to a certain extent. In very large datasets with millions of objects, computing the gradual indiscernibility relation (or in other words, the soft granules) is very demanding, both in terms of runtime and in terms of memory. It is, however, required for the computation of the lower and upper approximations of concepts in the fuzzy rough set analysis pipeline. Current non-distributed implementations in R are limited by memory capacity. For example, we found that a state-of-the-art non-distributed implementation in R could not handle 30,000 rows and 10 attributes on a node with 62 GB of memory. This is clearly insufficient to scale fuzzy rough set analysis to massive datasets. In this paper we present a parallel and distributed solution based on Message Passing Interface (MPI) to compute fuzzy rough approximations in very large information systems. Our results show that our parallel approach scales with problem size to information systems with millions of objects. To the best of our knowledge, no other parallel and distributed solutions have been proposed so far in the literature for this problem.
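The quantities whose computation the abstract discusses, the gradual indiscernibility relation and the lower and upper approximations, can be sketched for a small dense case as follows. The linear per-attribute similarity and the Łukasiewicz implicator/t-norm are common choices but assumptions here; they also make clear why memory is the bottleneck: the relation R is an n x n matrix.

```python
import numpy as np

def fuzzy_approximations(X, y):
    """Membership of each object in the lower and upper approximation of
    its own decision class.  Assumes non-constant continuous attributes;
    uses the Lukasiewicz implicator I(a,b)=min(1,1-a+b) and t-norm
    T(a,b)=max(0,a+b-1) (illustrative choices)."""
    rng = X.max(axis=0) - X.min(axis=0)
    # gradual indiscernibility: mean per-attribute linear similarity,
    # an n x n matrix -- the memory-heavy "soft granules"
    R = (1.0 - np.abs(X[:, None, :] - X[None, :, :]) / rng).mean(axis=2)
    same = (y[:, None] == y[None, :]).astype(float)
    # lower approximation: inf over all objects of I(R(x,z), class(z))
    lower = np.minimum(1.0, 1.0 - R + same).min(axis=1)
    # upper approximation: sup over all objects of T(R(x,z), class(z))
    upper = np.maximum(0.0, R + same - 1.0).max(axis=1)
    return lower, upper

X = np.array([[0.0], [0.1], [0.9], [1.0]])
y = np.array([0, 0, 1, 1])
lower, upper = fuzzy_approximations(X, y)
```

A distributed implementation as in the paper never materialises R in full; each worker computes the row blocks it needs and reduces them into the per-object minima and maxima.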
Nature-Inspired Adaptive Architecture for Soft Sensor Modelling
This paper gives a general overview of the challenges in the research field of Soft Sensor
building and proposes a novel architecture for building Soft Sensors that copes with the
identified challenges. The architecture is inspired by, and makes use of, nature-related
techniques for computational intelligence. Another aspect addressed by the proposed
architecture is the identified characteristics of process industry data. Data recorded in
the process industry usually contain a certain amount of missing values, or samples
exceeding the meaningful range of the measurements, called data outliers. Other
properties of process industry data that cause problems for modelling are collinearity,
drifting data, and the different sampling rates of the particular hardware sensors. It is
these characteristics that create the need for adaptive behaviour in Soft Sensors. The
architecture reflects this need and provides mechanisms for the adaptation and evolution
of the Soft Sensor at different levels. The adaptation capabilities are provided by
maintaining a variety of rather simple models. These particular models, called paths in
terms of the architecture, can for example focus on different partitions of the input
data space, or provide different adaptation speeds to changes in the data. The actual
modelling techniques involved in the architecture are data-driven computational learning
approaches such as artificial neural networks, principal component regression, etc.
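The idea of maintaining several simple "paths" with different adaptation speeds can be sketched as below. This is only one plausible reading of the architecture: the recursive-least-squares paths, the forgetting factors, and the inverse-error combination rule are all illustrative assumptions, not the paper's design.

```python
import numpy as np

class Path:
    """One 'path': a recursive-least-squares linear model with its own
    forgetting factor lam, giving a distinct adaptation speed."""
    def __init__(self, dim, lam):
        self.lam, self.w, self.P = lam, np.zeros(dim), np.eye(dim) * 1e3
        self.err = 1.0  # smoothed absolute error, used for weighting
    def predict(self, x):
        return float(self.w @ x)
    def update(self, x, target):
        e = target - self.predict(x)
        Px = self.P @ x
        g = Px / (self.lam + x @ Px)          # RLS gain
        self.w += g * e
        self.P = (self.P - np.outer(g, Px)) / self.lam
        self.err = 0.9 * self.err + 0.1 * abs(e)

class SoftSensor:
    """Combine the paths' predictions, weighting each path by the inverse
    of its recent error (one plausible combination scheme)."""
    def __init__(self, dim, lams=(0.9, 0.99, 0.999)):
        self.paths = [Path(dim, lam) for lam in lams]
    def predict(self, x):
        w = np.array([1.0 / (p.err + 1e-9) for p in self.paths])
        preds = np.array([p.predict(x) for p in self.paths])
        return float(w @ preds / w.sum())
    def update(self, x, target):
        for p in self.paths:
            p.update(x, target)

# demo: learn the relation target = 2*x from streaming data
rng = np.random.default_rng(0)
sensor = SoftSensor(dim=1)
for _ in range(200):
    x = rng.uniform(-1.0, 1.0, 1)
    sensor.update(x, 2.0 * x[0])
```

Fast-forgetting paths track drift quickly but are noisier; slow-forgetting paths are stable on stationary data. The error-based weighting lets the ensemble shift between them as the process changes.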