Comparison of different T-norm operators in classification problems
Fuzzy rule-based classification systems are among the most popular fuzzy
modeling systems used in pattern classification problems. This paper
investigates the effect of applying nine different T-norms in fuzzy rule-based
classification systems. In recent research, fuzzy versions of the confidence
and support measures from the field of data mining have been widely used for
both rule selection and rule weighting in the construction of fuzzy rule-based
classification systems. The product has usually served as the T-norm when
calculating these measures. In this paper, different T-norms are used for
calculating the confidence and support measures, so the calculations in the
rule selection and rule weighting steps (in the process of constructing the
fuzzy rule-based classification systems) are modified accordingly. These
changes in the calculations alter the overall accuracy of the resulting
rule-based classification systems. Experimental results obtained on several
well-known data sets show that, in terms of classification accuracy, the best
performance is produced by the Aczel-Alsina operator, the second best by the
Dubois-Prade operator, and the third best by the Dombi operator. In the
experiments, we used 12 data sets with numerical attributes from the
University of California, Irvine (UCI) machine learning repository.
Comment: 6 pages, 1 figure, 4 tables; International Journal of Fuzzy Logic
Systems (IJFLS) Vol.2, No.3, July 201
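To make the abstract's idea concrete, the following is a minimal Python sketch of computing fuzzy confidence and support under interchangeable T-norms. The Aczel-Alsina and Dombi formulas below are the standard parametric definitions; the λ values, helper names, and toy membership degrees are illustrative assumptions, not taken from the paper.

```python
import math
from functools import reduce

def t_product(a, b):
    # product T-norm, the usual default in fuzzy rule-based classifiers
    return a * b

def t_aczel_alsina(a, b, lam=2.0):
    # Aczel-Alsina T-norm (standard parametric form); lam=2.0 is an
    # illustrative choice, not the paper's setting
    if a == 0.0 or b == 0.0:
        return 0.0
    return math.exp(-((-math.log(a)) ** lam + (-math.log(b)) ** lam) ** (1.0 / lam))

def t_dombi(a, b, lam=2.0):
    # Dombi T-norm (standard parametric form)
    if a == 0.0 or b == 0.0:
        return 0.0
    return 1.0 / (1.0 + (((1 - a) / a) ** lam + ((1 - b) / b) ** lam) ** (1.0 / lam))

def compatibility(memberships, tnorm):
    # combine the per-attribute membership degrees of a pattern with the
    # rule antecedent using the chosen T-norm
    return reduce(tnorm, memberships)

def fuzzy_confidence_support(patterns, labels, target_class, tnorm):
    # patterns: per-attribute membership degrees of each training pattern
    # labels:   class label of each pattern
    compat = [compatibility(m, tnorm) for m in patterns]
    total = sum(compat)
    in_class = sum(c for c, y in zip(compat, labels) if y == target_class)
    confidence = in_class / total if total > 0 else 0.0
    support = in_class / len(patterns)
    return confidence, support
```

Swapping `tnorm` between `t_product`, `t_aczel_alsina`, and `t_dombi` changes the compatibility degrees and hence the selected and weighted rules, which is the effect the paper measures.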
Edge-centric queries stream management based on an ensemble model
The Internet of things (IoT) involves numerous devices that can interact with each other or with their environment to collect and process data. The collected data streams are guided to the cloud for further processing and the production of analytics. However, any processing in the cloud, even if it is supported by improved computational resources, suffers from increased latency: the data must travel to the cloud infrastructure, and the resulting analytics must travel back to end users or devices. To minimize this latency, we can perform data processing at the edge of the network, i.e., at the edge nodes. The aim is to deliver analytics and build knowledge close to end users and devices, minimizing the time required to produce responses. Edge nodes are thus transformed into distributed processing points where analytics queries can be served. In this paper, we deal with the problem of allocating queries, defined for producing knowledge, to a number of edge nodes. The aim is to further reduce latency by allocating queries to nodes that exhibit low load (both current and estimated), so that they can provide the final response in the minimum time. Before the allocation, however, we must estimate the computational burden that a query will cause. The allocation is concluded with the assistance of an ensemble similarity scheme responsible for delivering the complexity class of each query. The complexity class can then be matched against the current load of every edge node. We discuss our scheme and, through a large set of simulations and the adoption of benchmarking queries, reveal the potential of the proposed model, supported by numerical results.
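The abstract's pipeline (classify a query's complexity via an ensemble, then allocate it to the least-loaded edge node) could be sketched as follows. The feature vectors, similarity measure, majority-vote ensemble, and complexity-class cost table are all hypothetical stand-ins; the paper's actual ensemble scheme and load estimators are not specified here.

```python
from dataclasses import dataclass
from collections import Counter

def similarity(q, proto):
    # inverse-distance similarity between query feature vectors
    # (an assumed measure; the paper's scheme may differ)
    return 1.0 / (1.0 + sum((a - b) ** 2 for a, b in zip(q, proto)) ** 0.5)

def ensemble_complexity_class(query, experts):
    # each 'expert' holds its own labelled prototypes (features, class);
    # the ensemble takes the majority vote over the experts' decisions
    votes = []
    for prototypes in experts:
        best = max(prototypes, key=lambda p: similarity(query, p[0]))
        votes.append(best[1])
    return Counter(votes).most_common(1)[0][0]

@dataclass
class EdgeNode:
    name: str
    current_load: float
    estimated_load: float

# assumed per-class computational cost of serving a query
COST = {"low": 1.0, "medium": 3.0, "high": 9.0}

def allocate(query, experts, nodes):
    # 1) derive the complexity class, 2) pick the node with the minimum
    # combined (current + estimated) load, 3) charge it the query's cost
    cls = ensemble_complexity_class(query, experts)
    target = min(nodes, key=lambda n: n.current_load + n.estimated_load)
    target.current_load += COST[cls]
    return cls, target.name
```

The key design point mirrored here is that the complexity class decouples query characterization from node selection: nodes only need to expose a load figure, and the ensemble only needs to map queries to classes.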