
    A Direct Approach for Determining the Switch Points in the Karnik–Mendel Algorithm

    The Karnik-Mendel algorithm is used to compute the centroid of interval type-2 fuzzy sets, determining the switch points needed for the lower and upper bounds of the centroid through an iterative process. It is commonly acknowledged that there is no closed-form solution for determining such switch points. Many enhanced algorithms have been proposed to improve the computational efficiency of the Karnik-Mendel algorithm; however, all of them are still based on iterative procedures. In this paper, a direct, derivative-based approach for determining the switch points without multiple iterations is proposed, together with a mathematical proof that these switch points correctly determine the lower and upper bounds of the centroid. Experimental simulations show that the direct approach obtains the same switch points but is more computationally efficient than any of the existing (iterative) algorithms. We therefore propose that this algorithm be used in any application of interval type-2 fuzzy sets in which the centroid is required.
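    For context, the iterative procedure that this paper seeks to bypass can be sketched as follows. This is a minimal NumPy rendering of the standard Karnik-Mendel iteration for one centroid bound; the function name, the convergence test and the example call are our own choices, not taken from the paper.

```python
import numpy as np

def km_centroid_bound(x, lmf, umf, side="left", max_iter=100):
    """Iterative Karnik-Mendel computation of one centroid bound.

    x    : sorted 1-D array of domain points
    lmf  : lower membership grades at x
    umf  : upper membership grades at x
    side : "left" for the lower bound c_l, "right" for the upper bound c_r
    Returns the bound and the switch point index it converged to.
    """
    x, lmf, umf = map(np.asarray, (x, lmf, umf))
    theta = (lmf + umf) / 2.0                       # initial weights
    c = np.dot(x, theta) / np.sum(theta)
    k = 0
    for _ in range(max_iter):
        # switch point: x[k] <= c <= x[k+1]
        k = int(np.clip(np.searchsorted(x, c) - 1, 0, len(x) - 2))
        if side == "left":   # upper MF left of the switch point, lower MF to its right
            theta = np.where(np.arange(len(x)) <= k, umf, lmf)
        else:                # mirrored weighting for the right bound
            theta = np.where(np.arange(len(x)) <= k, lmf, umf)
        c_new = np.dot(x, theta) / np.sum(theta)
        if np.isclose(c_new, c):                    # switch point stopped moving
            return c_new, k
        c = c_new
    return c, k

# Example: c_l, _ = km_centroid_bound(x, lmf, umf, "left") on a discretised IT2 set.
```

    The paper's contribution is to locate the switch point directly from derivative conditions, rather than through the loop above.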

    AN INTERVAL TYPE 2 FUZZY EVIDENTIAL REASONING APPROACH TO PERSONNEL RECRUITMENT

    The recruitment process is a procedure for selecting, among different applicants, the candidate who best suits the qualifications required by a given institution. Owing to its multi-criteria nature, the recruitment process involves numerous contradictory and incommensurable criteria based on quantitative and qualitative measurements. The evaluation of quantitative criteria does not always depend on the judgement of an expert, since such criteria are expressed in monetary terms or engineering measurements, whereas the evaluation of qualitative criteria depends on the subjective judgement of the decision maker, a form of human evaluation that is often characterized by subjectivity and uncertainty in decision making. Given the uncertain, ambiguous, and vague nature of the recruitment process, there is a need for a methodology that can resolve the various uncertainties inherent in human evaluation during decision making. This work therefore proposes an interval type-2 fuzzy evidential reasoning approach to the recruitment process. The approach has three phases. In the first phase, to capture word uncertainty, the interval type-2 (IT2) fuzzy set Hao and Mendel Approach (HMA) is used to model the qualification requirements for the recruitment process; this caters for both intra- and inter-uncertainty in decision makers' judgements and reflects the agreement of all subjects (decision makers), the regular overlap of subject data intervals, and the manner in which data intervals are collectively classified into their respective footprints of uncertainty. In the second phase, the interval type-2 fuzzy analytic hierarchy process is employed as the weighting model to determine the weight of each criterion obtained from the decision makers. In the third phase, interval type-2 fuzzy sets are hybridized with the evidential reasoning ranking algorithm to evaluate each applicant and determine their final score, in order to choose the most suitable candidate for recruitment. Phases two and three are implemented in the Java programming language. Applying the proposed approach to the recruitment process resolves both intra- and inter-uncertainty in decision makers' judgements and allows consistent ranking even in the presence of incomplete requirements.
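    The abstract describes a three-phase pipeline (HMA word modelling, IT2 fuzzy AHP weighting, evidential reasoning ranking). As a rough illustration only, the sketch below reduces the final phase to its simplest form: interval-valued criterion scores aggregated with crisp weights and ranked by interval midpoint. The applicant names, weights and scores are invented, and the full evidential reasoning combination is not reproduced.

```python
def interval_weighted_score(scores, weights):
    """Aggregate interval-valued criterion scores [lo, hi] with crisp weights.

    scores  : list of (lo, hi) tuples, one per criterion, on a common scale
    weights : non-negative criterion weights (e.g. produced by a fuzzy AHP step)
    Returns the aggregated score interval for one applicant.
    """
    total = sum(weights)
    lo = sum(w * s_lo for w, (s_lo, _) in zip(weights, scores)) / total
    hi = sum(w * s_hi for w, (_, s_hi) in zip(weights, scores)) / total
    return lo, hi

# Hypothetical applicants scored on three criteria, ranked by interval midpoint
applicants = {
    "A": [(0.6, 0.8), (0.5, 0.7), (0.9, 1.0)],
    "B": [(0.7, 0.9), (0.4, 0.6), (0.6, 0.8)],
}
weights = [0.5, 0.3, 0.2]
ranking = sorted(applicants,
                 key=lambda a: -sum(interval_weighted_score(applicants[a], weights)) / 2)
print(ranking)
```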

    Study on sensible beginning divided-search enhanced Karnik-Mendel algorithms for centroid type-reduction of general type-2 fuzzy logic systems

    General type-2 fuzzy logic systems (GT2 FLSs) based on the alpha-plane representation of GT2 fuzzy sets (FSs) have attracted considerable attention in recent years. For the kernel type-reduction (TR) block of GT2 FLSs, the enhanced Karnik-Mendel (EKM) algorithm is the most popular approach. This paper proposes the sensible beginning divided-search EKM (SBDEKM) algorithms for completing the centroid TR of GT2 FLSs. Computer simulations are provided to show the performance of the SBDEKM algorithms. Compared with the EKM algorithms and the sensible beginning EKM (SBEKM) algorithms, the SBDEKM algorithms achieve almost the same accuracy with better computational efficiency.
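    For reference, alpha-plane based centroid type-reduction can be sketched as below. Each alpha-plane is an interval type-2 set whose centroid interval is computed here by a brute-force switch-point search rather than by the EKM/SBEKM/SBDEKM accelerations the paper studies; the function and parameter names are assumptions.

```python
import numpy as np

def it2_centroid(x, lmf, umf):
    """Centroid interval [c_l, c_r] of an interval type-2 fuzzy set,
    found by enumerating every possible switch point (O(N^2), but simple)."""
    x, lmf, umf = map(np.asarray, (x, lmf, umf))
    idx = np.arange(len(x))
    c_l, c_r = np.inf, -np.inf
    for k in range(len(x) - 1):
        w_left = np.where(idx <= k, umf, lmf)     # candidate weighting for c_l
        w_right = np.where(idx <= k, lmf, umf)    # candidate weighting for c_r
        c_l = min(c_l, np.dot(x, w_left) / w_left.sum())
        c_r = max(c_r, np.dot(x, w_right) / w_right.sum())
    return c_l, c_r

def gt2_centroid_by_alpha_planes(x, lmf_of_alpha, umf_of_alpha, alphas):
    """Centroid type-reduction of a general type-2 set: type-reduce each
    alpha-plane independently and report its centroid interval per level."""
    return {a: it2_centroid(x, lmf_of_alpha(a), umf_of_alpha(a)) for a in alphas}
```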

    Circumventing the fuzzy type reduction for autonomous vehicle controller

    Fuzzy type-2 controllers can easily deal with system nonlinearity and utilise human expertise to solve many complex control problems; they are also very good at processing uncertainty, which exists in many robotic systems such as autonomous vehicles. However, their computational cost is high, especially at the type-reduction stage. This research aims to reduce the computational cost of the type-reduction stage, so as to achieve faster execution and increase the number of behaviours that can be operated on one microprocessor. Proposed here are adaptive integration principles with a binary successive search technique to locate the straight or semi-straight segments of a fuzzy set and use them to achieve faster weighted-average computation; this computation is important because it runs frequently in many type reducers. A variable adaptation rate is suggested during the type-reduction iterations to reduce the computational cost further. The influence of the proposed approaches on the type-2 fuzzy controller's error has been analysed mathematically and then measured experimentally using a wall-following behaviour, the most important behaviour for many autonomous vehicles. The resulting execution time-gain of the proposed technique reaches 200%, evaluated with respect to the execution time of the original, unmodified type-reduction procedure. This study also develops a new accelerated version of the enhanced Karnik-Mendel type reducer using better initialisations and a better indexing scheme; the resulting time-gain reaches 170% with respect to the original version. A further cut in the type-reduction time is achieved by proposing a One-Go type-reduction procedure, which reduces multiple sets together in one pass and thus eliminates much of the redundant calculation needed to carry out the reductions individually. All the proposed type-reduction enhancements were evaluated in terms of their execution time-gain and performance error using every possible fuzzy firing-level combination. Tests were then performed using a real autonomous vehicle navigating a relatively complex arena with acute, right, obtuse, and reflex-angled corners, to ensure that a wide variety of operating conditions was evaluated. A simplified state-hold technique using Schmitt-trigger principles and dynamic sense-pattern control was suggested and implemented to keep the rule base small and to obtain a more accurate evaluation of the type-reduction stages.
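    A minimal sketch of the "straight segments" idea follows: sub-intervals of a membership function that are close to linear are integrated in one shot using closed-form trapezoid expressions, and only the remaining sub-intervals are split in half, echoing the binary successive search in spirit. The tolerance, recursion scheme and triangular example are our own assumptions; the variable adaptation rate and the One-Go procedure are not reproduced here.

```python
def adaptive_centroid(mu, a, b, tol=1e-3):
    """Weighted average (centroid) of a membership function mu on [a, b].

    A sub-interval is treated as a straight segment, and its area and moment
    are taken from the closed-form trapezoid expressions, only if its midpoint
    deviates from the chord by less than tol; otherwise it is split in half."""
    def _integrate(lo, hi, depth=0):
        mid = 0.5 * (lo + hi)
        f_lo, f_hi = mu(lo), mu(hi)
        chord = 0.5 * (f_lo + f_hi)                 # value the chord predicts at mid
        if abs(mu(mid) - chord) < tol or depth > 40:
            area = 0.5 * (f_lo + f_hi) * (hi - lo)  # exact for a linear segment
            if area == 0.0:
                return 0.0, 0.0
            # centroid of the trapezoid under a linear segment (closed form)
            xbar = lo + (hi - lo) * (f_lo + 2.0 * f_hi) / (3.0 * (f_lo + f_hi))
            return area, area * xbar
        a1, m1 = _integrate(lo, mid, depth + 1)
        a2, m2 = _integrate(mid, hi, depth + 1)
        return a1 + a2, m1 + m2
    area, moment = _integrate(a, b)
    return moment / area

# Hypothetical triangular membership function peaking at x = 2 on [0, 6]
tri = lambda x: max(0.0, min(x / 2.0, (6.0 - x) / 4.0))
print(adaptive_centroid(tri, 0.0, 6.0))             # approaches (0 + 2 + 6) / 3
```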

    Online Mixed Packing and Covering

    In many problems, the inputs arrive over time and must be dealt with irrevocably when they arrive. Such problems are online problems. A common method of solving online problems is to first solve the corresponding linear program and then round the fractional solution online to obtain an integral solution. We give algorithms for solving linear programs with mixed packing and covering constraints online. We first consider mixed packing and covering linear programs, where packing constraints are given offline and covering constraints are received online. The objective is to minimize the maximum multiplicative factor by which any packing constraint is violated, while satisfying the covering constraints. No prior sublinear competitive algorithms are known for this problem. We give the first such algorithm: a polylogarithmic-competitive algorithm for solving mixed packing and covering linear programs online. We also show a nearly tight lower bound. Our techniques for the upper bound use an exponential penalty function in conjunction with multiplicative updates. While exponential penalty functions have previously been used to solve linear programs approximately offline, offline algorithms know the constraints beforehand and can optimize greedily. In contrast, when constraints arrive online, updates need to be more complex. We apply our techniques to solve two online fixed-charge problems with congestion. These problems are motivated by applications in machine scheduling and facility location. The linear program for these problems is more complicated than mixed packing and covering, and presents unique challenges. We show that our techniques combined with a randomized rounding procedure give polylogarithmic-competitive integral solutions. These problems generalize online set-cover, for which there is a polylogarithmic lower bound. Hence, our results are close to tight.
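    As a rough illustration only, the toy sketch below shows the flavour of multiplicative updates guided by an exponential packing penalty, with covering rows arriving one at a time. It is not the paper's algorithm and carries none of its competitive guarantees; all names, constants and the update rule are invented.

```python
import math

def toy_online_covering(packing, n, covering_stream, eps=0.1):
    """Toy multiplicative-update sketch: packing rows p (p . x <= 1) are known
    offline, covering rows c (c . x >= 1) arrive online.  When a covering row
    is violated, the variables that help it are raised multiplicatively, and a
    variable's raise is damped by an exponential penalty on the packing rows
    it loads."""
    x = [1.0 / (2 * n)] * n                       # small uniform start
    for c in covering_stream:
        if not any(ci > 0 for ci in c):           # nothing can satisfy this row
            continue
        while sum(ci * xi for ci, xi in zip(c, x)) < 1.0:
            for i in range(n):
                if c[i] <= 0:
                    continue
                # exponential penalty: how loaded are the packing rows using x_i
                load = max((math.exp(sum(p[k] * x[k] for k in range(n))) * p[i]
                            for p in packing if p[i] > 0), default=0.0)
                x[i] *= 1.0 + eps * c[i] / (1.0 + load)
    return x

# Example: one packing row x1 + x2 <= 1, two covering rows arriving online
print(toy_online_covering([[1.0, 1.0]], 2, [[1.0, 0.0], [0.0, 2.0]]))
```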

    K-means for massive data

    The K-means algorithm is undoubtedly one of the most popular clustering analysis techniques, due to its ease of implementation, straightforward parallelizability and competitive computational complexity when compared to more sophisticated clustering alternatives. Unfortunately, the progressive growth of the amount of data that needs to be analyzed, in a wide variety of scientific fields, represents a significant challenge for the K-means algorithm, since its time complexity is dominated by the number of distance computations, which is linear with respect to both the number of instances and the dimensionality of the problem. This hampers its scalability to such massive data sets. Another major drawback of the K-means algorithm is its high dependency on the initial conditions, which not only may affect the quality of the obtained solution, but may also have a major impact on its computational load: a poor initialization can, for instance, lead to an exponential running time in the worst-case scenario.

    In this dissertation we tackle all these difficulties. Initially, we propose an approximation to the K-means problem, the Recursive Partition-based K-means algorithm (RPKM). This approach consists of recursively applying a weighted version of the K-means algorithm over a sequence of spatial partitions of the data set. From one iteration to the next, a more refined partition is constructed and the process is repeated using the optimal set of centroids obtained at the previous iteration as initialization. From a practical standpoint, such a process reduces the computational load of the K-means algorithm, as the number of representatives at each iteration is meant to be much smaller than the number of instances of the data set. On the other hand, both phases of the algorithm are embarrassingly parallel. From the theoretical standpoint, and regardless of the selected partition strategy, one can guarantee the non-repetition of the clusterings generated at each RPKM iteration, which ultimately implies a reduction of the total number of K-means iterations, as well as leading, in most cases, to a monotone decrease of the overall error function.

    Afterwards, we report on an RPKM-type approach, the Boundary Weighted K-means algorithm (BWKM). For this technique the data set partition is based on an adaptive mesh that adjusts the size of each grid cell to maximize the chance that it contains only instances of the same cluster. The goal is to focus most of the computational resources on those regions where it is harder to determine the correct cluster assignment of the original instances, which is the main source of error for our approximation. For such a construction, it can be proved that if all the cells of a spatial partition are well assigned (contain instances of the same cluster) at the end of a BWKM step, then the obtained clustering is actually a fixed point of the K-means algorithm over the entire data set, generated after using only a small number of representatives in comparison to the actual size of the data set. Furthermore, if, for a certain step of BWKM, this property can be verified at consecutive weighted Lloyd's iterations, then the error of our approximation also decreases monotonically. From the practical standpoint, BWKM was compared to the state of the art: K-means++, Forgy K-means, Markov chain Monte Carlo K-means and Minibatch K-means.
    The obtained results show that BWKM commonly converged to solutions with a relative error of under 1% with respect to the considered methods, while using a much smaller number of distance computations (up to 7 orders of magnitude fewer). Even though the computational cost of BWKM is linear with respect to the dimensionality, its error guarantees are mainly related to the diagonal length of the grid cells, meaning that, as the dimensionality of the problem increases, it becomes harder for BWKM to maintain such competitive performance. Taking this into consideration, we developed a fully parallelizable feature selection technique intended for the K-means algorithm, the Bounded Dimensional Distributed K-means algorithm (BDDKM). This approach consists of applying any heuristic for the K-means problem over multiple subsets of dimensions (each bounded in size by a predefined constant, m << d) and using the obtained clusterings to upper-bound the increase in the K-means error when deleting a given feature; the m features with the largest such bounds are then selected. Not only can each step of BDDKM be easily parallelized, but its computational cost is dominated by that of the selected heuristic (on m dimensions), which makes it a suitable dimensionality reduction alternative for BWKM on large data sets. Besides providing a theoretical bound for the solution obtained via BDDKM with respect to the optimal K-means clustering, we analyze its performance in comparison to well-known feature selection and feature extraction techniques. Such an analysis shows that BDDKM consistently obtains results with lower K-means error than all the considered feature selection techniques (Laplacian scores, maximum variance and random selection), while also requiring similar or lower computational times than these approaches. Even more interestingly, when compared to feature extraction techniques such as Random Projections, BDDKM also shows a noticeable improvement in both error and computational time.

    In response to the high dependency of the K-means algorithm on its initialization, we finally introduce a cheap Split-Merge step that can be used to re-start the K-means algorithm after reaching a fixed point, Split-Merge K-means (SMKM). Under some settings, one can show that this approach reduces the error of the given fixed point without requiring any further iteration of the K-means algorithm. Moreover, experimental results show that this strategy is able to generate approximations with an associated error that is hard to reach for different multi-start methods, such as multi-start Forgy K-means, K-means++ and Hartigan K-means. In particular, SMKM consistently generated the local minima with the lowest K-means error, reducing the relative error, on average, by over 1 and 2 orders of magnitude with respect to K-means++ and Hartigan K-means, and to Forgy K-means, respectively. Not only does the error of the solution obtained by SMKM tend to be much lower than that of the aforementioned methods, but, in terms of computational resources, SMKM also required a much smaller number of distance computations (about an order of magnitude fewer) to reach the lowest error that those methods achieved.
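    As an illustration of the core subroutine shared by RPKM- and BWKM-style schemes, here is a minimal sketch of weighted Lloyd iterations over partition representatives (cell means weighted by cell counts). Grid construction, the boundary-weighting criterion, BDDKM's feature scoring and the split-merge restart are not reproduced; array shapes, names and the stopping test are assumptions.

```python
import numpy as np

def weighted_kmeans(reps, weights, centroids, n_iter=50):
    """Weighted Lloyd iterations over partition representatives.

    reps      : (R, d) array of cell representatives (e.g. mean of each grid cell)
    weights   : (R,) array of cell point counts
    centroids : (K, d) array used as initialization (warm start between partitions)
    """
    for _ in range(n_iter):
        # assign each representative to its closest centroid
        d2 = ((reps[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        new_centroids = centroids.copy()
        for j in range(len(centroids)):
            mask = labels == j
            if weights[mask].sum() > 0:
                # weighted mean of the representatives assigned to centroid j
                new_centroids[j] = np.average(reps[mask], axis=0, weights=weights[mask])
        if np.allclose(new_centroids, centroids):   # fixed point on this partition
            break
        centroids = new_centroids
    return centroids

# Synthetic example: 200 cell representatives in 2-D, 3 clusters
rng = np.random.default_rng(0)
reps = rng.normal(size=(200, 2))
weights = rng.integers(1, 50, size=200).astype(float)
init = reps[rng.choice(200, size=3, replace=False)]
print(weighted_kmeans(reps, weights, init))
```

    An RPKM-style outer loop would repeat this step on progressively finer partitions, warm-starting each pass from the centroids returned by the previous one.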