A rough set based multi-criteria decision-making method and its application
The full text has been made openly accessible pursuant to the "Law on Amendments to the Higher Education Law and Certain Laws and Decree Laws" published in Official Gazette No. 30352 dated 06.03.2018, and the "Directive on the Collection, Organization and Opening to Access of Graduate Theses in Electronic Format" dated 18.06.2018.

Multi-criteria decision making is one of the methods to which today's managers frequently turn. When data are uncertain or incomplete, existing multi-criteria decision-making methods fall short, and the rough set based multi-criteria decision-making algorithm we propose emerges as the principal means of addressing this shortcoming. At the same time, with rapidly growing data traffic, using the available data efficiently has become an important concern. The rough set concept, first proposed by Pawlak [1] in 1982, is used as an important tool for discovering the required knowledge from large databases. For use in multi-criteria decision-making problems, the rough set concept was derived from the fuzzy logic approach for the analysis of imprecise structures. With its rule-reduction and classification capabilities, rough set theory can be used not only for analysing large data sets but also in multi-criteria decision-making problems. Rough set theory was developed as a branch of fuzzy set theory. In the evaluation of incomplete or uncertain data, the data are analysed using lower and upper approximations. Like fuzzy sets, it has a structure without crisp boundaries. Using incomplete-information analysis and knowledge-base reduction methods, the uncertainty in the data is minimised. For rule extraction and classification from inconsistent, incomplete data structures, rough set theory may well become an increasingly preferred method in the future.
In this study, the basic concepts of rough set theory are presented together with rough set based knowledge discovery and an algorithm developed around the rough set concept; an algorithm for solving the multi-criteria decision-making problem is developed and compared with other MCDM algorithms. Keywords: Rough Set Theory, Multi-Criteria Decision Making, Entropy

The multi-criteria decision-making problem is one of the methods frequently preferred and applied by managers. A multi-criteria decision-making data set may include uncertain or incomplete data; in such cases, reaching a decision becomes difficult or even impossible, and the suggested rough set based multi-criteria decision-making algorithm can solve problems of this kind. At the same time, with rapidly increasing data traffic, the efficient use of existing data has become an important concern. The rough set concept, first proposed by Pawlak in 1982 [1], is used as an important tool for discovering the necessary information from large databases. For multi-criteria decision-making problems, the concept of rough set theory is derived from the fuzzy logic approach to perform the analysis of uncertain structures. With its rule-reduction and classification properties, rough set theory can also be used in multi-criteria decision-making problems alongside the analysis of large data. Like fuzzy sets, rough set theory has a structure that does not impose crisp boundaries. The rough set approach can therefore analyse incomplete, inadequate and ambiguous information, applying incomplete-information analysis and knowledge-base reduction methods in the process. Rough set theory can be used as a natural method for dealing with inconsistent and incomplete information, which is the basic problem of rule extraction and classification. In this study, the basic concepts of rough set theory are given. The algorithm for solving multi-criteria decision making has been developed by considering rough set based knowledge discovery and the rough set concept. Keywords: Rough Set Theory, Multi-Criteria Decision Making, Entropy
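The lower and upper approximations that the abstract describes can be made concrete with a toy example. The sketch below is an illustrative assumption, not the thesis's actual algorithm: objects are partitioned into indiscernibility classes on chosen attributes, the lower approximation collects classes fully contained in a target concept, and the upper approximation collects classes that intersect it (the `value` accessor and the decision table are invented for illustration).

```python
from collections import defaultdict

def equivalence_classes(universe, attrs, value):
    """Partition the universe by indiscernibility: objects with equal values
    on `attrs` fall into the same class. `value(x, a)` is a hypothetical
    accessor returning object x's value for attribute a."""
    classes = defaultdict(set)
    for x in universe:
        classes[tuple(value(x, a) for a in attrs)].add(x)
    return list(classes.values())

def approximations(universe, attrs, value, target):
    """Lower approximation: union of classes fully inside `target`.
    Upper approximation: union of classes that intersect `target`."""
    lower, upper = set(), set()
    for cls in equivalence_classes(universe, attrs, value):
        if cls <= target:
            lower |= cls
        if cls & target:
            upper |= cls
    return lower, upper

# Toy decision table: attribute 0 = colour, attribute 1 = size.
data = {1: ("red", "big"), 2: ("red", "big"), 3: ("red", "small"), 4: ("blue", "small")}
low, up = approximations(set(data), [0, 1], lambda x, a: data[x][a], {1, 3})
# Objects 1 and 2 are indiscernible, so the concept {1, 3} is "rough":
# low == {3}, up == {1, 2, 3}; the boundary region {1, 2} carries the uncertainty.
```

The gap between the two approximations (the boundary region) is exactly where the data are too coarse to decide membership, which is what the abstract means by analysing uncertain data without crisp boundaries.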
Recommended from our members
A knowledge-based machine tool maintenance planning system using case-based reasoning techniques
In advanced manufacturing systems, Computer Numerical Control (CNC) machine tools are important equipment for manufacturing product components of high precision, whilst from an equipment-maintenance point of view they are regarded as the ‘products’ provided by machine tool manufacturers. Therefore, the reliability of CNC machine tools affects not only the quality of the components they manufacture, but also the reputation and profits of equipment suppliers. This paper presents a novel knowledge-based maintenance planning system to facilitate information and knowledge sharing between all stakeholders, including machine tool manufacturers, users (manufacturing systems), maintenance service providers and part suppliers (for machine tools), in the emerging ‘Product-Service’ business model. Case-Based Reasoning principles have been implemented to improve the efficiency of maintenance planning. Ontologies were adopted to represent field knowledge, using adaptation-guided retrieval based on semantic similarity and correlation. The adaptation algorithm has been developed based on Causal Theory and the dependence relationship to generate solutions for the required maintenance problems. The proposed system was implemented using Content Management technologies, which proved to have advantages over traditional database systems in managing engineering knowledge, and has been verified using an example CNC machine tool. Industrial collaborators described the results as very promising and recommended further exploitation in industry.
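The retrieval step at the core of case-based reasoning can be sketched generically. The snippet below is a minimal weighted nearest-neighbour retrieval in Python, not the paper's ontology-driven, semantic-similarity implementation; the case attributes, weights, and maintenance cases are invented for illustration.

```python
def similarity(query, case, weights):
    """Global similarity as a weighted average of exact attribute matches."""
    total = sum(weights.values())
    score = sum(w * (1.0 if query[a] == case[a] else 0.0) for a, w in weights.items())
    return score / total

def retrieve(query, case_base, weights, k=1):
    """Return the k past cases most similar to the query problem."""
    ranked = sorted(case_base, key=lambda c: similarity(query, c["problem"], weights),
                    reverse=True)
    return ranked[:k]

# Hypothetical maintenance case base: symptom description -> recorded fix.
cases = [
    {"problem": {"axis": "X", "symptom": "vibration"}, "solution": "replace spindle bearing"},
    {"problem": {"axis": "Z", "symptom": "position error"}, "solution": "recalibrate encoder"},
]
weights = {"axis": 1.0, "symptom": 2.0}  # symptom weighted higher (an assumption)
best = retrieve({"axis": "X", "symptom": "vibration"}, cases, weights)[0]
# best["solution"] == "replace spindle bearing"
```

In a full CBR cycle the retrieved solution would then be adapted to the new problem and the revised case retained, which is where the paper's adaptation algorithm and dependence relationships come in.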
Enhancing Big Data Feature Selection Using a Hybrid Correlation-Based Feature Selection
This study proposes an alternate data extraction method that combines three well-known feature selection methods for handling large and problematic datasets: the correlation-based feature selection (CFS), best first search (BFS), and dominance-based rough set approach (DRSA) methods. This study aims to enhance the classifier’s performance in decision analysis by eliminating uncorrelated and inconsistent data values. The proposed method, named CFS-DRSA, comprises several phases executed in sequence, with the main phases incorporating two crucial feature extraction tasks. Data reduction comes first, implementing the CFS method with a BFS algorithm. Secondly, a data selection process applies DRSA to generate the optimized dataset. The study thereby aims to reduce computational time complexity and increase classification accuracy. Several datasets with various characteristics and volumes were used in the experimental process to evaluate the proposed method’s credibility. The method’s performance was validated using standard evaluation measures and benchmarked against other established methods such as deep learning (DL). Overall, the proposed work proved that it could assist the classifier in returning a significant result, with an accuracy rate of 82.1% for the neural network (NN) classifier, compared with 66.5% for the support vector machine (SVM) and 49.96% for DL. The one-way analysis of variance (ANOVA) statistical result indicates that the proposed method is an alternative extraction tool for those who have difficulty acquiring expensive big data analysis tools and for those who are new to the data analysis field.

Funding: Ministry of Higher Education under the Fundamental Research Grant Scheme (FRGS/1/2018/ICT04/UTM/01/1); Universiti Teknologi Malaysia (UTM) under Research University Grant Vot-20H04; Malaysia Research University Network (MRUN) Vot 4L876; SPEV project, University of Hradec Kralove, Faculty of Informatics and Management, Czech Republic (ID: 2102–2021), “Smart Solutions in Ubiquitous Computing Environments”.
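The CFS stage of the pipeline described above scores a candidate feature subset by rewarding correlation with the class while penalising redundancy between features. Below is a minimal Python sketch of the standard CFS merit function with a greedy forward search standing in for the paper's best-first search; the toy feature values and names (`f1`, `f2`, `f3`) are invented for illustration.

```python
import math

def pearson(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy) if vx and vy else 0.0

def cfs_merit(subset, features, target):
    """CFS merit: k*r_cf / sqrt(k + k(k-1)*r_ff), where r_cf is the mean
    feature-class correlation and r_ff the mean feature-feature correlation."""
    k = len(subset)
    r_cf = sum(abs(pearson(features[f], target)) for f in subset) / k
    pairs = [(f, g) for i, f in enumerate(subset) for g in subset[i + 1:]]
    r_ff = (sum(abs(pearson(features[f], features[g])) for f, g in pairs) / len(pairs)
            if pairs else 0.0)
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)

def greedy_cfs(features, target):
    """Greedy forward selection: add the feature that most improves merit, stop otherwise."""
    selected, remaining, best_merit = [], list(features), 0.0
    while remaining:
        cand = max(remaining, key=lambda f: cfs_merit(selected + [f], features, target))
        m = cfs_merit(selected + [cand], features, target)
        if m <= best_merit:
            break
        selected.append(cand)
        remaining.remove(cand)
        best_merit = m
    return selected

target = [0, 1, 0, 1, 1, 0]
features = {
    "f1": [0, 1, 0, 1, 1, 0],  # perfectly correlated with the class
    "f2": [0, 0, 1, 1, 0, 1],  # weakly correlated noise
    "f3": [0, 1, 0, 1, 1, 0],  # redundant copy of f1
}
selected = greedy_cfs(features, target)
# selected == ["f1"]: f3 adds no merit (pure redundancy) and f2 lowers it.
```

The merit denominator is what makes the redundant copy `f3` worthless despite its perfect class correlation; in the actual CFS-DRSA pipeline, DRSA would then further refine the surviving features.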