
    Clustering sustainable suppliers in the plastics industry: A fuzzy equivalence relation approach

    Nowadays, purely economic supply chain management is no longer the norm among companies (especially buyers), as novel dimensions of supply chains, e.g. environmental, sustainability, and risk, play significant roles. In addition, since companies prefer to buy from a group of suppliers, supplier selection is not merely a matter of choosing or qualifying one supplier over the others. Buyers therefore commonly assemble a portfolio of suppliers against multi-dimensional, pre-determined selection criteria. Since sustainable supplier selection criteria are often assessed in linguistic terms, an appropriate clustering approach is required. This paper presents an innovative way of applying fuzzy equivalence relations to cluster sustainable suppliers, developing a comprehensive taxonomy of sustainable supplier selection criteria that includes supply chain risk. Fifteen experts participated in this study to evaluate and cluster 20 suppliers in the plastics industry. Findings reveal that the best partitioning divides the suppliers into two clusters, with 4 (20%) and 16 (80%) suppliers, respectively. The four suppliers in cluster one perform better on supplier/delivery capability, service, risk, and sustainability criteria such as environmental protection/management and green innovation; these factors are critical in clustering and selecting sustainable suppliers. The originality of this study lies in developing an all-inclusive set of criteria for clustering sustainable suppliers and in adding risk factors to the conventional supplier selection criteria. Besides partitioning the suppliers and identifying the best-performing ones, the study also highlights the most influential factors by analysing the suppliers in the best cluster.
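
The clustering technique named in this abstract, fuzzy equivalence relation clustering, follows a standard recipe: build a fuzzy compatibility (similarity) relation over the objects, take its max-min transitive closure to obtain an equivalence relation, then cut it at a threshold λ to get a crisp partition. A minimal sketch of that generic recipe (the 4-supplier similarity matrix and all names below are illustrative, not the paper's data or code):

```python
def max_min_composition(R):
    """Compose a fuzzy relation with itself: (R o R)[i][j] = max_k min(R[i][k], R[k][j])."""
    n = len(R)
    return [[max(min(R[i][k], R[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def transitive_closure(R):
    """Iterate max-min composition until the relation is stable, turning a
    fuzzy compatibility relation into a fuzzy equivalence relation."""
    while True:
        R2 = max_min_composition(R)
        if R2 == R:
            return R
        R = R2

def lambda_cut_clusters(R, lam):
    """Crisp partition: i and j share a cluster when R[i][j] >= lam."""
    clusters, seen = [], set()
    for i in range(len(R)):
        if i in seen:
            continue
        group = [j for j in range(len(R)) if R[i][j] >= lam]
        seen.update(group)
        clusters.append(group)
    return clusters

# Toy similarity matrix for 4 hypothetical suppliers (reflexive, symmetric).
R = [[1.0, 0.8, 0.4, 0.5],
     [0.8, 1.0, 0.4, 0.5],
     [0.4, 0.4, 1.0, 0.7],
     [0.5, 0.5, 0.7, 1.0]]
Rt = transitive_closure(R)
print(lambda_cut_clusters(Rt, 0.7))   # -> [[0, 1], [2, 3]]
```

Varying λ sweeps out a hierarchy of partitions; choosing the "best" partition (two clusters, in the paper's case) is an evaluation step on top of this.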

    Class Association Rules Mining based Rough Set Method

    This paper investigates the mining of class association rules with a rough set approach. In data mining, an association occurs between two sets of elements when one element set happens together with the other. A class association rule set (CARs) is a subset of association rules with classes specified as their consequents. We present an efficient algorithm for mining the finest class rule set, inspired by the Apriori algorithm, in which support and confidence are computed from the elementary sets of the lower approximation defined in rough set theory. The proposed approach has been shown to be very effective, and the rough set approach to class association discovery is much simpler than the classic association method. (Comment: 10 pages, 2 figures)
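
The core rough-set idea here, that rules should be counted over elementary sets lying wholly inside the lower approximation of a class, can be sketched briefly. The decision table, attribute names, and function names below are hypothetical illustrations, not the paper's algorithm:

```python
from collections import defaultdict

def elementary_sets(rows, cond):
    """Partition record indices by their values on the condition attributes."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in cond)].append(i)
    return blocks

def mine_cars(rows, cond, target, min_support=0.0):
    """Emit class association rules from elementary sets that lie wholly inside
    one decision class (the lower approximation), so confidence is always 1."""
    rules = []
    for pattern, block in elementary_sets(rows, cond).items():
        labels = {rows[i][target] for i in block}
        if len(labels) == 1:                     # block is in a lower approximation
            support = len(block) / len(rows)
            if support >= min_support:
                rules.append((pattern, labels.pop(), support))
    return rules

# Toy decision table: (outlook, windy) -> play
rows = [
    {"outlook": "sunny", "windy": "no",  "play": "yes"},
    {"outlook": "sunny", "windy": "no",  "play": "yes"},
    {"outlook": "rain",  "windy": "yes", "play": "no"},
    {"outlook": "rain",  "windy": "no",  "play": "yes"},
    {"outlook": "rain",  "windy": "no",  "play": "no"},   # inconsistent block, skipped
]
for rule in mine_cars(rows, ["outlook", "windy"], "play"):
    print(rule)
```

The inconsistent block ("rain", "no") is excluded automatically, which is exactly what restricting to the lower approximation buys over raw Apriori counting.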

    Fuzzy clustering of homogeneous decision making units with common weights in data envelopment analysis

    Data Envelopment Analysis (DEA) is the most popular mathematical approach to assessing the efficiency of decision-making units (DMUs). In complex organizations, DMUs face heterogeneous conditions regarding the environmental factors that affect their efficiencies. When there is a large number of objects, non-homogeneity of DMUs significantly distorts their efficiency scores and leads to unfair rankings. The aim of this study is to deal with non-homogeneous DMUs by applying a clustering technique before the efficiency analysis. This paper proposes a common set of weights (CSW) model with the ideal point method to develop an identical weight vector for all DMUs. The study proposes a framework for measuring the efficiency of complex organizations, such as banks, that have several operational styles or various objectives. The proposed framework helps managers and decision makers (1) to identify the environmental components influencing the efficiency of DMUs, (2) to use the fuzzy equivalence relation approach proposed here to cluster the DMUs into homogeneous groups, (3) to produce a common set of weights (CSW) for all DMUs, with a model developed here that considers fuzzy data within each cluster, and finally (4) to calculate the efficiency score and overall ranking of the DMUs within each cluster.
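
Once a common set of weights has been derived (step 3 above), scoring and ranking the DMUs in a cluster (step 4) reduces to a ratio of weighted outputs to weighted inputs with the same weight vector for everyone. A toy illustration with made-up weights and DMU data (deriving the CSW itself requires the paper's ideal-point optimisation model, which is not reproduced here):

```python
def csw_efficiency(inputs, outputs, v, u):
    """Efficiency of one DMU under a common set of weights:
    weighted outputs divided by weighted inputs."""
    return (sum(ui * y for ui, y in zip(u, outputs))
            / sum(vi * x for vi, x in zip(v, inputs)))

# Hypothetical common weights shared by every DMU in one homogeneous cluster.
v = [0.5, 0.5]          # input weights
u = [1.0]               # output weight

# Three hypothetical DMUs (e.g. bank branches): (inputs, outputs).
dmus = {
    "A": ([2.0, 4.0], [3.0]),
    "B": ([3.0, 3.0], [2.5]),
    "C": ([4.0, 2.0], [2.0]),
}
scores = {name: csw_efficiency(x, y, v, u) for name, (x, y) in dmus.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(scores)
print(ranking)   # -> ['A', 'B', 'C']
```

Because every DMU is judged with one weight vector, the resulting scores are directly comparable, which is the fairness property the CSW model is after.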

    Risk prediction of product-harm events using rough sets and multiple classifier fusion:an experimental study of listed companies in China

    With the increasing frequency and destructiveness of product-harm events, research on enterprise crisis management has become essential, yet little of the literature thoroughly explores risk prediction methods for product-harm events. In this study, an initial index system for risk prediction was built based on an analysis of the key drivers of a product-harm event's evolution; ultimately, nine risk-forecasting indexes were obtained using rough set attribute reduction. With the four indexes of cumulative abnormal returns as input, fuzzy clustering was used to classify the risk level of a product-harm event into four grades. To control the uncertainty and instability of single classifiers in risk prediction, multiple classifier fusion was introduced and combined with self-organising data mining (SODM), and an SODM-based multiple classifier fusion (SB-MCF) model was presented for product-harm risk prediction. Experimental results based on 165 Chinese listed companies indicate that the SB-MCF model improves average predictive accuracy while reducing variability. Statistical analysis demonstrates that the SB-MCF model significantly outperforms six widely used single classification models (e.g. neural networks, support vector machines, and case-based reasoning) and six other commonly used multiple classifier fusion methods (e.g. majority voting, the Bayesian method, and the genetic algorithm).
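
One of the baseline fusion methods the abstract mentions, majority voting, is simple enough to sketch; the SB-MCF model itself is more elaborate and is not reproduced here. The base-classifier outputs below are invented for illustration:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse the outputs of several classifiers by majority voting.
    Ties are broken in favour of the earliest-listed classifier's vote."""
    fused = []
    for votes in zip(*predictions):           # one tuple of votes per sample
        counts = Counter(votes)
        top = counts.most_common(1)[0][1]     # size of the largest vote block
        winners = [v for v in votes if counts[v] == top]
        fused.append(winners[0])
    return fused

# Risk grades (1-4) predicted by three hypothetical base classifiers
# for four firms.
clf_a = [1, 2, 3, 4]
clf_b = [1, 2, 4, 4]
clf_c = [2, 2, 3, 3]
print(majority_vote([clf_a, clf_b, clf_c]))   # -> [1, 2, 3, 4]
```

The instability the paper targets shows up exactly when such base classifiers disagree, which is what a fusion layer, whether simple voting or SODM-based combination, is meant to smooth out.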

    A Latent Dirichlet Allocation and Fuzzy Clustering Based Machine Learning Model for Text Thesaurus

    It is not feasible to process huge amounts of structured and semi-structured data with manual methods. This study aims to solve that problem through machine learning algorithms. We collected text data on company public opinion with crawlers, used the Latent Dirichlet Allocation (LDA) algorithm to extract keywords from the text, and used fuzzy clustering to group the keywords into different topics. The topic keywords serve as a seed dictionary for new word discovery. To verify the efficiency of machine learning in new word discovery, algorithms based on association rules, N-grams, PMI, and Word2vec were compared on the new word discovery task. The experimental results show that the Word2vec-based model achieves the highest accuracy, recall, and F-value.
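
Of the compared baselines, the PMI approach is the easiest to show concretely: adjacent token pairs that co-occur far more often than chance are candidate new words or collocations. A stdlib-only sketch with an invented toy corpus (the paper's crawled data and thresholds are not reproduced):

```python
import math
from collections import Counter

def pmi_bigrams(tokens, min_count=2):
    """Score adjacent token pairs by pointwise mutual information:
    PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ).
    High-PMI, frequent pairs are candidate new words/collocations."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_uni, n_bi = len(tokens), len(tokens) - 1
    scores = {}
    for (x, y), c in bigrams.items():
        if c < min_count:                      # drop rare, unreliable pairs
            continue
        p_xy = c / n_bi
        p_x, p_y = unigrams[x] / n_uni, unigrams[y] / n_uni
        scores[(x, y)] = math.log2(p_xy / (p_x * p_y))
    return scores

tokens = ("machine learning finds machine learning topics while "
          "other other words repeat").split()
scores = pmi_bigrams(tokens)
best = max(scores, key=scores.get)
print(best)   # -> ('machine', 'learning')
```

Word2vec outperforming PMI in the paper is plausible precisely because embeddings capture context beyond adjacent co-occurrence counts.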

    Internet-based solutions to support distributed manufacturing

    With globalisation and constant changes in the marketplace, enterprises are adapting themselves to face new challenges. Strategic corporate alliances to share knowledge, expertise and resources therefore represent an advantage in an increasingly competitive world. This has led to the integration of companies, customers, suppliers and partners through networked environments. This thesis presents three novel solutions in the tooling area, developed for Seco Tools Ltd, UK. These approaches implement a proposed distributed computing architecture that uses Internet technologies to assist geographically dispersed tooling engineers in process planning tasks. The systems are summarised as follows. TTS is a Web-based system to support engineers and technical staff in providing technical advice to clients. Seco sales engineers access the system from remote machining sites and submit, retrieve and update the required tooling data held in databases at the company headquarters. The communication platform used for this system provides an effective mechanism for sharing information nationwide. The system implements efficient methods, such as data relaxation techniques, confidence scores and importance levels of attributes, to help the user find the closest solutions when specific requirements are not fully matched in the database. Cluster-F has been developed to assist engineers and clients in assessing cutting parameters for the tooling process. In this approach the Internet acts as a vehicle to transport data between users and the database. Cluster-F is a knowledge discovery (KD) approach that makes use of clustering and fuzzy set techniques. The novel proposal in this system is the use of fuzzy set concepts to obtain the proximity matrix that guides the classification of the data; hierarchical clustering methods are then applied to link the closest objects. A general KD methodology applying rough set concepts is also proposed in this research. It covers data redundancy, identification of relevant attributes, detection of data inconsistency, and generation of knowledge rules. R-sets, the third proposed solution, has been developed using this KD methodology. This system evaluates the variables of the tooling database to analyse known and unknown relationships in the data generated after technical trials, with the aim of discovering cause-effect patterns from selected attributes contained in the database. A fourth system, DBManager, was also developed to administer system user accounts, sales engineers' accounts and the tool trial monitoring process. It supports the implementation of the proposed distributed architecture and the maintenance of user accounts for the access restrictions of the systems running under this architecture.
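
One step of the rough-set KD methodology above, detection of data inconsistency, has a direct and compact reading: records that agree on every condition attribute but disagree on the decision are inconsistent. A generic sketch with invented tool-trial records and attribute names (not the thesis's actual database schema):

```python
from collections import defaultdict

def inconsistent_records(rows, cond, decision):
    """Return indices of records that agree on all condition attributes but
    disagree on the decision attribute -- the inconsistencies a rough-set
    KD step flags before generating knowledge rules."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in cond)].append(i)
    bad = []
    for block in blocks.values():
        if len({rows[i][decision] for i in block}) > 1:
            bad.extend(block)
    return bad

# Hypothetical tool-trial records: cutting conditions -> trial outcome.
rows = [
    {"speed": "high", "feed": "low", "outcome": "ok"},
    {"speed": "high", "feed": "low", "outcome": "fail"},  # conflicts with row 0
    {"speed": "low",  "feed": "low", "outcome": "ok"},
]
print(inconsistent_records(rows, ["speed", "feed"], "outcome"))   # -> [0, 1]
```

Rules generated only from the consistent remainder are certain rules; the flagged records can at best support possible rules, which is the standard rough-set distinction.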

    A Method Non-Deterministic and Computationally Viable for Detecting Outliers in Large Datasets

    This paper presents an outlier detection method based on the Variable Precision Rough Set Model (VPRSM), which generalizes the standard set inclusion relation at the foundation of the Rough Sets Basic Model (RSBM). The main contribution of this research is an improvement in detection quality, because the generalization allows classification under some degree of uncertainty. From the proposed method, a computationally viable algorithm for large volumes of data is also derived. Experiments in a real scenario, and a comparison of the results with the RSBM-based method, demonstrate the efficiency of both the method and the algorithm in diverse contexts involving large volumes of data. This work has been supported by grant TIN2016-78103-C2-2-R and by University of Alicante projects GRE14-02 and Smart University.
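
The generalization the abstract refers to can be stated in a few lines: instead of requiring an elementary set to be fully contained in the target set, VPRSM admits it into the positive region when its inclusion degree reaches a precision threshold β. A sketch of that relaxed inclusion test on a toy universe (the blocks, target set, and β value are illustrative, not the paper's experimental data):

```python
def inclusion_degree(block, target):
    """Fraction of an elementary set that falls inside the target set."""
    block, target = set(block), set(target)
    return len(block & target) / len(block)

def vprsm_positive_region(blocks, target, beta=0.8):
    """Variable-precision lower approximation: keep blocks whose inclusion
    degree reaches beta, instead of demanding full inclusion (degree 1.0)
    as the basic rough set model does."""
    region = []
    for block in blocks:
        if inclusion_degree(block, target) >= beta:
            region.extend(block)
    return sorted(region)

# Elementary sets of a toy universe, and a target set of "normal" objects.
blocks = [[0, 1, 2, 3, 4], [5, 6], [7, 8, 9]]
normal = [0, 1, 2, 3, 5, 6, 9]
print(vprsm_positive_region(blocks, normal, beta=0.8))
# With beta=1.0 (the strict RSBM test), only block [5, 6] would survive.
```

Objects left outside the positive region despite high β are the ones the method can treat as outlier candidates, with β controlling how much classification noise is tolerated.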

    Neutrosophic Sets and Systems, Vol. 39, 2021
