22 research outputs found

    Review and prioritization of investment projects in the Waste Management Organization of Tabriz Municipality with a Rough Sets Theory approach

    Purpose: Prioritizing investment projects is a key step in planning an organization's investment activities, and choosing suitable projects has a direct impact on profitability and other strategic goals. The factors affecting the prioritization of investment projects are complex, and traditional methods alone are insufficient, so a suitable model for prioritizing projects and investment plans is needed. The purpose of this study is to prioritize the ten projects considered by the Waste Management Organization of Tabriz Municipality, together with the investment methods for them. Methodology: The method of analysis is Rough Sets Theory: first, the important investment projects in the field of waste management were identified from the research background and expert opinion, and the weight and priority of the projects were obtained using Rough Sets Theory. Then, the priority of appropriate investment methods (out of six methods) for each project was obtained using rough numbers, expert opinion, and other considerations. Findings: The construction of a specialized recycling town, a plastic recycling project, and a used-tire recycling project are, in that order, the three priority projects of the Tabriz Municipality Waste Management Organization, and three investment methods, civil partnership agreements, BOT, and BOO, can be used for them. Originality/Value: The Tabriz Municipality Waste Management Organization is an important and influential organization in the activities of the city, yet investment in its projects is mostly based on common contracts applied uniformly to all projects. This research offers new, project-specific investment methods derived from the Rough Sets technique.
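    The rough-number construction that underlies this kind of prioritization can be sketched in a few lines of Python. The expert scores below are hypothetical, and the interval definition (lower limit: mean of all ratings not above a given score; upper limit: mean of all ratings not below it) follows the standard rough-number formulation rather than this paper's exact procedure:

```python
def rough_number(ratings, x):
    """Rough number of rating x within a set of expert ratings:
    lower limit = mean of all ratings <= x,
    upper limit = mean of all ratings >= x."""
    lower = [r for r in ratings if r <= x]
    upper = [r for r in ratings if r >= x]
    return (sum(lower) / len(lower), sum(upper) / len(upper))

# Hypothetical expert scores for one criterion of one project.
scores = [3, 4, 4, 5]
intervals = [rough_number(scores, s) for s in scores]

# A crisp priority weight is often taken as the mean of the
# interval midpoints.
weight = sum((lo + hi) / 2 for lo, hi in intervals) / len(intervals)
```

Projects are then ranked by such weights, criterion by criterion; a wider interval signals more disagreement among the experts.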

    Fast attribute selection based on the rough set boundary region

    The problem of clustering arises in numerous fields such as bioinformatics, data mining, and pattern recognition, where a technique must select the most suitable attribute from many contending ones. Rough Set Theory (RST)-based approaches for categorical data have gained significant attention, but they cannot always select clustering attributes that yield optimum performance. This paper focuses on processes that exhibit a similar degree of results for an identical attribute value. First, the MIA algorithm is identified as a supplement to the MSA algorithm, which relies on set approximation. Second, it is shown that MIA achieves lower computational complexity through the indiscernibility relation measurement; this is ascribed to the relationship between attributes whose induced partitions are markedly similar to those induced by others. When the attribute domain is relatively small, selecting such an attribute is problematic: failing to choose the most suitable clustering attribute leaves the set defined rather than computed from the relative mean, which can only be implemented with a distinctive category of information system, as illustrated with an example. Lastly, a substitute method for selecting a clustering attribute under RST using the mean dependency degree of attributes (MMD) is proposed: the attribute with the maximum mean dependency value is selected as the clustering attribute through a targeted procedure that speeds up attribute selection and settles the instability in selecting clustering attributes. Finally, the comparative performance of the selected clustering-attribute-based RST techniques MSA and MIA is reported.
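    A minimal sketch of the indiscernibility partition and a mean-dependency-style choice of clustering attribute, assuming a hypothetical categorical table; this illustrates the general RST machinery, not the paper's MSA/MIA implementations:

```python
from collections import defaultdict

def partition(objects, attr):
    """Indiscernibility classes induced by a single attribute:
    objects with equal values on attr are indiscernible."""
    classes = defaultdict(set)
    for i, obj in enumerate(objects):
        classes[obj[attr]].add(i)
    return list(classes.values())

def dependency(objects, a, b):
    """Degree to which attribute b depends on attribute a:
    fraction of objects whose a-class fits inside one b-class."""
    blocks_b = partition(objects, b)
    pos = sum(len(x) for x in partition(objects, a)
              if any(x <= y for y in blocks_b))
    return pos / len(objects)

# Hypothetical categorical data set; each row is an object.
data = [{"colour": "red",  "shape": "round"},
        {"colour": "red",  "shape": "round"},
        {"colour": "blue", "shape": "square"},
        {"colour": "blue", "shape": "round"}]

attrs = ["colour", "shape"]
# Mean dependency of every other attribute on each candidate;
# the attribute with the maximum mean is chosen for clustering.
mean_dep = {a: sum(dependency(data, a, b) for b in attrs if b != a)
               / (len(attrs) - 1)
            for a in attrs}
best = max(mean_dep, key=mean_dep.get)
```

Here "colour" wins because both of its indiscernibility classes are more nearly contained in single "shape" classes than the other way around.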

    Positive region: An enhancement of partitioning attribute based rough set for categorical data

    Datasets containing multi-valued attributes arise in several domains, such as pattern recognition, machine learning, and data mining, and often require data partitioning. Partitioning attributes is the clustering process for a whole data set that is specified for further processing. Prominent rough set-based approaches already exist for grouping objects and handling uncertain data; they use the indiscernibility relation and a mean roughness measure to perform attribute partitioning. Nevertheless, most partitioning-attribute selection algorithms for categorical data are incapable of optimal partitioning. The indiscernibility and mean roughness measures also require calculation of the lower approximation, which is less accurate and expensive to compute; this limits the growth of the attribute set and neglects the data found within the boundary region. This paper presents a new concept called Positive Region-based Mean Dependency (PRD), which calculates attribute dependency. PRD defines a method for determining the mean dependency of attributes, suitable for categorical datasets, using a positive region-based mean dependency measure. By avoiding the lower approximation, PRD is an optimal substitute for the conventional dependency measure in partitioning-attribute selection. In contrast to traditional RST partitioning methods, the proposed method can be employed as a measure of output uncertainty and scales to larger and multiple data clusterings. Its performance is evaluated and compared with the Information-Theoretical Dependence Roughness (ITDR) and Maximum Indiscernible Attribute (MIA) algorithms.
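    The classical positive-region dependency degree that a measure like PRD builds on can be sketched as follows; the two condition attributes and the decision attribute are hypothetical:

```python
from itertools import groupby

def blocks(rows, attrs):
    """Equivalence classes of the indiscernibility relation
    induced by a set of attributes."""
    key = lambda i: tuple(rows[i][a] for a in attrs)
    idx = sorted(range(len(rows)), key=key)
    return [set(g) for _, g in groupby(idx, key=key)]

def positive_region(rows, cond, dec):
    """Union of the cond-classes that fit entirely inside one
    dec-class, i.e. objects classified without ambiguity."""
    dec_blocks = blocks(rows, dec)
    pos = set()
    for b in blocks(rows, cond):
        if any(b <= d for d in dec_blocks):
            pos |= b
    return pos

# Hypothetical table: condition attributes a, b and decision d.
rows = [{"a": 0, "b": 0, "d": "x"},
        {"a": 0, "b": 0, "d": "x"},
        {"a": 0, "b": 1, "d": "y"},
        {"a": 1, "b": 1, "d": "x"},
        {"a": 1, "b": 1, "d": "y"}]

pos = positive_region(rows, ["a", "b"], ["d"])
gamma = len(pos) / len(rows)  # dependency degree of d on {a, b}
```

The last two rows share condition values but disagree on the decision, so they fall outside the positive region and gamma is 0.6 rather than 1.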

    Advancing ensemble learning performance through data transformation and classifiers fusion in granular computing context

    Classification is a special type of machine learning task, essentially achieved by training a classifier that can then label new instances. To train a high-performance classifier, it is crucial to extract representative features from raw data such as text and images. In reality, instances can be highly diverse even within the same class, meaning different instances of one class may exhibit very different characteristics. For example, in a facial expression recognition task, some instances may be better described by Histogram of Oriented Gradients features, while others are better represented by Local Binary Patterns features. From this point of view, it is necessary to adopt ensemble learning: train different classifiers on different feature sets and fuse these classifiers towards more accurate classification of each instance. Moreover, different algorithms are likely to show different suitability for training classifiers on different feature sets, which again argues for ensemble learning to advance classification performance. Furthermore, a multi-class classification task becomes increasingly complex as the number of classes grows, since discriminating between classes becomes harder. In this paper, we propose an ensemble learning framework that transforms a multi-class classification task into a number of binary classification tasks and fuses classifiers trained on different feature sets by different learning algorithms. We report experimental studies on the UCI Sonar data set and the CK+ facial expression recognition data set. The results show that the proposed ensemble learning approach yields considerable advances in classification performance compared with popular learning approaches, including decision tree ensembles and deep neural networks.
In practice, the proposed approach can be used to build an ensemble of ensembles acting as a group of expert systems, which achieves more stable pattern recognition performance than a single classifier acting as a single expert system.
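    The decomposition-and-fusion idea can be illustrated with a toy one-scorer-per-class ensemble. The nearest-centroid scorers below are hypothetical stand-ins for the trained classifiers in the paper; the point is the structure, one binary model per class fused by a maximum-score rule:

```python
def centroid(vectors):
    """Per-dimension mean of a list of feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train_one_vs_rest(X, y):
    """One binary scorer per class: here, simply its class centroid."""
    return {label: centroid([x for x, t in zip(X, y) if t == label])
            for label in set(y)}

def fuse_predict(models, x):
    """Fusion step: each per-class scorer votes with a negative
    squared distance to its centroid; the maximum wins."""
    score = lambda c: -sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return max(models, key=lambda label: score(models[label]))

# Hypothetical 2-D feature vectors for three classes.
X = [[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0], [2.0, 0.1]]
y = ["a", "a", "b", "b", "c"]

models = train_one_vs_rest(X, y)
pred = fuse_predict(models, [0.95, 1.05])
```

Swapping in different feature extractors and learning algorithms per binary task, then fusing their outputs, gives the ensemble-of-ensembles structure the abstract describes.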

    Artificial intelligence in tackling coronavirus and future pandemics

    SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) was first identified in Wuhan City, China, in December 2019 and has had a devastating impact worldwide, killing more than 6 million people as of September 2022 and becoming the biggest global health crisis since the 1918 influenza outbreak. Viruses generally mutate randomly, so it is impossible to predict how SARS-CoV-2 will change over the next few months or years and which variants will predominate; in theory, the possibilities for mutation are practically endless. Machine learning could assist drug discovery by enabling researchers to determine which antibodies are likely to be most effective against existing and future variants. In the COVID-19 pandemic, AI has benefited four key areas: diagnosis, clinical decision-making for public health, virtual assistance, and therapeutic research. This study conducted a discourse analysis and textual evaluation of AI (deep learning and machine learning) concerning the COVID-19 outbreak, and also discusses recent inventions that could be very helpful in detecting future pandemics. COVID-19 has already changed our lives, and in the future we may be able to deal with such pandemics with the help of AI. This review also emphasizes the legal implications of AI in the battle against COVID-19.

    A survey on automated detection and classification of acute leukemia and WBCs in microscopic blood cells

    Leukemia (blood cancer) is an abnormal proliferation of White Blood Cells, or leukocytes (WBCs), in the bone marrow and blood. Pathologists can diagnose leukemia by examining a person's blood sample under a microscope, identifying and categorizing it by counting various blood cells and assessing morphological features. This manual technique is time-consuming, and its outcome also depends on the pathologist's professional skills and experience. In computer vision, traditional machine learning and deep learning techniques are practical roadmaps that increase the accuracy and speed of diagnosing and classifying medical images such as microscopic blood cells. This paper provides a comprehensive analysis of the detection and classification of acute leukemia and WBCs in microscopic blood cell images. First, we divide previous works into six categories based on the output of the models. Then, we describe the various steps of detecting and classifying acute leukemia and WBCs, including Data Augmentation, Preprocessing, Segmentation, Feature Extraction, Feature Selection (Reduction), and Classification, with a focus on the classification step. Finally, we divide automated detection and classification methods into three categories based on the type of classifier used in the classification step: traditional, Deep Neural Network (DNN), and mixed (traditional and DNN), and analyze them. The results of this study show that the Support Vector Machine (SVM) classifier in traditional machine learning models and the Convolutional Neural Network (CNN) classifier in deep learning models have been widely employed for the diagnosis and classification of acute leukemia and WBCs, and that models using these classifiers achieve higher performance metrics than the others.
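    The processing stages surveyed above can be chained as plain functions in a minimal sketch; every stage here is a hypothetical stand-in (real systems operate on 2-D images with trained models):

```python
def preprocess(image):
    """Normalise pixel intensities to [0, 1]."""
    lo, hi = min(image), max(image)
    return [(p - lo) / (hi - lo) for p in image] if hi > lo else image

def extract_features(image):
    """Toy features: mean intensity and fraction of bright pixels."""
    mean = sum(image) / len(image)
    bright = sum(p > 0.5 for p in image) / len(image)
    return [mean, bright]

def classify(features, threshold=0.5):
    """Stand-in classifier: flag a cell as 'blast' when the bright
    fraction exceeds a threshold."""
    return "blast" if features[1] > threshold else "normal"

# Hypothetical flattened cell image (raw intensities 0..255).
image = [10, 200, 220, 30, 240, 250]
label = classify(extract_features(preprocess(image)))
```

In the surveyed systems the classify step is where SVMs, CNNs, or mixed models are substituted, which is why the paper organizes methods by that stage.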

    Attribute selection using Logical Analysis of Inconsistent Data (LAID)

    Handling large, high-dimensional data sets is a recurrent problem today, and no simple task given the computational limitations that still exist. One possible approach is attribute selection, which can considerably reduce the dimension of the data without increasing its inconsistency. Rough Sets is an approach that differs from other attribute selection techniques in its ability to deal with inconsistent data. Another approach to data reduction is known as Logical Analysis of Data (LAD). Logical Analysis of Inconsistent Data (LAID) combines the advantages of these two approaches.
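    The attribute-selection goal, keeping the smallest attribute subset that does not increase the data's inconsistency, can be sketched with a brute-force check on a hypothetical table; LAID itself uses far more efficient machinery:

```python
from itertools import combinations

def inconsistency(rows, attrs, label):
    """Number of object pairs indiscernible on attrs but with
    different labels."""
    proj = [tuple(r[a] for a in attrs) for r in rows]
    return sum(1 for i, j in combinations(range(len(rows)), 2)
               if proj[i] == proj[j] and rows[i][label] != rows[j][label])

def smallest_consistent_subset(rows, attrs, label):
    """Smallest attribute subset whose inconsistency does not
    exceed that of the full attribute set (exhaustive search,
    suitable only for tiny data)."""
    base = inconsistency(rows, attrs, label)
    for k in range(1, len(attrs) + 1):
        for subset in combinations(attrs, k):
            if inconsistency(rows, list(subset), label) <= base:
                return list(subset)
    return list(attrs)

# Hypothetical table: attribute b alone determines the label d.
rows = [{"a": 0, "b": 0, "c": 1, "d": "x"},
        {"a": 0, "b": 1, "c": 1, "d": "y"},
        {"a": 1, "b": 0, "c": 0, "d": "x"},
        {"a": 1, "b": 1, "c": 0, "d": "y"}]

reduct = smallest_consistent_subset(rows, ["a", "b", "c"], "d")
```

The "does not exceed" comparison, rather than requiring zero inconsistency, is what lets the same check apply to inconsistent data, the case LAID is designed for.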