16 research outputs found

    Comparing the Performances of Neural Network and Rough Set Theory to Reflect the Improvement of Prognostic in Medical Data

    Get PDF
    In this research, I investigate and compare two Artificial Intelligence (AI) techniques, neural networks and rough set theory, to determine which is better suited to analyzing data. AI is a still-developing field that has produced intelligent systems supporting everyday human tasks such as decision making. In Malaysia it was recently introduced by a group of researchers from University Science Malaysia, who agree with researchers worldwide that AI can stand in for human intelligence and perform many tasks otherwise done by people, especially in the medical area. For this research I have chosen three medical data sets: Wisconsin Prognostic Breast Cancer, Parkinson's disease, and Hepatitis Prognostic. Medical data was selected because of its popularity among AI researchers and because its prediction (target) attributes are clearly understandable. This paper also presents the results and findings, describes how the experiments were carried out and the steps involved, and closes with conclusions and future work.

    Greedy Algorithm for Inference of Decision Trees from Decision Rule Systems

    Full text link
    Decision trees and decision rule systems play important roles as classifiers, knowledge representation tools, and algorithms. They are easily interpretable models for data analysis, making them widely used and studied in computer science. Understanding the relationships between these two models is an important task in this field. There are well-known methods for converting decision trees into systems of decision rules. In this paper, we consider the inverse transformation problem, which is not so simple. Instead of constructing an entire decision tree, our study focuses on a greedy polynomial time algorithm that simulates the operation of a decision tree on a given tuple of attribute values. (Comment: arXiv admin note: substantial text overlap with arXiv:2305.01721, arXiv:2302.0706)
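The simulation described above can be illustrated with a minimal sketch. The representation here is an assumption for illustration (the paper's actual formalism differs): each rule is a pair of a condition dictionary (attribute → required value) and a decision, and the greedy step queries the attribute occurring in the most still-active rules.

```python
# Hypothetical sketch: greedily simulate a decision tree on one tuple,
# given a system of decision rules. Rules are (conditions, decision) pairs,
# where conditions maps attribute name -> required value.

def simulate_greedy(rules, values):
    """Query attributes greedily until some rule's conditions are all checked."""
    known = {}            # attributes whose values have been queried so far
    active = list(rules)  # rules not yet contradicted by queried values
    while active:
        # every known condition of an active rule matches, so a rule fires
        # as soon as all of its condition attributes have been queried
        for conds, decision in active:
            if all(a in known for a in conds):
                return decision
        # greedy choice: query the attribute used by the most active rules
        counts = {}
        for conds, _ in active:
            for a in conds:
                if a not in known:
                    counts[a] = counts.get(a, 0) + 1
        if not counts:
            break
        attr = max(counts, key=counts.get)
        known[attr] = values[attr]
        # discard rules contradicted by the newly observed value
        active = [(c, d) for c, d in active
                  if attr not in c or c[attr] == known[attr]]
    return None  # no rule of the system applies to this tuple


rules = [({"x": 1, "y": 0}, "A"), ({"x": 0}, "B")]
simulate_greedy(rules, {"x": 0, "y": 1})  # queries only x, then fires rule B
```

The point of the greedy strategy is that the number of queried attributes plays the role of the depth of the simulated tree, without that tree ever being built in full.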

    Uncertainty Management of Intelligent Feature Selection in Wireless Sensor Networks

    Get PDF
    Wireless sensor networks (WSN) are envisioned to revolutionize the paradigm of monitoring complex real-world systems at a very high resolution. However, the deployment of a large number of unattended sensor nodes in hostile environments, frequent changes of environment dynamics, and severe resource constraints pose uncertainties and limit the potential use of WSN in complex real-world applications. Although uncertainty management in Artificial Intelligence (AI) is well developed and well investigated, its implications in wireless sensor environments are inadequately addressed. This dissertation addresses uncertainty management issues of spatio-temporal patterns generated from sensor data. It provides a framework for characterizing spatio-temporal patterns in WSN. Using rough set theory and temporal reasoning, a novel formalism has been developed to characterize and quantify the uncertainties in predicting spatio-temporal patterns from sensor data. This research also uncovers the trade-off among the uncertainty measures, which can be used to develop a multi-objective optimization model for real-time decision making in sensor data aggregation and sampling.
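The rough-set machinery this dissertation builds on can be shown in a few lines. This is a generic Pawlak-style sketch, not the dissertation's formalism: objects (e.g. sensor readings) indiscernible on the chosen attributes form equivalence classes, and a target concept is bracketed by its lower approximation (certainly inside) and upper approximation (possibly inside); the gap between them is one way to quantify uncertainty.

```python
# Minimal sketch of Pawlak rough-set approximations. Object descriptions
# and attribute names here are illustrative, not from the dissertation.

def approximations(objects, attrs, target):
    """objects: id -> {attribute: value}; target: set of ids (the concept)."""
    # group objects into indiscernibility classes over the chosen attributes
    classes = {}
    for oid, desc in objects.items():
        key = tuple(desc[a] for a in attrs)
        classes.setdefault(key, set()).add(oid)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:   # class lies certainly inside the concept
            lower |= cls
        if cls & target:    # class possibly overlaps the concept
            upper |= cls
    return lower, upper


readings = {1: {"temp": "high"}, 2: {"temp": "high"}, 3: {"temp": "low"}}
approximations(readings, ["temp"], {1, 3})  # objects 1 and 2 are indiscernible
```

Here objects 1 and 2 cannot be told apart by temperature alone, so only object 3 is certainly in the concept while all three possibly are; the boundary region {1, 2} measures how uncertain the concept is under these attributes.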

    Bounds on Depth of Decision Trees Derived from Decision Rule Systems

    Full text link
    Systems of decision rules and decision trees are widely used as a means for knowledge representation, as classifiers, and as algorithms. They are among the most interpretable models for classifying and representing knowledge. The study of relationships between these two models is an important task of computer science. It is easy to transform a decision tree into a decision rule system. The inverse transformation is a more difficult task. In this paper, we study unimprovable upper and lower bounds on the minimum depth of decision trees derived from decision rule systems depending on the various parameters of these systems

    Rough set and rule-based multicriteria decision aiding

    Get PDF
    The aim of multicriteria decision aiding is to give the decision maker a recommendation concerning a set of objects evaluated from multiple points of view called criteria. Since a rational decision maker acts with respect to his/her value system, in order to recommend the most-preferred decision, one must identify the decision maker's preferences. In this paper, we focus on preference discovery from data concerning some past decisions of the decision maker. We consider the preference model in the form of a set of "if..., then..." decision rules discovered from the data by inductive learning. To structure the data prior to induction of rules, we use the Dominance-based Rough Set Approach (DRSA). DRSA is a methodology for reasoning about data, which handles ordinal evaluations of objects on the considered criteria and monotonic relationships between these evaluations and the decision. We review applications of DRSA to a large variety of multicriteria decision problems.
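The "if..., then..." rules induced by DRSA are dominance-based: an "at least" rule states that if an object scores at least given thresholds on some criteria, it belongs at least to a given class. A minimal sketch of applying such rules (rule encoding and criterion names are hypothetical, for illustration only):

```python
# Illustrative sketch of applying DRSA-style "at least" decision rules.
# Each rule is (thresholds, cls): "if the object is >= every threshold
# on the listed criteria, then its class is at least cls".

def assign_at_least_class(obj, rules, default=0):
    """Return the highest class lower bound supported by any matching rule."""
    best = default
    for thresholds, cls in rules:
        if all(obj[c] >= v for c, v in thresholds.items()):
            best = max(best, cls)
    return best


rules = [({"math": 7}, 2),              # math >= 7           => class >= 2
         ({"math": 5, "lit": 6}, 1)]    # math >= 5, lit >= 6 => class >= 1
assign_at_least_class({"math": 8, "lit": 4}, rules)  # first rule matches
```

Because the rules respect the monotonic relationship between evaluations and the decision, improving an object on any criterion can never lower the class the rules assign to it.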

    Methods of applying domain knowledge to improve the quality of classifiers

    Get PDF
    The dissertation deals with methods that allow the use of domain knowledge to improve the quality of classifiers, where quality improvement concerns feature extraction methods, classifier construction methods, and methods for predicting decision values for new objects. In particular, the following methods have been proposed to improve classifier quality: expert features (attributes) defined using domain knowledge expressed in a language based on temporal logic; a new method of measuring the quality of cuts during supervised discretization, using a matrix of distances between decision attribute values defined by domain knowledge; a new decision tree that uses redundant cuts to verify the partition of a tree node; a new method for determining similarity between objects (e.g. patients) using an expert-defined ontology, with an application to k-nearest-neighbors classifier construction; and a new method for generating cross rules describing the effect of a factor interfering with perception, based on a classifier. All of the aforementioned methods have been implemented in the CommoDM software library, one of the extensions of the RSES-lib library. The implemented methods have been tested on real data sets: comparative data sets known from the literature as well as the author's own medical data sets collected during the preparation of the dissertation. The latter data sets relate to the medical aspect of the dissertation, which deals with supporting the treatment of patients with stable ischemic heart disease; the main medical problem considered in the thesis is predicting the presence of significant coronary artery stenosis based on non-invasive heart monitoring by the Holter method.
The results of the experiments confirm the effectiveness of applying additional domain knowledge to the task of creating and testing classifiers: after applying the new methods, the quality of the classifiers increased considerably, and the clinical interpretation of the results is more consistent with medical knowledge. The research has been supported by grants DEC-2013/09/B/ST6/01568 and DEC-2013/09/B/NZ5/00758, both from the National Science Centre of the Republic of Poland. Its results were published in 10 publications, including 3 publications in journals from the A list of the Polish Ministry of Science and Higher Education, 3 publications indexed in the Web of Science, one chapter in a monograph, and 3 post-conference publications.

    Hybrid instance- and rule-based learning models for monotonic classification

    Get PDF
    In supervised prediction problems, the response attribute depends on certain explanatory input attributes. In many real problems the response attribute takes ordinal values that should increase when some of the explanatory attributes do; these are called classification problems with monotonicity constraints. In this thesis, we review the monotonic classifiers proposed in the literature and formalize the nested generalized exemplar learning theory to tackle monotonic classification. Two algorithms are proposed: a first, greedy algorithm, which requires monotonic data, and a second, based on evolutionary algorithms, which is able to address imperfect data with monotonicity violations among the instances. Both improve accuracy, the non-monotonicity index of predictions, and the simplicity of the models over the state of the art. Tesis Univ. Jaén. Departamento INFORMÁTIC
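The monotonicity constraint the thesis works with can be made concrete with a short sketch. This is a generic pairwise check, not the thesis's own index: a pair (x, y) violates monotonicity when x dominates y on every input attribute yet receives a strictly lower class label, and the index is the fraction of ordered pairs that violate this.

```python
# Hedged sketch of a pairwise non-monotonicity index for a labelled dataset.
# X is a list of equal-length feature tuples, y the ordinal class labels.

def non_monotonicity_index(X, y):
    """Fraction of ordered pairs violating the monotonicity constraint."""
    n, violations = len(X), 0
    for i in range(n):
        for j in range(n):
            # x_i dominates x_j when it is >= on every input attribute
            dominates = all(a >= b for a, b in zip(X[i], X[j]))
            if dominates and y[i] < y[j]:
                violations += 1
    return violations / (n * (n - 1)) if n > 1 else 0.0


non_monotonicity_index([(1, 1), (2, 2)], [2, 1])  # dominance, lower label
```

A greedy learner that assumes monotonic data expects this index to be zero; the evolutionary variant described above is precisely what handles datasets where it is not.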