7 research outputs found

    Intelligent System for Recommending Study Level in English Language Course using CBR Method

    In its admission process, an English course uses a level placement test. Administering the test encountered problems such as the slow determination of student learning levels from the results of conventional paper-based tests. The purpose of this research is to provide an intelligent knowledge-based system for recommending student learning levels using the Case-Based Reasoning (CBR) method. CBR is an Artificial Intelligence method that solves new problems based on knowledge from previous cases; here, numerical local similarity and global similarity are calculated with the nearest neighbor algorithm as the basis for the technical development of the system. Accuracy was tested with the confusion matrix method, yielding 100% accuracy. The system was evaluated with the User Acceptance Test (UAT) method, with 88% of results indicating that the system meets user needs and expectations.
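    The abstract names numerical local and global similarity with the nearest neighbor algorithm but gives no formulas. Below is a minimal Python sketch of that retrieval step under common CBR conventions; all feature names, weights, ranges, and cases are illustrative assumptions, not data from the paper.

```python
# Hypothetical sketch of nearest-neighbor CBR retrieval with local and
# global similarity. Features, weights, and cases are illustrative only.

def local_similarity(new_value, case_value, value_range):
    """Numeric local similarity: 1 minus the normalized distance."""
    return 1.0 - abs(new_value - case_value) / value_range

def global_similarity(new_case, stored_case, weights, ranges):
    """Global similarity as the weighted average of local similarities."""
    total = sum(weights.values())
    return sum(
        weights[f] * local_similarity(new_case[f], stored_case[f], ranges[f])
        for f in weights
    ) / total

# Illustrative case base: placement-test section scores and assigned levels.
case_base = [
    ({"grammar": 85, "listening": 70}, "Intermediate"),
    ({"grammar": 40, "listening": 35}, "Beginner"),
]
weights = {"grammar": 0.6, "listening": 0.4}
ranges = {"grammar": 100, "listening": 100}

new_student = {"grammar": 80, "listening": 65}
best_case = max(
    case_base,
    key=lambda c: global_similarity(new_student, c[0], weights, ranges),
)
print("Recommended level:", best_case[1])
```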

    Recommendations on Selecting the Topic of Student Thesis Concentration using Case Based Reasoning

    Case Based Reasoning (CBR) is a method that aims to resolve a new case by adapting the solutions of previous cases that are similar to it. The system built in this study is a CBR system for recommending the topic of a student's thesis concentration. The study used data from undergraduate students of Informatics Engineering at IST AKPRIND Yogyakarta, with a total of 115 records consisting of 80 training data and 35 test data. Its aims were to design and build a Case Based Reasoning system using the Nearest Neighbor and Manhattan Distance similarity methods, and to compare the accuracy obtained with each. The recommendation process calculates the closeness or similarity between new cases and old cases stored in the case base using the Nearest Neighbor and Manhattan Distance methods. The features used consist of GPA and course grades. The case with the highest similarity value is taken. If a case does not receive a topic recommendation, or its similarity falls below the threshold value of 0.8, the case is revised by an expert; successfully revised cases are stored in the system as new knowledge. Test results give an accuracy of 97.14% with the Nearest Neighbor method and 94.29% with the Manhattan Distance method.
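    As a rough illustration of the comparison described above, the sketch below implements one common form of each similarity measure. The feature scaling, weights, and exact formulas (a weighted mean of local similarities for Nearest Neighbor; 1/(1+d) for Manhattan distance) are assumptions; only the 0.8 revision threshold and the GPA/course-grade features come from the abstract.

```python
# Hypothetical sketch contrasting the two similarity measures named in the
# abstract. Scaling and formulas are assumptions, not the authors' code.

def nearest_neighbor_similarity(a, b, weights):
    """Weighted mean of per-feature local similarities, 1 - |x - y|,
    for features already scaled to [0, 1]."""
    total = sum(weights)
    return sum(w * (1.0 - abs(x - y)) for w, x, y in zip(weights, a, b)) / total

def manhattan_similarity(a, b):
    """Similarity derived from the Manhattan (city-block) distance."""
    distance = sum(abs(x - y) for x, y in zip(a, b))
    return 1.0 / (1.0 + distance)

THRESHOLD = 0.8  # below this, the case is revised by an expert (per the abstract)

# Features scaled to [0, 1], e.g. GPA / 4.0 and course grades / 100.
new_case = [0.85, 0.70, 0.90]
old_case = [0.80, 0.75, 0.88]
weights = [0.5, 0.25, 0.25]

score = nearest_neighbor_similarity(new_case, old_case, weights)
if score >= THRESHOLD:
    print(f"Recommend the stored case's topic (similarity {score:.2f})")
else:
    print("Below threshold: forward the case to an expert for revision")
```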

    APPLICATION OF CASE BASED REASONING FOR A HEPATITIS DIAGNOSIS SYSTEM

    Hepatitis is a liver disorder in the form of inflammation of liver cells or tissue and is classified as an infectious disease. The inflammation is marked by elevated levels of liver enzymes, an increase caused by disturbance or damage to the liver membrane. It is popularly known as liver disease or jaundice. Hepatitis can have many causes, including viruses, bacteria, parasites, fungi, drugs, chemicals, alcohol, worms, malnutrition, and even autoimmune reactions, and it can affect anyone regardless of age. This research implements CBR to support the diagnosis of hepatitis. The diagnosis process is carried out by entering a new problem containing the symptoms and risk factors to be diagnosed into the system, then calculating the similarity between the new problem and the cases stored in the case base using the nearest neighbor method normalized by the expert's degree of confidence. Testing was conducted on 117 cases, with 82 cases stored in the case base and 35 used as new cases. Test results using patient medical records with expert-validated diagnoses show that the system is able to recognize three types of hepatitis with an accuracy of 94.29%.
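    The abstract describes nearest-neighbor similarity normalized by expert confidence levels. The following is a minimal sketch of one plausible reading, where each symptom carries an expert-assigned certainty weight; the symptoms, weights, and matching rule are illustrative, not the paper's.

```python
# Hypothetical sketch of symptom-based CBR retrieval with per-symptom
# expert certainty weights. All symptom names and weights are illustrative.

def weighted_similarity(new_problem, stored_case, expert_weights):
    """Similarity = total expert weight of matching symptoms, normalized
    by the total expert weight of the stored case's symptoms."""
    matched = sum(expert_weights[s] for s in new_problem if s in stored_case)
    total = sum(expert_weights[s] for s in stored_case)
    return matched / total if total else 0.0

expert_weights = {"jaundice": 1.0, "fatigue": 0.6, "nausea": 0.4, "dark_urine": 0.8}

case_base = [
    ({"jaundice", "fatigue", "dark_urine"}, "Hepatitis A"),
    ({"fatigue", "nausea"}, "Hepatitis B"),
]

new_problem = {"jaundice", "dark_urine"}
best = max(case_base, key=lambda c: weighted_similarity(new_problem, c[0], expert_weights))
print("Most similar case:", best[1])
```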

    Examining the effect of explanation on satisfaction and trust in AI diagnostic systems

    Background: Artificial Intelligence has the potential to revolutionize healthcare, and it is increasingly being deployed to support and assist medical diagnosis. One potential application of AI is as the first point of contact for patients, providing initial diagnoses prior to sending a patient to a specialist and allowing health care professionals to focus on more challenging and critical aspects of treatment. But for AI systems to succeed in this role, it will not be enough for them to merely provide accurate diagnoses and predictions; they will also need to provide explanations (to both physicians and patients) of why the diagnoses are made. Without such explanations, accurate and correct diagnoses and treatments might be ignored or rejected. Method: It is important to evaluate the effectiveness of these explanations and to understand the relative effectiveness of different kinds of explanations. In this paper, we examine this problem across two simulation experiments. In the first experiment, we tested a re-diagnosis scenario to understand the effect of local and global explanations. In the second, we implemented different forms of explanation in a similar diagnosis scenario. Results: Results show that explanation helps improve satisfaction measures during the critical re-diagnosis period but had little effect before re-diagnosis (when initial treatment was taking place) or after (when an alternate diagnosis resolved the case successfully). Furthermore, initial “global” explanations about the process had no impact on immediate satisfaction but improved later judgments of understanding about the AI. Results of the second experiment show that visual and example-based explanations integrated with rationales had a significantly better impact on patient satisfaction and trust than no explanations or text-based rationales alone. As in Experiment 1, these explanations had their effect primarily on immediate measures of satisfaction during the re-diagnosis crisis, with little advantage prior to re-diagnosis or once the diagnosis was successfully resolved. Conclusion: These two studies help us draw several conclusions about how patient-facing explanatory diagnostic systems may succeed or fail. Based on these studies and a review of the literature, we provide design recommendations for the explanations offered by AI systems in the healthcare domain.

    INVESTIGATING COLLABORATIVE EXPLAINABLE AI (CXAI)/SOCIAL FORUM AS AN EXPLAINABLE AI (XAI) METHOD IN AUTONOMOUS DRIVING (AD)

    Explainable AI (XAI) systems primarily focus on algorithms, integrating additional information into AI decisions and classifications to enhance user or developer comprehension of the system's behavior. These systems often incorporate untested concepts of explainability, lacking grounding in the cognitive and educational psychology literature (S. T. Mueller et al., 2021). Consequently, their effectiveness may be limited, as they may address problems that real users don't encounter or provide information that users do not seek. In contrast, an alternative approach called Collaborative XAI (CXAI), as proposed by S. Mueller et al. (2021), emphasizes generating explanations without relying solely on algorithms. CXAI centers on enabling users to ask questions and share explanations based on their knowledge and experience to facilitate others' understanding of AI systems. Mamun, Hoffman, et al. (2021) developed a CXAI system akin to a Social Question and Answer (SQA) platform (S. Oh, 2018a), adapting it for AI system explanations. The system was successfully evaluated against the XAI metrics of Hoffman, Mueller, et al. (2018), as implemented in a master's thesis by Mamun (2021), which validated its effectiveness in a basic image classification domain and explored the types of explanations it generated. This Ph.D. dissertation builds upon this prior work, aiming to apply it in a novel context: users and potential users of self-driving semi-autonomous vehicles. This approach seeks to unravel communication patterns within a social QA platform (S. Oh, 2018a), the types of questions it can assist with, and the benefits it might offer users of widely adopted AI systems. Initially, the feasibility of using existing social QA platforms as explanatory tools for an existing AI system was investigated. The study found that users on these platforms collaboratively assist one another in problem-solving, with many resolutions being reached (Linja et al., 2022). An intriguing discovery was that anger directed at the AI system drove increased engagement on the platform. The subsequent phase leverages observations from social QA platforms in the autonomous driving (AD) sector to gain insights into an AI system within a vehicle. The dissertation includes two simulation studies employing these observations as training materials. The studies explore users' Level 3 Situational Awareness (Endsley, 1995) when the autonomous vehicle exhibits abnormal behavior, investigating detection rates and users' comprehension of abnormal driving situations. Additionally, these studies measure the perception of personalization within the training process (Zhang & Curley, 2018), cognitive workload (Hart & Staveland, 1988), and trust and reliance (Körber, 2018) concerning the training process. The findings from these studies are mixed, showing higher detection rates of abnormal driving with training but diminished trust and reliance. The final study engages current Tesla FSD users in semi-structured interviews (Crandall et al., 2006) to explore their use of social QA platforms, their knowledge sources during the training phase, and their search for answers to abnormal driving scenarios. The results reveal extensive collaboration through social forums and group discussions, shedding light on differences in trust and reliance within this domain.

    Leveraging tagging data for recommender systems

    The goal of recommender systems is to provide personalized recommendations of products or services to users facing the problem of information overload on the Web. They provide recommendations that best suit a customer's taste, preferences, and individual needs. Especially on large-scale Web sites where millions of items such as books or movies are offered to the users, recommender system technologies play an increasingly important role. One of their main advantages is that they reduce a user's decision-making effort. However, recommender systems are also of high importance from the service provider or system perspective. For instance, they can convince a customer to buy something or build trust in the system as a whole, which ensures customer loyalty and repeat sales. With the advent of the Social Web, user generated content has enriched the social dimension of the Web. New types of Web applications have emerged which emphasize content sharing and collaboration. These so-called Social Web platforms turned users from passive recipients of information into active and engaged contributors. As a result, the amount of user contributed information provided by the Social Web poses both new possibilities and challenges for recommender system research. This work deals with the question of how user-provided tagging data can be used to improve the quality of recommender systems. Tag-based recommendations and explanations are the two main areas of contribution in this work. The area of tag-based recommendations deals mainly with the topic of recommending items by exploiting tagging data. A tag recommender algorithm is proposed which can generate highly accurate tag recommendations in real time. Furthermore, the concept of user- and item-specific tag preferences is introduced in this work. By attaching feelings to tags, users are provided a powerful means to express in detail which features of an item they particularly like or dislike. Additionally, new recommendation schemes are presented that can exploit tag preference data to improve recommendation accuracy. The area of tag-based explanations, on the other hand, deals with questions of how explanations for recommendations should be communicated to the user in the best possible way. New explanation methods based on personalized and non-personalized tag clouds are introduced. The personalized tag cloud interface makes use of the idea of user- and item-specific tag preferences. Furthermore, a first set of possible guidelines for designing or choosing an explanation interface for a recommender system is provided.
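    As a concrete illustration of the user- and item-specific tag preference idea, here is a minimal Python sketch that ranks items by a user's signed tag preferences. The scoring scheme and data are illustrative assumptions, not the algorithms developed in this work.

```python
# Hypothetical sketch: rank items by summing a user's signed tag
# preference weights over each item's tags. Data is illustrative only.

user_tag_preferences = {   # positive = likes the feature, negative = dislikes
    "dystopian": 1.0,
    "slow-paced": -1.0,
    "plot-twist": 0.8,
}

items = {
    "Movie A": {"dystopian", "plot-twist"},
    "Movie B": {"slow-paced", "dystopian"},
}

def score(item_tags, preferences):
    """Sum the user's signed preference weights over the item's tags."""
    return sum(preferences.get(tag, 0.0) for tag in item_tags)

ranked = sorted(items, key=lambda i: score(items[i], user_tag_preferences), reverse=True)
for item in ranked:
    print(item, score(items[item], user_tag_preferences))
```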

    The Colorectal Cancer Recurrence Support (CARES) System

    Artificial Intelligence in Medicine, 11(3), 175–188. doi:10.1016/S0933-3657(97)00029-8