
    Why and How Knowledge Discovery Can Be Useful for Solving Problems with CBR

    In this talk, we discuss and illustrate the links between knowledge discovery in databases (KDD), knowledge representation and reasoning (KRR), and case-based reasoning (CBR). KDD techniques, especially those based on Formal Concept Analysis (FCA), are well formalized and allow the design of concept lattices from binary and complex data. These concept lattices provide a realistic basis for knowledge base organization and ontology engineering. More generally, they can be used for representing knowledge and for reasoning in knowledge systems and CBR systems alike.
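    As a concrete illustration of the FCA machinery this talk builds on, the following minimal Python sketch enumerates the formal concepts of a toy binary context (an object-attribute table). The context data and all names are invented for illustration and do not come from the talk.

```python
from itertools import combinations

# Toy binary context (hypothetical data): objects -> attribute sets.
context = {
    "case1": {"fever", "cough"},
    "case2": {"fever", "rash"},
    "case3": {"fever", "cough", "rash"},
}
all_attrs = set().union(*context.values())

def extent(attrs):
    """All objects that have every attribute in `attrs`."""
    return frozenset(o for o, a in context.items() if attrs <= a)

def intent(objs):
    """All attributes shared by every object in `objs`."""
    if not objs:
        return frozenset(all_attrs)
    return frozenset.intersection(*(frozenset(context[o]) for o in objs))

# Enumerate formal concepts (A, B): closing any object subset yields one,
# and every concept's extent arises this way.
concepts = set()
for r in range(len(context) + 1):
    for objs in combinations(context, r):
        b = intent(set(objs))   # shared attributes of the subset
        a = extent(b)           # closure: every object with those attributes
        concepts.add((a, b))

for a, b in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(a), "<->", sorted(b))
```

    Ordering these concepts by inclusion of their extents yields the concept lattice that the talk proposes as a basis for knowledge organization.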

    A novel case-based reasoning approach to radiotherapy dose planning

    In this thesis, novel Case-Based Reasoning (CBR) methods were developed for inclusion in CBRDP (Case-Based Reasoning Dose Planner), an adaptive decision support system for radiotherapy dose planning. CBR is an artificial intelligence methodology that solves new problems by retrieving the solutions to previously solved similar problems stored in a case base. The focus of this research is dose planning for prostate cancer patients. The records of patients successfully treated at the Nottingham University Hospitals NHS Trust, City Hospital Campus, UK, were stored in a case base and exploited by case-based reasoning for future decision making. After each successful run of the system, a group-based Simulated Annealing (SA) algorithm automatically searches for an optimal or near-optimal combination of feature weights to be used in the subsequent CBR retrieval process. A number of research issues associated with the prostate cancer dose planning problem and the use of CBR are addressed, including: (a) the trade-off between the benefit of delivering a higher dose of radiation to cancer cells and the risk of damaging surrounding organs; (b) deciding when, and by how much, to violate the dose limits imposed on surrounding organs; (c) the fusion of knowledge and experience gained over time in treating patients similar to the new one; (d) the incorporation of the 5-year Progression-Free Probability and success rate into the decision-making process; and (e) the hybridisation of CBR with a novel group-based simulated annealing algorithm to update the knowledge and experience gained in treating patients over time. The efficiency of the proposed system was validated using real datasets collected from the Nottingham University Hospitals. Experiments based on a leave-one-out strategy demonstrated that, for most patients, the dose plans generated by our approach are coherent with, or even better than, the dose plans prescribed by an experienced oncologist. The system may play a vital role in helping the oncologist make a better decision in less time; it incorporates the success rate of previously treated similar patients into the dose planning for a new patient, and it can also be used in teaching and training. In addition, the developed method is generic in nature and can be used to solve similar non-linear, complex real-world problems.
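    The pairing of CBR retrieval with feature-weight optimisation described above can be pictured with a short sketch. The Python code below is a generic illustration, not the thesis's CBRDP system: it tunes the weights of a weighted nearest-neighbour retrieval by standard (single-solution, not group-based) simulated annealing against a leave-one-out error, and the case base and dose values are entirely synthetic.

```python
import math
import random

random.seed(0)

# Hypothetical case base: (feature vector, prescribed dose in Gy) pairs.
case_base = [([random.random() for _ in range(4)], random.uniform(60.0, 78.0))
             for _ in range(30)]

def retrieve(query, weights, cases):
    """Weighted nearest-neighbour retrieval: dose of the most similar case."""
    def dist(feats):
        return sum(w * (a - b) ** 2 for w, a, b in zip(weights, feats, query))
    return min(cases, key=lambda c: dist(c[0]))[1]

def loo_error(weights):
    """Leave-one-out evaluation: predict each case from the remaining ones."""
    total = 0.0
    for i, (feats, dose) in enumerate(case_base):
        rest = case_base[:i] + case_base[i + 1:]
        total += abs(retrieve(feats, weights, rest) - dose)
    return total / len(case_base)

# Simulated annealing over the retrieval feature weights.
weights = [1.0] * 4
err = loo_error(weights)
best, best_err = weights[:], err
temp = 1.0
while temp > 1e-3:
    cand = [max(0.0, w + random.gauss(0.0, 0.1)) for w in weights]
    cand_err = loo_error(cand)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if cand_err < err or random.random() < math.exp((err - cand_err) / temp):
        weights, err = cand, cand_err
        if err < best_err:
            best, best_err = weights[:], err
    temp *= 0.95

print("tuned weights:", [round(w, 3) for w in best],
      "LOO error:", round(best_err, 2))
```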

    Case-Based Decision Support for Disaster Management

    Disasters are characterized by severe disruptions of society's functioning and adverse impacts on humans, the environment, and the economy that society cannot cope with using its own resources alone. This work presents a decision support method that identifies appropriate measures for protecting the public in the course of a nuclear accident. The method particularly considers the issue of uncertainty in decision-making as well as the structured integration of experience and expert knowledge.

    Enhancing explainability and scrutability of recommender systems

    Our increasing reliance on complex algorithms for recommendations calls for models and methods for explainable, scrutable, and trustworthy AI. While explainability is required for understanding the relationships between model inputs and outputs, a scrutable system allows us to modify its behavior as desired. These properties help bridge the gap between our expectations and the algorithm's behavior and accordingly boost our trust in AI. Aiming to cope with information overload, recommender systems play a crucial role in filtering content (such as products, news, songs, and movies) and shaping a personalized experience for their users. Consequently, there has been a growing demand from information consumers to receive proper explanations for their personalized recommendations. These explanations aim at helping users understand why certain items are recommended to them and how their previous inputs to the system relate to the generation of such recommendations. Besides, in the event of receiving undesirable content, explanations may contain valuable information as to how the system's behavior can be modified accordingly. In this thesis, we present our contributions towards explainability and scrutability of recommender systems:

    • We introduce a user-centric framework, FAIRY, for discovering and ranking post-hoc explanations for the social feeds generated by black-box platforms. These explanations reveal relationships between users' profiles and their feed items and are extracted from the local interaction graphs of users. FAIRY employs a learning-to-rank (LTR) method to score candidate explanations based on their relevance and surprisal.

    • We propose a method, PRINCE, to facilitate provider-side explainability in graph-based recommender systems that use personalized PageRank at their core. PRINCE explanations are comprehensible to users because they present subsets of the user's prior actions responsible for the received recommendations. PRINCE operates in a counterfactual setup and builds on a polynomial-time algorithm for finding the smallest counterfactual explanations.

    • We propose a human-in-the-loop framework, ELIXIR, for enhancing scrutability, and subsequently the recommendation models, by leveraging user feedback on explanations. ELIXIR enables recommender systems to collect user feedback on pairs of recommendations and explanations. The feedback is incorporated into the model by imposing a soft constraint for learning user-specific item representations.

    We evaluate all proposed models and methods with real user studies and demonstrate their benefits in achieving explainability and scrutability in recommender systems.
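    Of the three contributions, PRINCE is the most algorithmic, and its counterfactual idea can be sketched compactly: recommendations come from personalized PageRank over an interaction graph, and an explanation is a smallest subset of the user's own actions whose removal changes the top recommendation. The Python below is an illustrative toy, not the PRINCE algorithm itself (which finds the smallest such subset in polynomial time); the graph, node names, and the brute-force subset search are all invented for the sketch.

```python
import itertools

# Hypothetical interaction graph: the user's prior actions are edges to items,
# and items link on to recommendable nodes.
actions = ["item_a", "item_b", "item_c"]
edges = {"user": set(actions),
         "item_a": {"rec_1"}, "item_b": {"rec_2"}, "item_c": {"rec_2"}}

def personalized_pagerank(graph, source, alpha=0.15, iters=60):
    """Power iteration with restarts to `source`; dangling mass returns there."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    rank = {n: float(n == source) for n in nodes}
    for _ in range(iters):
        nxt = {n: alpha * (n == source) for n in nodes}
        for u in nodes:
            out = graph.get(u, set())
            if out:
                share = (1.0 - alpha) * rank[u] / len(out)
                for v in out:
                    nxt[v] += share
            else:
                nxt[source] += (1.0 - alpha) * rank[u]
        rank = nxt
    return rank

def top_recommendation(active_actions):
    graph = dict(edges)
    graph["user"] = set(active_actions)
    ppr = personalized_pagerank(graph, "user")
    recs = sorted(n for n in ppr if n.startswith("rec_"))
    return max(recs, key=lambda n: ppr[n])   # ties broken alphabetically

original = top_recommendation(actions)
# Search, by increasing size, for the smallest action subset whose removal
# changes the top recommendation. PRINCE computes this in polynomial time;
# exhaustive search is used here only to keep the sketch short.
for size in range(1, len(actions) + 1):
    flip = next((set(sub) for sub in itertools.combinations(actions, size)
                 if top_recommendation(set(actions) - set(sub)) != original),
                None)
    if flip is not None:
        print(f"{original} is counterfactually explained by {sorted(flip)}")
        break
```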

    Goal Reasoning: Papers from the ACS workshop

    This technical report contains the 11 accepted papers presented at the Workshop on Goal Reasoning, which was held as part of the 2013 Conference on Advances in Cognitive Systems (ACS-13) in Baltimore, Maryland on 14 December 2013. This is the third in a series of workshops related to this topic: the first was the AAAI-10 Workshop on Goal-Directed Autonomy, and the second was the Self-Motivated Agents (SeMoA) Workshop, held at Lehigh University in November 2012. Our objective for holding this meeting was to encourage researchers to share information on the study, development, integration, evaluation, and application of techniques related to goal reasoning, which concerns the ability of an intelligent agent to reason about, formulate, select, and manage its goals/objectives.

    Goal reasoning differs from frameworks in which agents are told what goals to achieve, and possibly how goals can be decomposed into subgoals, but are not free to decide dynamically and autonomously what goals to pursue. That constraint can be limiting for agents that solve tasks in complex environments, where it is not feasible to manually engineer/encode complete knowledge of what goal(s) should be pursued for every conceivable state. Yet in such environments, states can be reached in which actions can fail, opportunities can arise, and events can otherwise take place that strongly motivate changing the goal(s) the agent is currently trying to achieve.

    This topic is not new; researchers in several areas have studied goal reasoning (e.g., in the context of cognitive architectures, automated planning, game AI, and robotics). However, it has infrequently been the focus of intensive study, and (to our knowledge) no other series of meetings has focused specifically on goal reasoning. As shown in these papers, providing an agent with the ability to reason about its goals can increase performance measures on some tasks. Recent advances in hardware and software platforms (involving the availability of interesting/complex simulators or databases) have increasingly permitted the application of intelligent agents to tasks that involve partially observable and dynamically updated states (e.g., due to unpredictable exogenous events), stochastic actions, multiple (cooperating, neutral, or adversarial) agents, and other complexities. Thus, this is an appropriate time to foster dialogue among researchers with interests in goal reasoning.

    Research on goal reasoning is still in its early stages; no mature application of it yet exists (e.g., for controlling autonomous unmanned vehicles or in a deployed decision aid). However, it appears to have a bright future. For example, leaders in the automated planning community have specifically acknowledged that goal reasoning has a prominent role among intelligent agents that act on their own plans, and it is gathering increasing attention from roboticists and cognitive systems researchers. In addition to a survey, the papers in this workshop relate to, among other topics, cognitive architectures and models, environment modeling, game AI, machine learning, meta-reasoning, planning, self-motivated systems, simulation, and vehicle control. The authors discuss a wide range of issues pertaining to goal reasoning, including representations and reasoning methods for dynamically revising goal priorities. We hope that readers will find this theme for enhancing agent autonomy appealing and relevant to their own interests, and that these papers will spur further investigation of this important yet (mostly) understudied topic.
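    The workshop's working definition of goal reasoning — an agent that formulates, selects, and manages its own goals rather than executing a single externally given one — can be made concrete with a minimal sketch. The agent loop, discrepancy rules, and goal names below are hypothetical, meant only to show the formulate/select/manage cycle the report describes.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Goal:
    priority: int                     # lower value = more urgent
    name: str = field(compare=False)

class GoalReasoningAgent:
    """Minimal formulate/select/manage loop: the agent decides for itself
    which goal to pursue instead of executing a fixed external goal."""

    def __init__(self):
        self.agenda = []              # priority queue of pending goals

    def formulate(self, state):
        """Map unexpected observations (discrepancies) to candidate goals."""
        if state.get("intruder"):
            yield Goal(0, "intercept-intruder")
        if state.get("fuel", 1.0) < 0.2:
            yield Goal(1, "refuel")

    def step(self, state):
        for goal in self.formulate(state):
            heapq.heappush(self.agenda, goal)
        if self.agenda:
            return heapq.heappop(self.agenda).name   # most urgent goal first
        return "patrol"                              # default standing goal

agent = GoalReasoningAgent()
print(agent.step({"fuel": 0.1}))                     # refuel
print(agent.step({"intruder": True, "fuel": 0.9}))   # intercept-intruder
print(agent.step({}))                                # patrol
```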

    The risk of re-intervention after endovascular aortic aneurysm repair

    This thesis studies survival analysis techniques that deal with censoring to produce predictive tools for the risk of endovascular aortic aneurysm repair (EVAR) re-intervention. Censoring means that some patients do not continue follow-up, so their outcome class is unknown. Existing methods for handling censoring have drawbacks and cannot handle the high censoring of the two EVAR datasets collected. This thesis therefore presents a new solution to high censoring by modifying an approach that was previously incapable of differentiating between risk groups of aortic complications. Feature selection (FS) becomes complicated under censoring. Most survival FS methods depend on Cox's model, whereas machine learning classifiers (MLCs) are preferred here; the few methods that adopt MLCs for survival FS cannot be used with high censoring. This thesis proposes two FS methods that use MLCs to evaluate features. Both employ the new solution to deal with censoring and combine factor analysis with a greedy stepwise FS search that allows eliminated features to re-enter the FS process. The first FS method searches for the best neural network configuration together with the best subset of features. The second combines support vector machine, neural network, and K-nearest-neighbour classifiers using simple and weighted majority voting to construct a multiple classifier system (MCS) that improves on the performance of the individual classifiers; it introduces a new hybrid FS process that uses the MCS as a wrapper method and merges it with an iterated feature-ranking filter method to further reduce the feature set. The proposed techniques outperformed FS methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator, on the log-rank test's p-values, sensitivity, and concordance. This demonstrates that the proposed techniques are more powerful at correctly predicting the risk of re-intervention and thus enable doctors to set an appropriate future observation plan for each patient.
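    The wrapper-style FS with a majority-voting multiple classifier system described above can be sketched generically. The Python code below assumes scikit-learn and synthetic data; it shows greedy forward selection scored by a simple-majority MCS of SVM, neural network, and k-NN members, and deliberately omits the thesis's specific contributions (the censoring handling, factor analysis, re-entry of eliminated features, and the iterated feature-ranking filter).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in data; the thesis used two highly censored EVAR datasets.
X, y = make_classification(n_samples=200, n_features=10, n_informative=4,
                           random_state=0)

def make_mcs():
    """Simple-majority-vote MCS over SVM, neural network, and k-NN members."""
    return VotingClassifier(estimators=[
        ("svm", SVC()),
        ("nn", MLPClassifier(max_iter=2000, random_state=0)),
        ("knn", KNeighborsClassifier())], voting="hard")

def greedy_forward_selection(X, y):
    """Wrapper FS: repeatedly add the feature that most improves CV accuracy."""
    selected, best_score = [], 0.0
    while len(selected) < X.shape[1]:
        candidates = [f for f in range(X.shape[1]) if f not in selected]
        scores = {f: cross_val_score(make_mcs(), X[:, selected + [f]], y,
                                     cv=5).mean()
                  for f in candidates}
        f, score = max(scores.items(), key=lambda kv: kv[1])
        if score <= best_score:        # no remaining feature helps the ensemble
            break
        selected, best_score = selected + [f], score
    return selected, best_score

features, score = greedy_forward_selection(X, y)
print(f"selected features {features}, cross-validated accuracy {score:.3f}")
```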