
    On-line case-based policy learning for automated planning in probabilistic environments

    Many robotic control architectures perform a continuous cycle of sensing, reasoning and acting, where the reasoning can be carried out in a reactive or deliberative form. Reactive methods are fast and provide the robot with high interaction and response capabilities. Deliberative reasoning is particularly suitable in robotic systems because it employs some form of forward projection (reasoning in depth about goals, pre-conditions, resources and timing constraints) and provides the robot with reasonable responses in situations unforeseen by the designer. However, this reasoning, typically conducted with Artificial Intelligence techniques such as Automated Planning (AP), is often not effective for controlling autonomous agents that operate in complex and dynamic environments. Deliberative planning, although feasible in stable situations, takes too long in unexpected or changing situations which require re-planning. Therefore, planning cannot be done on-line in many complex robotic problems, where quick responses are frequently required. In this paper, we propose an alternative approach based on case-based policy learning which integrates deliberative reasoning through AP with the response time of reactive planning policies. The method is based on learning planning knowledge from actual experiences to obtain a case-based policy. The contribution of this paper is twofold. First, it is shown that the learned case-based policy produces reasonable and timely responses in complex environments. Second, it is shown how a case-based policy that solves a particular problem can be reused to solve a similar but more complex problem in a transfer learning setting. This paper has been partially supported by the Spanish Ministerio de Economía y Competitividad TIN2015-65686-C5-1-R and the European Union's Horizon 2020 Research and Innovation programme under Grant Agreement No. 730086 (ERGO).
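    A case-based policy of the kind described can be pictured as a nearest-neighbour lookup over stored (state, action) cases: the agent reacts by reusing the action of the most similar past experience instead of re-planning. The state encoding and similarity metric below are illustrative assumptions, not the authors' actual representation.

```python
# Minimal sketch of a case-based policy: each case pairs a state
# (here a numeric feature vector) with the action a deliberative
# planner chose for it.  At run time the agent reacts by reusing
# the action of the most similar stored case.

def distance(a, b):
    """Euclidean distance between two state feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class CaseBasedPolicy:
    def __init__(self):
        self.cases = []  # list of (state, action) pairs

    def learn(self, state, action):
        """Store a solved planning episode as a new case."""
        self.cases.append((state, action))

    def act(self, state):
        """React: return the action of the nearest stored case."""
        _, action = min(self.cases, key=lambda c: distance(c[0], state))
        return action

policy = CaseBasedPolicy()
policy.learn((0.0, 0.0), "move_north")
policy.learn((5.0, 5.0), "recharge")
print(policy.act((0.5, 0.2)))  # nearest case is (0.0, 0.0) -> "move_north"
```

    Retrieval here is linear in the number of cases; a practical system would index the case base, but the reactive-response idea is the same.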

    Integrating Case-Based Reasoning with Adaptive Process Management

    The need for more flexibility of process-aware information systems (PAIS) has been discussed for several years, and different approaches for adaptive process management have emerged. Only a few of them support both changes to individual process instances and the propagation of process type changes to a collection of related process instances. The knowledge about changes has not yet been exploited by any of these systems. To overcome this practical limitation, PAIS must capture the whole process life cycle and all kinds of changes in an integrated way. They must allow users to deviate from the predefined process in exceptional situations, and assist them in retrieving and reusing knowledge about previously performed changes. In this report we present a proof-of-concept implementation of a learning adaptive PAIS. The prototype combines the ADEPT2 framework for dynamic process changes with concepts and methods provided by case-based reasoning (CBR) technology.

    Workshop 13. Clinical Diagnostic Reasoning: Equipping students with peer instruction skills to work together in developing their diagnostic reasoning

    Workshop format: An introductory presentation covering best evidence in the current medical education literature on developing diagnostic clinical reasoning skills in undergraduate students, followed by small-group work on clinical tutor-identified real case scenarios, enabling delegates to identify teaching and learning approaches that help undergraduate students develop diagnostic reasoning skills, including facilitation of peer-to-peer approaches. A closing plenary will include:
    • a DVD demonstrating the authors' approach to facilitating skills development in this area
    • further discussion of the student-led approach
    • reflection on incorporating novel approaches into delegates' own curricula and teaching sessions
    • presentation of the authors' student "pocket guide" hand-out
    • questions, answers and sharing of best practice.
    Objectives: To consider clinical tutor-identified, specific student cognitive-processing difficulties in clinical diagnostic reasoning in contemporary systems-based curricula. To consider specific challenges for students in developing their own clinical reasoning skills following the transition from university to clinical teaching environments. To aid development of students' ability to reflect on their own clinical reasoning skills and to facilitate development of these skills in their colleagues. To share best practice with colleagues. To discuss the authors' example of curricular innovation in this area.
    Intended audience: Tutors responsible for delivering clinical skills and clinical reasoning teaching in undergraduate training.

    Explanation for defeasible entailment

    Explanation facilities are an essential part of tools for knowledge representation and reasoning systems. Knowledge representation and reasoning systems allow users to capture information about the world and reason about it. They are useful in understanding entailments, which allow users to derive implicit knowledge that can be made explicit through inference. Explanations also assist users in debugging and repairing knowledge bases when conflicts arise. Understanding the conclusions drawn from logic-based systems is complex and requires expert knowledge, especially when defeasible knowledge bases are involved, for both expert and general users. A defeasible knowledge base represents statements that can be retracted because they refer to information with exceptions to the stated rules. That is, a defeasible statement is one that may be withdrawn upon learning of an exception. Explanations for classical logics, such as description logics, which are well-known formalisms for reasoning about information in a given domain, are provided through the notion of justifications. In the classical case, simply listing the statements responsible for an entailment is enough to justify it. However, in the defeasible case, where entailed statements can be retracted, this is not adequate, because entailment is computed in a more complicated way than in the classical case. In this dissertation, we combine explanations with a particular approach to defeasible reasoning. We provide an algorithm to compute justification-based explanations for defeasible knowledge bases. It is shown that, to accurately derive justifications for defeasible knowledge bases, we need to establish the point at which conflicts arise by using an algorithm that computes a ranking of defeasible statements.
    This means that only a portion of the knowledge is considered, because the statements that cause conflicts are discarded. The final algorithm consists of two parts: the first establishes the point at which conflicts occur, and the second uses the information obtained from the first to compute justifications for defeasible knowledge bases.
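    The two-part procedure can be illustrated with a toy sketch in the style of ranking-based defeasible reasoning: statements carry a precomputed rank (most general first), lower ranks are discarded while they conflict with the query, and the surviving statements form the basis of the justification. The ranking, the conflict check and the entailment check below are abstracted into caller-supplied predicates; this is an illustration of the idea, not the dissertation's actual algorithm.

```python
# Toy sketch of ranking-based defeasible justification.  ranked_kb is
# a list of ranks, each a set of statements, with rank 0 (the most
# general rules) first.  Part 1 discards ranks while a conflict with
# the query remains; part 2 computes a justification from what is left.

def defeasible_justification(ranked_kb, conflicts_with, entailed_by):
    """conflicts_with(stmts): True if the query conflicts with stmts.
    entailed_by(stmts): True if stmts entail the query.
    Returns the statements used, or None if the query is not entailed."""
    remaining = list(ranked_kb)
    # Part 1: establish the point at which conflicts stop arising.
    while remaining and conflicts_with(set().union(*remaining)):
        remaining.pop(0)  # discard the lowest (most general) rank
    kept = set().union(*remaining) if remaining else set()
    # Part 2: justify from the remaining statements only.
    return kept if entailed_by(kept) else None

# Classic example: birds normally fly, but penguins are exceptional.
kb = [{"birds normally fly"},
      {"penguins are birds", "penguins normally do not fly"}]
used = defeasible_justification(
    kb,
    conflicts_with=lambda s: "birds normally fly" in s,
    entailed_by=lambda s: "penguins normally do not fly" in s)
print(used)  # the rank-0 rule is discarded; the penguin rules justify the query
```

    Note how the justification deliberately omits the discarded rank-0 statement, which is exactly why listing classical justifications is not adequate in the defeasible case.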

    Toward Learning Systems: A Validated Method for Classifying Knowledge Queries

    Organizations currently possess a vast and rapidly growing amount of information--much of it residing in corporate databases, generated as a by-product of transaction automation. Although this information is a potentially rich source of knowledge about production processes, few companies have begun to implement learning systems to leverage the value of this stored data and information (Bohn 1994). At the same time, there is a broad and expanding array of technologies, tools, and models to assist in deriving knowledge from data. These include technological areas such as knowledge discovery in databases, machine learning, statistics, neural networks, expert systems, and case-based reasoning (Piatetsky-Shapiro and Frawley 1991). These technologies and tools possess a broad range of capabilities for inductive, deductive, and analogical reasoning approaches to the creation and validation of knowledge. The literature of computer science, information systems, and statistics contains a vast number of studies comparing the most similar types of learning algorithms from a very technical perspective (Curram and Mingers 1994; Kodratoff 1988; Weiss and Kulikowski 1991), and specialists exist in each of the technology areas. However, organizations faced with planning learning systems cannot normally assemble a team with expertise in each of the potentially important technology areas. They must start by identifying critical questions and learning goals, which are driven by business context and unconstrained by the capabilities of particular technologies. The premise of the research is that there is a growing need for guiding frameworks and methods to help organizations assess their learning needs--starting from the broadened perspective of business knowledge requirements--and match them with the most suitable categories of learning technologies, models, tools, and specialists (Keen 1994).
    This report describes research that is underway to develop a classification theory to serve as the foundation for the technology selection stages of such a method.

    Modeling Purposive Legal Argumentation and Case Outcome Prediction using Argument Schemes in the Value Judgment Formalism

    Artificial Intelligence and Law studies how legal reasoning can be formalized in order to eventually develop systems that assist lawyers in researching, drafting, and evaluating arguments in a professional setting. To further this goal, researchers have been developing systems that, to a limited extent, autonomously engage in legal reasoning and argumentation on closed domains. This dissertation presents the Value Judgment Formalism and its experimental implementation in the VJAP system, which is capable of arguing about, and predicting outcomes of, a set of trade secret misappropriation cases. VJAP argues about cases by creating an argument graph for each case using a set of argument schemes. These schemes use a representation of the values underlying trade secret law and of the effects of facts on those values. VJAP argumentatively balances effects in the given case and analogizes it to individual precedents and the value tradeoffs in those precedents. It predicts case outcomes using a confidence measure computed from the argument graph and generates textual legal arguments justifying its predictions. The confidence propagation uses quantitative weights assigned to the effects of facts on values; VJAP automatically learns these weights from past cases using an iterative optimization method. The experimental evaluation shows that VJAP generates case-based legal arguments that make plausible and intelligent-appearing use of precedents to reason about a case in terms of its differences and similarities to a precedent and the value tradeoffs both contain. VJAP's prediction performance is promising when compared to machine learning algorithms that do not generate legal arguments. Due to the small case base, however, the assessment of prediction performance was not statistically rigorous.
    VJAP exhibits argumentation and prediction behavior that, to some extent, resembles phenomena in real case-based legal reasoning, such as realistic-looking citation graphs. The VJAP system and experiment demonstrate that it is possible to effectively combine symbolic knowledge and inference with quantitative confidence propagation. In AI & Law, such systems can embrace the structure of legal reasoning and learn quantitative information about the domain from prior cases, as well as apply this information in a structurally realistic way to new cases.
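    Confidence propagation over an argument graph with learned weights can be sketched as a weighted aggregation of supporting and attacking sub-arguments. The recursive scheme, the weights and the graph below are invented for illustration; this is not VJAP's actual formula.

```python
# Illustrative confidence propagation over a small acyclic argument
# graph: an argument's confidence is its base confidence plus the
# weighted confidences of its supporters, minus those of its
# attackers, clamped to [0, 1].  Numbers and formula are invented.

def confidence(node, graph, weights, base):
    """Recursively score `node` from its children's scores."""
    score = base[node]
    for child, relation in graph.get(node, []):
        w = weights[(node, child)]
        c = confidence(child, graph, weights, base)
        score += w * c if relation == "support" else -w * c
    return max(0.0, min(1.0, score))

# Tiny graph: argument A is supported by B and attacked by C.
graph = {"A": [("B", "support"), ("C", "attack")]}
weights = {("A", "B"): 0.6, ("A", "C"): 0.4}
base = {"A": 0.5, "B": 0.8, "C": 0.7}
print(confidence("A", graph, weights, base))  # 0.5 + 0.6*0.8 - 0.4*0.7
```

    Learning then amounts to adjusting the weights so that propagated confidences reproduce the outcomes of past cases, which is what an iterative optimization over precedents would target.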

    The Effectiveness of Case-Based Learning in Facilitating Clinical Reasoning Skills in Undergraduate Anatomy and Physiology Instruction

    Case-based learning (CBL) is an approach that uses clinical case activities in the classroom to engage students and encourage a deeper understanding of scientific concepts. Anatomy and Physiology (A&P) is a course that many students take as a prerequisite for admission to professional health schools. This study investigated the effect of CBL in facilitating clinical reasoning skills (CRS) in undergraduate A&P instruction. Undergraduate students from two classes taught by the same instructor participated in the study. One class (experimental group, n = 24) was taught with the CBL approach, and the other class (control group, n = 24) was taught without CBL. Quantitative data collected for this study were scores on the pretest and posttest clinical reasoning problem (CRP) instrument about the central nervous system, autonomic nervous system, and special senses. A 2 × 2 (CBL vs. no CBL × pre-posttest) mixed-model analysis of variance (ANOVA) was performed for each of the three systems with the scores on the CRP as the dependent variable. Nine students were selected for interviews from the control and experimental groups based on their CRP assessments. Interviews were conducted after the completion of each CRP assessment, and content analysis was performed on the interview data. Analysis of the quantitative data revealed an increase in mean scores from pretest to posttest for the experimental group but a decrease in mean scores from pretest to posttest for the control group. Scores on special senses revealed a significant group × time interaction effect. Analysis of the interviews revealed that students in the experimental group utilized A&P concepts while reasoning through the CRP assessments. These results suggest that CBL may help facilitate CRS.
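    The group × time interaction tested by the mixed-model ANOVA is, at its core, a difference of differences: it is present when the pretest-to-posttest change differs between the two groups. The scores below are fabricated to illustrate the contrast; they are not the study's data.

```python
# Toy illustration of a 2 x 2 (group x time) interaction contrast:
# the difference between the two groups' pretest-to-posttest gains.
# Fabricated scores chosen to mirror the reported pattern
# (experimental group improves, control group declines).

def mean(xs):
    return sum(xs) / len(xs)

cbl_pre,  cbl_post  = [55, 60, 58, 62], [70, 75, 72, 78]   # experimental
ctrl_pre, ctrl_post = [56, 59, 61, 57], [54, 58, 57, 55]   # control

cbl_gain  = mean(cbl_post)  - mean(cbl_pre)    # +15.0
ctrl_gain = mean(ctrl_post) - mean(ctrl_pre)   # -2.25
interaction = cbl_gain - ctrl_gain             # 17.25

print(cbl_gain, ctrl_gain, interaction)
```

    A full mixed-model ANOVA additionally tests whether this contrast is large relative to within-group variability before declaring it significant.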

    Challenges for a CBR framework for argumentation in open MAS

    [EN] Nowadays, Multi-Agent Systems (MAS) are broadening their applications to open environments, where heterogeneous agents can enter the system, form agents' organizations and interact. The high dynamism of open MAS gives rise to potential conflicts between agents and thus to a need for a mechanism to reach agreements. Argumentation is a natural way of harmonizing conflicts of opinion that has been applied to many disciplines, such as Case-Based Reasoning (CBR) and MAS. Some approaches that apply CBR to manage argumentation in MAS have been proposed in the literature. These improve agents' argumentation skills by allowing them to reason and learn from experiences. In this paper, we have reviewed these approaches and identified the current contributions of the CBR methodology in this area. As a result of this work, we have proposed several open issues that must be taken into consideration to develop a CBR framework that provides the agents of an open MAS with arguing and learning capabilities. This work was partially supported by CONSOLIDER-INGENIO 2010 under grant CSD2007-00022 and by the Spanish government and FEDER funds under the TIN2006-14630-C0301 project. Heras Barberá, S. M.; Botti Navarro, V. J.; Julian Inglada, V. J. (2009). Challenges for a CBR framework for argumentation in open MAS. The Knowledge Engineering Review, 24(4), 327-352. https://doi.org/10.1017/S0269888909990178

    Case-Based Capture and Reuse of Aerospace Design Rationale

    The goal of this project is to apply artificial intelligence techniques to facilitate capture and reuse of aerospace design rationale. The project applies case-based reasoning (CBR) and concept mapping (CMAP) tools to the task of capturing, organizing, and interactively accessing experiences or "cases" encapsulating the methods and rationale underlying expert aerospace design. As stipulated in the award, Indiana University and Ames personnel are collaborating on performance of research and determining the direction of research, to assure that the project focuses on high-value tasks. In the first five months of the project, we have made two visits to Ames Research Center to consult with our NASA collaborators, to learn about the advanced aerospace design tools being developed there, and to identify specific needs for intelligent design support. These meetings identified a number of task areas for applying CBR and concept mapping technology. We jointly selected a first task area to focus on: Acquiring the convergence criteria that experts use to guide the selection of useful data from a set of numerical simulations of high-lift systems. During the first funding period, we developed two software systems. First, we have adapted a CBR system developed at Indiana University into a prototype case-based reasoning shell to capture and retrieve information about design experiences, with the sample task of capturing and reusing experts' intuitive criteria for determining convergence (work conducted at Indiana University). Second, we have also adapted and refined existing concept mapping tools that will be used to clarify and capture the rationale underlying those experiences, to facilitate understanding of the expert's reasoning and guide future reuse of captured information (work conducted at the University of West Florida). 
    The tools we have developed are designed to be the basis for a general framework for facilitating tasks within systems developed by the Advanced Design Technologies Testbed (ADTT) project at ARC. The tenets of our framework are (1) that the systems developed should leverage a designer's knowledge rather than attempting to replace it; (2) that learning and user feedback must play a central role, so that the system can adapt to how it is used; and (3) that the learning and feedback processes must be as natural and as unobtrusive as possible. In the second funding period we will extend our current work, applying the tools to capturing higher-level design rationale.

    Similarity and explanation for dynamic telecommunication engineer support.

    Understanding similarity between different examples is a crucial aspect of Case-Based Reasoning (CBR) systems, but learning representations optimised for similarity comparisons can be difficult. CBR systems typically rely on separate algorithms to learn representations for cases and to compare those representations, as symbolised by the vocabulary and similarity knowledge containers respectively. Deep Metric Learners (DMLs) are a branch of deep learning architectures that learn a representation optimised for similarity comparison by leveraging direct case comparisons during training. In this thesis we explore the symbiotic relationship between these two fields of research. First, we examine what can be learned from traditional CBR research to improve the training of DMLs through training strategies. We then examine how DMLs can fill the traditionally separate roles of the vocabulary and similarity knowledge containers. We perform this exploration on the real-world problem of experience transfer between experts and non-experts on service provisioning for telecommunication organisations. This problem also reveals the requirements for practical applications to be explainable to their intended user group. With that in mind, we conclude this thesis with work towards an explanation framework designed to explain the recommendations of similarity-based classifiers. We support this practical contribution with an exploration of similarity knowledge to support autonomous measurement of explanation quality.
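    A deep metric learner is trained so that an anchor case lies closer to a same-class (positive) case than to a different-class (negative) one by at least a margin. The triplet margin loss below is a minimal, framework-free sketch of that training signal; real DMLs compute it over learned embeddings rather than raw feature vectors.

```python
# Minimal sketch of the triplet margin loss used to train deep
# metric learners: the loss is zero once the negative example is at
# least `margin` farther from the anchor than the positive example.

def dist(a, b):
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Positive loss means the triplet still violates the margin."""
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

anchor, positive = (0.0, 0.0), (0.1, 0.0)
print(triplet_loss(anchor, positive, (3.0, 4.0)))  # far negative: loss is 0.0
print(triplet_loss(anchor, positive, (0.2, 0.0)))  # near negative: loss ~0.9
```

    During training, gradients of this loss move the embedding so that similar cases cluster, which is what lets a single learned representation serve both the vocabulary and similarity knowledge containers.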