
    Knowledge Acquisition and Structuring by Multiple Experts in a Group Support Systems Environment

    This study addresses the impact of Group Decision Support Systems (GDSS) on expert system development by multiple Domain Experts. Current approaches to building expert systems rely heavily on knowledge acquisition and prototyping by a Knowledge Engineer working directly with the Domain Expert. Although the complexity of knowledge domains and new organizational approaches demand the involvement of multiple experts, standard procedures limit the ability of the Knowledge Engineer to work with more than one expert at a time. Group Decision Support Systems offer a networked, computerized environment for group work in which multiple experts may express their ideas concurrently and anonymously through an electronic channel. GDSS have been widely used in other applications to support idea generation, conflict management, and the organizing, prioritizing, and synthesizing of ideas, and the effects of many group-process and technical factors on GDSS have been widely studied and documented. A review of the literature on expert systems, GDSS, and GDSS in relation to expert systems was conducted, and the knowledge gained was applied in constructing an exploratory research model intended to provide the breadth needed to identify factors worthy of future, more statistically based investigation. Domain Experts, represented by college students, were charged with developing and prioritizing ideas for creating a pre-prototypical expert system. The treatment group worked in a GDSS environment with a facilitator; a control group worked with a facilitator but without the assistance of GDSS. Each group then exchanged facilitators and technology to address another real-life problem. Additional groups worked with GDSS over time, addressing both problems. Data relating to group-efficiency, group-process, attitudinal, and product-quality factors were gathered, analyzed, and discussed. Independent Knowledge Engineers and Domain Experts evaluated the validity and verifiability of the group products. Analysis focused on the effect of GDSS in facilitating the acquisition and structuring of ideas for expert systems by multiple Domain Experts.

    Knowledge-based platform for the provisioning system

    The study examined the effectiveness and application of tacit knowledge in the area of service delivery and order provisioning. The selected organization is a corporate telecommunications body, Telekom Malaysia Bhd (TM), and its specific customer, the Government Integrated Telecommunication Network (GITN). This domain and these organizations were chosen because of the special wholesale-based business arrangement between TM and GITN. The current Order Management System (OMS) in TM cannot provide the required analysis and real-time status reports of service delivery because of its poor handling of bulk service orders, yet real-time analysis and service-delivery reports on provisioning are vital inputs to TM management's decision-making process. The key objective of this study is to propose a solution that improves TM's current business process, particularly in tracking and monitoring, by making use of the tacit knowledge acquired from the experts at ground level. Leveraging the tacit knowledge underlying TM's day-to-day business process required elicitation, the adoption of an effective interview technique, codification of tacit knowledge into explicit knowledge, and the construction of appropriate system rules for the prototype. In general, the results showed an acceptable improvement, especially in the project management of the service-delivery area. The findings of this study are sufficient to encourage further work on the research model, and several recommendations are presented for future research. (Author's abstract)
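
    As an illustration of the codification step, elicited expert heuristics can be written down as explicit, human-readable rules that a prototype applies to each order. A minimal sketch, assuming hypothetical order fields and thresholds (ServiceOrder, days_open, and the 30-day escalation rule are all invented, not TM's actual business rules):

    from dataclasses import dataclass

    @dataclass
    class ServiceOrder:
        order_id: str
        days_open: int
        site_survey_done: bool
        equipment_delivered: bool

    # Each rule pairs a condition elicited from an expert with an
    # explicit, human-readable recommendation (hypothetical examples).
    RULES = [
        (lambda o: o.days_open > 30 and not o.site_survey_done,
         "Escalate: site survey outstanding beyond 30 days"),
        (lambda o: o.site_survey_done and not o.equipment_delivered,
         "Chase logistics: survey complete but equipment not delivered"),
    ]

    def assess(order: ServiceOrder) -> list[str]:
        """Return every expert-rule recommendation that fires for an order."""
        return [advice for condition, advice in RULES if condition(order)]

    print(assess(ServiceOrder("GITN-001", days_open=45,
                              site_survey_done=False, equipment_delivered=False)))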

    Large-Scale Neural Systems for Vision and Cognition

    Consideration of how people respond to the question "What is this?" has suggested new problem frontiers for pattern recognition and information fusion, as well as neural systems that embody the cognitive transformation of declarative information into relational knowledge. In contrast to traditional classification methods, which aim to find the single correct label for each exemplar ("This is a car"), the new approach discovers rules that embody coherent relationships among labels which would otherwise appear contradictory to a learning system ("This is a car, that is a vehicle, over there is a sedan"). This talk will describe how an individual who experiences exemplars in real time, with each exemplar trained on at most one category label, can autonomously discover a hierarchy of cognitive rules, thereby converting local information into global knowledge. Computational examples are based on the observation that sensors working at different times, locations, and spatial scales, and experts with different goals, languages, and situations, may produce apparently inconsistent image labels, which are reconciled by implicit underlying relationships that the network's learning process discovers. The ARTMAP information fusion system can, moreover, integrate multiple separate knowledge hierarchies by fusing independent domains into a unified structure. In the process, the system discovers cross-domain rules, inferring multilevel relationships among groups of output classes, without any supervised labeling of these relationships. In order to self-organize its expert system, the ARTMAP information fusion network features distributed code representations which exploit the model's intrinsic capacity for one-to-many learning ("This is a car and a vehicle and a sedan") as well as many-to-one learning ("Each of those vehicles is a car"). Fusion system software, testbed datasets, and articles are available from http://cns.bu.edu/techlab. Supported by the Defense Advanced Research Projects Agency (Hewlett-Packard Company, DARPA HR0011-09-3-0001; HRL Laboratories LLC subcontract 801881-BS under prime contract HR0011-09-C-0011) and the Science of Learning Centers program of the National Science Foundation (SBE-0354378).
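
    The rule-discovery idea can be illustrated outside ARTMAP with a toy co-occurrence analysis: if a label such as "sedan" never occurs without "car", the implication sedan => car can be inferred without any supervised labeling of the relationship. A minimal sketch under that simplification, with invented exemplars (this is not the ARTMAP algorithm itself):

    from itertools import permutations

    # Each exemplar carries labels assigned by different sources at different
    # times; apparent contradictions hide an implicit label hierarchy.
    exemplars = [
        {"sedan", "car", "vehicle"},
        {"car", "vehicle"},
        {"truck", "vehicle"},
        {"sedan", "car", "vehicle"},
    ]

    def infer_rules(exemplars):
        """Infer 'a implies b' whenever label a never occurs without label b."""
        labels = set().union(*exemplars)
        rules = []
        for a, b in permutations(labels, 2):
            support = [e for e in exemplars if a in e]
            if support and all(b in e for e in support):
                rules.append((a, b))
        return rules

    for a, b in sorted(infer_rules(exemplars)):
        print(f"{a} => {b}")
    # Prints: car => vehicle, sedan => car, sedan => vehicle, truck => vehicle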

    Automated Clinical Coding: What, Why, and Where We Are?

    Clinical coding is the task of transforming medical information in a patient's health records into structured codes so that they can be used for statistical analysis. This is a cognitive and time-consuming task that follows a standard process in order to achieve a high level of consistency. Clinical coding could potentially be supported by an automated system to improve the efficiency and accuracy of the process. We introduce the idea of automated clinical coding and summarise its challenges from the perspective of Artificial Intelligence (AI) and Natural Language Processing (NLP), based on the literature, our project experience over the past two and a half years (late 2019 to early 2022), and discussions with clinical coding experts in Scotland and the UK. Our research reveals the gaps between the current deep-learning-based approaches to clinical coding and the need for explainability and consistency in real-world practice. Knowledge-based methods that represent and reason over the standard, explainable process of the task may need to be incorporated into deep-learning-based methods for clinical coding. Automated clinical coding is a promising task for AI, despite the technical and organisational challenges, and clinical coders need to be involved in the development process. There is much to achieve in developing and deploying an AI-based automated system to support coding in the next five years and beyond. (Comment: accepted for npj Digital Medicine)
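
    To make the task concrete, clinical coding can be framed as multi-label text classification over free-text notes. The sketch below is a generic scikit-learn baseline with invented notes and ICD-style codes, not one of the deep learning systems the paper surveys:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MultiLabelBinarizer

    # Toy discharge summaries and ICD-style codes (invented for illustration).
    notes = [
        "patient admitted with chest pain and shortness of breath",
        "type 2 diabetes mellitus, poorly controlled",
        "chest pain radiating to left arm, diabetic history",
    ]
    codes = [["R07.4"], ["E11.9"], ["R07.4", "E11.9"]]

    mlb = MultiLabelBinarizer()
    Y = mlb.fit_transform(codes)  # one binary indicator column per code

    # One independent binary classifier per code over TF-IDF features.
    clf = make_pipeline(TfidfVectorizer(),
                        OneVsRestClassifier(LogisticRegression()))
    clf.fit(notes, Y)

    # Per-code probabilities for a new note.
    proba = clf.predict_proba(["shortness of breath and chest pain"])[0]
    print(dict(zip(mlb.classes_, proba.round(2))))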

    RESAM: Requirements Elicitation and Specification for Deep-Learning Anomaly Models with Applications to UAV Flight Controllers

    Cyber-physical systems (CPS) must be closely monitored to identify and potentially mitigate emergent problems that arise during their routine operations. However, the multivariate time-series data they typically produce can be complex to understand and analyze. While formal product documentation often provides example data plots with diagnostic suggestions, the sheer diversity of attributes, critical thresholds, and data interactions can be overwhelming to non-experts, who subsequently seek help from discussion forums to interpret their data logs. Deep learning models, such as long short-term memory (LSTM) networks, can be used to automate these tasks and to provide clear explanations of diverse anomalies detected in real-time multivariate data streams. In this paper we present RESAM, a requirements process that integrates knowledge from domain experts, discussion forums, and formal product documentation to discover and specify requirements and design definitions, in the form of time-series attributes, that contribute to the construction of effective deep learning anomaly detectors. We present a case study based on a flight control system for small uncrewed aerial systems and demonstrate that RESAM guides the construction of effective anomaly detection models while also providing underlying support for explainability. RESAM is relevant to domains in which open or closed online forums provide discussion support for log analysis.
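
    A common way to build such a detector, sketched below, is to train an LSTM to predict the next timestep of the multivariate stream and flag timesteps whose prediction error is unusually large. The architecture, threshold rule, and synthetic telemetry are illustrative assumptions, not the paper's model:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Synthetic 2-attribute telemetry: a sine/cosine pair with a spike at t=80.
    t = torch.linspace(0, 12.56, 100)
    series = torch.stack([torch.sin(t), torch.cos(t)], dim=1)
    series[80] += 3.0  # the injected anomaly

    class NextStepLSTM(nn.Module):
        def __init__(self, n_attrs=2, hidden=16):
            super().__init__()
            self.lstm = nn.LSTM(n_attrs, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_attrs)
        def forward(self, x):
            out, _ = self.lstm(x)
            return self.head(out)

    model = NextStepLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    x, y = series[None, :-1], series[None, 1:]     # predict t+1 from t

    for _ in range(200):                           # brief training loop
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()

    # Flag timesteps whose prediction error is far above the mean.
    err = (model(x) - y).pow(2).sum(-1).sqrt()[0]
    threshold = err.mean() + 3 * err.std()
    print("anomalous timesteps:", (err > threshold).nonzero().flatten().tolist())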

    Exploratory analysis of methods for automated classification of laboratory test orders into syndromic groups in veterinary medicine

    Background: Recent focus on earlier detection of pathogen introduction in human and animal populations has led to the development of surveillance systems based on automated monitoring of health data. Real- or near real-time monitoring of pre-diagnostic data requires automated classification of records into syndromes (syndromic surveillance) using algorithms that incorporate medical knowledge in a reliable and efficient way while remaining comprehensible to end users. Methods: This paper describes the application of two machine learning methods (Naïve Bayes and Decision Trees) and rule-based methods to extract syndromic information from laboratory test requests submitted to a veterinary diagnostic laboratory. Results: High performance (F1-macro = 0.9995) was achieved by a rule-based syndrome classifier built through rule induction followed by manual modification during the construction phase, which also made the resulting classification process clearly interpretable. An unmodified rule induction algorithm achieved an F1-micro score of 0.979, though this fell to 0.677 when performance for individual classes was averaged in an unweighted manner (F1-macro), because the algorithm failed to learn 3 of the 16 classes from the training set. Decision Trees showed equal interpretability to the rule-based approaches but achieved an F1-micro score of 0.923 (falling to 0.311 when classes are given equal weight). A Naïve Bayes classifier learned all classes and achieved high performance (F1-micro = 0.994 and F1-macro = 0.955); however, its classification process is not transparent to the domain experts. Conclusion: The use of a manually customised rule set allowed the development of a system for classifying laboratory tests into syndromic groups with very high performance and high interpretability by the domain experts. Further research is required to develop internal validation rules so that model rules can be updated automatically without user input.
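
    The divergence between F1-micro and F1-macro reported above is easy to reproduce: micro-averaging pools all individual decisions, so a classifier that never predicts a rare class can still score well, whereas macro-averaging weights every class equally. A small sketch with invented syndrome labels:

    from sklearn.metrics import f1_score

    # Imbalanced ground truth: "respiratory" dominates, "mastitis" is rare.
    y_true = ["respiratory"] * 90 + ["mastitis"] * 10
    # A classifier that never learned the rare class from the training set.
    y_pred = ["respiratory"] * 100

    print("F1-micro:", f1_score(y_true, y_pred, average="micro"))  # 0.90
    # zero_division=0 scores the never-predicted class as 0 instead of warning.
    print("F1-macro:", f1_score(y_true, y_pred, average="macro",
                                zero_division=0))                  # ~0.47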

    Fuzzy clustering with an application to scheduling

    Generating an optimal schedule is usually a costly and time-consuming process that requires expensive computational software and hardware. Modeling scheduling problems with human expert knowledge is a promising and flexible way to deal with real-world applications. Unfortunately, human expert knowledge may not be available in all cases, and human experts may not be able to explain their knowledge explicitly. A new scheduling-decision learning approach is introduced in this thesis. A subtractive-clustering-based system identification method is developed to learn the scheduling decision mechanism from an existing schedule, and is used to build a fuzzy expert model. The existing schedule can be an optimal schedule developed using an optimization method or a schedule generated by a human expert. The fuzzy expert model is then used to generate new schedules for other problems, following the decision mechanism it learned. The implementation of this method is demonstrated by modeling a single-machine weighted-flowtime problem. Furthermore, selective subtractive clustering and modified subtractive clustering algorithms are developed and used to improve knowledge extraction. These algorithms can also be used to model nonlinear and spiral systems with clustering-based system identification, for example in function approximation and pattern classification applications when information about the system is scarce.
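
    For readers unfamiliar with the method, subtractive clustering (in Chiu's standard formulation, on which such approaches build) assigns each data point a density-like potential, repeatedly selects the highest-potential point as a cluster centre, and subtracts that centre's influence before the next selection. A minimal NumPy sketch; the radii and stopping fraction are common defaults, not values from the thesis:

    import numpy as np

    def subtractive_clustering(X, ra=0.5, rb=0.75, stop_frac=0.15):
        """Return cluster centres chosen by Chiu-style subtractive clustering."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        potential = np.exp(-4.0 * d2 / ra**2).sum(axis=1)    # density potential
        centres, first_peak = [], potential.max()
        while potential.max() > stop_frac * first_peak:
            k = int(potential.argmax())
            centres.append(X[k])
            # Subtract the chosen centre's influence from every point's potential.
            potential -= potential[k] * np.exp(-4.0 * d2[k] / rb**2)
        return np.array(centres)

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.2, 0.05, (30, 2)),
                   rng.normal(0.8, 0.05, (30, 2))])
    print(subtractive_clustering(X))  # roughly one centre per cluster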

    Discovering and Utilising Expert Knowledge from Security Event Logs

    Security assessment and configuration is a methodology for protecting computer systems from malicious entities. It is a continuous process and heavily dependent on human experts, who are widely acknowledged to be in short supply; this can leave a system insecure for lack of easily accessible experience and specialist resources. While performing security tasks, human experts often revert to a system's event logs to determine its security status, covering failures, configuration modifications, system operations and the like. However, finding and exploiting knowledge from event logs is a challenging and time-consuming task for non-experts. Hence, there is a strong need for mechanisms that make the process easier for security experts, as well as for tools usable by those with significantly less security expertise. Automating the process allows for persistent and methodical testing without excessive manual time and effort, and makes computer security more accessible to non-experts. In this thesis, we present a novel technique to process the security event logs of a system that has been evaluated and configured by a security expert, extract key domain knowledge indicative of human decision making, and automatically apply the acquired knowledge to previously unseen systems so that non-experts receive recommended security improvements. The proposed solution utilises association and causal rule mining techniques to automatically discover relationships in the event log entries. The relationships take the form of cause-and-effect rules that define security-related patterns. These rules and other relevant information are encoded into a PDDL-based domain action model. The domain model, together with a problem instance generated from any vulnerable system, can then be used to produce a plan of action by employing a state-of-the-art automated planning algorithm. The plan can be exploited by non-professionals to identify security issues and make improvements. Empirical analysis is subsequently performed on 21 live, real-world event log datasets, in which the acquired domain model and identified plans are closely examined. The solution's accuracy lies between 73% and 92%, and it gains a significant performance boost compared to the manual approach of identifying event relationships. The research presented in this thesis automates the extraction of knowledge from event data streams, an elicitation that previous research and current industry practice assign to human experts. As evident from the empirical analysis, we present a promising line of work with the capacity to be used in commercial settings, which would reduce (or even eliminate) the dire and immediate need for human resources and contribute towards financial savings.
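
    The association-rule step can be sketched independently of the PDDL encoding and planning stages: scan the ordered log for event types that are reliably followed by another event type within a short window, and keep pairs whose confidence clears a threshold as candidate cause-and-effect rules. The event names, window, and threshold below are invented:

    from collections import Counter

    # Ordered event log entries (invented security events for illustration).
    log = ["ssh_config_change", "sshd_restart", "login_fail", "account_lock",
           "audit_scan", "ssh_config_change", "sshd_restart", "login_fail",
           "login_fail", "account_lock", "unlock_request"]

    def mine_rules(log, window=2, min_conf=0.8):
        """Yield (cause, effect, confidence) when effect follows cause within
        `window` entries in at least `min_conf` of the cause's occurrences."""
        occurrences, followed = Counter(), Counter()
        for i, cause in enumerate(log):
            occurrences[cause] += 1
            for effect in set(log[i + 1 : i + 1 + window]):
                followed[(cause, effect)] += 1
        for (cause, effect), n in followed.items():
            conf = n / occurrences[cause]
            if conf >= min_conf:
                yield cause, effect, conf

    for cause, effect, conf in mine_rules(log):
        print(f"{cause} -> {effect}  (confidence {conf:.2f})")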

    Application of CBR for intelligent process control of a WWTP

    This paper proposes the use of a Case-Based Reasoning (CBR) system for the control and supervision of a real wastewater treatment plant (WWTP). A WWTP is a critical system which aims to ensure the quality of the water discharged to the receiving bodies, as established by the applicable regulations. At the current stage, the proposed methodology has been tested off-line on a real system for the control of the aeration process in the biological treatment of a WWTP within the ambit of Consorci Besòs Tordera (CBT), a local water administration in the area of Barcelona. For this purpose, data mining methods are used to extract the available knowledge from historical data and find a useful case base from which set-points for the local controllers in the WWTP can be generated. The results presented in this work are evaluated with respect to the performance of the CBR method, e.g. case-base size, CBR cycle time, or the number of cases resolved satisfactorily (forthcoming steps will include on-line tests). To this end, Key Performance Indicators (KPI) are designed together with the plant manager and process experts in order to monitor key parameters of the WWTP that are representative of the performance of the control and supervision system. These KPI relate to water quality regulations (e.g. ammonia concentration in the WWTP effluent) and to economic cost efficiency (e.g. electrical consumption of the installation). To evaluate the results, different flat memory organizations (i.e. cases stored sequentially in a list) for the case base are considered: first a single case base, and then, at the current stage and for the results shown in this work, a case base divided into multiple libraries according to a case classification. Finally, the combination of this approach with Rule-Based Reasoning (RBR) methods is proposed for the next stages of the work. The authors acknowledge the partial support of this work by the Industrial Doctorate Programme (2017-DI-006) and the Research Consolidated Groups/Centres Grant (2017 SGR 574) from the Catalan Agency of University and Research Grants Management (AGAUR), from the Catalan Government.
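
    A minimal sketch of the retrieve-and-reuse steps of the CBR cycle over a flat case base, matching the set-point-recommendation use described above; the process variables, scaling constants, and cases are invented, not CBT plant data:

    import math

    # Flat case base: past plant states paired with the aeration (dissolved
    # oxygen) set-point that was applied (all values invented for illustration).
    case_base = [
        ({"nh4_in": 30.0, "flow": 450.0, "temp": 18.0}, 2.0),
        ({"nh4_in": 45.0, "flow": 500.0, "temp": 16.0}, 2.6),
        ({"nh4_in": 20.0, "flow": 400.0, "temp": 22.0}, 1.5),
    ]

    def distance(a, b):
        """Euclidean distance over normalised state features."""
        scale = {"nh4_in": 50.0, "flow": 600.0, "temp": 30.0}
        return math.sqrt(sum(((a[k] - b[k]) / scale[k]) ** 2 for k in scale))

    def retrieve_and_reuse(state):
        """RETRIEVE the nearest past case, REUSE its set-point."""
        best_case, set_point = min(case_base, key=lambda c: distance(c[0], state))
        return best_case, set_point

    case, sp = retrieve_and_reuse({"nh4_in": 42.0, "flow": 480.0, "temp": 17.0})
    print(f"nearest case {case} -> recommended DO set-point {sp} mg/L")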

    Tacit knowledge integration within the traditional construction procurement system

    Knowledge management is a broad concept that has been investigated in many disciplines. Tacit knowledge management is especially important in the construction industry, where common issues exist between the design and construction phases. Most knowledge, however, is embedded in the minds of professionals and based on the experience they gain from projects. The successful completion of a project requires a rigorous understanding of each stage of the project lifecycle, which can be enhanced by integrating knowledge between project members, in terms of capturing and sharing knowledge between them and transferring it to the next project. Due to the temporary nature of construction projects, the people who work on them tend to disperse after completion, which means the knowledge and experience they gained through the project will be wasted if it is not captured and shared in a structured way across projects. Within this context, the failure to integrate knowledge increases the likelihood of 'reinventing the wheel', which means spending more time and cost. The rationale for this study came from the increasing interest in tacit knowledge integration, in terms of capturing, sharing and transferring knowledge, especially within construction projects undertaken through the traditional procurement system, because this system is based on the separation of the design and construction phases. The aim of this research is to develop a framework for integrating tacit knowledge, in terms of capturing, sharing and transferring, within a construction project undertaken through the traditional procurement system. This was done by conducting a documentary survey, an experts' survey and case studies within the UK construction industry. The documentary survey was used to form the researcher's background information and to develop a conceptual framework, which was then taken to a real-life situation to investigate, gather relevant information and understand the perceptions and values of stakeholders in using knowledge integration within construction projects. Furthermore, an experts' survey (expert interviews) was used to collect qualitative data through interviews with four experts, drawn from both academia and industry and selected on the basis of their experience and engagement in traditional-based construction projects. A multiple-case holistic design was selected for conducting this research in order to lend credibility to the research outcome. Only one unit of analysis needed to be studied in order to explore the approaches and techniques used by construction organisations to tackle challenges in the process of tacit knowledge integration. Two case studies were selected to reflect the building sector within the construction industry. The projects were complex, large and cost over £5m. The selected case studies differ in that one is a completed project and the other an ongoing project in the construction phase. As most of the problems and errors occurring in a project lifecycle are related to the design phase, the cases were selected from the same organisation involved at the design phase, in order to analyse and compare the process of knowledge integration. Furthermore, an online open-ended questionnaire, distributed among 180 experts, was conducted to collect expert opinion on the developed framework.
In this research the target population was professionals who were involved and experienced in traditional-based construction projects in the UK construction industry. The research findings highlighted three main challenges to integrating tacit knowledge within the traditional construction project: organisational culture, contractual boundaries, and the knowledge management system (strategies and policies). The Critical Success Factors (CSFs) for tackling these challenges, and the techniques required to implement the process of tacit knowledge integration in a structured way, are identified. Furthermore, it is concluded that BIM technology can be used to enhance the process of tacit knowledge integration if two-stage traditional procurement is adopted, meaning that construction contractors should be involved in the project before the completion of the design phase. Building on the research findings, this research offers a framework, with a guideline, on how to integrate tacit knowledge, in terms of capturing, sharing and transferring, within the traditional construction project.