DESIGNING OBJECT-ORIENTED REPRESENTATIONS FOR REASONING FROM FIRST-PRINCIPLES
Modeling expert knowledge using "situation-action" rules is not always feasible in knowledge-intensive domains involving volatile knowledge (e.g., trading). The explosive search space involved in such domains and its dynamic nature make it extremely difficult to set up a rule base and keep it accurate. An alternative approach suggests that in some domains many of the rules experts use can be derived by reasoning from "first principles". That approach entails modeling experts' deep knowledge and emulating the reasoning processes with deep knowledge that allow experts to derive many of the rules they use and to justify them. This paper discusses the design and implementation of an object-oriented representation for the deep knowledge traders utilize in a business domain called hedging, which is knowledge-intensive and involves volatile knowledge. It illustrates how deep knowledge modeled using that representation is used to support reasoning from first principles. The paper also analyzes features of that representation that we have found to be extremely beneficial in the development of a knowledge-based system called INTELLIGENT-HEDGER. Based on our experience, we feel that, with minor modifications, this representation can be used in other managerial domains involving financial reasoning.
Information Systems Working Papers Series
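The abstract does not show INTELLIGENT-HEDGER's actual classes, but the core idea, storing deep knowledge as objects and deriving situation-action rules from first principles instead of hand-coding them, can be sketched as follows. All class names, instruments, and sensitivities below are illustrative assumptions, not the paper's representation.

```python
# Hypothetical sketch (not the paper's actual INTELLIGENT-HEDGER design):
# deep knowledge about instruments lives in objects, and a situation-action
# rule is *derived* from first principles rather than hand-authored.
from dataclasses import dataclass

@dataclass
class Instrument:
    name: str
    price_sensitivity: float  # value change per unit move of the underlying

@dataclass
class Position:
    instrument: Instrument
    quantity: float

    @property
    def exposure(self) -> float:
        # Deep knowledge: exposure follows from quantity and sensitivity.
        return self.quantity * self.instrument.price_sensitivity

def derive_hedge_rule(position: Position, hedge: Instrument) -> str:
    """Derive the rule an expert would state: the hedge quantity
    that offsets the position's exposure, with its justification
    implicit in the computation."""
    qty = -position.exposure / hedge.price_sensitivity
    return (f"IF holding {position.quantity} {position.instrument.name} "
            f"THEN trade {qty:+.2f} {hedge.name}")

bond = Instrument("10Y bond", price_sensitivity=8.5)
future = Instrument("bond future", price_sensitivity=7.0)
print(derive_hedge_rule(Position(bond, 100.0), future))
```

Because the rule is computed from the object model, updating a volatile fact (a sensitivity) regenerates the rule instead of invalidating a hand-maintained rule base.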
CARDS: A blueprint and environment for domain-specific software reuse
CARDS (Central Archive for Reusable Defense Software) exploits advances in domain analysis and domain modeling to identify, specify, develop, archive, retrieve, understand, and reuse domain-specific software components. An important element of CARDS is to provide visibility into the domain model artifacts produced by, and services provided by, commercial computer-aided software engineering (CASE) technology. The use of commercial CASE technology is important to provide rich, robust support for the varied roles involved in a reuse process. We refer to this kind of use of knowledge representation systems as supporting 'knowledge-based integration'.
Building a decision support system with a knowledge modeling tool
Knowledge modeling tools are software tools that follow a modeling approach to help developers build a knowledge-based system. The purpose of this article is to show the advantages of using tools of this type in the development of complex knowledge-based decision support systems. To do so, the article describes the development of a system called SAIDA in the domain of hydrology with the help of the KSM modeling tool. SAIDA operates in real time on data recorded by sensors (rainfall, water levels, flows, etc.). It follows a multi-agent architecture to interpret the data, predict future behavior and recommend control actions. The system includes an advanced knowledge-based architecture with multiple symbolic representations. KSM was especially useful for designing and implementing the complex knowledge-based architecture in an efficient way.
TARGET: Rapid Capture of Process Knowledge
TARGET (Task Analysis/Rule Generation Tool) represents a new breed of tool that blends graphical process flow modeling capabilities with the function of a top-down reporting facility. Since NASA personnel frequently perform tasks that are primarily procedural in nature, TARGET models mission or task procedures and generates hierarchical reports as part of the process capture and analysis effort. Historically, capturing knowledge has proven to be one of the greatest barriers to the development of intelligent systems. Current practice generally requires lengthy interactions between the expert whose knowledge is to be captured and the knowledge engineer whose responsibility is to acquire and represent the expert's knowledge in a useful form. Although much research has been devoted to the development of methodologies and computer software to aid in the capture and representation of some types of knowledge, procedural knowledge has received relatively little attention. In essence, TARGET is one of the first tools of its kind, commercial or institutional, designed to support this type of knowledge capture undertaking. This paper describes the design and development of TARGET for the acquisition and representation of procedural knowledge. The strategies employed by TARGET to support use by knowledge engineers, subject matter experts, programmers and managers are discussed, including the method by which the tool employs its graphical user interface to generate a task hierarchy report. Next, the approach to generating production rules for incorporation in and development of a CLIPS-based expert system is elaborated. TARGET also permits experts to visually describe procedural tasks, serving as a common medium for knowledge refinement by the expert community and the knowledge engineer, making knowledge consensus possible. The paper briefly touches on the verification and validation issues facing the CLIPS rule generation aspects of TARGET.
A description of efforts to address TARGET's interoperability issues on PCs, Macintoshes and UNIX workstations concludes the paper.
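The abstract describes generating production rules from a graphically captured task hierarchy but does not show the generator or TARGET's actual CLIPS output. The sketch below is an assumption about what such a translation could look like: it walks a task tree and emits one CLIPS-style rule string per substep, so that completing a parent step makes its substeps ready. Task names and the rule template are invented for illustration.

```python
# Illustrative sketch only: TARGET's real rule generator and its CLIPS
# output format are not given in the abstract; this shows the general
# task-hierarchy-to-production-rules idea with assumed names.

def generate_rules(task, parent=None, rules=None):
    """Walk a (name, substeps) task tree depth-first and emit one
    forward-chaining rule per substep: finishing the parent step
    asserts that the substep is ready."""
    if rules is None:
        rules = []
    name, substeps = task
    if parent is not None:
        rules.append(
            f"(defrule enable-{name} (done {parent}) "
            f"=> (assert (ready {name})))")
    for sub in substeps:
        generate_rules(sub, parent=name, rules=rules)
    return rules

# A hypothetical procedural task captured graphically:
procedure = ("launch-check", [
    ("verify-fuel", []),
    ("verify-telemetry", [("ping-ground-station", [])]),
])
for rule in generate_rules(procedure):
    print(rule)
```

Each emitted string is plain text here; in a TARGET-like tool the same traversal would feed the rules into a CLIPS knowledge base for execution.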
Computational neuroanatomy: ontology-based representation of neural components and connectivity
Background: A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. Results: We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Conclusion: Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as those described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future.
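One kind of logical inference such an ontology-based connectivity model enables is deriving indirect pathways from asserted direct connections. The following minimal sketch uses a simplified connects-to graph; the structure names and connections are illustrative placeholders, not the authors' actual motor-system model.

```python
# Minimal sketch of transitive connectivity inference over asserted
# connects-to relations. Structures and edges are simplified
# illustrations, not the published ontology's content.

CONNECTS_TO = {
    "primary motor cortex": {"internal capsule"},
    "internal capsule": {"spinal cord"},
    "spinal cord": {"peripheral nerve"},
}

def reachable(source, target, graph=CONNECTS_TO, seen=None):
    """Infer indirect connectivity: target is reachable from source
    if some chain of direct connects-to relations links them."""
    seen = seen if seen is not None else set()
    if source in seen:          # guard against cycles in the graph
        return False
    seen.add(source)
    for nxt in graph.get(source, ()):
        if nxt == target or reachable(nxt, target, graph, seen):
            return True
    return False

print(reachable("primary motor cortex", "peripheral nerve"))
```

In a real ontology this closure would typically come from a description-logic reasoner over a transitive property rather than hand-written traversal, but the inferred fact is the same kind.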
Representation of Aggregation Knowledge in OLAP Systems
Decision support systems are mainly based on multidimensional modeling. Using On-Line Analytical Processing (OLAP) tools, decision makers navigate through and analyze multidimensional data. Typically, users need to analyze data at different aggregation levels, using OLAP operators such as roll-up and drill-down. Roll-up operators decrease the detail of the measure, aggregating it along the dimension hierarchy; conversely, drill-down operators increase the detail of the measure. As a consequence, dimension hierarchies play a central role in knowledge representation. More precisely, since aggregation hierarchies are widely used to support data aggregation, aggregation knowledge should be adequately represented in conceptual multidimensional models and mapped into subsequent logical and physical models. However, current conceptual multidimensional models poorly represent aggregation knowledge, which (1) has a complex structure and dynamics and (2) is highly contextual. In order to account for the characteristics of this knowledge, we propose to represent it with objects and rules. Static aggregation knowledge is represented using UML class diagrams, while rules, which represent the dynamics (i.e. how aggregation may be performed depending on context), are represented using the Production Rule Representation (PRR) language. The latter allows us to incorporate dynamic aggregation knowledge. We argue that this representation of aggregation knowledge allows early modeling of user requirements in a decision support system project. In order to illustrate the applicability and benefits of our approach, we exemplify the production rules and present an application scenario.
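The distinction the abstract draws, static knowledge about which aggregation operator is valid for a measure versus the roll-up that applies it, can be made concrete with a small sketch. The measures, operators, and fact rows below are assumptions for illustration, not the paper's UML/PRR models.

```python
# Hedged sketch: "static" aggregation knowledge says which operator a
# measure admits (e.g. prices are not additive), and roll-up applies it
# along a dimension level. Measure names and rules are illustrative.
from collections import defaultdict
from statistics import mean

# Static aggregation knowledge: valid operator per measure.
AGG_RULES = {"sales": sum, "unit_price": mean}

def roll_up(rows, level, measure):
    """Roll up a fact table to the given dimension level, applying
    the aggregation operator the rules allow for the measure."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[level]].append(row[measure])
    op = AGG_RULES[measure]
    return {key: op(values) for key, values in groups.items()}

facts = [
    {"region": "EU", "city": "Paris",  "sales": 10, "unit_price": 2.0},
    {"region": "EU", "city": "Berlin", "sales": 30, "unit_price": 4.0},
]
print(roll_up(facts, "region", "sales"))       # sales sum along the hierarchy
print(roll_up(facts, "region", "unit_price"))  # prices average instead
```

A PRR-style production rule would add the contextual part, for example choosing a different operator depending on the user's analysis context, which a fixed operator table like `AGG_RULES` cannot express.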
24th International Conference on Information Modelling and Knowledge Bases
In the last three decades, information modelling and knowledge bases have become essential subjects not only in academic communities related to information systems and computer science but also in the business area where information technology is applied. The series of European-Japanese Conferences on Information Modelling and Knowledge Bases (EJC) originally started as a co-operation initiative between Japan and Finland in 1982. The practical operations were then organised by Professor Ohsuga in Japan and Professors Hannu Kangassalo and Hannu Jaakkola in Finland (Nordic countries). The geographical scope has since expanded to cover Europe and other countries. A workshop character (discussion, ample time for presentations, and a limited number of participants (50) and papers (30)) is typical of the conference. Suggested topics include, but are not limited to:
1. Conceptual modelling: Modelling and specification languages; Domain-specific conceptual modelling; Concepts, concept theories and ontologies; Conceptual modelling of large and heterogeneous systems; Conceptual modelling of spatial, temporal and biological data; Methods for developing, validating and communicating conceptual models.
2. Knowledge and information modelling and discovery: Knowledge discovery, knowledge representation and knowledge management; Advanced data mining and analysis methods; Conceptions of knowledge and information; Modelling information requirements; Intelligent information systems; Information recognition and information modelling.
3. Linguistic modelling: Models of HCI; Information delivery to users; Intelligent informal querying; Linguistic foundation of information and knowledge; Fuzzy linguistic models; Philosophical and linguistic foundations of conceptual models.
4. Cross-cultural communication and social computing: Cross-cultural support systems; Integration, evolution and migration of systems; Collaborative societies; Multicultural web-based software systems; Intercultural collaboration and support systems; Social computing, behavioral modeling and prediction.
5. Environmental modelling and engineering: Environmental information systems (architecture); Spatial, temporal and observational information systems; Large-scale environmental systems; Collaborative knowledge base systems; Agent concepts and conceptualisation; Hazard prediction, prevention and steering systems.
6. Multimedia data modelling and systems: Modelling multimedia information and knowledge; Content-based multimedia data management; Content-based multimedia retrieval; Privacy and context enhancing technologies; Semantics and pragmatics of multimedia data; Metadata for multimedia information systems.
Overall we received 56 submissions. After careful evaluation, 16 papers were selected as long papers, 17 as short papers, 5 as position papers, and 3 for presentation of perspective challenges. We thank all colleagues for their support of this issue of the EJC conference, especially the program committee, the organising committee, and the programme coordination team. The long and short papers presented at the conference are revised after the conference and published in the series "Frontiers in Artificial Intelligence" by IOS Press (Amsterdam). The books "Information Modelling and Knowledge Bases" are edited by the Editing Committee of the conference. We believe that the conference will be productive and fruitful in advancing research and application of information modelling and knowledge bases.
Bernhard Thalheim, Hannu Jaakkola, Yasushi Kiyoki
Implementation of workflow engine technology to deliver basic clinical decision support functionality
BACKGROUND: Workflow engine technology represents a new class of software with the ability to graphically model step-based knowledge. We present an application of this novel technology to the domain of clinical decision support. Successful implementation of decision support within an electronic health record (EHR) remains an unsolved research challenge. Previous research efforts were mostly based on healthcare-specific representation standards and execution engines and did not reach wide adoption. We focus on two challenges in decision support systems: the ability to test decision logic on retrospective data prior to prospective deployment, and the challenge of user-friendly representation of clinical logic. RESULTS: We present our implementation of a workflow engine technology that addresses these two challenges in delivering clinical decision support. Our system is based on the cross-industry XML (Extensible Markup Language) Process Definition Language (XPDL) standard. The core components of the system are a workflow editor for modeling clinical scenarios and a workflow engine for execution of those scenarios. We demonstrate, with an open-source and publicly available workflow suite, that clinical decision support logic can be executed on retrospective data. The same flowchart-based representation can also function in a prospective mode, where the system can be integrated with an EHR system and respond to real-time clinical events. We limit the scope of our implementation to decision support content generation (which can be EHR system vendor independent). We do not focus on supporting complex decision support content delivery mechanisms due to a lack of standardization of EHR systems in this area. We present the results of our evaluation of the flowchart-based graphical notation as well as an architectural evaluation of our implementation using an established evaluation framework for clinical decision support architecture.
CONCLUSIONS: We describe an implementation of a free workflow technology software suite (available at http://code.google.com/p/healthflow) and its application in the domain of clinical decision support. Our implementation seamlessly supports clinical logic testing on retrospective data and offers a user-friendly knowledge representation paradigm. With the presented software implementation, we demonstrate that workflow engine technology can provide a decision support platform that evaluates well against an established clinical decision support architecture evaluation framework. Due to the cross-industry usage of workflow engine technology, we can expect significant future functionality enhancements that will further improve the technology's capacity to serve as a clinical decision support platform.
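The key property the abstract emphasizes, running the same flowchart-style decision logic on retrospective data before deploying it prospectively, can be sketched in a few lines. The step conditions, alert texts, and record fields below are invented examples, not the published HealthFlow content or the XPDL execution model.

```python
# Hedged sketch: the same (condition, action) step chain is replayed
# over stored records (retrospective testing) and could later run on
# live events. Clinical thresholds and fields are illustrative only.

def run_workflow(record, steps):
    """Execute flowchart-style steps against one record; each action
    fires only when its condition holds for that record."""
    alerts = []
    for condition, action in steps:
        if condition(record):
            alerts.append(action(record))
    return alerts

steps = [
    (lambda r: r["creatinine"] > 1.5, lambda r: f"renal alert for {r['id']}"),
    (lambda r: r["on_nsaid"],         lambda r: f"review NSAID for {r['id']}"),
]

# Retrospective mode: replay stored records through the same logic
# that a prospective deployment would apply to real-time EHR events.
retrospective = [
    {"id": "p1", "creatinine": 2.1, "on_nsaid": True},
    {"id": "p2", "creatinine": 0.9, "on_nsaid": False},
]
for rec in retrospective:
    print(run_workflow(rec, steps))
```

A real workflow engine adds branching, joins, and persistence on top of this, but the testing benefit is the same: identical logic, two data sources.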
OntoAIMS: ontological approach to courseware authoring
In this paper we discuss how current ontology concepts can be beneficial for a more flexible and semantically rich description of the authoring process and for the provision of authoring support in Intelligent Educational Systems (IES) with respect to the three main authoring modules: domain editing, course composition and resource management. We take a semantic perspective on the knowledge representation within such systems and explore the interoperability between the various ontological structures for domain, instructional and resource modeling and the modeling of the entire authoring process. We build upon our research on the Authoring Task Ontology and exemplify it within the OntoAIMS system. We present authoring scenarios and show their mapping onto the authoring task ontology. Further, we discuss the OntoAIMS framework for the management of electronic learning objects (resources) and their usage in the automatic generation of course templates for authors. Finally, we describe our architecture, based on the ontological specification of the authoring process.