
    Conceptual information processing: A robust approach to KBS-DBMS integration

    Integrating the respective functionality and architectural features of knowledge base and data base management systems is a topic of considerable interest. Several aspects of this topic and associated issues are addressed. The significance of integration and the problems associated with accomplishing that integration are discussed. The shortcomings of current approaches to integration and the need to fuse the capabilities of both knowledge base and data base management systems motivate the investigation of information processing paradigms. One such paradigm is concept-based processing, i.e., processing based on concepts and conceptual relations. An approach to robust knowledge and data base system integration is discussed by addressing progress made in the development of an experimental model for conceptual information processing.
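
    To make the concept-based processing paradigm concrete, below is a minimal sketch of a store of concepts and typed conceptual relations. It illustrates the general idea only, not the paper's experimental model, and all concept and relation names are invented.

```python
from collections import defaultdict

# Toy concept base: concepts linked by typed conceptual relations form a
# graph that can answer queries a flat relational schema cannot express
# directly. All names below are hypothetical.
class ConceptBase:
    def __init__(self):
        self.relations = defaultdict(set)   # (concept, relation) -> concepts

    def add(self, subj, relation, obj):
        self.relations[(subj, relation)].add(obj)

    def related(self, subj, relation):
        return self.relations.get((subj, relation), set())

    def transitive(self, subj, relation):
        # Follow a relation transitively, e.g. to walk an is-a hierarchy.
        seen, stack = set(), [subj]
        while stack:
            node = stack.pop()
            for nxt in self.related(node, relation):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

kb = ConceptBase()
kb.add("satellite", "is-a", "spacecraft")
kb.add("spacecraft", "is-a", "vehicle")
kb.add("satellite", "has-part", "transponder")
print(kb.transitive("satellite", "is-a"))   # {'spacecraft', 'vehicle'}
```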

    The Requirements for Ontologies in Medical Data Integration: A Case Study

    Evidence-based medicine is critically dependent on three sources of information: a medical knowledge base, the patient's medical record and knowledge of available resources, including, where appropriate, clinical protocols. Patient data is often scattered across a variety of databases and may, in a distributed model, be held across several disparate repositories. Consequently, addressing the needs of an evidence-based medicine community presents issues of biomedical data integration, clinical interpretation and knowledge management. This paper outlines how the Health-e-Child project has approached the challenge of requirements specification for (bio-)medical data integration, from the level of cellular data, through disease, to that of patient and population. The approach is illuminated through the requirements elicitation and analysis of Juvenile Idiopathic Arthritis (JIA), one of three diseases being studied in the EC-funded Health-e-Child project.
    Comment: 6 pages, 1 figure. Presented at the 11th International Database Engineering & Applications Symposium (Ideas2007), Banff, Canada, September 2007.
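
    As a concrete illustration of the integration problem this case study scopes out, the sketch below aligns patient fields from disparate repositories by mapping each local field to a shared ontology concept. The source, field, and concept identifiers are entirely hypothetical; a real system would use terminologies such as SNOMED CT or LOINC.

```python
# Hypothetical mapping from (repository, local field) to a shared ontology
# concept, so one query works across heterogeneous patient-data sources.
ONTOLOGY_MAP = {
    ("hospital_a", "dob"): "onto:DateOfBirth",
    ("hospital_b", "birth_date"): "onto:DateOfBirth",
    ("hospital_a", "crp_level"): "onto:CReactiveProtein",
}

def fetch(concept, records):
    """Collect values for one ontology concept across all mapped sources."""
    return [rec[field]
            for (source, field), c in ONTOLOGY_MAP.items() if c == concept
            for rec in records.get(source, []) if field in rec]

records = {"hospital_a": [{"dob": "2001-04-02"}],
           "hospital_b": [{"birth_date": "1999-11-30"}]}
print(fetch("onto:DateOfBirth", records))  # ['2001-04-02', '1999-11-30']
```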

    Fifth Conference on Artificial Intelligence for Space Applications

    The Fifth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: automation for Space Station; intelligent control, testing, and fault diagnosis; robotics and vision; planning and scheduling; simulation, modeling, and tutoring; development tools and automatic programming; knowledge representation and acquisition; and knowledge base/data base integration.

    SETL: A programmable semantic extract-transform-load framework for semantic data warehouses

    In order to make better decisions for business analytics, organizations increasingly use external structured, semi-structured, and unstructured data in addition to the (mostly structured) internal data. Current Extract-Transform-Load (ETL) tools are not suitable for this “open world scenario” because they do not consider semantic issues in the integration process. Current ETL tools neither support processing semantic data nor create a semantic Data Warehouse (DW), a repository of semantically integrated data. This paper describes our programmable Semantic ETL (SETL) framework. SETL builds on Semantic Web (SW) standards and tools and supports developers by offering a number of powerful modules, classes, and methods for (dimensional and semantic) DW constructs and tasks. Thus it supports semantic data sources in addition to traditional data sources, semantic integration, and creating or publishing a semantic (multidimensional) DW in terms of a knowledge base. A comprehensive experimental evaluation comparing SETL to a solution made with traditional tools (requiring much more hand-coding) on a concrete use case shows that SETL provides better programmer productivity, knowledge base quality, and performance.
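
    The pipeline the abstract describes (extract heterogeneous sources, transform them into semantically integrated triples, publish the result as a knowledge base) can be sketched in a few lines of Python with rdflib. This is a toy illustration of the semantic ETL idea, not SETL's actual API; every name below is invented.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Illustrative semantic-ETL sketch: extract tabular rows, transform them
# into RDF triples for a simple dimensional schema, and load them into a
# queryable, serializable knowledge base.
EX = Namespace("http://example.org/dw/")

def extract():
    # Stand-in for reading internal or external data sources.
    yield {"sale_id": "s1", "product": "widget", "amount": 120.0}

def transform(row, graph):
    sale = EX[row["sale_id"]]
    graph.add((sale, RDF.type, EX.Sale))                   # fact
    graph.add((sale, EX.product, EX[row["product"]]))      # dimension link
    graph.add((sale, EX.amount, Literal(row["amount"])))   # measure

def load(graph, path="dw.ttl"):
    graph.serialize(destination=path, format="turtle")

g = Graph()
for row in extract():
    transform(row, g)
load(g)
```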

    Hierarchical Task Network Planning with Common-Sense Reasoning for Multiple-People Behaviour Analysis

    Safety on public transport is a major concern for the relevant authorities. We address this issue by proposing an automated surveillance platform which combines data from video, infrared and pressure sensors. Data homogenisation and integration is achieved by a distributed architecture based on communication middleware that resolves interconnection issues, thereby enabling data modelling. A common-sense knowledge base models and encodes knowledge about public-transport platforms and the actions and activities of passengers. Trajectory data from passengers is modelled as a time series of human activities. Common-sense knowledge and rules are then applied to detect inconsistencies or errors in the data interpretation. Lastly, the rationality that characterises human behaviour is also captured here through a bottom-up Hierarchical Task Network planner that, along with common-sense reasoning, corrects misinterpretations to explain passenger behaviour. The system is validated using a simulated bus saloon scenario as a case study. Eighteen video sequences were recorded with up to six passengers. Four metrics were used to evaluate performance. The system, with an accuracy greater than 90% for each of the four metrics, was found to outperform a rule-based system and a system containing planning alone.
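
    The interplay between common-sense rules and HTN decomposition can be pictured with a toy sketch: a rule flags an impossible activity transition, and a task decomposition checks whether an observed activity sequence matches a known passenger behaviour. The rules, tasks, and activity labels below are invented for illustration and are not the paper's model.

```python
# A common-sense rule: some consecutive activity transitions are impossible.
IMPOSSIBLE = {("seated", "running")}

def commonsense_check(activities):
    """Return indices where consecutive activities violate a rule."""
    return [i for i, pair in enumerate(zip(activities, activities[1:]))
            if pair in IMPOSSIBLE]

# HTN-style methods: a task decomposes into a sequence of primitive activities.
METHODS = {
    "board_and_sit": ["enter", "walk", "sit"],
    "alight": ["stand", "walk", "exit"],
}

def explains(task, observed):
    """True if the task's decomposition appears in order in the observations."""
    it = iter(observed)
    return all(step in it for step in METHODS[task])

obs = ["enter", "walk", "sit"]
print(commonsense_check(["seated", "running"]))   # [0] -> flag for correction
print([t for t in METHODS if explains(t, obs)])   # ['board_and_sit']
```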

    The Impact of Organizational Memory on IT Systems

    Organizational Memory Information Systems (OMISs) combine the attributes of culture, history, business process, human memory and fact into an integrated knowledge-based business system. While not currently in existence in the configuration suggested in this paper, this type of information system would be an integral part of any firm wanting to anticipate business climate changes, expand its customer base and improve existing customer service. OMISs would benefit businesses wanting to integrate disparate data bases, capture the expertise of retiring staff, improve organizational coordination and provide a decision-making aid to staff members encountering new and complex issues requiring the integration of diverse and inconsistent types of knowledge.

    An Ontology for Defect Detection in Metal Additive Manufacturing

    A key challenge for Industry 4.0 applications is to develop control systems for automated manufacturing services that are capable of addressing both data integration and semantic interoperability issues, as well as monitoring and decision-making tasks. To address these issues in advanced manufacturing systems, principled knowledge representation approaches based on formal ontologies have been proposed as a foundation for information management and maintenance in the presence of heterogeneous data sources. In addition, ontologies provide reasoning and querying capabilities to aid domain experts and end users in the context of constraint validation and decision making. Finally, ontology-based approaches to advanced manufacturing services can support the explainability and interpretability of the behaviour of monitoring, control, and simulation systems that are based on black-box machine learning algorithms. In this work, we provide a novel ontology for the classification of process-induced defects known from the metal additive manufacturing literature. Together with a formal representation of the characterising features and sources of defects, we integrate our knowledge base with state-of-the-art ontologies in the field. Our knowledge base aims at enhancing the modelling capabilities of additive manufacturing ontologies by adding further defect analysis terminology and diagnostic inference features.
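
    The kind of diagnostic querying such an ontology enables can be sketched with rdflib: a tiny defect taxonomy with characterising sources, asked which defects arise from a given source. The classes and properties below are loosely inspired by commonly reported additive manufacturing defects but are invented for illustration, not taken from the paper's ontology.

```python
from rdflib import Graph, Namespace, RDFS

# Toy defect taxonomy: defect classes form a subclass hierarchy and are
# linked to their characterising sources.
AM = Namespace("http://example.org/am#")
g = Graph()
g.add((AM.Porosity, RDFS.subClassOf, AM.ProcessInducedDefect))
g.add((AM.KeyholePorosity, RDFS.subClassOf, AM.Porosity))
g.add((AM.KeyholePorosity, AM.hasSource, AM.ExcessiveEnergyDensity))
g.add((AM.LackOfFusion, RDFS.subClassOf, AM.ProcessInducedDefect))
g.add((AM.LackOfFusion, AM.hasSource, AM.InsufficientEnergyDensity))

# Diagnostic query: which process-induced defects arise from excessive
# energy density? The * path operator walks the subclass hierarchy.
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX am: <http://example.org/am#>
SELECT ?defect WHERE {
  ?defect rdfs:subClassOf* am:ProcessInducedDefect .
  ?defect am:hasSource am:ExcessiveEnergyDensity .
}"""
for row in g.query(query):
    print(row.defect)   # http://example.org/am#KeyholePorosity
```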

    Advanced Software Development Workstation Project

    The Advanced Software Development Workstation Project, funded by Johnson Space Center, is investigating knowledge-based techniques for software reuse in NASA software development projects. Two prototypes have been demonstrated and a third is now in development. The approach is to build a foundation that provides passive reuse support, add a layer that uses domain-independent programming knowledge, add a layer that supports the acquisition of domain-specific programming knowledge to provide active support, and enhance maintainability and modifiability through an object-oriented approach. The development of new application software would use specification-by-reformulation, based on a cognitive theory of retrieval from very long-term memory in humans and built on an Ada code library and an object base. Current tasks include enhancements to the knowledge representation of Ada packages and abstract data types, extensions to support Ada package instantiation knowledge acquisition, integration with Ada compilers and relational databases, enhancements to the graphical user interface, and demonstration of the system with a NASA contractor-developed trajectory simulation package. Future work will focus on investigating issues involving scale-up and integration.

    Large AI Model-Based Semantic Communications

    Semantic communication (SC) is an emerging intelligent paradigm, offering solutions for various future applications like the metaverse, mixed reality, and the Internet of everything. However, in current SC systems, the construction of the knowledge base (KB) faces several issues, including limited knowledge representation, frequent knowledge updates, and insecure knowledge sharing. Fortunately, the development of large AI models provides new solutions to overcome the above issues. Here, we propose a large AI model-based SC framework (LAM-SC) specifically designed for image data, where we first design the segment anything model (SAM)-based KB (SKB), which can split the original image into different semantic segments using universal semantic knowledge. Then, we present an attention-based semantic integration (ASI) method to weigh the semantic segments generated by the SKB without human participation and integrate them as a semantic-aware image. Additionally, we propose an adaptive semantic compression (ASC) encoding to remove redundant information in semantic features, thereby reducing communication overhead. Finally, through simulations, we demonstrate the effectiveness of the LAM-SC framework and the significance of large AI model-based KB development in future SC paradigms.
    Comment: Plan to submit it to a journal for possible publication.
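
    The ASI step can be pictured with a short numpy sketch: given binary masks for the segments a SAM-style model produces, each segment is weighted by an attention score, weak segments are suppressed, and the rest are integrated into a single semantic-aware image. This is a toy illustration of the idea, not the paper's implementation; the shapes, scores, and threshold are invented.

```python
import numpy as np

H, W = 64, 64
image = np.random.rand(H, W)

# Stand-ins for SAM-produced segment masks (binary, one per segment).
masks = np.zeros((3, H, W))
masks[0, :32, :] = 1      # e.g. foreground object
masks[1, 32:, :32] = 1    # e.g. secondary object
masks[2, 32:, 32:] = 1    # e.g. background clutter

# Stand-in attention scores over the segments, softmax-normalised.
logits = np.array([2.0, 0.5, -1.0])
weights = np.exp(logits) / np.exp(logits).sum()

# Suppress segments whose attention weight is below a threshold, then
# integrate the surviving segments into one semantic-aware image.
weights = np.where(weights > 0.2, weights, 0.0)
combined_mask = (weights[:, None, None] * masks).sum(axis=0)
semantic_aware = image * combined_mask
```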

    Pay-as-you-go data integration for bio-informatics

    Scientific research in bio-informatics is often data-driven and supported by numerous biological databases. A biological database contains factual information collected from scientific experiments and computational analyses about areas including genomics, proteomics, metabolomics, microarray gene expression and phylogenetics. Information contained in biological databases includes gene function, structure, localization (both cellular and chromosomal), clinical effects of mutations as well as similarities of biological sequences and structures.
    In a growing number of research projects, bio-informatics researchers like to ask combined questions, i.e., questions that require the combination of information from more than one database. We have observed that most bio-informatics papers do not go into detail on the integration of different databases. Roughly 30% of all tasks in bio-informatics workflows are data transformation tasks, and a lot of time is spent integrating these databases [1]. As data sources are created and evolve, many design decisions are made by their creators. Not all of these choices are documented. Some of these choices are made implicitly, based on the experience or preferences of the creator. Other choices are mandated by the purpose of the data source, as well as by inherent data quality issues such as imprecision in measurements or ongoing scientific debates. Integrating multiple data sources can therefore be difficult.
    We propose to approach the time-consuming problem of integrating multiple biological databases through the principles of ‘pay-as-you-go’ and ‘good-is-good-enough’. By assisting the user in defining a knowledge base of data mapping rules, schema alignment, trust information and other evidence, we allow the user to focus on the work and put in as little effort as is necessary for the integration to serve the user's purposes. By using user feedback on query results and trust assessments, the integration can be improved upon over time, as sketched below.
    The research will be guided by a set of use cases. As the research is in its early stages, we have determined three use cases:
    - Homologues: the representation and integration of groupings. Homology is the relationship between two characteristics that have descended, usually with divergence, from a common ancestral characteristic. A characteristic can be any genic, structural or behavioural feature of an organism.
    - Metabolomics integration, with a focus on the TCA cycle. The TCA cycle (also known as the citric acid cycle, or Krebs cycle) is used by aerobic organisms to generate energy from the oxidation of carbohydrates, fats and proteins.
    - Bibliography integration and improvement: the correction and expansion of citation databases.
    [1] I. Wassink. Work flows in life science. PhD thesis, University of Twente, Enschede, January 2010.
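
    A minimal sketch of that pay-as-you-go feedback loop, with entirely hypothetical field names and scores: candidate mapping rules between two databases carry confidence values, only rules above a threshold are used to answer queries, and user feedback on the answers nudges the confidences so the integration improves over time.

```python
# Candidate mapping rules between two hypothetical biological databases,
# each with a rough initial confidence score.
mappings = {
    # (source_field, target_field): confidence
    ("geneSymbol", "gene_name"): 0.6,
    ("geneSymbol", "gene_id"): 0.4,
}

THRESHOLD = 0.5  # 'good-is-good-enough': only confident rules answer queries

def usable_rules():
    return [m for m, c in mappings.items() if c >= THRESHOLD]

def feedback(rule, correct, rate=0.1):
    """User marks an answer derived via `rule` as correct or wrong."""
    c = mappings[rule]
    mappings[rule] = min(1.0, c + rate) if correct else max(0.0, c - rate)

print(usable_rules())                                  # [('geneSymbol', 'gene_name')]
feedback(("geneSymbol", "gene_name"), correct=True)    # reinforce good mapping
feedback(("geneSymbol", "gene_id"), correct=False)     # demote bad mapping
```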