    Is a Semantic Web Agent a Knowledge-Savvy Agent?

    The issue of knowledge sharing has permeated the field of distributed AI and in particular, its successor, multiagent systems. Through the years, many research and engineering efforts have tackled the problem of encoding and sharing knowledge without the need for a single, centralized knowledge base. However, the emergence of modern computing paradigms such as distributed, open systems has highlighted the importance of sharing distributed and heterogeneous knowledge at a larger scale—possibly at the scale of the Internet. The very characteristics that define the Semantic Web—that is, dynamic, distributed, incomplete, and uncertain knowledge—suggest the need for autonomy in distributed software systems. Semantic Web research promises more than mere management of ontologies and data through the definition of machine-understandable languages. The openness and decentralization introduced by multiagent systems and service-oriented architectures give rise to new knowledge management models, for which we can’t make a priori assumptions about the type of interaction an agent or a service may be engaged in, and likewise about the message protocols and vocabulary used. We therefore discuss the problem of knowledge management for open multi-agent systems, and highlight a number of challenges relating to the exchange and evolution of knowledge in open environments, which are pertinent to both the Semantic Web and Multi-Agent System communities alike.

    Space shuttle main engine anomaly data and inductive knowledge based systems: Automated corporate expertise

    Progress is reported on the development of SCOTTY, an expert knowledge-based system to automate the analysis procedure following test firings of the Space Shuttle Main Engine (SSME). The integration of a large-scale relational database system, a computer graphics interface for experts and end-user engineers, potential extension of the system to flight engines, application of the system for training of newly hired engineers, technology transfer to other engines, and the essential qualities of good software engineering practices for building expert knowledge-based systems are among the topics discussed.

    Algorithm Optimally Orders Forward-Chaining Inference Rules

    People typically develop knowledge bases in a somewhat ad hoc manner by incrementally adding rules with no specific organization. This often results in very inefficient execution of those rules, since they are often order sensitive. This is relevant to tasks like the Deep Space Network in that it allows the knowledge base to be developed incrementally and automatically ordered for efficiency. Although data-flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems, including knowledge-based systems. However, this approach for exhaustively computing data-flow information cannot be applied directly to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequent clause in a knowledge base and optimally orders forward-chaining inference rules so that independent inference cycle executions are minimized, resulting in significantly faster execution. This algorithm was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification, and it resulted in a significant reduction in inference cycles for what was previously considered an ordered knowledge base; for a knowledge base that is completely unordered, the improvement is much greater.
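
    As a rough illustration of the producer/consumer idea described above (a minimal Python sketch, not the SHINE implementation; the rule names and fact labels are invented, and only the acyclic case is handled), rules can be ordered so that every rule producing a fact runs before the rules consuming it:

        # Minimal sketch: order forward-chaining rules so that producers of a fact
        # precede its consumers, reducing wasted inference cycles (acyclic case only).
        from graphlib import TopologicalSorter

        rules = {
            # rule name: (facts consumed by the antecedent, facts produced by the consequent)
            "r1": ({"a"}, {"b"}),
            "r2": ({"b"}, {"c"}),
            "r3": ({"a", "c"}, {"d"}),
        }

        producers = {}
        for name, (_, produced) in rules.items():
            for fact in produced:
                producers.setdefault(fact, set()).add(name)

        # A rule depends on every rule that produces one of the facts it consumes.
        deps = {
            name: {p for fact in consumed for p in producers.get(fact, ())}
            for name, (consumed, _) in rules.items()
        }

        print(list(TopologicalSorter(deps).static_order()))  # ['r1', 'r2', 'r3']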

    Adopting New Software: Drivers of Voluntary Adoption in the Same Product Category

    Information systems use research has investigated post-adoption issues as a means for identifying the factors that are relevant for long-term IS success. Our objective in this study is to investigate voluntary adoption decisions of new software in an organizational setting. We study how attributes of prior use, perceived ease of use, and perceived usefulness affect knowledge transferability and adoption intention of new software in the same primary base domain. Our study was set in the context of the changeover of a course management system at a small southern university. Data were collected from 81 faculty members about their intention to adopt the new CMS. Results indicate that, in the context of voluntary adoption of new software in the same primary base domain, habit and knowledge transferability are positively associated with adoption intention, while frequency of feature use is negatively associated.

    The StoreGate: a Data Model for the Atlas Software Architecture

    The Atlas collaboration at CERN has adopted the Gaudi software architecture, which belongs to the blackboard family: data objects produced by knowledge sources (e.g. reconstruction modules) are posted to a common in-memory database from where other modules can access them and produce new data objects. The StoreGate has been designed, based on the Atlas requirements and the experience of other HENP systems such as Babar, CDF, CLEO, D0 and LHCB, to identify in a simple and efficient fashion (collections of) data objects based on their type and/or the modules which posted them to the Transient Data Store (the blackboard). The developer also has the freedom to use her preferred key class to uniquely identify a data object according to any other criterion. Besides this core functionality, the StoreGate provides the developers with a powerful interface to handle in a coherent fashion persistable references, object lifetimes, memory management and access control policy for the data objects in the Store. It also provides a Handle/Proxy mechanism to define and hide the cache fault mechanism: upon request, a missing data object can be transparently created and added to the Transient Store, presumably by retrieving it from a persistent database, or even by reconstructing it on demand.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, Ca, USA, March 2003, 4 pages, LaTeX, MOJT00
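
    The type-and-key retrieval and the Handle/Proxy cache-fault behaviour described above can be illustrated with a small Python sketch (StoreGate itself is a C++ component; the class and method names below are hypothetical, not the actual API):

        # Minimal sketch of a blackboard-style transient store: objects are recorded
        # and retrieved by (type, key), and missing objects are rebuilt on demand,
        # mimicking the Handle/Proxy cache-fault idea.
        class TransientStore:
            def __init__(self):
                self._objects = {}   # (type, key) -> object
                self._loaders = {}   # (type, key) -> callable that recreates the object

            def record(self, obj, key):
                self._objects[(type(obj), key)] = obj

            def register_loader(self, cls, key, loader):
                self._loaders[(cls, key)] = loader

            def retrieve(self, cls, key):
                ident = (cls, key)
                if ident not in self._objects:
                    # cache fault: recreate the object, e.g. from persistent storage
                    self._objects[ident] = self._loaders[ident]()
                return self._objects[ident]

        class TrackCollection(list):
            pass

        store = TransientStore()
        store.register_loader(TrackCollection, "Reconstructed",
                              lambda: TrackCollection(["t1", "t2"]))
        print(store.retrieve(TrackCollection, "Reconstructed"))  # built on first access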

    Knowledge management for systems biology: a general and visually driven framework applied to translational medicine

    Background: To enhance our understanding of complex biological systems like diseases we need to put all of the available data into context and use this to detect relations, patterns and rules which allow predictive hypotheses to be defined. Life science has become a data-rich science with information about the behaviour of millions of entities like genes, chemical compounds, diseases, cell types and organs, which are organised in many different databases and/or spread throughout the literature. Existing knowledge such as genotype-phenotype relations or signal transduction pathways must be semantically integrated and dynamically organised into structured networks that are connected with clinical and experimental data. Different approaches to this challenge exist but so far none has proven entirely satisfactory. Results: To address this challenge we previously developed a generic knowledge management framework, BioXM, which allows the dynamic, graphic generation of domain-specific knowledge representation models based on specific objects and their relations, supporting annotations and ontologies. Here we demonstrate the utility of BioXM for knowledge management in systems biology as part of the EU FP6 BioBridge project on translational approaches to chronic diseases. From clinical and experimental data, text-mining results and public databases we generate a chronic obstructive pulmonary disease (COPD) knowledge base and demonstrate its use by mining specific molecular networks together with integrated clinical and experimental data. Conclusions: We generate the first semantically integrated, COPD-specific public knowledge base and find that for the integration of clinical and experimental data with pre-existing knowledge, the configuration-based set-up enabled by BioXM reduced implementation time and effort for the knowledge base compared to similar systems implemented as classical software development projects. The knowledge base enables the retrieval of sub-networks including protein-protein interaction, pathway, gene-disease and gene-compound data, which are used for subsequent data analysis, modelling and simulation. Pre-structured queries and reports enhance usability; establishing their use in everyday clinical settings requires further simplification with a browser-based interface, which is currently under development.
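
    As an illustrative sketch of the kind of sub-network retrieval described above (this is not BioXM's actual API; the entity and relation names are invented, and networkx is used purely for illustration):

        # Minimal sketch: pull a disease-centred sub-network (genes, compounds,
        # protein-protein interactions) out of an integrated knowledge graph.
        import networkx as nx

        kg = nx.Graph()
        kg.add_edge("COPD", "GeneA", relation="gene-disease")
        kg.add_edge("GeneA", "GeneB", relation="protein-protein interaction")
        kg.add_edge("GeneB", "CompoundX", relation="gene-compound")
        kg.add_edge("GeneC", "CompoundY", relation="gene-compound")  # unrelated branch

        # All entities within two relations of the disease of interest.
        sub = nx.ego_graph(kg, "COPD", radius=2)
        for u, v, data in sub.edges(data=True):
            print(u, "--", data["relation"], "->", v)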

    Ground data systems resource allocation process

    The Ground Data Systems Resource Allocation Process at the Jet Propulsion Laboratory provides medium- and long-range planning for the use of Deep Space Network and Mission Control and Computing Center resources in support of NASA's deep space missions and Earth-based science. Resources consist of radio antenna complexes and associated data processing and control computer networks. A semi-automated system was developed that allows operations personnel to interactively generate, edit, and revise allocation plans spanning periods of up to ten years (as opposed to only two or three weeks under the manual system) based on the relative merit of mission events. It also enhances scientific data return. A software system known as the Resource Allocation and Planning Helper (RALPH) merges the conventional methods of operations research, rule-based knowledge engineering, and advanced database structures. RALPH employs a generic, highly modular architecture capable of solving a wide variety of scheduling and resource sequencing problems. The rule-based RALPH system has saved significant labor in resource allocation. Its successful use affirms the importance of establishing and applying event priorities based on scientific merit, and the benefit of continuity in planning provided by knowledge-based engineering. The RALPH system exhibits strong potential for minimizing development cycles of resource and payload planning systems throughout NASA and the private sector.
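
    A toy sketch of merit-driven allocation in the spirit described above (this is not RALPH's algorithm; the event names, merit scores, and time slots are invented):

        # Minimal sketch: greedily give each antenna time slot to the highest-merit
        # mission event requesting it, in descending order of scientific merit.
        events = [
            {"name": "Voyager downlink", "merit": 9, "slot": "2025-06-01T08"},
            {"name": "Mars relay pass",  "merit": 7, "slot": "2025-06-01T08"},
            {"name": "Radio science",    "merit": 5, "slot": "2025-06-01T12"},
        ]

        allocated = {}
        for event in sorted(events, key=lambda e: e["merit"], reverse=True):
            allocated.setdefault(event["slot"], event["name"])

        for slot, name in sorted(allocated.items()):
            print(slot, "->", name)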