
    Implementation of active collections framework using .NET

    For many years, large distributed enterprises have faced a common problem: sharing enterprise data, typically stored in databases, in near real time. Because these databases may be located around the globe, the distributed nature of the data makes it difficult to access instantaneously. The Active Collections Framework (ACF) provides a foundation on which to build distributed applications. The framework requires distributed applications to view changes to data as events of interest, and it integrates access to data and to data changes through active collections. ACF draws on two research areas: event management in distributed computing and active database systems. Active databases support mechanisms to monitor changes to the database state. The central concept in ACF event management is the active collection. Each active collection is a collection of all objects specified by a query on the enterprise data. For each client interested in obtaining the data, an entry is made in the active collection; a Windows service then uses this information to notify the registered clients of any data changes. This project implements the Active Collections Framework using Microsoft .NET and Visual Studio .NET. Two sample applications built on the framework demonstrate its data storage efficiency and event notification capabilities.
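    The registration-and-notification pattern described in this abstract can be illustrated with a minimal Java sketch, shown below. The names (ActiveCollection, register, dataChanged) and the use of a plain predicate as the query are assumptions made for illustration only; they are not the framework's actual .NET API, and the Windows service that performs notification in ACF is reduced here to a direct method call.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;
    import java.util.function.Predicate;

    // Illustrative sketch only: an "active collection" holds the objects matching a query
    // and pushes change notifications to the clients that have registered interest in it.
    class ActiveCollection<T> {
        private final Predicate<T> query;                            // the query defining the collection
        private final List<T> members = new ArrayList<>();           // objects currently matching the query
        private final List<Consumer<T>> clients = new ArrayList<>(); // registered client callbacks

        ActiveCollection(Predicate<T> query) {
            this.query = query;
        }

        // A client interested in the data registers an entry with the collection.
        void register(Consumer<T> client) {
            clients.add(client);
        }

        // Called when enterprise data changes; in ACF a background service would perform
        // this step and notify every registered client of the change.
        void dataChanged(T item) {
            if (query.test(item)) {
                members.add(item);
                clients.forEach(client -> client.accept(item));
            }
        }
    }

    public class AcfSketch {
        public static void main(String[] args) {
            // Hypothetical collection of all order amounts above a threshold.
            ActiveCollection<Integer> largeOrders = new ActiveCollection<>(amount -> amount > 1000);
            largeOrders.register(amount -> System.out.println("Client notified of large order: " + amount));
            largeOrders.dataChanged(500);   // does not match the query, so no notification
            largeOrders.dataChanged(2500);  // matches the query, the registered client is notified
        }
    }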

    Evaluation of a Bayesian inference network for ligand-based virtual screening

    Background: Bayesian inference networks enable the computation of the probability that an event will occur. They have been used previously to rank textual documents in order of decreasing relevance to a user-defined query. Here, we modify the approach to enable a Bayesian inference network to be used for chemical similarity searching, where a database is ranked in order of decreasing probability of bioactivity. Results: Bayesian inference networks were implemented using two different types of network and four different types of belief function. Experiments with the MDDR and WOMBAT databases show that a Bayesian inference network can be used to provide effective ligand-based screening, especially when the active molecules being sought have a high degree of structural homogeneity; in such cases, the network substantially outperforms a conventional, Tanimoto-based similarity searching system. However, the effectiveness of the network is much lower when structurally heterogeneous sets of actives are being sought. Conclusion: A Bayesian inference network provides an interesting alternative to existing tools for ligand-based virtual screening.
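    The conventional baseline mentioned in this abstract, Tanimoto-based similarity searching, ranks database molecules by the Tanimoto coefficient between binary fingerprints. The Java sketch below illustrates only that generic baseline, not the Bayesian inference network itself; the toy fingerprints and bit positions are hypothetical.

    import java.util.BitSet;

    // Tanimoto coefficient between two binary fingerprints:
    // Tc = |A AND B| / (|A| + |B| - |A AND B|).
    public class TanimotoBaseline {
        static double tanimoto(BitSet a, BitSet b) {
            BitSet intersection = (BitSet) a.clone();
            intersection.and(b);
            int common = intersection.cardinality();
            int union = a.cardinality() + b.cardinality() - common;
            return union == 0 ? 0.0 : (double) common / union;
        }

        public static void main(String[] args) {
            // Toy fingerprints; real fingerprints encode thousands of substructure bits.
            BitSet query = new BitSet();
            query.set(1); query.set(4); query.set(7);
            BitSet candidate = new BitSet();
            candidate.set(1); candidate.set(4); candidate.set(9);
            System.out.printf("Tanimoto coefficient: %.2f%n", tanimoto(query, candidate)); // 0.50
        }
    }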

    A Java implementation of Coordination Rules as ECA Rules

    This paper gives an insight into the design and implementation of coordination rules as ECA rules. The language specification of the ECA rules was designed, and a corresponding implementation in Java has been partially completed. The paper also outlines future work in this area, which deals with embedding this code in JXTA, thereby enabling the formation of a P2P layer with JXTA as the backbone.
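    The event-condition-action pattern at the core of this work can be sketched in a few lines of Java, as below. The rule representation and the hypothetical "peer-joined" coordination event are assumptions for illustration only; they do not reflect the paper's actual rule language specification or its planned JXTA integration.

    import java.util.List;
    import java.util.function.Consumer;
    import java.util.function.Predicate;

    // ON event IF condition DO action: a minimal, generic ECA rule.
    record EcaRule<E>(Predicate<E> event, Predicate<E> condition, Consumer<E> action) {
        void fire(E e) {
            if (event.test(e) && condition.test(e)) {
                action.accept(e);
            }
        }
    }

    public class EcaSketch {
        public static void main(String[] args) {
            // Hypothetical coordination rule: on a "peer-joined" event, if a peer id is present,
            // announce it to the coordination layer (here simply printed).
            EcaRule<String> rule = new EcaRule<>(
                    msg -> msg.startsWith("peer-joined:"),
                    msg -> msg.length() > "peer-joined:".length(),
                    msg -> System.out.println("Coordinating: " + msg));

            List.of("peer-joined:node42", "heartbeat").forEach(rule::fire);
        }
    }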

    State-of-the-art on evolution and reactivity

    This report starts, in Chapter 1, by outlining aspects of querying and updating resources on the Web and on the Semantic Web, including the development of query and update languages to be carried out within the Rewerse project. From this outline, it becomes clear that several existing research areas and topics are of interest for this work in Rewerse. In the remainder of this report we further present state-of-the-art surveys of a selection of such areas and topics. More precisely: in Chapter 2 we give an overview of logics for reasoning about state change and updates; Chapter 3 is devoted to briefly describing existing update languages for the Web, and also for updating logic programs; in Chapter 4 event-condition-action rules, both in the context of active database systems and in the context of semistructured data, are surveyed; and in Chapter 5 we give an overview of some relevant rule-based agent frameworks.

    Automated construction of fuzzy event sets and its application to active databases

    Fuzzy sets and fuzzy logic research aims to bridge the gap between the crisp world of mathematics and the real world. Fuzzy set theory has been applied to many different areas, from control to databases. The number of events in an event-driven system may sometimes become very high and unmanageable; it is therefore very useful to organize the events into fuzzy event sets, which also brings in the benefits of fuzzy set theory. All the events that have occurred in a system can be stored in event histories, which contain valuable hidden information. In this paper, we propose a method for the automated construction of fuzzy event sets out of event histories via data mining techniques. The useful information hidden in the event history is extracted into a matrix called the sequential proximity matrix. This matrix captures the proximities of events and is used for fuzzy rule execution via similarity-based event detection and for the construction of fuzzy event sets. Our application platform is active databases: we describe how fuzzy event sets can be exploited for similarity-based event detection and fuzzy rule execution in active database systems.
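    As a rough illustration of how proximities might be mined from an event history, the Java sketch below counts how often pairs of event types occur close together and normalizes the counts into a matrix. The windowed co-occurrence counting and the row-wise normalization are assumptions made for illustration; the paper's sequential proximity measure may be defined differently.

    import java.util.List;

    // Builds a matrix whose entry (a, b) reflects how often event types a and b
    // occur within `window` positions of each other in the history, scaled to [0, 1].
    public class ProximityMatrixSketch {
        static double[][] proximityMatrix(List<Integer> history, int numEventTypes, int window) {
            double[][] counts = new double[numEventTypes][numEventTypes];
            for (int i = 0; i < history.size(); i++) {
                for (int j = i + 1; j < history.size() && j <= i + window; j++) {
                    int a = history.get(i), b = history.get(j);
                    counts[a][b]++;
                    counts[b][a]++;
                }
            }
            // Normalize each row so the entries can be read as fuzzy, membership-like proximities.
            for (int a = 0; a < numEventTypes; a++) {
                double max = 0;
                for (double c : counts[a]) max = Math.max(max, c);
                if (max > 0) for (int b = 0; b < numEventTypes; b++) counts[a][b] /= max;
            }
            return counts;
        }

        public static void main(String[] args) {
            // Toy event history over three event types (0, 1, 2).
            List<Integer> history = List.of(0, 1, 0, 1, 2, 0, 1);
            double[][] m = proximityMatrix(history, 3, 2);
            System.out.printf("proximity(0,1) = %.2f, proximity(0,2) = %.2f%n", m[0][1], m[0][2]);
        }
    }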

    Structuring the process of integrity maintenance (extended version)

    Two different approaches have traditionally been considered for dealing with the process of integrity constraint enforcement: integrity checking and integrity maintenance. However, while previous research on the first approach has mainly addressed efficiency issues, research on the second approach has mainly concentrated on being able to generate all possible repairs that falsify an integrity constraint violation. In this paper we address efficiency issues during the process of integrity maintenance. In particular, we propose a technique that improves the efficiency of existing methods by defining the order in which maintenance of the integrity constraints should be performed. Moreover, we also use this technique to handle the integrity constraints in an integrated way.
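    The abstract does not spell out how the maintenance order is obtained, so the Java sketch below only illustrates one plausible reading: if a dependency graph records which constraint repairs may violate which other constraints, a topological sort of that graph yields an order in which the constraints can be maintained. The graph, the constraint names, and the ordering criterion are all assumptions made for illustration, not the paper's actual technique.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Orders constraints so that a constraint whose repair may violate another
    // is maintained before that other constraint is checked (topological sort).
    public class ConstraintOrderingSketch {
        static List<String> maintenanceOrder(Map<String, List<String>> mayViolate) {
            Map<String, Integer> inDegree = new HashMap<>();
            mayViolate.keySet().forEach(c -> inDegree.putIfAbsent(c, 0));
            mayViolate.values().forEach(targets ->
                    targets.forEach(t -> inDegree.merge(t, 1, Integer::sum)));

            Deque<String> ready = new ArrayDeque<>();
            inDegree.forEach((c, d) -> { if (d == 0) ready.add(c); });

            List<String> order = new ArrayList<>();
            while (!ready.isEmpty()) {
                String c = ready.poll();
                order.add(c);
                for (String t : mayViolate.getOrDefault(c, List.of())) {
                    if (inDegree.merge(t, -1, Integer::sum) == 0) ready.add(t);
                }
            }
            return order; // cyclic dependencies would need additional handling
        }

        public static void main(String[] args) {
            // Hypothetical constraints: repairing IC1 may violate IC2, repairing IC2 may violate IC3.
            Map<String, List<String>> deps = Map.of(
                    "IC1", List.of("IC2"),
                    "IC2", List.of("IC3"),
                    "IC3", List.of());
            System.out.println(maintenanceOrder(deps)); // [IC1, IC2, IC3]
        }
    }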

    Introducing Dynamic Behavior in Amalgamated Knowledge Bases

    The problem of integrating knowledge from multiple and heterogeneous sources is a fundamental issue in current information systems. In order to cope with this problem, the concept of a mediator has been introduced as a software component that provides intermediate services, links data resources and application programs, and makes the heterogeneity of the underlying systems transparent. In designing a mediator architecture, we believe that an important aspect is the definition of a formal framework by which one is able to model integration in a declarative style. To this purpose, the use of a logical approach seems very promising. Another important aspect is the ability to model both static integration aspects, concerning query execution, and dynamic ones, concerning data updates and their propagation among the various data sources. Unfortunately, as far as we know, no formal proposals for logically modeling mediator architectures from both a static and a dynamic point of view have yet been developed. In this paper, we extend the framework for amalgamated knowledge bases, presented by Subrahmanian, to deal with dynamic aspects. The language we propose is based on the Active U-Datalog language, and extends it with annotated logic and amalgamation concepts. We model the sources of information and the mediator (also called the supervisor) as Active U-Datalog deductive databases, thus modeling queries, transactions, and active rules, interpreted according to the PARK semantics. By using active rules, the system can efficiently perform update propagation among the different databases. The result is a logical environment, integrating active and deductive rules, that performs queries and update propagation in a heterogeneous mediated framework.
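    A very small Java sketch of the update-propagation idea follows: when a fact is inserted into a source database, an active rule propagates the insertion to the mediator's amalgamated view. The class names, the rule encoded as a callback, and the example fact are assumptions for illustration; they do not reflect Active U-Datalog syntax, annotated logic, or the PARK semantics used in the paper.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.BiConsumer;

    // Sketch of an active rule that propagates an insertion from a source database
    // to the mediator's amalgamated view.
    public class MediatorSketch {
        static class Database {
            final String name;
            final Map<String, String> facts = new HashMap<>();
            Database(String name) { this.name = name; }
        }

        public static void main(String[] args) {
            Database source = new Database("source1");
            Database mediator = new Database("mediator");

            // Active rule: on insertion into the source, also insert into the mediator's view.
            BiConsumer<String, String> onSourceInsert = (key, value) -> {
                source.facts.put(key, value);
                mediator.facts.put(source.name + "/" + key, value); // propagate the update
            };

            onSourceInsert.accept("employee(42)", "salary=1000");
            System.out.println(mediator.facts); // {source1/employee(42)=salary=1000}
        }
    }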

    Complex Event Processing (CEP)

    Event-driven information systems demand systematic and automatic processing of events. Complex Event Processing (CEP) encompasses methods, techniques, and tools for processing events while they occur, i.e., in a continuous and timely fashion. CEP derives valuable higher-level knowledge from lower-level events; this knowledge takes the form of so-called complex events, that is, situations that can only be recognized as a combination of several events. Application Areas: Service Oriented Architecture (SOA), Event-Driven Architecture (EDA), cost reductions in sensor technology, and the monitoring of IT systems due to legal, contractual, or operational concerns have led to a significantly increased generation of events in computer systems in recent years. This development is accompanied by a demand to manage and process these events in an automatic, systematic, and timely fashion. Important application areas for Complex Event Processing (CEP) are the following:
    • …
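    To make the notion of a complex event concrete, the Java sketch below recognizes a higher-level situation, a failed payment followed by a retry within five seconds, from a combination of two lower-level events. The event types, the time window, and the list-based stream are hypothetical and only illustrate the idea; production CEP engines use dedicated pattern languages and continuous query evaluation rather than scanning a list.

    import java.util.List;

    // Derives a complex event (failed payment followed by a retry within 5 seconds)
    // from a combination of simple events in a stream.
    public class CepSketch {
        record Event(String type, long timestampMillis) {}

        static boolean detectFailedRetry(List<Event> stream) {
            for (int i = 0; i < stream.size(); i++) {
                if (!stream.get(i).type().equals("paymentFailed")) continue;
                for (int j = i + 1; j < stream.size(); j++) {
                    Event later = stream.get(j);
                    if (later.type().equals("retry")
                            && later.timestampMillis() - stream.get(i).timestampMillis() <= 5_000) {
                        return true; // complex event recognized
                    }
                }
            }
            return false;
        }

        public static void main(String[] args) {
            List<Event> stream = List.of(
                    new Event("paymentFailed", 1_000),
                    new Event("heartbeat", 2_000),
                    new Event("retry", 4_000));
            System.out.println("Complex event detected: " + detectFailedRetry(stream)); // true
        }
    }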