43 research outputs found

    Agent-Based Fault Tolerant Distributed Event System

    In recent years, the event-based communication style has been extensively studied and is considered a promising approach for developing large-scale distributed systems. Historically, event-based systems have evolved from channel-based systems to subject-based systems, then to content-based systems, and finally to type-based systems, which use typed objects as event messages. Following this progression, the natural next step is the use of agents in event systems. In this paper, we propose a new model for agent-based distributed event systems, called ABDES, which combines the advantages of event-based communication and intelligent mobile agents into a flexible, extensible, and fault-tolerant distributed execution environment.
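As a hypothetical illustration of the progression this abstract describes (all class and function names here are invented for the sketch, not taken from ABDES), a minimal broker can contrast subject-based matching, on an opaque topic string, with content-based matching, on a predicate over the event's attributes:

```python
from typing import Callable, Dict, List, Tuple

class Broker:
    """Toy broker contrasting subject-based and content-based subscriptions."""

    def __init__(self) -> None:
        self._by_subject: Dict[str, List[Callable]] = {}
        self._by_content: List[Tuple[Callable, Callable]] = []

    def subscribe_subject(self, subject: str, handler: Callable) -> None:
        # Subject-based: match on the topic string only.
        self._by_subject.setdefault(subject, []).append(handler)

    def subscribe_content(self, predicate: Callable[[dict], bool],
                          handler: Callable) -> None:
        # Content-based: match on the event's attribute values.
        self._by_content.append((predicate, handler))

    def publish(self, subject: str, event: dict) -> None:
        for handler in self._by_subject.get(subject, []):
            handler(event)
        for predicate, handler in self._by_content:
            if predicate(event):
                handler(event)

broker = Broker()
received: List[str] = []
broker.subscribe_subject("stocks", lambda e: received.append("subject"))
broker.subscribe_content(lambda e: e.get("price", 0) > 100,
                         lambda e: received.append("content"))
broker.publish("stocks", {"symbol": "XYZ", "price": 150})
# Both the topic subscriber and the predicate subscriber fire here.
```

Type-based systems go one step further: the subscription key is the event's class rather than a string or predicate, which is the setting the TPS paper below explores.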

    A semantic selective dissemination of information (SDI) service model for digital libraries

    We present the theoretical and methodological foundations for the development of a multi-agent SDI service model for specialized digital libraries, applying Semantic Web technologies that permit more efficient information management, improve agent-user communication, and facilitate accurate access to relevant resources. To do this, RSS feeds are used as "current-awareness bulletins" to generate personalized bibliographic alerts. The SDI service model comprises an RSS feed management module and an information push module. In the first module, resources are represented as RSS feed items and are semi-automatically assigned subject terms by matching their associated keywords against the terms of a thesaurus in SKOS Core format. In the information push module, bibliographic alerts are customized according to the preferences defined in users' profiles.
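A minimal sketch of the semi-automatic subject assignment step, under the assumption that the SKOS thesaurus is flattened into a map from lower-cased preferred and alternative labels to preferred terms (the thesaurus contents and function names below are invented for illustration):

```python
from typing import Dict, List

# Hypothetical fragment of a SKOS thesaurus: each preferred or
# alternative label (lower-cased) maps to its preferred term.
THESAURUS: Dict[str, str] = {
    "semantic web": "Semantic Web",
    "ontologies": "Ontology",
    "ontology": "Ontology",
    "digital library": "Digital libraries",
}

def assign_subjects(feed_item_keywords: List[str]) -> List[str]:
    """Assign subject terms by matching item keywords against thesaurus labels."""
    matched = {THESAURUS[k.lower()]
               for k in feed_item_keywords if k.lower() in THESAURUS}
    return sorted(matched)

subjects = assign_subjects(["Ontologies", "Semantic Web", "metadata"])
# Keywords without a thesaurus match ("metadata") are simply left unassigned,
# which is where the "semi-automatic" human review step would come in.
```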

    Efficient Probabilistic Subsumption Checking for Content-Based Publish/Subscribe Systems

    Efficient subsumption checking, deciding whether a subscription or publication is covered by a set of previously defined subscriptions, is of paramount importance for publish/subscribe systems. It provides the core system functionality, matching publications to subscriber needs expressed as subscriptions, and additionally reduces the overall system load and generated traffic, since covered subscriptions are not propagated in distributed environments. As the subsumption problem has previously been shown to be co-NP-complete and existing solutions typically apply pairwise comparisons to detect the subsumption relationship, we propose a 'Monte Carlo type' probabilistic algorithm for the general subsumption problem. It determines whether a publication or subscription is covered by a disjunction of subscriptions in O(kmd) time, where k is the number of subscriptions, m is the number of distinct attributes in subscriptions, and d is the number of tests performed to answer a subsumption question. The probability of error is problem-specific and typically very small, and it sets an upper bound on d. Our experimental results show significant gains in terms of subscription set reduction, which has a favorable impact on overall system performance as it reduces total computational costs and network traffic. Furthermore, the expected theoretical bounds underestimate the algorithm's performance: it performs much better in practice due to the optimizations we introduce, and it is adequate for fast forwarding of subscriptions under high subscription rates.
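The paper's algorithm is more involved, but the general idea can be sketched as follows (names and data layout invented for this sketch): model a subscription as a conjunction of attribute ranges, sample d random points from the candidate subscription's region, and check each against the disjunction. Any uncovered sample is a definitive "not subsumed" witness; d covered samples imply subsumption with a small, problem-specific error probability:

```python
import random
from typing import Dict, List, Tuple

# attribute -> (low, high) interval; a subscription is their conjunction.
Subscription = Dict[str, Tuple[float, float]]

def covers(sub: Subscription, point: Dict[str, float]) -> bool:
    """A subscription covers a point if every constrained attribute is in range."""
    return all(lo <= point.get(attr, float("inf")) <= hi
               for attr, (lo, hi) in sub.items())

def probably_subsumed(sub: Subscription, others: List[Subscription],
                      d: int = 500, seed: int = 42) -> bool:
    """Monte Carlo test: is `sub` covered by the disjunction of `others`?

    'Not subsumed' answers are always correct; 'subsumed' answers may be
    wrong with a small probability that shrinks as d grows.
    """
    rng = random.Random(seed)
    for _ in range(d):
        point = {attr: rng.uniform(lo, hi) for attr, (lo, hi) in sub.items()}
        if not any(covers(o, point) for o in others):
            return False  # witness found: definitely not subsumed
    return True  # no witness in d trials: subsumed with high probability

s = {"price": (10.0, 20.0)}
assert probably_subsumed(s, [{"price": (0.0, 15.0)}, {"price": (15.0, 30.0)}])
assert not probably_subsumed(s, [{"price": (0.0, 12.0)}])
```

The one-sided error is what makes the scheme safe for routing: a subscription is only dropped (not propagated) when the test says "subsumed", and a rare false positive costs correctness of pruning, not of matching, which the paper bounds.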

    Relational subscription middleware for Internet-scale publish-subscribe

    The nonlinear inverse problem of electromagnetic induction to recover electrical conductivity is examined. As this is an ill-posed problem based on inaccurate data, there is a strong need to identify the reliable features of models of electrical conductivity. By using optimization theory for an all-at-once approach to inverting frequency-domain electromagnetic data, we attempt to draw conclusions about Earth structure under assumptions of one-dimensional and two-dimensional structure. The forward-modeling equations are constraints in an optimization problem that solves for the fields and the conductivity simultaneously. The computational framework easily allows additional inequality constraints to be imposed.

    Under the one-dimensional assumption, we develop the optimization approach for use on the magnetotelluric inverse problem. After verifying its accuracy, we use our method to obtain bounds on Earth's average conductivity that all conductivity profiles must obey. No regularization is required to solve the problem. By imposing additional inequality constraints, we further narrow the bounds. We draw conclusions from a global geomagnetic depth-sounding data set and compare with laboratory results, inferring temperature and water content through published Boltzmann-Arrhenius conductivity models.

    We then take the lessons from the 1-D inverse problem and apply them to the 2-D inverse problem. The difficulty of the 2-D inverse problem requires that we first examine our ability to solve the forward problem, where the conductivity structure is known and the fields are unknown. Our forward problem is designed so that we can transfer it directly into the optimization approach used for the inversion. With the successful 2-D forward problem as the constraints, the inversion is stepped up from a one-dimensional to a fully two-dimensional problem for testing purposes. The computational machinery is incrementally modified to meet the challenge of the realistic two-dimensional magnetotelluric inverse problem. We then use two shallow-Earth data sets from different conductivity regimes and invert them for bounded and regularized structure.
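As a heavily simplified, hypothetical illustration of the "all-at-once" idea (not the thesis's actual formulation), consider a scalar toy problem where the forward model is m*u = 1 and we observe the field u. Instead of eliminating u via the forward model, both the field u and the "conductivity" m are kept as unknowns, with the forward equation entering as a penalized constraint:

```python
def all_at_once(d_obs: float, weight: float = 10.0,
                lr: float = 0.002, iters: int = 50000):
    """Minimize (u - d_obs)^2 + weight * (m*u - 1)^2 over (m, u) jointly.

    Toy analogue of all-at-once inversion: the forward equation m*u = 1
    is enforced as a penalty rather than solved exactly, and the field u
    and model m are updated simultaneously by gradient descent.
    """
    m, u = 1.0, 1.0  # initial guess
    for _ in range(iters):
        r = m * u - 1.0                         # forward-equation residual
        grad_u = 2.0 * (u - d_obs) + 2.0 * weight * r * m
        grad_m = 2.0 * weight * r * u
        u -= lr * grad_u
        m -= lr * grad_m
    return m, u

m, u = all_at_once(d_obs=2.0)
# For this toy the exact joint minimizer is u = 2, m = 0.5: zero data
# misfit and zero forward-equation residual are simultaneously attainable.
```

In the real problem the penalty is replaced by true equality constraints (plus inequality constraints that bound the conductivity), and the unknowns are field and conductivity values on a grid rather than two scalars.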

    OS Support for P2P Programming: a Case for TPS

    Just as Remote Procedure Call (RPC) turned out to be a very effective OS abstraction for building client-server applications over LANs, Type-based Publish-Subscribe (TPS) can be viewed as a candidate high-level OS abstraction for building peer-to-peer (P2P) applications over WANs. This paper relates our preliminary, though positive, experience of implementing and using TPS over JXTA, an analogue of sockets for P2P infrastructures. We show that, at least for P2P applications based on the Java type model, TPS provides high-level programming support that ensures type safety and encapsulation without hampering the decoupled nature of these applications. Furthermore, the loss of flexibility (inherent in the use of any high-level abstraction) and the performance overhead are negligible compared with the simplicity gained by using TPS.
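A hedged sketch of what type-based dispatch looks like in spirit (this is not the paper's Java/JXTA implementation; all names are invented, and Python stands in for the Java type model): subscribers register interest in an event type, and a published event is delivered to subscribers of its class and of its superclasses, which is what gives TPS its type safety and encapsulation:

```python
from typing import Callable, Dict, List, Type

class Event:
    """Root event type; publications are plain typed objects."""

class StockQuote(Event):
    def __init__(self, symbol: str, price: float) -> None:
        self.symbol, self.price = symbol, price

class TPS:
    """Toy type-based publish-subscribe: subscriptions are keyed by type."""

    def __init__(self) -> None:
        self._subs: Dict[Type[Event], List[Callable]] = {}

    def subscribe(self, event_type: Type[Event], handler: Callable) -> None:
        self._subs.setdefault(event_type, []).append(handler)

    def publish(self, event: Event) -> None:
        # Deliver to subscribers of the event's class and its superclasses.
        for event_type, handlers in self._subs.items():
            if isinstance(event, event_type):
                for handler in handlers:
                    handler(event)

tps = TPS()
quotes: List[float] = []
all_events: List[Event] = []
tps.subscribe(StockQuote, lambda q: quotes.append(q.price))
tps.subscribe(Event, all_events.append)
tps.publish(StockQuote("XYZ", 42.0))
# A StockQuote handler only ever sees StockQuote instances, so it can
# safely use their fields; the Event subscriber still sees everything.
```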

    ZStream: A cost-based query processor for adaptively detecting composite events

    Composite (or complex) event processing (CEP) systems search sequences of incoming events for occurrences of user-specified event patterns. Recently, they have gained attention in a variety of areas due to their powerful and expressive query languages and performance potential. Sequentiality (temporal ordering) is the primary way in which CEP systems relate events to each other. In this paper, we present a CEP system called ZStream that efficiently processes such sequential patterns. Besides simple sequential patterns, ZStream is also able to detect other patterns, including conjunction, disjunction, negation, and Kleene closure. Unlike most recently proposed CEP systems, which use non-deterministic finite automata (NFAs) to detect patterns, ZStream uses tree-based query plans for both the logical and physical representation of query patterns. By carefully designing the underlying infrastructure and algorithms, ZStream is able to unify the evaluation of sequence, conjunction, disjunction, negation, and Kleene closure as variants of the join operator. Under this framework, a single pattern in ZStream may have several equivalent physical tree plans with different evaluation costs. We propose a cost model to estimate the computation cost of a plan. We show that our cost model can accurately capture the actual runtime behavior of a plan, and that choosing the optimal plan can result in a factor of four or more speedup over an NFA-based approach. Based on this cost model, and using a simple set of statistics about operator selectivity and data rates, ZStream is able to adaptively and seamlessly adjust the order in which it detects patterns on the fly. Finally, we describe a dynamic programming algorithm used in our cost model to efficiently search for an optimal query plan for a given pattern.
    National Natural Science Foundation (Grant number NETS-NOSS 0520032)
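The cost model and dynamic program can be sketched in miniature (a toy version under assumed simplifications: a single fixed join selectivity, cost measured as the number of intermediate matches produced, and invented function names; ZStream's actual model also accounts for operator types, time windows, and buffer costs):

```python
from typing import List, Tuple

def best_tree_plan(rates: List[float], sel: float) -> Tuple[float, str]:
    """Pick the cheapest binary join tree for a sequence pattern.

    rates[i] is the arrival rate of the i-th event type in the pattern;
    each pairwise join emits rate_left * rate_right * sel matches, which
    we charge as its cost. Interval dynamic programming over split
    points, in the style of matrix-chain ordering.
    """
    n = len(rates)
    rate = [[0.0] * n for _ in range(n)]  # output rate of subtree i..j
    cost = [[0.0] * n for _ in range(n)]  # cheapest cost for subtree i..j
    plan = [[""] * n for _ in range(n)]
    for i in range(n):
        rate[i][i] = rates[i]
        plan[i][i] = f"e{i}"
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            cost[i][j] = float("inf")
            for k in range(i, j):  # split point between the two subtrees
                out = rate[i][k] * rate[k + 1][j] * sel
                c = cost[i][k] + cost[k + 1][j] + out
                if c < cost[i][j]:
                    cost[i][j] = c
                    rate[i][j] = out
                    plan[i][j] = f"({plan[i][k]} {plan[k+1][j]})"
    return cost[0][n - 1], plan[0][n - 1]

cost, plan = best_tree_plan([10.0, 1.0, 100.0], sel=0.1)
# With one low-rate stream in the middle, joining the two low-rate
# streams first keeps the intermediate result small.
```

This mirrors the abstract's point: the same logical pattern admits several physical tree plans, and simple statistics (rates and selectivities) are enough for the dynamic program to pick the cheapest one.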