21,710 research outputs found

    Contention management for distributed data replication

    PhD Thesis. Optimistic replication schemes give distributed applications access to shared data at lower latency and greater availability by allowing clients to replicate shared data and execute actions locally. A consequence of this scheme is that shared data may become inconsistent: an action executed by a client may produce conflicting shared data and, as a consequence, conflict with subsequent actions that depend on it. The client must then roll back to the action that caused the conflicting data and execute some exception handling, typically by relying on the application layer to either ignore or handle shared data inconsistencies when they are discovered during the reconciliation phase of an optimistic protocol. Inconsistency of shared data affects the causality relationship across client actions. In protocol design, it is desirable to preserve causality between the different actions occurring across a distributed application. Without application-level knowledge, we must assume that an action causes all subsequent actions at the same client. With application knowledge, we can significantly ease the protocol burden of providing causal ordering, because we can identify which actions do not cause other actions (even if they precede them). This, in turn, allows a client to roll back to past actions and change them without having to alter subsequent actions. Unfortunately, a greater number of application-level causal relations between actions leads to significant protocol overhead. Minimizing the rollback associated with conflicting actions, while preserving causality, is therefore desirable, as it reduces exception handling in the application layer. In this thesis, we present a framework that uses causality to build a scheduler which informs a contention management scheme, reducing the rollback associated with conflicting access to shared data. Our framework uses a backoff contention management scheme to preserve causality for optimistic replication systems with high causality requirements, without the need for application-layer knowledge. We present experiments demonstrating that our framework reduces client rollback and, more importantly, that the overall throughput of the system improves when the contention manager is used with applications that require causality to be preserved across all actions.
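    The backoff contention management idea summarised in this abstract can be pictured with a small sketch. The Python fragment below is illustrative only: the names (Action, BackoffContentionManager, try_commit) are hypothetical, and the thesis framework also involves a scheduler and server-side reconciliation that are omitted here. The point shown is that when a conflicting action finally gives up after backing off, only its causal descendants need to be rolled back, rather than every later action.

```python
import random
import time

class Action:
    """A client action on a shared object, with application-level causal parents."""
    def __init__(self, action_id, target, depends_on=None):
        self.id = action_id
        self.target = target                    # shared object the action touches
        self.depends_on = depends_on or set()   # ids of actions this one depends on

class BackoffContentionManager:
    def __init__(self, base_delay=0.01, max_retries=5):
        self.base_delay = base_delay
        self.max_retries = max_retries

    def causal_descendants(self, action, log):
        """Ids of logged actions that (transitively) depend on `action`.

        Assumes `log` is in execution order, so parents appear before children.
        """
        out, frontier = set(), {action.id}
        for a in log:
            if a.depends_on & frontier:
                out.add(a.id)
                frontier.add(a.id)
        return out

    def execute(self, action, log, try_commit):
        """Retry a conflicting action with randomised exponential backoff.

        Returns the set of action ids to roll back: empty on success,
        otherwise only the causal descendants of `action`.
        """
        for attempt in range(self.max_retries):
            if try_commit(action):               # reconciliation succeeded
                return set()
            time.sleep(self.base_delay * (2 ** attempt) * random.random())
        return self.causal_descendants(action, log)
```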

    Summarisation for mobile databases


    Fuzzy Dynamic Discrimination Algorithms for Distributed Knowledge Management Systems

    Reducing the algorithmic complexity of a fuzzy inference engine relies on the following property: the inputs (the fuzzy rules and the fuzzy facts) can be divided into two parts, one of which (the fuzzy rules, i.e. the knowledge model) remains relatively constant over a long period compared with the second part (the fuzzy facts), which changes on every inference cycle. It therefore makes sense to apply certain transformations to the constant part in order to decrease the solution procurement time, given that the second part varies but is known at certain moments in time. Transformations carried out in advance are called pre-processing or knowledge compilation. The use of variables in a Business Rule Management System knowledge representation allows knowledge to be factorised, as in classical knowledge-based systems. The language of first-order predicates facilitates the rigorous formulation of complex knowledge and imposes appropriate reasoning techniques. It is therefore necessary to define the description method for fuzzy knowledge, to justify the efficiency of knowledge exploitation when the compilation technique is used, to present the inference engine, and to highlight the functional features of the pattern matching and state space processes. This paper presents the main results of our project PR356, which designs a compiler for fuzzy knowledge, similar to a Rete compiler, comprising two main components: a static fuzzy discrimination structure (the Fuzzy Unification Tree) and the Fuzzy Variables Linking Network. We also present the features of the elementary pattern matching process, which is based on the compiled structure of fuzzy knowledge. We developed fuzzy discrimination algorithms for Distributed Knowledge Management Systems (DKMSs); the implementations have been elaborated in a prototype system, FRCOM (Fuzzy Rule COMpiler).
    Keywords: Fuzzy Unification Tree, Dynamic Discrimination of Fuzzy Sets, DKMS, FRCOM
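    As a rough illustration of the pre-processing idea in this abstract (compile the slowly changing rules once, then match only the fast-changing facts on each cycle), here is a minimal sketch. It is not FRCOM code: the function names, the rule representation, and the triangular membership functions are assumptions made for the example.

```python
def compile_rules(rules):
    """Compile the 'static' part: index rule conditions by the variable they test."""
    index = {}
    for rule_name, conditions, conclusion in rules:
        for var, fuzzy_set in conditions:
            index.setdefault(var, []).append((rule_name, fuzzy_set, conclusion))
    return index

def infer(index, facts, membership):
    """Match the 'dynamic' part (the facts) against the compiled structure.

    Combines condition match degrees per rule with the min t-norm; assumes
    every rule variable occurs in `facts`.
    """
    degrees = {}
    for var, value in facts.items():
        for rule_name, fuzzy_set, conclusion in index.get(var, []):
            mu = membership(fuzzy_set, value)             # degree of match in [0, 1]
            key = (rule_name, conclusion)
            degrees[key] = min(degrees.get(key, 1.0), mu)
    return degrees

def membership(fuzzy_set, value):
    """Triangular membership functions; the shapes here are only examples."""
    a, b, c = {"low": (0.0, 0.0, 50.0), "high": (50.0, 100.0, 100.0)}[fuzzy_set]
    if value < a or value > c:
        return 0.0
    if value == b:
        return 1.0
    if value < b:
        return (value - a) / (b - a) if b > a else 1.0
    return (c - value) / (c - b) if c > b else 1.0

# One rule, compiled once; only the facts change between inference cycles.
rules = [("r1", [("temp", "high")], "turn_fan_on")]
index = compile_rules(rules)
print(infer(index, {"temp": 75}, membership))   # {('r1', 'turn_fan_on'): 0.5}
```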

    State-of-the-art on evolution and reactivity

    This report starts, in Chapter 1, by outlining aspects of querying and updating resources on the Web and on the Semantic Web, including the development of query and update languages to be carried out within the Rewerse project. From this outline, it becomes clear that several existing research areas and topics are of interest for this work in Rewerse. In the remainder of this report we present state-of-the-art surveys in a selection of such areas and topics. More precisely: in Chapter 2 we give an overview of logics for reasoning about state change and updates; Chapter 3 is devoted to briefly describing existing update languages for the Web, and also for updating logic programs; in Chapter 4 event-condition-action rules are surveyed, both in the context of active database systems and in the context of semistructured data; in Chapter 5 we give an overview of some relevant rule-based agent frameworks.
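    For readers unfamiliar with the event-condition-action (ECA) rules surveyed in Chapter 4, a minimal, generic illustration of the pattern follows; the rule content is invented for the example and is not drawn from the report.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ECARule:
    event: str                          # e.g. "insert", "update", "node_added"
    condition: Callable[[dict], bool]   # evaluated against the event payload
    action: Callable[[dict], None]      # executed when the condition holds

rules = [
    ECARule(
        event="update",
        condition=lambda payload: payload.get("stock", 0) < 10,
        action=lambda payload: print(f"reorder item {payload['item']}"),
    )
]

def on_event(event_name, payload):
    """Fire every rule whose event matches and whose condition holds."""
    for rule in rules:
        if rule.event == event_name and rule.condition(payload):
            rule.action(payload)

on_event("update", {"item": "A42", "stock": 3})   # prints: reorder item A42
```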

    Maintaining consistency in client-server database systems with client-side caching

    PhD Thesis. Caching has been used in client-server database systems to improve the performance of applications. Much of the existing work has concentrated on caching techniques at the server side, since the underlying assumption has been that clients are “thin”, with application-level processing taking place mainly at the server. There is also a new class of “thick client” applications in which clients access the database at the server but also perform a substantial amount of processing at the client side; here, client-side caching is needed to provide good performance. This thesis presents a transactional cache consistency scheme suitable for systems with client-side caching. The scheme is based on the optimistic approach to concurrency control and provides serializability for committed transactions, in contrast to many modern systems that provide only the snapshot isolation property, which is weaker than serializability. A novel feature is that the processing load of validating transactions at commit time is shared between clients and the database server, thereby reducing the load at the server. Read-only transactions can be validated at the client side without communicating with the server. Another feature is that the scheme permits disconnected operation, allowing clients with cached objects to work offline. The performance of the scheme is evaluated using simulation experiments. The experiments demonstrate that for a mostly read-only transaction load (for which caching is most effective) the scheme outperforms the existing concurrency control scheme with client-side caching considered to be the best, and matches the performance of the widely used scheme that provides only snapshot isolation. The results also show that the scheme provides reasonable performance in a disconnected environment.
    Directorate General of Higher Education, Ministry of National Education, Indonesia
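    The client-side validation of read-only transactions described in this abstract can be sketched as follows. The class names and the version-based check are assumptions made for illustration; the thesis scheme also covers update transactions, server-side validation, and disconnected operation, which are omitted here.

```python
class ClientCache:
    """Client-side cache of shared objects and the versions last seen committed."""
    def __init__(self):
        self.versions = {}                     # object id -> committed version

    def apply_server_update(self, obj_id, new_version):
        """Handle an invalidation/update message pushed by the server."""
        self.versions[obj_id] = new_version

class Transaction:
    def __init__(self, cache):
        self.cache = cache
        self.read_set = {}                     # object id -> version seen at read
        self.write_set = {}

    def read(self, obj_id):
        """Record the version observed when the cached object is read."""
        self.read_set[obj_id] = self.cache.versions.get(obj_id, 0)

    def validate_read_only_locally(self):
        """Commit-time validation at the client, for read-only transactions only.

        The transaction commits if none of the objects it read have since been
        updated by a committed transaction; no round trip to the server is needed.
        """
        assert not self.write_set
        return all(self.cache.versions.get(obj_id, 0) == version
                   for obj_id, version in self.read_set.items())
```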

    Data Credence in IoT: Vision and Challenges

    As the Internet of Things permeates every aspect of human life, assessing the credence or integrity of the data generated by "things" becomes a central exercise in making decisions and in auditing events. In this paper, we present a vision of this exercise that includes the notion of data credence, assessing data credence in an efficient manner, and the use of technologies that are on the horizon for the very large scale Internet of Things.
