
    Tau Be or not Tau Be? - A Perspective on Service Compatibility and Substitutability

    One of the main open research issues in Service Oriented Computing is to propose automated techniques to analyse service interfaces. A first problem, called compatibility, aims at determining whether a set of services (two in this paper) can be composed together and interact with each other as expected. A related problem is to check the substitutability of one service with another. These problems are especially difficult when behavioural descriptions (i.e., message calls and their ordering) are taken into account in service interfaces. Interfaces should capture the service behaviour as faithfully as possible to make automated analysis feasible while not exhibiting implementation details. In this position paper, we choose Labelled Transition Systems to specify the behavioural part of service interfaces. In particular, we show that internal behaviours (tau transitions) are necessary in these transition systems in order to detect subtle errors that may occur when composing a set of services together. We also show that tau transitions should be handled differently in the compatibility and substitutability problems: the former requires checking that compatibility is preserved every time a tau transition is traversed in one interface, whereas the latter requires a precise analysis of tau branchings so that the substitution preserves the properties (e.g., a compatibility notion) that were ensured before replacement.
    Comment: In Proceedings WCSI 2010, arXiv:1010.233
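    The idea that compatibility must be re-checked after every internal move can be sketched with a minimal labelled transition system. This is an illustrative toy, not the paper's formal notion: the `LTS` class, the `!m`/`?m` label convention, and the deadlock-freedom check are all assumptions made for the example.

```python
# Minimal sketch: two services as labelled transition systems (LTS) with
# tau transitions, and a naive compatibility check that also explores the
# joint states reached through each service's internal (tau) moves.
# Class and function names are illustrative, not from the paper.

TAU = "tau"

class LTS:
    def __init__(self, transitions, initial, finals):
        # transitions: set of (state, label, state); "!m" = send, "?m" = receive
        self.transitions = transitions
        self.initial = initial
        self.finals = set(finals)

    def steps(self, state):
        return [(l, t) for (s, l, t) in self.transitions if s == state]

def compatible(p, q):
    """From every reachable joint state, including those entered through a
    tau transition of either side, the pair must be able to terminate
    jointly or take another joint move (no deadlock)."""
    seen, stack = set(), [(p.initial, q.initial)]
    while stack:
        sp, sq = stack.pop()
        if (sp, sq) in seen:
            continue
        seen.add((sp, sq))
        succs = []
        # independent internal moves: compatibility must hold after each tau
        succs += [(tp, sq) for (l, tp) in p.steps(sp) if l == TAU]
        succs += [(sp, tq) for (l, tq) in q.steps(sq) if l == TAU]
        # synchronisation: a send !m in one side matches a receive ?m in the other
        for (lp, tp) in p.steps(sp):
            for (lq, tq) in q.steps(sq):
                if lp != TAU and lq != TAU and lp[1:] == lq[1:] and lp[0] != lq[0]:
                    succs.append((tp, tq))
        if not succs and not (sp in p.finals and sq in q.finals):
            return False  # stuck: no joint move and not a joint final state
        stack.extend(succs)
    return True
```

    A tau branch that silently commits a service to a state where it can no longer receive the pending message is exactly the kind of subtle error this traversal catches.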

    Dynamic replication strategies in data grid systems: A survey

    In data grid systems, data replication aims to increase availability, fault tolerance, load balancing and scalability while reducing bandwidth consumption and job execution time. Several classification schemes for data replication have been proposed in the literature: (i) static vs. dynamic, (ii) centralized vs. decentralized, (iii) push vs. pull, and (iv) objective-function based. Dynamic data replication is a form of data replication that is performed with respect to the changing conditions of the grid environment. In this paper, we present a survey of recent dynamic data replication strategies. We study and classify these strategies, taking the target data grid architecture as the sole classifier. We discuss the key points of the studied strategies and compare their features according to important metrics. Furthermore, the impact of data grid architecture on dynamic replication performance is investigated in a simulation study. Finally, some important issues and open research problems in the area are pointed out.
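    The pull-based, popularity-driven end of the design space surveyed above can be sketched in a few lines. This is a generic illustration, not any strategy from the survey: the `Site` class, the access-count threshold, and the capacity check are all assumed for the example.

```python
# Hedged sketch of a simple dynamic, pull-based replication policy: a site
# counts local accesses per file and pulls a replica once a file becomes
# "popular" locally and fits in the remaining storage budget.
# All names and the threshold value are illustrative.

from collections import defaultdict

class Site:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity          # storage budget, e.g. in MB
        self.replicas = {}                # file -> size of local copy
        self.hits = defaultdict(int)      # file -> local access count

    def used(self):
        return sum(self.replicas.values())

def access(site, file, size, threshold=3):
    """Record one access; decide between serving locally, pulling a
    replica, or fetching remotely without replicating."""
    site.hits[file] += 1
    if file in site.replicas:
        return "local hit"
    if site.hits[file] >= threshold and site.used() + size <= site.capacity:
        site.replicas[file] = size        # pull a copy to this site
        return "replicated"
    return "remote fetch"
```

    Real strategies differ mainly in what replaces the bare threshold (access history windows, network cost, hierarchy level) and in the eviction policy used when capacity is exceeded, which is precisely where the surveyed strategies diverge by grid architecture.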

    A Cost Evaluator for Parallel Database Systems

    The design of an ESQL query optimizer may be decomposed into three dimensions: (i) the search space, which defines the syntactic representation of all relevant aspects of an execution, (ii) the search strategy used to generate an optimal execution plan, and (iii) the cost evaluator, which calculates the metrics used by the search strategies. In this paper, we investigate issues involved in designing and using a cost evaluator, separate from the optimizer, for query optimization in parallel database environments. This cost evaluator can be seen as an extension of the cost evaluators of the EDS [1] and DBS3 [3, 20] parallel database systems and of the Papyrus project [7]. 1. Introduction. Following [13, 30], we break ESQL [8] (a conservative extension of SQL with object and deductive capabilities) query optimization into three phases: logical optimization, physical optimization and parallelization. Logical optimization [30] consists of two main activities: simplification (i.e. elimination…
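    The separation the abstract describes, where the search strategy proposes plans and a distinct module only prices them, can be sketched as follows. The cost formulas, constants, and plan encoding here are illustrative assumptions, not those of EDS, DBS3 or Papyrus.

```python
# Hedged sketch of a cost evaluator kept separate from the optimizer's
# search strategy: the strategy enumerates plan trees, and this module
# returns (estimated tuples, estimated pages, cost) for each one.
# Cost constants and the selectivity estimate are illustrative.

IO_COST = 1.0      # cost units per page read
CPU_COST = 0.01    # cost units per tuple processed
JOIN_SEL = 0.1     # assumed join selectivity

def scan_cost(pages):
    return IO_COST * pages

def hash_join_cost(outer_tuples, inner_tuples):
    # build a hash table on one input, probe with the other
    return CPU_COST * (outer_tuples + inner_tuples)

def evaluate(plan):
    """Price a plan tree given as nested tuples:
    ('scan', tuples, pages) or ('hashjoin', left_plan, right_plan)."""
    op = plan[0]
    if op == "scan":
        _, tuples, pages = plan
        return tuples, pages, scan_cost(pages)
    if op == "hashjoin":
        lt, lp, lc = evaluate(plan[1])
        rt, rp, rc = evaluate(plan[2])
        out_t = max(1, int(JOIN_SEL * lt * rt / max(lt, rt)))  # crude estimate
        out_p = max(1, (lp + rp) // 2)
        return out_t, out_p, lc + rc + hash_join_cost(lt, rt)
    raise ValueError(f"unknown operator: {op}")
```

    Because `evaluate` knows nothing about how plans are enumerated, the same evaluator can serve exhaustive, randomized, or heuristic search strategies unchanged, which is the point of keeping it separate.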

    Scheduling and Mapping for Parallel Execution of Extended SQL Queries

    In this paper, we present an extension of the PSA (Parallel Scheduling Algorithm) strategy to determine an appropriate mapping of operations onto physical processors, taking into account the interconnection network topology of a shared-nothing architecture. Performance evaluation, which relies on two benchmarks, shows the efficiency of the PSA strategy compared to the Static Right-Deep strategy and the Bushy Tree Scheduling strategy. The major contributions of this work are (i) the incorporation of the mapping process into the PSA strategy and (ii) the good trade-off the PSA strategy provides between response time minimization and throughput maximization. 1 Introduction. The problem of ESQL [9] query optimization for parallel execution is fundamental to obtaining high performance and high data availability. One way to increase optimization capacity is to improve the efficiency of generating an optimal execution plan. The design of an ESQL query optimizer may be decomposed into three dimensions…

    An Optimization Method of Data Communication and Control for Parallel Execution of SQL Queries

    This paper describes a method for optimizing data communication and control for the parallel execution of SQL queries. The optimization consists in avoiding data communication and control that are not needed for a global parallel execution consistent with the execution plan generated by the optimizer. The basis of the method lies in the propagation of the partition attributes and of the number of processors during the parallelization phase of relational operations. Performance evaluation shows the impact the propagation process has on improving response time. The main contribution of this work is the homogeneous integration, in the parallelization step, of the propagation process for partition attributes and number of processors in order to decrease communication volumes. 1. Introduction and Motivation. Optimization of parallel execution in a parallel architecture now represents a critical step in the evaluation of simple and complex SQL queries. Two types of parallel database systems…
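    The propagation idea, in which each operator carries its partition attribute and processor count upward so that a parent join only inserts a repartitioning exchange when the child's partitioning does not already match, can be sketched briefly. The plan encoding and function names are assumptions made for this example, not the paper's formalism.

```python
# Hedged sketch of partition-attribute propagation: operators are dicts
# recording the attribute their output is partitioned on and the number
# of processors; a join adds an exchange (repartition) only for a child
# whose partitioning does not already match the join key and degree.
# All names are illustrative.

def parallelize_join(left, right, join_attr, procs):
    """left/right: {'name': ..., 'part_attr': ..., 'procs': ...}.
    Returns the join's output node and the exchanges actually needed."""
    exchanges = []
    for child in (left, right):
        if child["part_attr"] != join_attr or child["procs"] != procs:
            exchanges.append(("repartition", child["name"], join_attr, procs))
    # the join output inherits the partitioning, so it propagates upward
    result = {"name": f"join({left['name']},{right['name']})",
              "part_attr": join_attr, "procs": procs}
    return result, exchanges
```

    Every exchange avoided this way is a full redistribution of an intermediate result that never crosses the network, which is where the reduction in communication volume comes from.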

    Transactions on Large-Scale Data- and Knowledge-Centered Systems LII

    The LNCS journal Transactions on Large-Scale Data- and Knowledge-Centered Systems focuses on data management, knowledge discovery, and knowledge processing, which are core and hot topics in computer science. Since the 1990s, the Internet has become the main driving force behind application development in all domains. An increase in the demand for resource sharing (e.g. computing resources, services, metadata, data sources) across different sites connected through networks has led to an evolution of data- and knowledge-management systems from centralized systems to decentralized systems enabling large-scale distributed applications with high scalability. This, the 52nd issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems, contains six fully revised selected regular papers. Topics covered include management of decryption keys, delegations as a rights management system, data analytics in connected environments, knowledge graph augmentation, online optimized product quantization for all-nearest-neighbours queries, generalization of argument models, and transactional modelling using coverage pattern mining.

    The SAGE geographic analysis system
