Distributive Join Strategy Based on Tuple Inversion
In this paper, we propose a new direction for distributive join operations. We assume a scalable distributed computer system in which many computers (processors) are connected through a communication network, which can be a LAN or part of the Internet with sufficient bandwidth. A relational database is then distributed across this network of processors. In our approach, however, the distribution of the database is very fine-grained and is based on the Distributed Hash Table (DHT) concept. A tuple of a table is assigned to a specific processor by applying a fair hash function to its key value. For each joinable attribute, an inverted file list is further generated and distributed, again based on the DHT. This pre-distribution is done when the tuple enters the system and therefore does not require any distribution of data tuples on the fly when the join is executed. When a join operation request is broadcast, each processor performs a local join, and the results are sent back to a query processor which, in turn, merges the join results and returns them to the user. Note that the DHT distribution of the inverted file lists can be either pre-processed or performed on the fly; if the lists are pre-processed and distributed, they have to be maintained. We evaluate our approach by comparing it empirically to two other approaches: the naive join method and the fully distributed join method. The results show significantly higher performance for our method over a wide range of possible parameters.
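As a rough illustration of the placement scheme sketched in the abstract, the following Python fragment hashes each tuple to a processor by its key, builds inverted-file entries for a joinable attribute that are redistributed by attribute value, and lets each node join only the entries it owns. The node count, helper names, and two-table layout are assumptions for illustration, not the paper's implementation.

```python
# A minimal sketch of DHT-style tuple and inverted-list placement; node count,
# helper names, and join logic are illustrative assumptions.
import hashlib
from collections import defaultdict

NUM_NODES = 8  # assumed size of the processor network

def node_for(value):
    """Map a key or attribute value to a processor via a fair hash (DHT-style)."""
    h = hashlib.sha1(str(value).encode()).hexdigest()
    return int(h, 16) % NUM_NODES

tuple_store = defaultdict(list)                      # node -> tuples placed by key hash
inverted = defaultdict(lambda: defaultdict(list))    # node -> join value -> tuple refs

def insert(table, key, row, join_attr):
    # Tuples are pre-distributed by hashing their primary key.
    tuple_store[node_for(key)].append((table, key, row))
    # Inverted-file entry for the joinable attribute, redistributed by its value.
    inverted[node_for(row[join_attr])][row[join_attr]].append((table, key))

def local_join(node):
    """Each processor joins only the inverted entries it owns."""
    results = []
    for value, refs in inverted[node].items():
        r = [k for t, k in refs if t == "R"]
        s = [k for t, k in refs if t == "S"]
        results += [(a, b, value) for a in r for b in s]
    return results

insert("R", 1, {"a": 10}, "a")
insert("S", 7, {"a": 10}, "a")
# The query processor merges the per-node results.
print([p for n in range(NUM_NODES) for p in local_join(n)])
```

Because matching entries for a given join value hash to the same node, no tuples have to be shipped at join time; the query processor only merges per-node results.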
Messengers: distributed computing using autonomous objects
Autonomous Objects is a new computing and coordination paradigm for distributed systems, based on the concept of intelligent messages that carry their own behavior and that propagate autonomously through the underlying computational network. This is accomplished by running an interpreter of the autonomous objects language in each node, which carries out the tasks prescribed by the program contained in a received message. The tasks could be computational, including the invocation of some node-resident compiled programs, or navigational, which cause the message to be propagated to neighboring nodes. Hence interpretation is incremental in that each node interprets a portion of the received program and passes the rest of it on to one or more of its neighboring nodes. This is repeated until the given problem is solved. We survey and classify several existing systems that fall into this general category of autonomous objects and present a unifying view of the paradigm by describing the principles of a high-level language and its interpreter, suitable to express the behaviors of complex autonomous objects, called Messengers. We discuss the capabilities and applications of this paradigm by presenting solutions to a wide spectrum of distributed computing problems. This includes inherently open-ended applications, such as interactive simulations, where it is not possible to define and precompile the entire experiment prior to starting its execution, and distributed computations where the underlying network topology is unknown or changes dynamically.
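The incremental-interpretation idea can be pictured with a toy interpreter in which a message carries its remaining program and each node executes the next computational or navigational step before passing the rest on. The instruction set and network model below are illustrative assumptions, not the Messengers language itself.

```python
# A minimal sketch of incremental interpretation of a carried program;
# the two-instruction set and node adjacency are illustrative assumptions.
network = {"A": ["B"], "B": ["C"], "C": []}   # assumed node adjacency
state = {n: 0 for n in network}               # assumed node-resident state

def interpret(node, program):
    """Each node executes the next step and forwards the rest of the program."""
    if not program:
        return
    op, arg = program[0]
    rest = program[1:]
    if op == "compute":                       # computational task at this node
        state[node] += arg
        interpret(node, rest)
    elif op == "hop":                         # navigational task: propagate the message
        if arg in network[node]:
            interpret(arg, rest)

interpret("A", [("compute", 1), ("hop", "B"), ("compute", 2), ("hop", "C"), ("compute", 3)])
print(state)   # {'A': 1, 'B': 2, 'C': 3}
```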
Self-Migrating Threads for Multi-Agent Applications
We propose "self-migrating threads" as a new cluster computing paradigm for multi-agent applications, which can be viewed as the interactions among autonomous computing entities, each having its own objectives, behavior, and local information in a synthetic world. Self-migrating threads have both navigational autonomy of mobile agents and fine computation granularity of threads. They are also given the capability to construct system-wide logical networks, representing synthetic worlds. With those aspects, we expect that self-migrating threads provide multi-agent applications with good programmability and performance. We have designed the functionality of self-migrating threads and implemented a low-level migration library. In this paper, we discuss the feasibility of our design by considering the implementation techniques and basic migration performance
Efficient Checkpointing Algorithm for Distributed Systems Implementing Reliable Communication Channels
This paper presents a new checkpointing algorithm for systems using reliable communication channels. The new algorithm requires O(n + m) communication messages, where n is the number of participating processes and m is the number of late messages. The algorithm is non-blocking, requires minimal message logging, and has minimal stable storage requirements. It is also scalable, simple, and transparent to the user, and it facilitates fast recovery. By introducing a suitable delay in the checkpointing process, the parameter m can be made small. We also describe a variant of the algorithm that requires only O(n) messages, at a cost of O(n) additional storage for each process. This paper also presents an efficient coordination mechanism, called the Process Order. The Process Order mechanism can be used for grouping processes in arbitrary structures in order to solve various problems, including scalability, failure detection, and coordinator election. The Process Order mechanism groups the..
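The role of late messages can be sketched with a minimal process model that takes a non-blocking checkpoint and logs only those messages sent before the sender's checkpoint but delivered after the receiver's. The class below is an assumed simplification for illustration, not the paper's algorithm.

```python
# A minimal sketch of non-blocking checkpointing with logging of late messages;
# process behavior and the late-message flag are illustrative assumptions.
class Process:
    def __init__(self, pid):
        self.pid = pid
        self.checkpointed = False
        self.state = []
        self.late_log = []            # minimal message log: late messages only

    def take_checkpoint(self):
        self.checkpointed = True      # non-blocking: the process keeps executing
        self.saved_state = list(self.state)

    def receive(self, msg, sent_before_senders_checkpoint):
        self.state.append(msg)
        # A late message was sent pre-checkpoint but arrives post-checkpoint,
        # so it is logged to keep the recovery line consistent.
        if self.checkpointed and sent_before_senders_checkpoint:
            self.late_log.append(msg)

p = Process(1)
p.state.append("m0")
p.take_checkpoint()
p.receive("m1", sent_before_senders_checkpoint=True)   # late: gets logged
p.receive("m2", sent_before_senders_checkpoint=False)  # normal post-checkpoint message
print(p.saved_state, p.late_log)    # ['m0'] ['m1']
```

Under this reading, the message count stays proportional to one coordination message per process plus one log record per late message, which is where an O(n + m) bound of this kind comes from.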
Mobile Agents, DSM, Coordination, and Self-Migrating Threads: A Common Framework
We compare four paradigms that have recently been the subject of considerable attention: mobile agents, distributed shared memory (DSM) systems, coordination paradigms, and self-migrating threads. We place these paradigms in a common framework consisting of three layers: the computational model, the implementation of the computational model on a physical architecture, and the interface between the computational model and the system's environment. We consider two examples of self-migrating thread systems, Messengers and WAVE, and place these into the same framework to illustrate their relationship to the other related lines of research in terms of their capabilities to organize and coordinate computation, map the concurrent activities onto a multicomputer architecture, and provide an interface for interaction with their environments on the underlying host computers.
Process interconnection structures in dynamically changing topologies
Centralized coordination protocols are simpler and more efficient than distributed ones. However, as a distributed system gets large, the bottleneck of the central coordinator renders protocols relying on centralized coordination inefficient. To solve this problem, hierarchical coordination can be used, where performance degrades logarithmically with the number of participating processes. In this paper we present a mechanism that automatically organizes processes in a hierarchy and maintains the hierarchy in the presence of node failures and the incremental addition and removal of processes in the system. The new topology resulting from a change is computed by each process locally, without having to broadcast the entire topology to all processes. The proposed scheme can concurrently support multiple logical structures, such as a ring, a hypercube, a mesh, or a tree. It supports total order of broadcasts and does not rely on any specific system features or special hardware.
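The flavor of local hierarchy computation can be illustrated by deriving each process's parent and children from the current membership list alone, here with an assumed complete binary tree over sorted process ranks rather than the paper's actual scheme.

```python
# A minimal sketch of locally recomputing a hierarchy from the membership list;
# the binary-tree shape and ranks are illustrative assumptions.
def neighbors(rank, members):
    """Parent and children of `rank` in a binary tree over the sorted member list."""
    order = sorted(members)
    i = order.index(rank)
    parent = order[(i - 1) // 2] if i > 0 else None
    kids = [order[c] for c in (2 * i + 1, 2 * i + 2) if c < len(order)]
    return parent, kids

members = {3, 7, 12, 15, 21, 30}
for m in sorted(members):
    print(m, neighbors(m, members))

# After a failure or join, every process recomputes the same structure locally
# from the updated membership, without broadcasting the whole topology.
members.discard(12)
print(7, neighbors(7, members))
```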