8 research outputs found

    A threshold based dynamic data allocation algorithm - a Markov Chain model approach

    In this study, a new dynamic data allocation algorithm for non-replicated Distributed Database Systems (DDS), namely the threshold algorithm, is formulated and proposed. The threshold algorithm reallocates data with respect to changing data access patterns. The proposed algorithm is distributed in the sense that each node autonomously decides whether or not to transfer the ownership of a fragment in the DDS to another node. The transfer decision depends on the past accesses of the fragment. Each fragment migrates away from a node once it has not been accessed locally within a certain number of past accesses, namely the threshold value. The threshold algorithm is modeled for a fragment of the database as a finite Markov chain with constant node access probabilities. In the model, a special case is analyzed in which all nodes have equal access probabilities except one with a different access probability. It has been shown that for positive threshold values the fragment will tend to remain at the node with the higher access probability. It is also shown that the greater the threshold value, the greater the tendency of the fragment to remain at the node with the higher access probability. The threshold algorithm is especially suitable for a DDS where the data access pattern changes dynamically.
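The migration rule described above can be sketched as a small simulation. This is an illustrative reading of the abstract only, not the paper's exact algorithm: here a fragment transfers ownership after `threshold` consecutive non-local accesses, and access probabilities per node are fixed, matching the Markov-chain setting with one dominant node.

```python
import random

def simulate_threshold_migration(access_probs, threshold, steps, seed=0):
    """Simulate the threshold rule for one fragment: ownership moves to the
    node whose access completes a run of `threshold` consecutive remote
    accesses. Returns the number of steps the fragment spent at each node."""
    rng = random.Random(seed)
    nodes = list(range(len(access_probs)))
    owner = 0
    remote_run = 0                      # consecutive non-local accesses
    time_at_node = [0] * len(nodes)
    for _ in range(steps):
        accessor = rng.choices(nodes, weights=access_probs)[0]
        time_at_node[owner] += 1
        if accessor == owner:
            remote_run = 0
        else:
            remote_run += 1
            if remote_run >= threshold:
                owner = accessor        # transfer ownership of the fragment
                remote_run = 0
    return time_at_node

# Node 0 has the highest access probability; with a positive threshold the
# fragment should spend most of its time there, as the analysis predicts.
shares = simulate_threshold_migration([0.7, 0.1, 0.1, 0.1], threshold=3, steps=100_000)
```

Rerunning with a larger threshold increases the share of time at node 0, mirroring the paper's conclusion that larger thresholds strengthen the pull of the high-probability node.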

    Facility Location with Dynamic Distance Functions

    Facility location problems have traditionally been studied under the assumption that the edge lengths in the network are static and do not change over time. The underlying network could be used to model a city street network for locating emergency facilities or hospitals, or an electronic network for locating information centers. In either case, it is clear that due to traffic congestion the traversal time on links changes with time. Very often, we have estimates of how the edge lengths change over time, and our objective is to choose a set of locations (vertices) as centers such that at every time instant each vertex has a center close to it (clearly, the center closest to a vertex may change over time). We provide approximation algorithms as well as hardness results for the K-center problem under this model. This is the first comprehensive study of approximation algorithms for facility location with good time-invariant solutions. (Also cross-referenced as UMIACS-TR-97-70.)
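For context, the static K-center problem that this work generalizes admits a classic 2-approximation, Gonzalez's farthest-point heuristic. The sketch below shows that static baseline only; it is not the dynamic-distance algorithm from the paper, and the point set and distance function are illustrative.

```python
def greedy_k_center(points, k, dist):
    """Gonzalez's farthest-point heuristic: repeatedly pick the point
    farthest from its nearest chosen center. A 2-approximation for the
    static K-center problem."""
    centers = [points[0]]               # arbitrary first center
    while len(centers) < k:
        farthest = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(farthest)
    return centers

def euclid(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

pts = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 8)]
centers = greedy_k_center(pts, 3, euclid)
```

In the dynamic setting studied here, `dist` would itself vary with time, and a single set of centers must stay close to every vertex at every time instant, which is what makes the problem harder.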

    File Allocation and Join Site Selection Problem in Distributed Database Systems.

    There are two important problems associated with the design of distributed database systems. One is the file allocation problem, and the other is the query optimization problem. In this research, a methodology that considers both of these aspects is developed, determining the optimal location of files and join sites for given queries simultaneously. Using this methodology, three different mixed integer programming models are developed, describing three cases of the file allocation and join site selection problem. Dual-based procedures are developed for each of the three mixed integer programming models. Extensive computational testing shows that the dual-based algorithms generate solutions very close to optimal. Moreover, these near-optimal solutions are found very quickly, even for large-scale problems.

    Design issues in distributed management information systems.

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Alfred P. Sloan School of Management, 1978. Microfiche copy available in Archives and Dewey. Includes bibliographical references.

    Dynamic load balancing

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1986. Microfiche copy available in Archives and Engineering. Bibliography: leaves 72-74. By Christopher W. Clifton.

    Efficient Partitioning and Allocation of Data for Workflow Compositions

    Our aim is to provide efficient partitioning and allocation of data for web service compositions. Web service compositions are represented as partial order database transactions. We accommodate a variety of transaction types, such as read-only and write-oriented transactions, to support workloads in cloud environments. We introduce an approach that partitions and allocates small units of data, called micropartitions, to multiple database nodes. Each database node stores only the data needed to support a specific workload. Transactions are routed directly to the appropriate data nodes. Our approach guarantees serializability and efficient execution. In Phase 1, we cluster transactions based on data requirements. We associate each cluster with an abstract query definition. An abstract query represents the minimal data requirement that would satisfy all the queries that belong to a given cluster. A micropartition is generated by executing the abstract query on the original database. We show that our abstract query definition is complete and minimal. Intuitively, completeness means that all queries of the corresponding cluster can be correctly answered using the micropartition generated from the abstract query. The minimality property means that no smaller partition of the data can satisfy all of the queries in the cluster. We also aim to support efficient web services execution. Our approach reduces the number of accesses to distributed data, and we further aim to limit the number of replica updates. Our empirical results show that the partitioning approach improves data access efficiency over standard partitioning of data. In Phase 2, we investigate the performance improvement via parallel execution. Based on the data allocation achieved in Phase 1, we develop a scheduling approach. Our approach guarantees serializability while efficiently exploiting parallel execution of web services.
    We achieve conflict serializability by scheduling conflicting operations in a predefined order. This order is based on the calculation of a minimal delay requirement. We use this delay to schedule services to preserve serializability without the traditional locking mechanisms.
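The Phase 1 idea of clustering transactions by data requirements can be sketched as follows. This is a simplified illustration only: transactions are clustered when their required data items overlap, and the union of a cluster's requirements plays the role of the abstract query whose result would form the micropartition. The transaction names and column identifiers are invented for the example, and the paper's actual clustering and query-merging rules are more involved.

```python
def cluster_by_data_requirements(transactions):
    """Greedily group transactions whose data requirements overlap.
    Each cluster is (union of required items, list of member txn ids);
    the union stands in for the cluster's abstract query."""
    clusters = []
    for txn_id, needs in transactions.items():
        for req, members in clusters:
            if req & needs:             # overlapping requirement -> same cluster
                req |= needs            # grow the abstract query's data set
                members.append(txn_id)
                break
        else:
            clusters.append((set(needs), [txn_id]))
    return clusters

txns = {
    "t1": {"orders.id", "orders.total"},
    "t2": {"orders.total", "customers.id"},   # overlaps t1 on orders.total
    "t3": {"items.sku"},                      # disjoint -> its own cluster
}
clusters = cluster_by_data_requirements(txns)
```

Because each resulting cluster's data set covers every member transaction, a node holding that micropartition can answer all of the cluster's queries locally, which is the completeness property the abstract describes.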

    Dynamic Storage Allocation Using Simon's Model of Information Usage.

    In today's rapidly changing field of Management Information Systems (MIS), one problem faced by organizations is the consumption of storage capacity by a growing base of software assets. Research has shown that very few firms effectively monitor program usage, and storage management issues arise because many of the programs occupying valuable storage space are used infrequently. In this dissertation, we apply Simon's model of information usage to model the dynamic behavior of program usage. This methodology enables organizations to identify the changing usage frequencies of software assets. We propose a classification scheme which MIS personnel can use to effectively monitor program usage tendencies. This classification scheme may then serve as a basis for storage allocation decision-making. Through a study of the dynamic behavior of programs, we have formulated a minimum cost model for hierarchical storage allocation. We show the value of incorporating dynamic usage frequencies into algorithms which have traditionally considered only a static view.
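A classification scheme of the kind described, mapping observed usage frequencies to storage tiers, can be sketched as below. The tier names, thresholds, and program names are illustrative assumptions, not taken from the dissertation, which derives its classes from Simon's usage model and a minimum-cost formulation rather than fixed cutoffs.

```python
def classify_programs(usage_counts, hot_threshold=50, warm_threshold=10):
    """Assign programs to storage tiers by observed usage frequency.
    Frequently used programs stay on fast storage; rarely used ones
    become candidates for cheaper, slower tiers."""
    tiers = {"hot": [], "warm": [], "cold": []}
    for program, count in usage_counts.items():
        if count >= hot_threshold:
            tiers["hot"].append(program)
        elif count >= warm_threshold:
            tiers["warm"].append(program)
        else:
            tiers["cold"].append(program)
    return tiers

# Hypothetical monthly access counts for three programs.
tiers = classify_programs({"payroll": 120, "report_gen": 15, "legacy_etl": 2})
```

Recomputing the classification periodically, rather than once, is what captures the dynamic usage behavior the dissertation argues static allocation algorithms miss.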