3 research outputs found

    Constructing a reproducible testing environment for distributed Java applications.

    The emergence of the global Internet, wireless data communications, and the availability of powerful computers is enabling a new generation of distributed and concurrent systems. However, the inherent complexity of such systems introduces many new challenges in system testing and maintenance. One of the major problems in testing such systems is that executions with internal non-deterministic choices make the testing procedure non-repeatable. A natural solution is to artificially force the execution of a program to take desired paths so that a test can be reproduced. However, with geographically distributed processes and heterogeneous platform architectures, distributed systems impose new challenges in developing effective techniques for reproducible testing. The goal of this research is to build an environment to automate testing for distributed and concurrent Java applications. We focus on controlling the order of occurrences of input and remote call events according to a user-specified test scenario, which is composed of input data, a constraint expressed as a partial order over the input and remote call events, and the expected output. The testing environment is itself distributed and does not require source code intrusion into the application under test. With minor changes, the testing components can also be reused in CORBA-based applications implemented in Java.
    Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2003 .W35. Source: Masters Abstracts International, Volume: 42-05, page: 1769. Adviser: Jessica Chen. Thesis (M.Sc.)--University of Windsor (Canada), 2003.
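    A minimal sketch of the kind of test scenario the abstract describes: input data, a partial-order ("happens-before") constraint over input and remote call events, and an expected output. The Java class and method names below are illustrative assumptions, not the thesis's actual API; a test controller intercepting events would use something like mayFire to delay an event until all of its required predecessors have occurred.

        import java.util.*;

        // Hypothetical sketch: a test scenario bundles input data, a partial order
        // over input/remote-call events, and the expected output.
        class TestScenario {
            final Map<String, String> inputs = new HashMap<>();             // input data by event id
            final Map<String, Set<String>> happensBefore = new HashMap<>(); // event -> events it must precede
            String expectedOutput;

            void input(String eventId, String value) { inputs.put(eventId, value); }

            void before(String earlier, String later) {
                happensBefore.computeIfAbsent(earlier, k -> new HashSet<>()).add(later);
            }

            // A controller would block an intercepted event until every event
            // constrained to precede it has already been observed.
            boolean mayFire(String event, Set<String> alreadyFired) {
                for (Map.Entry<String, Set<String>> e : happensBefore.entrySet()) {
                    if (e.getValue().contains(event) && !alreadyFired.contains(e.getKey())) {
                        return false;
                    }
                }
                return true;
            }
        }

        // Example (hypothetical): require remote call "r1" to be observed before input "i2".
        //   TestScenario s = new TestScenario();
        //   s.input("i2", "42");
        //   s.before("r1", "i2");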

    Supporting policy-based contextual reconfiguration and adaptation in ubiquitous computing

    In order for pervasive computing systems to perform tasks that support us in everyday life without requiring attention from their users, they need to adapt themselves in response to context. This makes context-awareness in general, and context-aware adaptation in particular, an essential requirement for pervasive computing systems. Two features of context-awareness are contextual reconfiguration and contextual adaptation, in which applications adapt their behaviour in response to context. We combine both features and put forward a system, called Policy-Based Contextual Reconfiguration and Adaptation (PCRA), that provides runtime support for both. Combining context-aware reconfiguration and context-aware adaptation gives a broad scope of adaptation and hence allows the development of diverse adaptive context-aware applications. Another important issue, however, is the choice of an effective means for developing, modifying and extending such applications. The main argument of this thesis is that a policy-based programming model provides a more effective means for doing so. The thesis also addresses other important issues associated with adaptive context-aware applications: the management of invalid bindings and the provision of seamless caching support for remote services involved in bindings, for improved performance. Bindings between an application component and a remote service may become invalid under failure conditions arising from network problems or the migration of software components. We have integrated reconfiguration support for managing bindings, and seamless caching support for remote services, into PCRA. The thesis also describes the design and implementation of PCRA, which enables the development of adaptive context-aware applications using policy specifications. Within PCRA, adaptive context-aware applications are modelled by specifying binding policies and adaptation policies. The use of policies simplifies the development task because policies are expressed at a high level of abstraction and independently of each other. PCRA also allows the dynamic modification of applications, since policies are independent units of execution that can be dynamically loaded into and removed from the system. This is a powerful and useful capability, as applications may evolve over time (user needs and preferences may change) while restarting is undesirable. We evaluate PCRA by comparing its features to other systems in the literature and by performance measurements.
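    A minimal sketch of an event-condition-action style policy engine, one common way a policy-based model keeps adaptation logic separate from application code and lets policies be loaded and removed at runtime. The Java names below are assumptions for illustration only and do not reflect PCRA's actual policy specification language.

        import java.util.*;
        import java.util.function.Consumer;
        import java.util.function.Predicate;

        // A policy pairs a condition on the current context with an action
        // (a reconfiguration or behavioural adaptation) to perform.
        final class Policy<C> {
            final String name;
            final Predicate<C> condition;
            final Consumer<C> action;

            Policy(String name, Predicate<C> condition, Consumer<C> action) {
                this.name = name;
                this.condition = condition;
                this.action = action;
            }
        }

        // Policies are independent units that can be loaded and removed dynamically.
        final class PolicyEngine<C> {
            private final Map<String, Policy<C>> policies = new HashMap<>();

            void load(Policy<C> p)   { policies.put(p.name, p); }
            void remove(String name) { policies.remove(name); }

            // Invoked whenever a context change is reported.
            void onContextChange(C context) {
                for (Policy<C> p : policies.values()) {
                    if (p.condition.test(context)) {
                        p.action.accept(context);
                    }
                }
            }
        }

    Because each policy is self-contained, modifying an application amounts to loading or removing individual policies rather than editing and restarting the application itself, which is the capability the abstract emphasizes.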

    Practical database replication

    Doctoral thesis in Informatics.
    Software-based replication is a cost-effective approach to fault tolerance when combined with commodity hardware. In particular, shared-nothing database clusters built from commodity machines and synchronized through eager software-based replication protocols have been driven by the distributed systems community in the last decade. The efforts on eager database replication, however, stem from the late 1970s, with initial proposals designed by the database community. From that time we have the distributed locking and atomic commitment protocols. Briefly, before updating a data item, all copies are locked through a distributed lock, and upon commit, an atomic commitment protocol is responsible for guaranteeing that the transaction's changes are written to non-volatile storage at all replicas before committing it. Both processes contributed to poor performance. The distributed systems community improved on them by reducing the number of interactions among replicas through the use of group communication and by relaxing the durability requirements imposed by the atomic commitment protocol. The approach requires at most two interactions among replicas and disseminates updates without necessarily applying them before committing a transaction. It relies on a high number of machines to reduce the likelihood of failures and ensure data resilience. Clearly, the availability of commodity machines and their increasing processing power make this feasible. Proving the feasibility of this approach requires us to build several prototypes and evaluate them with different workloads and scenarios. Although simulation environments are a good starting point, mainly those that allow us to combine real code (e.g., replication protocols, group communication) and simulated code (e.g., database, network), full-fledged implementations should be developed and tested. Unfortunately, database vendors usually do not provide native support for the development of third-party replication protocols, forcing protocol developers either to change the database engine, when the source code is available, or otherwise to build wrappers in a middleware server that intercept client requests. The former solution is hard to maintain, as new database releases are constantly being produced, whereas the latter represents a strenuous development effort, as it requires rebuilding several database features at the middleware. Unfortunately, the group-based replication protocols, optimistic or conservative, that have been proposed so far have drawbacks that present a major hurdle to their practicability. Optimistic protocols make it difficult to commit transactions in the presence of hot-spots, whereas conservative protocols perform poorly due to concurrency issues. In this thesis, we propose a generic architecture and programming interface, titled GAPI, to facilitate the development of different replication strategies. The idea consists of providing key extensions to multiple DBMSs (Database Management Systems), thus enabling a replication strategy to be developed once and tested on several databases that have such extensions, i.e., those that are replication-friendly. To tackle the aforementioned problems in group-based replication protocols, we propose a novel protocol, titled AKARA. AKARA guarantees fairness, so all transactions have a chance to commit, and delivers good performance by exploiting the parallelism provided by local database engines. Finally, we outline a simple but comprehensive set of components for building group-based replication protocols and discuss key points in their design and implementation.
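    For context, a minimal sketch of the certification step used by optimistic group-based replication protocols in general (not AKARA itself, whose design the abstract does not detail): a transaction executes locally, its read and write sets are delivered to every replica in total order via group communication, and each replica commits it only if no concurrently committed transaction conflicts with it. All names below are illustrative assumptions.

        import java.util.*;

        // Hypothetical sketch of deterministic certification at one replica.
        final class Certifier {
            // Logical version at which each data item was last committed.
            private final Map<String, Long> committedVersion = new HashMap<>();
            private long logicalClock = 0;

            static final class Transaction {
                final long snapshot;        // logical time at which the transaction started
                final Set<String> readSet;
                final Set<String> writeSet;
                Transaction(long snapshot, Set<String> readSet, Set<String> writeSet) {
                    this.snapshot = snapshot;
                    this.readSet = readSet;
                    this.writeSet = writeSet;
                }
            }

            long now() { return logicalClock; }

            // Called in the same total order at every replica, so all replicas
            // reach the same commit/abort decision without extra coordination.
            synchronized boolean certifyAndCommit(Transaction t) {
                for (String item : t.readSet) {
                    if (committedVersion.getOrDefault(item, 0L) > t.snapshot) {
                        return false;       // conflict with a concurrently committed transaction: abort
                    }
                }
                logicalClock++;
                for (String item : t.writeSet) {
                    committedVersion.put(item, logicalClock);
                }
                return true;                // commit: the write set can now be applied locally
            }
        }

    The abstract's observation about hot-spots follows from this scheme: frequently updated items cause many certification conflicts, so purely optimistic protocols struggle to commit transactions that touch them.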