
    Scalable hosting of web applications

    Modern Web sites have evolved from simple monolithic systems to complex multi-tiered systems. In contrast to traditional Web sites, these sites do not simply deliver pre-written content but dynamically generate content using (one or more) multi-tiered Web applications. In this thesis, we address the question: how can multi-tiered Web applications be hosted in a scalable manner? Scaling up a Web application requires scaling its individual tiers. To this end, various research works have proposed techniques that employ replication or caching solutions at different tiers. However, most of these techniques aim to optimize the performance of individual tiers rather than the entire application. A key observation made in our research is that no single technique performs best for all Web applications. Effective hosting of a Web application requires careful selection and deployment of several techniques at different tiers. To this end, we present several caching and replication strategies, such as GlobeCBC, GlobeDB and GlobeTP, to improve the scalability of different tiers of a Web application. While these techniques and systems improve the performance of individual tiers (and eventually the application), an application's administrator is interested not only in the performance of individual tiers but also in the application's end-to-end performance. To this end, we propose a resource provisioning approach that allows us to choose the best resource configuration for hosting a Web application such that its end-to-end response time is optimized with minimum usage of resources. The proposed approach is based on an analytical model for multi-tier systems, which allows us to derive expressions for estimating the mean end-to-end response time and its variance.
    Steen, M.R. van [Promotor]; Pierre, G.E.O. [Copromotor]
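
    The abstract does not reproduce the thesis's analytical model; as a rough illustration of the idea of estimating end-to-end response time from per-tier behavior, the sketch below treats each tier as an independent M/M/1 queue and sums the per-tier means. The queueing assumption and all rates are illustrative, not taken from the thesis.

```python
# Sketch: estimating the mean end-to-end response time of a multi-tier
# Web application by summing per-tier means, treating each tier as an
# independent M/M/1 queue. The queueing assumption and all rates are
# illustrative; the thesis derives its own analytical model.

def tier_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of one tier under the M/M/1 approximation."""
    if arrival_rate >= service_rate:
        raise ValueError("tier saturated: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

def end_to_end_response_time(tiers):
    """Sum per-tier means, assuming tiers are traversed sequentially."""
    return sum(tier_response_time(lam, mu) for lam, mu in tiers)

# Hypothetical web, application, and database tiers (requests/second).
tiers = [(80.0, 100.0), (80.0, 120.0), (80.0, 90.0)]
print(f"estimated mean end-to-end response time: "
      f"{end_to_end_response_time(tiers):.3f} s")
```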

    High-fidelity graphics using unconventional distributed rendering approaches

    High-fidelity rendering requires a substantial amount of computational resources to accurately simulate lighting in virtual environments. While desktop computing, with the aid of modern graphics hardware, has shown promise in delivering realistic rendering at interactive rates, real-time rendering of moderately complex scenes is still unachievable on the majority of desktop machines and on the plethora of mobile computing devices that have recently become commonplace. This work provides a wide range of computing devices with high-fidelity rendering capabilities via oft-unused distributed computing paradigms: it speeds up the rendering process on devices that are already capable of it and provides full functionality to devices that are not. Novel scheduling and rendering algorithms have been designed to best take advantage of the characteristics of these systems and to demonstrate the efficacy of such distributed methods. The first is a novel system that provides multiple clients with parallel resources for rendering a single task and adapts in real time to the number of concurrent requests. The second is a distributed algorithm for the remote asynchronous computation of the indirect diffuse component, which is merged with locally computed direct lighting for a full global illumination solution. The third is a method for precomputing indirect lighting information for dynamically generated multi-user environments by using the aggregated resources of the clients themselves. The fourth is a novel peer-to-peer system for improving rendering performance in multi-user environments through the sharing of computation results, propagated via a mechanism based on epidemiology. The results demonstrate that the boundaries of the distributed computing typically used for computer graphics can be significantly and successfully expanded by adapting alternative distributed methods.
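
    As a rough illustration of the epidemic propagation mechanism mentioned in the fourth contribution, the sketch below simulates push-based gossip until a computed result has reached every peer. The peer count, fanout, and push-only design are assumptions for illustration, not details from the thesis.

```python
# Sketch: push-based epidemic (gossip) dissemination of a computed
# rendering result among peers. Peer count, fanout, and the push-only
# design are illustrative assumptions, not details from the thesis.
import random

def gossip_rounds(num_peers: int, fanout: int, seed_peer: int = 0) -> int:
    """Simulate rounds until every peer holds the shared result."""
    informed = {seed_peer}
    rounds = 0
    while len(informed) < num_peers:
        rounds += 1
        newly_informed = set()
        for _ in informed:
            # Each informed peer pushes the result to `fanout` random peers.
            newly_informed.update(random.sample(range(num_peers), fanout))
        informed |= newly_informed
    return rounds

random.seed(42)
print(gossip_rounds(num_peers=1000, fanout=3))  # typically O(log n) rounds
```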

    Metamorphic testing: a review of challenges and opportunities

    Metamorphic testing is an approach to both test case generation and test result verification. Its central element is a set of metamorphic relations: necessary properties of the target function or algorithm that relate multiple inputs to their expected outputs. Since its first publication, we have witnessed a rapidly increasing body of work examining metamorphic testing from various perspectives, including metamorphic relation identification, test case generation, integration with other software engineering techniques, and the validation and evaluation of software systems. In this paper, we review the current research on metamorphic testing and discuss the challenges yet to be addressed. We also present visions for further improvement of metamorphic testing and highlight opportunities for new research.
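
    To make the core idea concrete: a metamorphic relation sidesteps the oracle problem by checking a necessary property across multiple inputs rather than checking exact outputs. The sketch below uses the classic relation sin(x) = sin(π − x); the relation and harness are a textbook illustration, not an example drawn from this paper.

```python
# Sketch: a textbook metamorphic relation. Even without an oracle for
# the exact value of sin(x), the necessary property sin(x) == sin(pi - x)
# can be checked across many generated inputs. This harness is a classic
# illustration, not an example drawn from the paper under review.
import math
import random

def mr_sine_symmetry(f, x: float, tol: float = 1e-9) -> bool:
    """Metamorphic relation for sine: f(x) and f(pi - x) must agree."""
    return abs(f(x) - f(math.pi - x)) <= tol

random.seed(0)
for _ in range(1000):
    x = random.uniform(-100.0, 100.0)
    assert mr_sine_symmetry(math.sin, x), f"relation violated at x={x}"
print("metamorphic relation held on 1000 source/follow-up input pairs")
```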

    Doctor of Philosophy

    With the explosion of chip transistor counts, the semiconductor industry has struggled to continue scaling computing performance in line with historical trends. In recent years, the de facto solution for utilizing excess transistors has been to increase the size of the on-chip data cache, allowing fast access to an increased portion of main memory. These large caches allowed the continued scaling of single-thread performance, which had not yet reached the limit of instruction-level parallelism (ILP). As we approach the potential limits of parallelism within a single-threaded application, new approaches such as chip multiprocessors (CMPs) have become popular for scaling performance by utilizing thread-level parallelism (TLP). This dissertation identifies the operating system as a ubiquitous area where both single-threaded and multithreaded performance have often been ignored by computer architects. We propose that novel hardware/OS co-design has the potential to significantly improve current chip multiprocessor designs, enabling increased performance and improved power efficiency. We show that the operating system contributes a nontrivial overhead to even the most computationally intensive workloads, and that this OS contribution grows to a significant fraction of total instructions when executing several common applications found in the datacenter. We demonstrate that architectural improvements have had little to no effect on the performance of the OS over the last 15 years, leaving ample room for improvement. We specifically consider three potential solutions to improve OS execution on modern processors. First, we consider a separate operating system processor (OSP) operating concurrently with general-purpose processors (GPPs) in a chip multiprocessor organization, with several specialized structures acting as efficient conduits between these processors. Second, we consider segregating existing caching structures to decrease cache interference between the OS and the application. Third, we propose that components within the OS itself should be refactored to be both multithreaded and cache-topology aware, which in turn improves the performance and scalability of many-threaded applications.
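
    As a toy illustration of why segregating caching structures between the OS and the application can help (the second solution above), the sketch below compares a shared LRU cache against a statically partitioned one of the same total size, when an OS-like streaming access pattern interleaves with an application's small reused working set. The traces, sizes, and LRU model are illustrative assumptions, not measurements from the dissertation.

```python
# Toy LRU simulation: a shared cache vs. a statically partitioned one,
# under an application loop (small, reused working set) interleaved
# with an OS-like streaming pattern (no reuse). Illustrative only.
from collections import OrderedDict

def miss_rate(trace, capacity):
    cache, misses = OrderedDict(), 0
    for block in trace:
        if block in cache:
            cache.move_to_end(block)       # hit: refresh LRU recency
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[block] = True
    return misses / len(trace)

app = [f"app{i % 64}" for i in range(10_000)]   # 64-block reused working set
os_ = [f"os{i}" for i in range(10_000)]         # streaming, never reused
mixed = [b for pair in zip(app, os_) for b in pair]

shared = miss_rate(mixed, 96)
partitioned = (miss_rate(app, 64) * len(app)
               + miss_rate(os_, 32) * len(os_)) / len(mixed)
print(f"shared 96-block cache     : {shared:.2f} miss rate")
print(f"partitioned 64+32 blocks  : {partitioned:.2f} miss rate")
```

    With the same total capacity, the streaming OS pattern evicts the application's working set from the shared cache on every pass, while the partitioned configuration keeps the application's blocks resident.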

    Digitization of industrial quality control procedures applied to visual and geometrical inspections

    Dual-degree Master's programme with UTFPR - Universidade Tecnológica Federal do Paraná. Industrial quality control procedures usually depend on gauge inspection tools, which are used to verify conformity to visual and geometrical tolerances. Operators are guided during an inspection by paper tutorials that assist them in performing their tasks and in registering the results of the performed analysis. This traditional method of registering information can be misleading, lowering the effectiveness of quality control by producing inaccurate and error-prone inspection results. This work implements a system that uses emergent technologies (e.g., Human-Machine Interfaces, Virtual Reality, Distributed Systems, Cloud Computing, and the Internet of Things (IoT)) to propose a cost-effective solution that supports operators and quality control managers in the execution and data collection of gauge inspection control procedures. The final system was deployed in an industrial production plant, with the delivered results showing its efficiency and robustness, and drawing highly positive feedback from the operators and managers. The software may offer quicker and more efficient execution of analysis tasks, significantly decreasing the setup time required to change the inspected product reference.
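
    As a hypothetical illustration of the data-collection side, the sketch below shows one way an inspection verdict could be structured for transmission to a cloud backend in place of a paper checklist. The schema, field names, and workflow are assumptions for illustration, not the deployed system's actual design.

```python
# Sketch: one way a gauge-inspection verdict could be structured for
# cloud collection in place of a paper checklist. Schema and field
# names are hypothetical, not the deployed system's actual design.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class InspectionResult:
    product_reference: str        # product variant under inspection
    checkpoint_id: str            # gauge checkpoint being verified
    operator_id: str
    conforming: bool              # visual/geometrical tolerance verdict
    measured_deviation_mm: float
    timestamp: str

def record_inspection(result: InspectionResult) -> str:
    """Serialize a verdict; a real system would publish this payload
    to a message broker or cloud endpoint instead of printing it."""
    return json.dumps(asdict(result))

payload = record_inspection(InspectionResult(
    product_reference="REF-A42",
    checkpoint_id="CP-07",
    operator_id="OP-113",
    conforming=True,
    measured_deviation_mm=0.08,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
print(payload)
```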

    Porting Transmedia Storytelling to Journalism

    This thesis examines how the methods of transmedia storytelling emerging in the entertainment industry might be used in a journalism context. Journalism is facing many crises, not the least of which is a loss of readership and of perceived relevance to its public. Presented with an ever-expanding array of media with which to interact, the public is more difficult to attract to a socially relevant issue or a politically important story. Faced with similar issues, the entertainment industry has developed a means of engaging with fans that draws them across multiple media platforms, better captures their imagination, and involves them personally in the story being told. Transmedia storytelling lets a narrative unfold along multiple lines, from varying perspectives, and with the help of the fans themselves; scholars of the methodology describe it as the art of world building. This thesis illustrates that journalists can better engage their publics by adapting the methods of transmedia storytelling to journalism. By comparing entertainment transmedia storytelling theory and technique with examples of journalism that illustrate one or more of these techniques, this thesis explores whether journalists can reach more individuals, achieve better engagement and participation from their publics, and more thoroughly communicate the complexity and context of any story.

    Atomic Transfer for Distributed Systems

    Building applications and information systems increasingly means dealing with concurrency and faults stemming from the distribution of system components. Atomic transactions are a well-known method for transferring the responsibility for handling concurrency and faults from developers to the software's execution environment, but they incur considerable execution overhead. This dissertation investigates methods that shift some of the burden of concurrency control into the network layer, to reduce response times and increase throughput. It anticipates future programmable network devices enabling customized high-performance network protocols. We propose Atomic Transfer (AT), a distributed algorithm to prevent race conditions due to messages crossing on a path of network switches. Switches check request messages for conflicts with response messages traveling in the opposite direction. Conflicting requests are dropped, sparing the request's receiving host from detecting and handling the conflict. AT is designed to perform well under high data contention, as the concurrency control effort is balanced across the network instead of being handled by the contended endpoint hosts themselves. We use AT as the basis for a new optimistic transactional cache consistency algorithm supporting the execution of atomic applications caching shared data. We then present a scalable refinement, allowing hierarchical consistent caches with predictable performance despite high data update rates. We give detailed I/O Automata models of our algorithms along with correctness proofs. We begin with a simplified model, assuming static network paths and no message loss, and then refine it to support dynamic network paths and safe handling of message loss. Finally, we present a trie-based data structure for accelerating conflict checking on switches, with benchmarks suggesting the feasibility of our approach from a performance standpoint.
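
    As a simplified illustration of the conflict check at the heart of AT, the sketch below models a single switch that tracks the keys of responses currently crossing it and drops any opposite-direction request touching those keys. The set-based check stands in for the trie-based structure the dissertation actually benchmarks, and the interface is invented for illustration.

```python
# Simplified model of the Atomic Transfer conflict check at one switch:
# requests heading toward a data host are dropped if they touch keys of
# responses currently crossing in the opposite direction. The set-based
# check stands in for the dissertation's trie-based structure, and the
# interface is invented for illustration.

class ATSwitch:
    def __init__(self) -> None:
        self.inflight_response_keys: set[str] = set()

    def response_entered(self, keys: set[str]) -> None:
        """A response carrying these keys enters the switch."""
        self.inflight_response_keys |= keys

    def response_departed(self, keys: set[str]) -> None:
        """The response has passed beyond this switch."""
        self.inflight_response_keys -= keys

    def forward_request(self, keys: set[str]) -> bool:
        """True: forward the request. False: drop it as conflicting."""
        return self.inflight_response_keys.isdisjoint(keys)

sw = ATSwitch()
sw.response_entered({"x"})             # a response for key "x" is in flight
print(sw.forward_request({"x", "y"}))  # False: conflict, request dropped
print(sw.forward_request({"z"}))       # True: no conflict, forwarded
```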