
    Clustering Techniques: A solution for e-business

    The purpose of this thesis was to provide the best clustering solution for the Archipelago web site project, which would have been part of the Central Baltic Interreg IV programme 2007-2013. The programme is a collaboration between the central Baltic regions of Finland (including the Åland Islands), Sweden, and Estonia. A literature review of articles and research on clustering techniques for the different sections of the project produced the findings of this document. Clustering was needed for the web servers and for the underlying database implementation. In addition, an operating system had to be chosen for all servers in both sections as part of the best clustering solution. Implementing OSI layer 7 clustering for the web server cluster, MySQL clustering for the database, and the Linux operating system on all servers would have provided the best solution for the Archipelago web site. This implementation would have provided unlimited scalability, availability, and high performance for the web site. It is also the most cost-effective solution because it would utilize commodity hardware.
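
    As a rough illustration of what OSI layer 7 (application-level) clustering of the web tier means in practice, the Python sketch below dispatches requests to different backend pools based on the URL path; the backend addresses and the routing rule are purely hypothetical and not taken from the thesis.

        from itertools import cycle

        # Hypothetical backend pools; a real deployment would list the cluster's own nodes.
        POOLS = {
            "/static": cycle(["10.0.0.11:80", "10.0.0.12:80"]),  # static-content servers
            "/":       cycle(["10.0.0.21:80", "10.0.0.22:80"]),  # dynamic application servers
        }

        def route(path: str) -> str:
            """Content-aware (layer 7) dispatch: pick a pool by URL prefix,
            then round-robin inside the pool."""
            for prefix, pool in POOLS.items():
                if prefix != "/" and path.startswith(prefix):
                    return next(pool)
            return next(POOLS["/"])

        if __name__ == "__main__":
            for p in ["/static/logo.png", "/booking/new", "/static/map.js"]:
                print(p, "->", route(p))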

    Group-based replication of on-line transaction processing servers

    Several techniques for database replication using group communication have recently been proposed, namely the Database State Machine, Postgres-R, and the NODO protocol. Although all rely on a totally ordered multicast for consistency, they differ substantially in how the multicast is used. This results in different performance trade-offs that are hard to compare, as each protocol is presented using a different load scenario and evaluation method. In this paper we evaluate the suitability of such protocols for replication of On-Line Transaction Processing (OLTP) applications in clusters of servers and over wide area networks. This is achieved by implementing them on a common infrastructure and by using a standard workload. The results allow us to select the best protocol regarding performance and scalability in a demanding but realistic usage scenario. The STRONGRE project (POSI/CHS/41285/2001) was funded by the Fundação para a Ciência e a Tecnologia (FCT).
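
    All three protocols rely on a totally ordered multicast; the Python sketch below is a minimal, in-memory model (not any of the actual protocols) of certification-based replication: every replica receives write sets in the same total order and commits a transaction only if no transaction that committed after it started wrote an item it read, so all replicas reach the same decision independently.

        from dataclasses import dataclass

        @dataclass
        class Txn:
            start_seq: int   # total-order position observed when the transaction started
            read_set: set
            write_set: dict  # item -> new value

        class Replica:
            """Write sets are delivered to every replica in the same total order
            (simulated here by iterating over one shared delivery list)."""
            def __init__(self):
                self.db = {}
                self.committed = []  # (commit_seq, written_keys) history

            def certify(self, txn: Txn, commit_seq: int) -> bool:
                # Abort if a transaction committed after `txn` started wrote something it read.
                for seq, keys in self.committed:
                    if seq > txn.start_seq and keys & txn.read_set:
                        return False
                self.db.update(txn.write_set)
                self.committed.append((commit_seq, set(txn.write_set)))
                return True

        # Two replicas applying the same totally ordered delivery sequence.
        replicas = [Replica(), Replica()]
        delivery = [Txn(0, {"x"}, {"x": 1}), Txn(0, {"x"}, {"x": 2})]  # conflicting transactions
        for seq, txn in enumerate(delivery, start=1):
            outcomes = {r.certify(txn, seq) for r in replicas}
            print(f"txn {seq}: outcome at all replicas = {outcomes}")  # identical everywhere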

    Improving Parallel I/O Performance Using Interval I/O

    Today's most advanced scientific applications run on large clusters consisting of hundreds of thousands of processing cores, access state-of-the-art parallel file systems that allow files to be distributed across hundreds of storage targets, and utilize advanced interconnection systems with theoretical I/O bandwidths of hundreds of gigabytes per second. Despite these advanced technologies, such applications often fail to obtain a reasonable proportion of the available I/O bandwidth. The reasons for this poor I/O performance include the noncontiguous access patterns used in scientific computing, contention due to false sharing, and the somewhat finicky nature of parallel file system performance. We argue that a more fundamental cause of the problem is the legacy view of a file as a linear sequence of bytes. To address these issues, we introduce a novel approach to parallel I/O called Interval I/O, which uses application access patterns to partition a file into a series of intervals that serve as the fundamental unit for subsequent I/O operations. This approach provides superior performance for the noncontiguous access patterns frequently used by scientific applications, reduces false contention and the unnecessary serialization it causes, and significantly increases the performance of atomic-mode operations. Finally, the Interval I/O approach includes a technique for supporting parallel I/O across cooperating applications. We provide a prototype implementation of our Interval I/O system and use it to demonstrate performance improvements of as much as 1000% over ROMIO on several common benchmarks.
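
    To make the notion of intervals concrete, the short Python sketch below (a hypothetical simplification, not the prototype described in the paper) coalesces the noncontiguous (offset, length) requests of an access pattern into disjoint intervals that could then serve as the unit for locking and data placement.

        def build_intervals(requests):
            """Merge noncontiguous (offset, length) accesses into disjoint intervals.

            `requests` is a list of (offset, length) pairs collected from the
            application's access pattern; overlapping or adjacent byte ranges are
            coalesced so each resulting interval can be handled (locked, placed on
            a storage target) independently of the others.
            """
            spans = sorted((off, off + length) for off, length in requests)
            intervals = []
            for start, end in spans:
                if intervals and start <= intervals[-1][1]:
                    intervals[-1][1] = max(intervals[-1][1], end)  # coalesce
                else:
                    intervals.append([start, end])
            return [tuple(i) for i in intervals]

        # Example: strided accesses from two processes sharing one file.
        pattern = [(0, 4096), (8192, 4096), (4096, 4096), (20480, 1024)]
        print(build_intervals(pattern))  # [(0, 12288), (20480, 21504)]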

    eHDDP: Enhanced Hybrid Domain Discovery Protocol for network topologies with both wired/wireless and SDN/non-SDN devices

    Handling both wired and wireless devices efficiently in SDN networks is still an open issue. eHDDP is an enhanced version of the Hybrid Domain Discovery Protocol (HDDP) that allows the SDN control plane to discover and manage hybrid topologies composed of both SDN and non-SDN devices with wired and/or wireless interfaces, thus opening a path for the integration of IoT and SDN networks. Moreover, the proposal is able to detect both unidirectional and bidirectional links between wireless devices. eHDDP has been thoroughly evaluated in different scenarios and exhibits good scalability, since the number of required messages is proportional to the number of links in the network topology. The measured discovery and processing times, in the range of hundreds of milliseconds, make it possible to support scenarios with low-mobility devices. Supported by the Comunidad de Madrid and the Junta de Comunidades de Castilla-La Mancha.
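
    As a rough, non-normative model of why the message count grows with the number of links, the Python sketch below floods one discovery message per directed link of a small hypothetical topology and marks a link bidirectional only when it is observed in both directions; it illustrates the idea, not the actual eHDDP packet formats or procedures.

        # Hypothetical topology: directed adjacency (a wireless link may work one way only).
        adj = {
            "sdn1": ["dev_a", "dev_b"],
            "dev_a": ["sdn1", "dev_b"],
            "dev_b": ["sdn1"],          # dev_b hears sdn1, but dev_a -> dev_b is one-way
        }

        def discover(root):
            """Flood one discovery message per directed link and collect link reports."""
            seen, frontier, reports, messages = {root}, [root], [], 0
            while frontier:
                node = frontier.pop()
                for neigh in adj.get(node, []):
                    messages += 1                  # one message per link traversal
                    reports.append((node, neigh))  # the link is reported to the controller
                    if neigh not in seen:
                        seen.add(neigh)
                        frontier.append(neigh)
            return reports, messages

        reports, messages = discover("sdn1")
        links = set(reports)
        for a, b in sorted(links):
            kind = "bidirectional" if (b, a) in links else "unidirectional"
            print(f"{a} -> {b}: {kind}")
        print("discovery messages:", messages)  # proportional to the number of links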

    An Efficient Network API for in-Kernel Applications in Clusters

    Running parallel applications on clusters with high-speed local networks requires fast communication between computing nodes, but also low-latency and high-bandwidth file access. However, the application programming interfaces of high-speed local networks were designed for MPI communication and do not always meet the requirements of other applications, such as distributed file systems. In this paper, we explore several solutions to improve the use of high-speed networks by in-kernel applications. Distributed file systems implemented on top of the GM interface of Myrinet are first examined to demonstrate how hard it is to achieve efficient interaction between such applications and the network. We then propose solutions to simplify and improve this interaction and integrate them into the kernel interface of MX, the new Myrinet message-passing layer. Performance comparisons between MX and GM, and their use in both a distributed file system and a zero-copy protocol, show clear improvements. Moreover, we are able to improve the performance of the flexible kernel API we designed in MX, which allows some intermediate copies to be removed.
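
    The sketch below is a toy Python model of tag-matched message delivery, the style of interface that message-passing layers such as MX expose to clients like distributed file systems; it is emphatically not the real MX or GM API, only an illustration of how pre-posted receives let incoming data be placed directly in the consumer's buffer instead of being copied from an intermediate queue later.

        from collections import deque

        class MatchingEndpoint:
            """Toy tag-matching endpoint: receives posted in advance are matched
            against incoming messages by tag; unmatched messages wait in an
            'unexpected' queue and require an extra copy when finally matched."""
            def __init__(self):
                self.posted = deque()      # (tag, buffer) receive requests posted in advance
                self.unexpected = deque()  # messages that arrived before a matching receive

            def post_recv(self, tag, buffer):
                for i, (msg_tag, payload) in enumerate(self.unexpected):
                    if msg_tag == tag:
                        buffer[:len(payload)] = payload  # late match: extra copy needed
                        del self.unexpected[i]
                        return "matched-unexpected"
                self.posted.append((tag, buffer))
                return "posted"

            def deliver(self, tag, payload):
                for i, (recv_tag, buffer) in enumerate(self.posted):
                    if recv_tag == tag:
                        buffer[:len(payload)] = payload  # pre-posted: direct placement
                        del self.posted[i]
                        return "matched-posted"
                self.unexpected.append((tag, payload))
                return "queued-unexpected"

        ep = MatchingEndpoint()
        buf = bytearray(8)
        print(ep.post_recv(tag=7, buffer=buf))       # posted before the data arrives
        print(ep.deliver(tag=7, payload=b"block01"))  # lands directly in buf
        print(buf)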