10,423 research outputs found

    GLive: The Gradient overlay as a market maker for mesh-based P2P live streaming

    Peer-to-Peer (P2P) live video streaming over the Internet is becoming increasingly popular, but it is still plagued by high playback latency and intermittent playback. This paper presents GLive, a distributed market-based solution that builds a mesh overlay for P2P live streaming. The mesh overlay is constructed such that (i) nodes with higher upload bandwidth are located closer to the media source, and (ii) nodes with similar upload bandwidth become neighbours. We introduce a market-based approach that matches nodes willing and able to share the stream with one another. However, market-based approaches converge slowly on random overlay networks, so we improve the rate of convergence by adapting our market-based algorithm to exploit the clustering of nodes with similar upload bandwidths in our mesh overlay. We address the problem of free-riding by having nodes preferentially upload more of the stream to the best uploaders. We compare GLive with our previous tree-based streaming protocol, Sepidar, and with NewCoolstreaming in simulation; our results show significantly improved playback continuity and reduced playback latency.
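    As an illustration of the market idea, the minimal Python sketch below (using hypothetical names such as Node and assign_upload_slots; this is not the paper's actual algorithm) shows a node auctioning its limited upload slots to the requesting peers that themselves upload the most, which simultaneously rewards strong uploaders and starves free-riders:

```python
# Toy sketch of market-based partner selection in the spirit of GLive
# (hypothetical simplification, not the paper's algorithm or code).
from dataclasses import dataclass, field


@dataclass
class Node:
    node_id: int
    upload_bw: float                      # upload bandwidth in stream units per round
    upload_slots: int                     # number of partners this node will serve
    children: list = field(default_factory=list)


def assign_upload_slots(provider: Node, requesters: list) -> list:
    """Give the provider's upload slots to the strongest uploaders first."""
    ranked = sorted(requesters, key=lambda n: n.upload_bw, reverse=True)
    provider.children = ranked[:provider.upload_slots]
    return provider.children


if __name__ == "__main__":
    source = Node(node_id=0, upload_bw=100.0, upload_slots=3)
    peers = [Node(node_id=i, upload_bw=bw, upload_slots=2)
             for i, bw in enumerate([10.0, 40.0, 5.0, 25.0], start=1)]
    print([n.node_id for n in assign_upload_slots(source, peers)])  # -> [2, 4, 1]
```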

    Efficient and robust adaptive consensus services based on oracles


    High-Throughput Computing on High-Performance Platforms: A Case Study

    The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan, a DOE leadership computing facility, in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next-generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons on how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.
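    A quick back-of-the-envelope check of the quoted scale (a sketch assuming the 52M core-hours are spread evenly over the year):

```python
# Roughly how many cores does 52M core-hours per year keep busy around the clock?
CORE_HOURS_PER_YEAR = 52_000_000
HOURS_PER_YEAR = 365 * 24                      # 8760
print(CORE_HOURS_PER_YEAR / HOURS_PER_YEAR)    # ~5900 continuously occupied cores
```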

    NASA/ESA CV-990 Spacelab simulation. Appendix A: The experiment operator

    A joint NASA/ESA endeavor was established to conduct an extensive Spacelab simulation using the NASA CV-990 airborne laboratory. The scientific payload was selected to perform studies in upper atmospheric physics and infrared astronomy, with principal investigators from France, the Netherlands, England, and several groups from the United States. Two experiment operators from Europe and two from the U.S. were selected to live aboard the aircraft, along with a mission manager, for a six-day period and operate the experiments on behalf of the principal scientists. This appendix discusses the experiment operators and their relationship to the joint mission under the following general headings: selection criteria, training programs, and performance. The performance of the proxy operators was assessed in terms of adequacy of training, amount of scientific data obtained, quality of data obtained, and reactions to problems that arose in experiment operation.

    An eventually perfect failure detector in a high-availability scenario

    Modern distributed systems have been increasing in complexity and dynamism due to the heterogeneity of the system execution environment, different network technologies, online repairs, frequent updates and upgrades, and the addition or removal of system components. Such complexity has raised operational and maintenance costs and triggered efforts to reduce them while improving reliability. Availability is the ratio of uptime to total time of a system. A highly available system, that is, a system with at least 99.999% availability, poses a challenge in maintaining such levels of uptime. Prior work shows that, by using system state monitoring and fault management with failure detectors, it is possible to increase system availability. The main objective of this work is to develop an eventually perfect failure detector to improve the availability of a database system through fault-tolerance methods. Such a system was developed and tested in a proposed high-availability database access infrastructure. Final results show that it is possible to achieve performance and availability improvements by using, respectively, replication and a failure detector.
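    For context, a common way to realise an eventually perfect failure detector (◇P) in a partially synchronous system is heartbeats with an adaptive timeout; the minimal Python sketch below illustrates that idea under that assumption (class and method names are hypothetical, not the thesis implementation):

```python
# Minimal sketch of an eventually perfect failure detector (heartbeats +
# adaptive timeout). The timeout grows whenever a suspicion turns out to be
# false, so correct-but-slow peers eventually stop being suspected.
import time


class EventuallyPerfectFD:
    def __init__(self, peers, initial_timeout=1.0, backoff=0.5):
        self.timeout = {p: initial_timeout for p in peers}       # per-peer timeout (s)
        self.last_heartbeat = {p: time.monotonic() for p in peers}
        self.suspected = set()
        self.backoff = backoff                                   # relax a too-eager timeout

    def on_heartbeat(self, peer):
        """Record a heartbeat; revoke a false suspicion and relax its timeout."""
        self.last_heartbeat[peer] = time.monotonic()
        if peer in self.suspected:
            self.suspected.discard(peer)        # suspicion was premature
            self.timeout[peer] += self.backoff  # be more patient next time

    def check(self):
        """Suspect every peer whose last heartbeat is older than its timeout."""
        now = time.monotonic()
        for peer, last in self.last_heartbeat.items():
            if now - last > self.timeout[peer]:
                self.suspected.add(peer)
        return set(self.suspected)
```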

    Design and discrete event simulation of power and free handling systems

    Effective manufacturing systems design and implementation has become increasingly critical as manufacturing product lead times shorten, with a consequent influence on engineering projects. Tools and methodologies that assist the design team must be both manageable and efficient to be successful. Modelling, whether with analytical and mathematical models or with computer-assisted simulation, is used to accomplish design objectives. This thesis reviews the use of analytical and discrete event computer simulation models applied to the design of automated power and free handling systems, using actual case studies to create and support a practical approach to the design and implementation of these types of systems. The IDEF process mapping approach is used to encompass these design tools and system requirements and to recommend a generic process methodology for power and free systems design. The case studies consisted of three actual installations within the Philips Components Ltd (PCL) facility in Durham, a manufacturer of television tubes. Power and free conveyor systems at PCL have assumed functions beyond those of standard conveyor systems, ranging from stock handling and buffering to type sorting and flexible product routing. In order to meet the demands of this flexible manufacturing strategy, designing a system that can meet the production objectives is critical. Design process activities and engineering considerations for the three projects were reviewed and evaluated to capture the generic methodologies necessary for future design success. Further, the studies were intended to identify both general and specific criteria for simulating power and free conveyor handling systems, and the ingredients necessary for successful discrete event simulation. The automated handling systems were used to prove certain aspects of building, using and analysing simulation models in relation to their anticipated benefits, including an evaluation of the factors necessary to ensure their realisation. While there exists a multitude of designs for power and free conveyor systems, based on user requirements and proprietary equipment technology, the principles of designing and implementing a system remain generic. Although specific technology can influence detailed design, a common, consistent approach to design activities was a proven requirement in all cases. Additionally, it was observed that no one design tool was sufficient to ensure maximum system success; a combination of both analytical and simulation methods was necessary to adequately optimise the systems studied, given unique and varying project constraints. It followed that the level of application of the two approaches was directly dependent on the initial engineering project objectives and the ability to accurately identify system requirements.
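    To make the discrete event simulation aspect concrete, the sketch below is a minimal, hypothetical Python event-list model of a power and free buffer feeding a single station (the thesis used purpose-built simulation tools, and the arrival and process times here are invented for illustration):

```python
# Tiny discrete-event sketch: carriers enter a power and free buffer at fixed
# intervals and are released to a downstream station with a fixed process time.
# The event list drives the simulation clock; the output is the peak buffer
# occupancy, the kind of sizing question such models are built to answer.
import heapq

ARRIVAL_INTERVAL = 30.0     # seconds between carriers entering the buffer
PROCESS_TIME = 45.0         # seconds the downstream station holds a carrier
SIM_END = 8 * 3600.0        # simulate one eight-hour shift


def simulate():
    events = [(0.0, "arrival")]          # (time, kind), ordered by time
    buffer_level = 0
    station_busy_until = 0.0
    max_buffer = 0

    while events:
        t, kind = heapq.heappop(events)
        if t > SIM_END:
            break
        if kind == "arrival":
            buffer_level += 1
            heapq.heappush(events, (t + ARRIVAL_INTERVAL, "arrival"))
        # release a carrier whenever the station is free and the buffer is not empty
        if buffer_level > 0 and t >= station_busy_until:
            buffer_level -= 1
            station_busy_until = t + PROCESS_TIME
            heapq.heappush(events, (station_busy_until, "station_free"))
        max_buffer = max(max_buffer, buffer_level)

    return max_buffer


if __name__ == "__main__":
    # With these figures the station is the bottleneck, so the peak shows the
    # buffer capacity the line would need (or that the station must be sped up).
    print("peak buffer occupancy:", simulate())
```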