
    SHStream: Self-Healing Framework for HTTP Video-Streaming

    HTTP video-streaming is the leading method for delivering video content over the Internet. This phenomenon is explained by the ubiquity of web browsers, the permeability of HTTP traffic, and the recent video technologies around HTML5. However, the inclusion of multimedia requests imposes new requirements on web servers, due to responses with lifespans that can reach dozens of minutes and timing requirements for data fragments transmitted during the response period. Consequently, web servers require real-time performance control to avoid playback outages caused by overloading and performance anomalies. We present SHStream, a self-healing framework for web servers delivering video-streaming content that provides (1) load admittance to avoid server overloading; (2) prediction of performance anomalies using online data-stream learning algorithms; (3) continuous evaluation and selection of the best algorithm for prediction; and (4) proactive recovery by migrating the server to other hosts using container-based virtualization techniques. Evaluation of our framework using several variants of Hoeffding trees and ensemble algorithms showed that, with a small number of learning instances, it is possible to achieve approximately 98% recall and 99% precision for failure predictions. Additionally, proactive failover can be performed in less than 1 second.
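    The prediction step lends itself to a short illustration. Below is a minimal sketch of prequential (test-then-train) failure prediction with a Hoeffding tree, in the spirit of the framework described above; it assumes the Python `river` data-stream library, and the monitoring feature names and values are hypothetical, not taken from the paper.

    ```python
    # Minimal sketch: online failure prediction with a Hoeffding tree.
    # Assumes the `river` library; feature names below are hypothetical.
    from river import tree, metrics

    model = tree.HoeffdingTreeClassifier()
    recall, precision = metrics.Recall(), metrics.Precision()

    def on_sample(features, failed):
        """Consume one monitoring sample: predict first, then learn."""
        y_pred = model.predict_one(features)
        if y_pred is not None:
            recall.update(failed, y_pred)      # prequential evaluation
            precision.update(failed, y_pred)
        model.learn_one(features, failed)      # update the tree online
        return y_pred                          # True => trigger proactive failover

    # One monitoring sample (hypothetical server metrics):
    on_sample({"cpu": 0.91, "active_streams": 240, "resp_ms": 870}, failed=False)
    ```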

    Reboot-based Recovery of Performance Anomalies in Adaptive Bitrate Video-Streaming Services

    Performance anomalies are a common type of failure in Internet servers. Overcoming these failures without introducing server downtime is of the utmost importance in video-streaming services, which incur large user-abandonment costs when failures occur after users have watched a significant part of a video. Reboot is the most popular and effective technique for overcoming performance anomalies, but it takes several minutes from start until the server is warmed up again and running at its full capacity. During that period, the server is unavailable or provides only limited capacity to process end-users' requests. This paper presents a recovery technique for performance anomalies in HTTP Streaming services that relies on container-based virtualization to implement an efficient multi-phase server reboot technique that minimizes service downtime. The recovery process includes an analysis of variance of request-response times to delimit the server warm-up period, after which the server is running at its full capacity. Experimental results show that the virtual-container recovery process completes in 72 seconds, in contrast with the 434 seconds required for a full operating-system recovery. Both recovery types generate service downtimes imperceptible to end-users.
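    As a rough illustration of the warm-up delimitation step, the sketch below flags the end of the warm-up period once the variance of request-response times stabilizes across consecutive windows; the window size and stabilization ratio are illustrative assumptions, not values from the paper.

    ```python
    # Minimal sketch: delimit the warm-up period via response-time variance.
    # Window size and ratio are illustrative assumptions.
    import statistics

    def warmed_up(response_times, window=30, ratio=1.5):
        """True once two consecutive windows show comparable variance."""
        if len(response_times) < 2 * window:
            return False
        prev = response_times[-2 * window:-window]
        last = response_times[-window:]
        v_prev = statistics.variance(prev)
        v_last = statistics.variance(last)
        # Warm-up is over when variance stops changing between windows.
        return max(v_prev, v_last) <= ratio * max(min(v_prev, v_last), 1e-9)

    # Feed measured response times (seconds) as they arrive; once this
    # returns True, the recovered container can take full traffic again.
    ```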

    Local and Global Superconductivity in Bismuth

    We performed magnetization M(H,T) and magnetoresistance R(T,H) measurements on powdered (grain size ~ 149 micrometers) as well as highly oriented rhombohedral (A7) bismuth (Bi) samples consisting of single-crystalline blocks of size ~ 1×1 mm² in the plane perpendicular to the trigonal c-axis. The results revealed the occurrence of (1) local superconductivity in powdered samples with Tc(0) = 8.75 ± 0.05 K, and (2) global superconductivity at Tc(0) = 7.3 ± 0.1 K in polycrystalline Bi triggered by low-resistance Ohmic contacts with silver (Ag) normal metal. The results provide evidence that the superconductivity in Bi is localized in a tiny volume fraction, probably at intergrain or Ag/Bi interfaces. The occurrence of global superconductivity observed for polycrystalline Bi, on the other hand, can be accounted for by an enhancement of the superconducting order-parameter phase stiffness induced by the normal-metal contacts, a scenario proposed in the context of the "pseudogap regime" in cuprates [E. Berg et al., PRB 78, 094509 (2008)].

    Presence of stratospheric humidity in the ozone column depletion on the west coast of South America

    The ozone-column depletion over the western coast of South America has previously been explained by the existence of winds in the area of the depletion, which cause compression and thinning of the ozone layer. However, the presence of humidity and methane transported by these winds into the stratosphere, where the ozone depletion occurs, gives evidence that these compounds also participate in the depletion of the ozone layer. These two compounds, humidity and methane, are analysed for the ozone depletion of January 1998. It is observed that when humidity fluctuates, ozone fluctuates too. A maximum of humidity corresponds to a minimum of ozone, but there is a shift in altitude between them. This shift is observed in the stratosphere and upper troposphere and corresponds to approximately 500 m. It is important to point out that El Niño was present during this event, and that the sources of methane are the Amazon forest and the Pacific Ocean. The data for this study were obtained from NASA and HALOE.
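    The reported ~500 m shift suggests a simple way to quantify the alignment between the two profiles. The sketch below (not from the paper) estimates the altitude lag that best matches humidity maxima to ozone minima by cross-correlating two vertical profiles sampled on a common altitude grid.

    ```python
    # Minimal sketch: estimate the altitude shift between a humidity maximum
    # and an ozone minimum by cross-correlating standardized profiles.
    import numpy as np

    def altitude_shift(humidity, ozone, dz_m):
        """Lag (in metres) that best aligns humidity peaks with ozone dips."""
        h = (humidity - humidity.mean()) / humidity.std()
        o = (ozone - ozone.mean()) / ozone.std()
        corr = np.correlate(h, -o, mode="full")   # -o: humidity max vs ozone min
        lag = corr.argmax() - (len(h) - 1)        # zero lag sits at index N-1
        return lag * dz_m

    # With profiles gridded every 100 m, a result near 500 would be
    # consistent with the ~500 m shift reported above.
    ```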

    Probabilistic-based discrete model for the seismic fragility assessment of masonry structures

    Classical finite-element and discrete-element strategies are computationally expensive when analysing masonry structures in the inelastic range, under seismic excitation, and considering uncertainty. Their application to the seismic fragility assessment of masonry structures through non-linear time-history analysis thus becomes a challenge. The paper addresses this difficulty by presenting an alternative probabilistic numerical strategy. The strategy couples a discrete macro-element model at the structural scale with a homogenization model at the meso-scale. Its probabilistic nature is guaranteed through forward propagation of uncertainty in loading, material, mechanical, and geometrical parameters. An incremental dynamic analysis is adopted, in which several assumptions decrease the required computational cost. The random mechanical response of masonry is provided by numerical homogenization, using Latin hypercube sampling with a non-identity correlation matrix, and only a reduced number of representative random samples are transferred to the macro-scale. The approach was applied to the seismic fragility assessment of an English-bond masonry mock-up. Its effectiveness was demonstrated, and its computational attractiveness highlighted. The results may foster its use in the seismic fragility assessment of larger structures and in the analysis of the effect of material- and geometry-based uncertainties on the stochastic dynamic response of masonry structures.
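    The correlated sampling step can be sketched compactly. The snippet below draws Latin hypercube samples and imposes a non-identity correlation matrix through a Gaussian copula; this is one common construction, assumed here for illustration, and the paper's exact procedure (e.g. Iman-Conover rank reordering) may differ. The two-parameter correlation value is hypothetical.

    ```python
    # Minimal sketch: Latin hypercube sampling with a target (non-identity)
    # correlation matrix via a Gaussian copula. Illustrative only; Cholesky
    # mixing relaxes strict LHS stratification.
    import numpy as np
    from scipy.stats import norm, qmc

    def correlated_lhs(n_samples, corr, seed=0):
        """LHS uniforms whose correlation approximates `corr`."""
        d = corr.shape[0]
        u = qmc.LatinHypercube(d=d, seed=seed).random(n_samples)  # independent LHS
        z = norm.ppf(u)                         # map to standard normals
        z = z @ np.linalg.cholesky(corr).T      # induce the target correlation
        return norm.cdf(z)                      # back to correlated uniforms

    # e.g. two masonry parameters with a hypothetical 0.6 correlation:
    corr = np.array([[1.0, 0.6], [0.6, 1.0]])
    samples = correlated_lhs(64, corr)          # shape (64, 2)
    ```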