
    Analysis of a Multimedia Stream using Stochastic Process Algebra

    It is now well recognised that the next generation of distributed systems will be distributed multimedia systems. Central to multimedia systems is quality of service, which defines the non-functional requirements on the system. In this paper we investigate how stochastic process algebra can be used to determine the quality of service properties of distributed multimedia systems. We use a simple multimedia stream as our basic example. We describe it in the Stochastic Process Algebra PEPA and then analyse whether the stream satisfies a set of quality of service parameters: throughput, end-to-end latency, jitter and error rates.
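
    As a rough illustration of the kind of analysis involved, the sketch below simulates a toy source/bounded-channel/sink stream as a continuous-time Markov chain (the kind of model a PEPA description maps to) and estimates two of the listed parameters, throughput and end-to-end latency; the rates, buffer size and structure are illustrative assumptions, not the model from the paper.

```python
import random

# Toy CTMC simulation of a source -> bounded channel -> sink stream, in the
# spirit of the CTMC a PEPA description maps to. Rates and buffer size are
# illustrative assumptions, not values from the paper.
SEND_RATE = 30.0   # frames/s offered by the source
RECV_RATE = 35.0   # frames/s drained by the sink
BUFFER = 5         # channel capacity in frames

def simulate(t_end=10_000.0):
    t, queue = 0.0, 0
    in_flight = []                     # arrival times of buffered frames (FIFO)
    delivered, latency_sum = 0, 0.0
    while t < t_end:
        enabled = []
        if queue < BUFFER:
            enabled.append(("send", SEND_RATE))
        if queue > 0:
            enabled.append(("recv", RECV_RATE))
        total = sum(rate for _, rate in enabled)
        t += random.expovariate(total)          # time to the next transition
        pick, acc = random.random() * total, 0.0
        for event, rate in enabled:
            acc += rate
            if pick <= acc:
                break
        if event == "send":
            queue += 1
            in_flight.append(t)
        else:
            queue -= 1
            delivered += 1
            latency_sum += t - in_flight.pop(0)
    print(f"throughput ~ {delivered / t_end:.1f} frames/s, "
          f"mean latency ~ {1000 * latency_sum / delivered:.1f} ms")

simulate()
```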

    Hybrid performance modelling of opportunistic networks

    We demonstrate the modelling of opportunistic networks using the process algebra stochastic HYPE. Network traffic is modelled as continuous flows, contact between nodes in the network is modelled stochastically, and instantaneous decisions are modelled as discrete events. Our model describes a network of stationary video sensors with a mobile ferry which collects data from the sensors and delivers it to the base station. We consider different mobility models and different buffer sizes for the ferries. This case study illustrates the flexibility and expressive power of stochastic HYPE. We also discuss the software that enables us to describe stochastic HYPE models and simulate them. Comment: In Proceedings QAPL 2012, arXiv:1207.055
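
    To make the hybrid flavour concrete, the sketch below combines the three ingredients named above in a toy simulation: continuous buffer filling at the sensors, stochastically timed ferry contacts, and instantaneous data hand-off. The rates, buffer capacity and topology are illustrative assumptions, and the sketch is not a stochastic HYPE model.

```python
import random

# Toy hybrid simulation in the spirit of stochastic HYPE: sensor buffers fill
# as continuous flows, ferry/sensor and ferry/base-station contacts occur
# stochastically, and data hand-off is an instantaneous discrete event.
# All rates and sizes are illustrative assumptions, not the paper's model.
SENSORS = 4
FILL_RATE = 1.0       # data units/s generated at each sensor (continuous flow)
CONTACT_RATE = 0.05   # contact rate per target (each sensor, and the base station)
BUFFER_CAP = 100.0    # sensor buffer capacity; excess data is dropped

def simulate(t_end=5_000.0):
    t, buffers = 0.0, [0.0] * SENSORS
    ferry_load = delivered = dropped = 0.0
    while t < t_end:
        dt = random.expovariate(CONTACT_RATE * (SENSORS + 1))  # next contact
        for i in range(SENSORS):                 # flows evolve between events
            new_level = buffers[i] + FILL_RATE * dt
            dropped += max(0.0, new_level - BUFFER_CAP)
            buffers[i] = min(new_level, BUFFER_CAP)
        t += dt
        if random.random() < SENSORS / (SENSORS + 1):
            i = random.randrange(SENSORS)        # contact with sensor i: collect
            ferry_load += buffers[i]
            buffers[i] = 0.0
        else:                                    # contact with the base station
            delivered += ferry_load
            ferry_load = 0.0
    print(f"delivered {delivered:.0f} units, dropped {dropped:.0f} units")

simulate()
```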

    Low-Latency Millimeter-Wave Communications: Traffic Dispersion or Network Densification?

    This paper investigates two strategies to reduce the communication delay in future wireless networks: traffic dispersion and network densification. A hybrid scheme that combines these two strategies is also considered. The probabilistic delay and effective capacity are used to evaluate performance. For probabilistic delay, the violation probability of delay, i.e., the probability that the delay exceeds a given tolerance level, is characterized in terms of upper bounds, which are derived by applying stochastic network calculus theory. In addition, to characterize the maximum affordable arrival traffic for mmWave systems, the effective capacity, i.e., the service capability with a given quality-of-service (QoS) requirement, is studied. The derived bounds on the probabilistic delay and effective capacity are validated through simulations. These numerical results show that, for a given average system gain, traffic dispersion, network densification, and the hybrid scheme exhibit different potentials to reduce the end-to-end communication delay. For instance, traffic dispersion outperforms network densification when the average system gain and arrival rate are high, but it can be the worst option otherwise. Furthermore, it is revealed that increasing the number of independent paths and/or the relay density is always beneficial, while the resulting performance gain depends jointly on the arrival rate and the average system gain. Therefore, a proper transmission scheme should be selected to optimize the delay performance, according to the given conditions on arrival traffic and system service capability.
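
    As a small illustration of the delay metric being bounded, the sketch below estimates a delay-violation probability by Monte Carlo for a single FIFO link whose service rate occasionally drops due to blockage; the parameters are illustrative assumptions, and the simulation is not the paper's stochastic network calculus analysis.

```python
import random

# Monte Carlo estimate of a delay-violation probability P(delay > d_max) for a
# single FIFO link with occasional blockage, illustrating the probabilistic
# delay metric that the paper bounds analytically. All parameters are
# illustrative assumptions, not the paper's mmWave channel model.
ARRIVAL_RATE = 500.0    # packets/s, Poisson arrivals
LOS_RATE = 1000.0       # service rate (packets/s) in line-of-sight
BLOCKED_RATE = 200.0    # service rate (packets/s) while blocked
BLOCK_PROB = 0.1        # probability a packet is served under blockage
D_MAX = 0.02            # delay tolerance: 20 ms

def violation_probability(n_packets=200_000):
    wait = prev_service = 0.0
    violations = 0
    for _ in range(n_packets):
        inter_arrival = random.expovariate(ARRIVAL_RATE)
        # Lindley recursion: waiting time of the current packet
        wait = max(0.0, wait + prev_service - inter_arrival)
        rate = BLOCKED_RATE if random.random() < BLOCK_PROB else LOS_RATE
        service = random.expovariate(rate)
        if wait + service > D_MAX:        # sojourn delay exceeds the tolerance
            violations += 1
        prev_service = service
    return violations / n_packets

print(f"P(delay > {D_MAX * 1e3:.0f} ms) ~ {violation_probability():.3f}")
```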

    A template-based methodology for the specification and automated composition of performability models

    Dependability and performance analysis of modern systems is facing great challenges: their scale is growing, they are becoming massively distributed, interconnected, and evolving. Such complexity makes model-based assessment a difficult and time-consuming task. For the evaluation of large systems, reusable submodels are typically adopted as an effective way to address the complexity and to improve the maintainability of models. When using state-based models, a common approach is to define libraries of generic submodels, and then compose concrete instances by state sharing, following predefined “patterns” that depend on the class of systems being modeled. However, such composition patterns are rarely formalized, or not even documented at all. In this paper, we address this problem using a model-driven approach, which combines a language to specify reusable submodels and composition patterns, and an automated composition algorithm. Clearly defining libraries of reusable submodels, together with patterns for their composition, allows complex models to be automatically assembled, based on a high-level description of the scenario to be evaluated. This paper provides a solution to this problem focusing on: formally defining the concept of model templates, defining a specification language for model templates, defining an automated instantiation and composition algorithm, and applying the approach to a case study of a large-scale distributed system.
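
    The sketch below gives a minimal, hypothetical illustration of the template idea: parameterised submodel templates are instantiated from a scenario description and composed by fusing shared state variables. The names and data structures are invented for illustration and do not reproduce the specification language or composition algorithm defined in the paper.

```python
from dataclasses import dataclass

# Minimal sketch of "model templates" instantiated and composed by state
# sharing. The class and field names are hypothetical illustrations, not the
# specification language or composition algorithm defined in the paper.
@dataclass
class Template:
    name: str
    parameters: list      # names that must be bound at instantiation time
    shared_states: list   # state variables exposed for composition

    def instantiate(self, instance_name, bindings):
        missing = [p for p in self.parameters if p not in bindings]
        if missing:
            raise ValueError(f"unbound parameters: {missing}")
        return {"instance": instance_name, "template": self.name,
                "bindings": bindings, "shared": list(self.shared_states)}

def compose(instances, sharing):
    """Assemble submodel instances into one model by fusing the shared state
    variables listed in `sharing` (variable -> instances that share it)."""
    return {"submodels": instances, "fused_states": dict(sharing)}

# Hypothetical scenario: two hosts and a network submodel sharing a failure state.
host = Template("Host", parameters=["fail_rate"], shared_states=["host_failed"])
net = Template("Network", parameters=["loss_prob"], shared_states=["host_failed"])
model = compose(
    [host.instantiate("hostA", {"fail_rate": 1e-4}),
     host.instantiate("hostB", {"fail_rate": 5e-5}),
     net.instantiate("net0", {"loss_prob": 0.01})],
    sharing={"host_failed": ["hostA", "hostB", "net0"]},
)
print(model["fused_states"])
```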

    Failure Mitigation in Linear, Sesquilinear and Bijective Operations On Integer Data Streams Via Numerical Entanglement

    A new roll-forward technique is proposed that recovers from any single fail-stop failure in M integer data streams (M ≥ 3) when undergoing linear, sesquilinear or bijective (LSB) operations, such as: scaling, additions/subtractions, inner or outer vector products and permutations. In the proposed approach, the M input integer data streams are linearly superimposed to form M numerically entangled integer data streams that are stored in place of the original inputs. A series of LSB operations can then be performed directly using these entangled data streams. The output results can be extracted from any M-1 entangled output streams by additions and arithmetic shifts, thereby guaranteeing robustness to a fail-stop failure in any single stream computation. Importantly, unlike other methods, the number of operations required for the entanglement, extraction and recovery of the results is linearly related to the number of the inputs and does not depend on the complexity of the performed LSB operations. We have validated our proposal on an Intel processor (Haswell architecture with AVX2 support) via convolution operations. Our analysis and experiments reveal that the proposed approach incurs only a 1.8% to 2.8% reduction in processing throughput in comparison to the failure-intolerant approach. This overhead is 9 to 14 times smaller than that of the equivalent checksum-based method. Thus, our proposal can be used in distributed systems, unreliable processor hardware, or safety-critical applications, where robustness against fail-stop failures becomes a necessity. Comment: Proc. 21st IEEE International On-Line Testing Symposium (IOLTS 2015), July 2015, Halkidiki, Greece
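
    The sketch below illustrates the underlying superposition-with-shifts idea on a toy scale: three streams of small non-negative integers are entangled so that any two entangled streams are enough to rebuild all three originals using only additions, subtractions and shifts. It is a simplified illustration, not the paper's exact entanglement, in-place storage or recovery construction.

```python
# Toy sketch of the linear-superposition idea for M = 3 streams of small
# non-negative integers: any 2 of the 3 "entangled" streams suffice to rebuild
# all the original data using only additions, subtractions and shifts. This is
# a simplified illustration, not the paper's exact entanglement/recovery
# construction (which also covers in-place storage, signed data and the LSB
# operations applied between entanglement and extraction).
M = 3
L_SHIFT = 24   # must exceed the bit-width of any stored value

def entangle(c):
    """e[i][k] = c[i][k] + (c[(i+1) % M][k] << L_SHIFT)."""
    n = len(c[0])
    return [[c[i][k] + (c[(i + 1) % M][k] << L_SHIFT) for k in range(n)]
            for i in range(M)]

def recover(e, lost):
    """Rebuild all M streams using only the entangled streams != `lost`."""
    n = len(e[0])
    rot = (lost + 1) % M                        # relabel so the lost stream is last
    ep = [e[(i + rot) % M] for i in range(M)]   # ep[M-1] is the lost one (unused)
    cp = [[0] * n for _ in range(M)]
    for k in range(n):
        # telescoping difference eliminates cp[1]: s = cp[0] - (cp[2] << 2*L_SHIFT)
        s = ep[0][k] - (ep[1][k] << L_SHIFT)
        cp[2][k] = (-s + (1 << (2 * L_SHIFT)) - 1) >> (2 * L_SHIFT)  # ceil(-s / 2^(2L))
        cp[0][k] = s + (cp[2][k] << (2 * L_SHIFT))
        cp[1][k] = (ep[0][k] - cp[0][k]) >> L_SHIFT
    return [cp[(i - rot) % M] for i in range(M)]  # undo the relabeling

data = [[7, 1000, 42], [5, 9, 65535], [123, 0, 77]]
entangled = entangle(data)
assert all(recover(entangled, lost) == data for lost in range(M))
print("recovered every stream from any 2 of the 3 entangled streams")
```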

    Fluid passage-time calculation in large Markov models

    Recent developments in the analysis of large Markov models facilitate the fast approximation of transient characteristics of the underlying stochastic process. So-called fluid analysis makes it possible to consider previously intractable models whose underlying discrete state space grows exponentially as model components are added. In this work, we show how fluid approximation techniques may be used to extract passage-time measures from performance models. We focus on two types of passage measure: passage times involving individual components, and passage times which capture the time taken for a population of components to evolve. Specifically, we show that for models of sufficient scale, passage-time distributions can be well approximated by a deterministic fluid-derived passage-time measure. Where models are not of sufficient scale, we are able to generate approximate bounds for the entire cumulative distribution function of these passage-time random variables, using moment-based techniques. Finally, we show that for some passage-time measures involving individual components the cumulative distribution function can be directly approximated by fluid techniques.
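
    As a concrete example of a fluid-derived passage-time measure, the sketch below integrates the mean-field ODEs of a toy client/server population model and reads off the time at which half of the clients have been served; the model and rates are illustrative assumptions rather than an example taken from the paper.

```python
# Sketch of a fluid-derived passage time: integrate the mean-field ODEs of a
# simple client/server population model and read off the time at which the
# "completed" population crosses a target level. The model, rates and
# populations are illustrative assumptions, not an example from the paper.
R_REQ, R_SERVE = 2.0, 1.0          # per-client request rate, per-server service rate
CLIENTS, SERVERS = 1000.0, 200.0   # initial populations

def fluid_passage_time(target_fraction=0.5, dt=1e-3, t_max=50.0):
    thinking, waiting, done, t = CLIENTS, 0.0, 0.0, 0.0
    while t < t_max:
        serve = R_SERVE * min(waiting, SERVERS)   # bounded-capacity cooperation
        request = R_REQ * thinking
        thinking -= request * dt                  # explicit Euler step
        waiting += (request - serve) * dt
        done += serve * dt
        t += dt
        if done >= target_fraction * CLIENTS:
            return t
    return None

print("fluid passage time until 50% of clients are served (s):",
      fluid_passage_time())
```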

    Population models from PEPA descriptions

    Stochastic process algebras such as PEPA have enjoyed considerable success as CTMC-based system description languages for performance evaluation of computer and communication systems. However, they have not been able to escape the problem of state space explosion, and this problem is exacerbated when other domains such as systems biology are considered. Therefore we have been investigating alternative semantics for PEPA models which give rise to a population view of the system, in terms of a set of nonlinear ordinary differential equations. This extended abstract gives an overview of this mapping.
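
    For a flavour of the mapping, the equations below show the typical shape of the resulting ODE system for a toy two-component model in which processors and resources cooperate on a shared activity; the component structure and the rates r1, r2 and s are illustrative, not an example taken from the abstract.

```latex
% Population (fluid) ODEs for a toy processor/resource model cooperating on a
% shared activity with rate r_1; the bounded-capacity cooperation appears as a
% minimum of the two participating populations.
\begin{align*}
\frac{dx_{P_0}}{dt} &= -r_1\,\min(x_{P_0},x_{R_0}) + r_2\,x_{P_1}, &
\frac{dx_{P_1}}{dt} &= \phantom{-}r_1\,\min(x_{P_0},x_{R_0}) - r_2\,x_{P_1},\\
\frac{dx_{R_0}}{dt} &= -r_1\,\min(x_{P_0},x_{R_0}) + s\,x_{R_1}, &
\frac{dx_{R_1}}{dt} &= \phantom{-}r_1\,\min(x_{P_0},x_{R_0}) - s\,x_{R_1}.
\end{align*}
```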

    Curriculum Guidelines for Undergraduate Programs in Data Science

    The Park City Math Institute (PCMI) 2016 Summer Undergraduate Faculty Program met for the purpose of composing guidelines for undergraduate programs in Data Science. The group consisted of 25 undergraduate faculty from a variety of institutions in the U.S., primarily from the disciplines of mathematics, statistics and computer science. These guidelines are meant to provide some structure for institutions planning for or revising a major in Data Science.

    Fast watermarking of MPEG-1/2 streams using compressed-domain perceptual embedding and a generalized correlator detector

    A novel technique is proposed for watermarking of MPEG-1 and MPEG-2 compressed video streams. The proposed scheme is applied directly in the domain of MPEG-1 system streams and MPEG-2 program streams (multiplexed streams). Perceptual models are used during the embedding process in order to avoid degradation of the video quality. The watermark is detected without the use of the original video sequence. A modified correlation-based detector is introduced that applies nonlinear preprocessing before correlation. Experimental evaluation demonstrates that the proposed scheme is able to withstand several common attacks. The resulting watermarking system is very fast and therefore suitable for copyright protection of compressed video.
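
    As a simplified illustration of a correlator detector with nonlinear preprocessing, the sketch below embeds a pseudo-random +/-1 watermark in a generic coefficient vector and detects it by correlating the sign of the received coefficients with the key-derived watermark. It is not the paper's MPEG compressed-domain embedding or its exact generalized correlator, and the key, strength and threshold values are illustrative assumptions.

```python
import numpy as np

# Simplified spread-spectrum-style watermark embedder and correlation detector
# with a nonlinear (sign) preprocessing step before correlation. It operates on
# a generic coefficient vector; it is not the paper's MPEG compressed-domain
# scheme, and the key, strength and threshold values are illustrative.
def watermark_for(key, size):
    # +/-1 pseudo-random sequence derived from the secret key
    return np.sign(np.random.default_rng(key).standard_normal(size))

def embed(coeffs, key, strength=2.0):
    return coeffs + strength * watermark_for(key, coeffs.size)

def detect(coeffs, key, threshold=0.05):
    z = np.sign(coeffs)                                          # nonlinear preprocessing
    stat = float(np.mean(z * watermark_for(key, coeffs.size)))   # normalized correlation
    return stat, stat > threshold

rng = np.random.default_rng(0)
host = 10.0 * rng.standard_normal(50_000)      # stand-in for transform coefficients
marked = embed(host, key=1234)
print("marked:  ", detect(marked, key=1234))   # large statistic -> watermark present
print("unmarked:", detect(host, key=1234))     # near zero       -> watermark absent
```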