
    The earlier the better: a theory of timed actor interfaces

    Programming embedded and cyber-physical systems requires attention not only to functional behavior and correctness, but also to non-functional aspects, specifically timing and performance constraints. A structured, compositional, model-based approach based on stepwise refinement and abstraction techniques can support the development process, increase its quality and reduce development time through automation of synthesis, analysis or verification. For this purpose, we introduce in this paper a general theory of timed actor interfaces. Our theory supports a notion of refinement that is based on the principle of worst-case design that permeates the world of performance-critical systems. This is in contrast with the classical behavioral and functional refinements based on restricting or enlarging sets of behaviors. An important feature of our refinement is that it allows time-deterministic abstractions to be made of time-non-deterministic systems, improving efficiency and reducing the complexity of formal analysis. We also show how our theory relates to, and can be used to reconcile, a number of existing time and performance models, and how their established theories can be exploited to represent and analyze interface specifications and refinement steps.
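    To make the worst-case flavor of this refinement concrete, the sketch below models an interface as a per-port worst-case bound on when each output token is produced and checks the "earlier is better" direction of refinement. The names, the single-bound shape and the example figures are illustrative assumptions, not the paper's formal actor-interface definitions.

        # Illustrative sketch only: a timed interface is modelled as, per output
        # port, a worst-case bound on the production time of the n-th token.
        from typing import Callable, Dict

        Interface = Dict[str, Callable[[int], float]]  # port -> worst-case time of n-th token

        def refines(concrete: Interface, abstract: Interface, horizon: int = 100) -> bool:
            """'The earlier the better': concrete refines abstract if every shared
            output port produces each token no later than the abstract bound."""
            for port, bound in abstract.items():
                impl = concrete.get(port)
                if impl is None:
                    return False
                if any(impl(n) > bound(n) for n in range(1, horizon + 1)):
                    return False
            return True

        # A periodic specification (one token every 10 ms) and a faster,
        # time-deterministic worst-case abstraction of an implementation.
        spec = {"out": lambda n: 10.0 * n}
        impl = {"out": lambda n: 8.0 * n + 1.5}
        print(refines(impl, spec))  # True: outputs are always at least as early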

    Multipath streaming: fundamental limits and efficient algorithms

    We investigate streaming over multiple links. A file is split into small units called chunks that may be requested on the various links according to some policy, and received after some random delay. After a start-up time called the pre-buffering time, received chunks are played at a fixed speed. There is starvation if the chunk to be played has not yet arrived. We provide lower bounds (fundamental limits) on the starvation probability of any policy. We further propose simple, order-optimal policies that require no feedback. For general delay distributions, we provide tractable upper bounds on the starvation probability of the proposed policies, allowing the pre-buffering time to be selected appropriately. We specialize our results to: (i) links that employ CSMA or opportunistic scheduling at the packet level, (ii) links shared with a primary user, and (iii) links that use fair rate sharing at the flow level. We consider a generic model so that our results give insight into the design and performance of media streaming over (a) wired networks with several paths between the source and destination, (b) wireless networks featuring spectrum aggregation and (c) multi-homed wireless networks. Comment: 24 pages.
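    The setting can be illustrated with a small Monte Carlo sketch of a feedback-free policy: chunks are assigned to links round-robin, each link fetches its chunks sequentially with random delays, and playback starts after the pre-buffering time. The round-robin rule, the exponential delays and all parameters are assumptions for illustration, not the paper's policies or bounds.

        import random

        def starvation_prob(n_chunks=200, link_rates=(1.0, 0.5), prebuffer=5.0,
                            play_period=1.0, runs=2000, seed=0):
            rng = random.Random(seed)
            n_links = len(link_rates)
            starved = 0
            for _ in range(runs):
                clock = [0.0] * n_links            # time at which each link becomes free
                arrival = [0.0] * n_chunks
                for i in range(n_chunks):          # static round-robin assignment, no feedback
                    l = i % n_links
                    clock[l] += rng.expovariate(link_rates[l])
                    arrival[i] = clock[l]
                # starvation: some chunk arrives after its playback deadline
                if any(arrival[i] > prebuffer + i * play_period for i in range(n_chunks)):
                    starved += 1
            return starved / runs

        print(starvation_prob(prebuffer=2.0))
        print(starvation_prob(prebuffer=20.0))     # longer pre-buffering, fewer starvations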

    Classification and Recovery of Radio Signals from Cosmic Ray Induced Air Showers with Deep Learning

    Radio emission from air showers enables measurements of cosmic particle kinematics and identity. The radio signals are detected in broadband megahertz antennas amid continuous background noise. We present two deep learning concepts and their performance when applied to simulated data. The first network classifies time traces as signal or background. We achieve a true positive rate of about 90% for signal-to-noise ratios larger than three, with a false positive rate below 0.2%. The other network is used to clean the time trace from background and to recover the radio time trace originating from an air shower. Here we achieve a resolution in the energy contained in the trace of about 20%, without a bias, for 80% of the traces with a signal. The obtained frequency spectrum is cleaned of radio-frequency interference signals and shows the expected shape. Comment: 20 pages, 13 figures, resubmitted to JINST.
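    A minimal sketch of the first task, a binary signal-vs-background classifier over time traces, is shown below using tf.keras. The trace length, layer sizes, the injected toy pulse and the training setup are illustrative assumptions, not the architectures evaluated in the paper.

        import numpy as np
        import tensorflow as tf

        TRACE_LEN = 1024                                     # samples per time trace (assumed)

        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(TRACE_LEN, 1)),
            tf.keras.layers.Conv1D(16, 9, activation="relu"),
            tf.keras.layers.MaxPooling1D(4),
            tf.keras.layers.Conv1D(32, 9, activation="relu"),
            tf.keras.layers.GlobalAveragePooling1D(),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # P(trace contains a signal)
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=[tf.keras.metrics.AUC(name="auc")])

        # Toy stand-in data: pure noise traces vs. noise plus a short pulse.
        rng = np.random.default_rng(0)
        x = rng.normal(size=(512, TRACE_LEN, 1)).astype("float32")
        y = rng.integers(0, 2, size=512).astype("float32")
        x[y == 1, 500:520, 0] += 3.0                         # inject a pulse into the signal class
        model.fit(x, y, epochs=2, batch_size=64, verbose=0)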

    Cache policies for cloud-based systems: To keep or not to keep

    In this paper, we study cache policies for cloud-based caching. Cloud-based caching uses cloud storage services such as Amazon S3 as a cache for data items that would otherwise have to be recomputed. Cloud-based caching departs from classical caching: cloud resources are potentially infinite and are only paid for when used, while classical caching relies on a fixed storage capacity whose main monetary cost comes from the initial investment. To deal with this new context, we design and evaluate a new caching policy that minimizes the overall cost of a cloud-based system. The policy takes into account the frequency of consumption of an item and the cloud cost model. We show that this policy is easier to operate, that it scales with demand, and that it outperforms classical policies managing a fixed capacity. Comment: Proceedings of IEEE International Conference on Cloud Computing 2014 (CLOUD 14).
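    The keep-or-discard trade-off can be sketched as a simple cost comparison: retain an item only while its expected recomputation cost over a planning horizon exceeds the cost of storing it. The cost figures, the horizon and the sliding-window rate estimator below are assumptions, not the policy actually proposed in the paper.

        import time
        from dataclasses import dataclass, field

        @dataclass
        class ItemStats:
            size_gb: float
            recompute_cost: float                # cost to regenerate the item on a miss
            request_times: list = field(default_factory=list)

        def request_rate(stats, window_s, now):
            """Requests per second over a sliding window (frequency of consumption)."""
            return sum(1 for t in stats.request_times if now - t <= window_s) / window_s

        def keep(stats, price_gb_month=0.023, horizon_s=30 * 24 * 3600,
                 window_s=24 * 3600, now=None):
            now = time.time() if now is None else now
            storage_cost = stats.size_gb * price_gb_month * horizon_s / (30 * 24 * 3600)
            miss_cost = request_rate(stats, window_s, now) * horizon_s * stats.recompute_cost
            return miss_cost > storage_cost      # keep only if recomputing would cost more

        item = ItemStats(size_gb=2.0, recompute_cost=0.50,
                         request_times=[time.time() - 3600])
        print(keep(item))                        # True: ~1 request/day beats storing 2 GB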

    Reconstructing the cosmic-ray energy from the radio signal measured in one single station

    Short radio pulses can be measured from showers of both high-energy cosmic rays and neutrinos. While several antenna stations are commonly needed to reconstruct the energy of an air shower, we describe a novel method that relies on the radio signal measured in one antenna station only. Exploiting a broad frequency bandwidth of 80-300 MHz, we obtain a statistical energy resolution of better than 15% on a realistic Monte Carlo set. This method is both a step towards energy reconstruction from the radio signal of neutrino-induced showers and a promising tool for cosmic-ray radio arrays. Especially for hybrid arrays, where the air shower geometry is provided by an independent detector, this method provides a precise handle on the energy of the shower even with a sparse array.

    Unravelling the Impact of Temporal and Geographical Locality in Content Caching Systems

    To assess the performance of caching systems, a proper process describing the content requests generated by users needs to be defined. Starting from the analysis of traces of YouTube video requests collected inside operational networks, we identify the characteristics of real traffic that need to be represented and those that can instead be safely neglected. Based on our observations, we introduce a simple, parsimonious traffic model, named the Shot Noise Model (SNM), that allows us to capture the temporal and geographical locality of content popularity. The SNM is sufficiently simple to be effectively employed in both analytical and scalable simulative studies of caching systems. We demonstrate this by analytically characterizing the performance of the LRU caching policy under the SNM, for both a single cache and a network of caches. With respect to the standard Independent Reference Model (IRM), some paradigmatic shifts concerning the impact of various traffic characteristics on cache performance clearly emerge from our results. Comment: 14 pages, 11 figures, 2 appendices.
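    The flavor of such a model can be sketched by generating a shot-noise-like request trace, where each content arrives as a Poisson "shot", draws a request volume, and spreads its requests over an exponentially shaped lifetime, and then feeding it to an LRU cache. The distributions and parameters below are assumptions, not the paper's fitted values or analytical results.

        import random
        from collections import OrderedDict

        def snm_trace(arrival_rate=5.0, mean_volume=20.0, lifetime=50.0,
                      horizon=10_000.0, seed=0):
            rng = random.Random(seed)
            events, t, content_id = [], 0.0, 0
            while True:
                t += rng.expovariate(arrival_rate)          # new content appears (the "shot")
                if t > horizon:
                    break
                n_req = max(1, int(rng.expovariate(1.0 / mean_volume)))
                for _ in range(n_req):                      # requests within its lifetime
                    events.append((t + rng.expovariate(1.0 / lifetime), content_id))
                content_id += 1
            events.sort()
            return [cid for _, cid in events]

        def lru_hit_ratio(trace, capacity):
            cache, hits = OrderedDict(), 0
            for cid in trace:
                if cid in cache:
                    hits += 1
                    cache.move_to_end(cid)
                else:
                    cache[cid] = True
                    if len(cache) > capacity:
                        cache.popitem(last=False)           # evict the least recently used item
            return hits / len(trace)

        trace = snm_trace()
        print(lru_hit_ratio(trace, capacity=100))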

    Analytic real-time analysis and timed automata: a hybrid methodology for the performance analysis of embedded real-time systems

    This paper presents a compositional and hybrid approach for the performance analysis of distributed real-time systems. The developed methodology abstracts system components either by flow-oriented, purely analytic descriptions or by state-based models in the form of timed automata. The interaction among the heterogeneous components is modeled by streams of discrete events. In total this yields a hybrid framework for the compositional analysis of embedded systems. It supplements contemporary techniques for the following reasons: (a) state space explosion, as intrinsic to formal verification, is limited to the level of isolated components; (b) computed performance metrics such as buffer sizes, delays and utilization rates are not overly pessimistic, because coarse-grained analytic models are used only for components that conform to the stateless model of computation. To demonstrate the usefulness of the presented ideas, a corresponding tool-chain has been implemented. It is used to investigate the performance of a two-stage computing system, where one stage exhibits state-dependent behavior that is only coarsely captured by a purely analytic and stateless component abstraction. Finally, experiments are performed to ascertain the scalability and the accuracy of the proposed approach.
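    The analytic, stateless side of such a framework can be illustrated in the style of Real-Time Calculus: the worst-case delay of a component is bounded by the maximal horizontal distance between its upper arrival curve and lower service curve. The discretization and the example curves below are assumptions for illustration, not the models used by the tool-chain described above.

        import math

        def delay_bound(alpha_u, beta_l, horizon, step=0.1):
            """Max over t of the smallest tau with alpha_u(t) <= beta_l(t + tau)."""
            worst, t = 0.0, 0.0
            while t <= horizon:
                tau = 0.0
                while beta_l(t + tau) < alpha_u(t):
                    tau += step
                worst = max(worst, tau)
                t += step
            return worst

        period, jitter, wcet = 10.0, 4.0, 2.0
        alpha_u = lambda t: wcet * math.ceil((t + jitter) / period)   # demand of a jittery periodic stream
        beta_l = lambda t: 0.5 * max(0.0, t - 3.0)                    # TDMA-like lower service curve
        print(delay_bound(alpha_u, beta_l, horizon=200.0))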