156,570 research outputs found

    Modelling of the Internet Computer Protocol Architecture: the Next Generation Blockchain

    Get PDF
    The Internet Computer Protocol is described as a third-generation blockchain system that aims to provide secure and scalable distributed systems through blockchains and smart contracts. In this position paper, this innovative architecture is introduced and then discussed in view of its modelling and simulation aspects. In fact, a properly defined digital twin of the Internet Computer Protocol could help its design, development, and evaluation in terms of performance and resilience to specific security attacks. To this end, we propose a multi-level simulation model that follows an agent-based paradigm. The main issues of the modelling and simulation, and the main expected outcomes, are described and discussed.
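    The multi-level, agent-based paradigm mentioned above can be illustrated with a minimal sketch, assuming a toy model in which replica agents within a subnet agent vote on blocks under a 2/3 quorum; all names (Replica, Subnet), rates, and the quorum rule are illustrative assumptions, not details taken from the paper.

```python
import random

class Replica:
    """Agent modelling a single node that votes on incoming blocks."""
    def __init__(self, node_id, failure_prob=0.01):
        self.node_id = node_id
        self.failure_prob = failure_prob  # chance of a faulty/attacked vote

    def vote(self, block):
        # A faulty or attacked replica may reject an otherwise valid block.
        return random.random() >= self.failure_prob

class Subnet:
    """Higher-level agent: a group of replicas reaching consensus each round."""
    def __init__(self, replicas):
        self.replicas = replicas
        self.finalized = 0

    def step(self, block):
        votes = sum(r.vote(block) for r in self.replicas)
        if votes > 2 * len(self.replicas) / 3:   # assumed 2/3 BFT-style quorum
            self.finalized += 1

def simulate(rounds=1000, subnet_size=13):
    subnet = Subnet([Replica(i) for i in range(subnet_size)])
    for t in range(rounds):
        subnet.step(block=t)
    return subnet.finalized / rounds

if __name__ == "__main__":
    print(f"fraction of finalized rounds: {simulate():.3f}")
```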

    Scalable Performance Analysis of Massively Parallel Stochastic Systems

    No full text
    The accurate performance analysis of large-scale computer and communication systems is directly inhibited by an exponential growth in the state space of the underlying Markovian performance model. This is particularly true when considering massively parallel architectures such as cloud or grid computing infrastructures. Nevertheless, an ability to extract quantitative performance measures such as passage-time distributions from performance models of these systems is critical for providers of these services. Indeed, without such an ability, they remain unable to offer realistic end-to-end service level agreements (SLAs) which they can have any confidence of honouring. Additionally, this must be possible in a short enough period of time to allow many different parameter combinations in a complex system to be tested. If we can achieve this rapid performance analysis goal, it will enable service providers and engineers to determine the cost-optimal behaviour which satisfies the SLAs. In this thesis, we develop a scalable performance analysis framework for the grouped PEPA stochastic process algebra. Our approach is based on the approximation of key model quantities such as means and variances by tractable systems of ordinary differential equations (ODEs). Crucially, the size of these systems of ODEs is independent of the number of interacting entities within the model, making these analysis techniques extremely scalable. The reliability of our approach is directly supported by convergence results and, in some cases, explicit error bounds. We focus on extracting passage-time measures from performance models, since these are very commonly the terms in which a service level agreement is phrased. We design scalable analysis techniques which can handle passages defined both in terms of entire component populations and in terms of individual or tagged members of a large population. A precise and straightforward specification of a passage-time service level agreement is as important to the performance engineering process as its evaluation. This is especially true of large and complex models of industrial-scale systems. To address this, we introduce the unified stochastic probe framework. Unified stochastic probes are used to generate a model augmentation which exposes explicitly the SLA measure of interest to the analysis toolkit. In this thesis, we deploy these probes to define many detailed and derived performance measures that can be automatically and directly analysed using rapid ODE techniques. In this way, we tackle applicable problems at many levels of the performance engineering process: from specification and model representation to efficient and scalable analysis.
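    The core idea of approximating population means by a fixed-size ODE system can be sketched as follows; this is a minimal fluid-approximation example for a generic client/server model, assuming illustrative rates and a PEPA-style minimum-rate synchronisation, and is not the grouped-PEPA toolchain developed in the thesis.

```python
from scipy.integrate import solve_ivp

# Fluid (mean-field) approximation of a simple client/server model:
# clients cycle think -> wait -> think, servers cycle idle -> busy -> idle.
# The ODE system has 4 equations regardless of how many clients/servers exist.
r_think, r_serve, r_reset = 1.0, 2.0, 5.0   # illustrative rates
N_CLIENTS, N_SERVERS = 10_000, 100          # populations only enter the initial condition

def drift(t, x):
    c_think, c_wait, s_idle, s_busy = x
    # Shared service action: waiting clients and idle servers synchronise at the
    # minimum of their apparent rates (PEPA-style bounded capacity).
    sync = min(r_serve * c_wait, r_serve * s_idle)
    return [sync - r_think * c_think,        # clients returning to thinking
            r_think * c_think - sync,        # clients issuing requests
            r_reset * s_busy - sync,         # servers becoming idle again
            sync - r_reset * s_busy]         # servers becoming busy

sol = solve_ivp(drift, (0.0, 10.0), [N_CLIENTS, 0.0, N_SERVERS, 0.0])
print("approximate mean number of waiting clients at t=10:", sol.y[1, -1])
```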

    K-LITE: Learning Transferable Visual Models with External Knowledge

    Full text link
    Recent state-of-the-art computer vision systems are trained from natural language supervision, ranging from simple object category names to descriptive captions. This free form of supervision ensures high generality and usability of the learned visual models, owing to extensive data-collection heuristics that cover as many visual concepts as possible. Alternatively, learning with external knowledge about images is a promising way to leverage a much more structured source of supervision. In this paper, we propose K-LITE (Knowledge-augmented Language-Image Training and Evaluation), a simple strategy to leverage external knowledge to build transferable visual systems: in training, it enriches entities in natural language with WordNet and Wiktionary knowledge, leading to an efficient and scalable approach to learning image representations that can understand both visual concepts and the knowledge associated with them; in evaluation, the natural language is also augmented with external knowledge and then used to reference learned visual concepts (or describe new ones) to enable zero-shot and few-shot transfer of the pre-trained models. We study the performance of K-LITE on two important computer vision problems, image classification and object detection, benchmarking on 20 and 13 different existing datasets, respectively. The proposed knowledge-augmented models show significant improvement in transfer learning performance over existing methods. Comment: Preprint. The first three authors contributed equally.
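    The knowledge-enrichment step can be sketched roughly as follows, assuming WordNet glosses retrieved via NLTK and a hypothetical prompt template; the paper's actual templates and its Wiktionary integration are not reproduced here.

```python
# Sketch of the knowledge-augmentation idea: enrich a category name with its
# WordNet gloss before feeding it to the text encoder of a CLIP-style model.
# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def enrich(label: str) -> str:
    """Return the label augmented with its first WordNet definition, if any."""
    synsets = wn.synsets(label.replace(" ", "_"))
    gloss = synsets[0].definition() if synsets else ""
    # Hypothetical prompt template; the paper's exact template may differ.
    if gloss:
        return f"a photo of a {label}, which is {gloss}"
    return f"a photo of a {label}"

if __name__ == "__main__":
    for label in ["tench", "abacus", "corgi"]:
        print(enrich(label))
```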

    The Implications of Diverse Applications and Scalable Data Sets in Benchmarking Big Data Systems

    Full text link
    We now live in an era of big data, and big data applications are becoming more and more pervasive. How to benchmark data center computer systems running big data applications (in short, big data systems) is a hot topic. In this paper, we focus on measuring the performance impact of diverse applications and scalable volumes of data sets on big data systems. For four typical data analysis applications, an important class of big data applications, we find two major results through experiments: first, the data scale has a significant impact on the performance of big data systems, so we must provide scalable volumes of data sets in big data benchmarks; second, even though all four applications use simple algorithms, their performance trends differ as the data scale increases, and hence we must consider not only variety of data sets but also variety of applications in benchmarking big data systems. Comment: 16 pages, 3 figures.
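    The paper's point that both data scale and application variety must vary can be illustrated with a toy harness like the one below; the workloads, the data generator, and the scales are illustrative stand-ins, not the benchmark suite used in the paper.

```python
import random
import string
import time

def make_dataset(n_records):
    """Generate n_records random short strings (a stand-in for a scalable data generator)."""
    return ["".join(random.choices(string.ascii_lowercase, k=8)) for _ in range(n_records)]

# Two deliberately simple workloads whose scaling behaviour differs.
WORKLOADS = {
    "wordcount": lambda data: len(set(data)),      # roughly linear in data size
    "sort":      lambda data: sorted(data)[-1],    # O(n log n)
}

for scale in (10_000, 100_000, 1_000_000):         # illustrative data scales
    data = make_dataset(scale)
    for name, job in WORKLOADS.items():
        start = time.perf_counter()
        job(data)
        elapsed = time.perf_counter() - start
        print(f"{name:10s} scale={scale:>9,d}  {elapsed:.3f}s")
```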

    Temporal video transcoding from H.264/AVC-to-SVC for digital TV broadcasting

    Get PDF
    Mobile digital TV environments demand flexible video compression such as scalable video coding (SVC) because of varying bandwidths and devices. Since existing infrastructures rely heavily on H.264/AVC video compression, network providers could adapt the current H.264/AVC encoded video to SVC. This adaptation needs to be done efficiently to reduce processing power and operational cost. This paper proposes two techniques to convert an H.264/AVC bitstream encoded in the Baseline (P-picture-based) or Main (B-picture-based) profile, without scalability, into a scalable bitstream with temporal scalability, as part of a framework for low-complexity video adaptation for digital TV broadcasting. Our approaches are based on accelerating inter-prediction, focusing on reducing the coding complexity of the mode decision and motion estimation tasks of the encoder stage by using information available after the H.264/AVC decoding stage. The results show that when our techniques are applied, the complexity is reduced by 98% while maintaining coding efficiency.
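    The decoder-information reuse idea, mapping macroblock modes and motion vectors recovered at the H.264/AVC decoding stage onto the SVC encoding stage so that full mode decision and motion estimation can often be skipped, might be sketched as follows; the data structures and decision rules are hypothetical simplifications, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class DecodedMB:
    """Side information recovered from the H.264/AVC decoding stage."""
    mode: str                 # e.g. "SKIP", "INTER_16x16", "INTRA_4x4"
    motion_vector: tuple      # (dx, dy) in quarter-pel units, if inter-coded

def fast_mode_decision(mb: DecodedMB):
    """Reuse decoder-side information instead of an exhaustive RD search.

    Returns (mode, mv_candidate, do_full_search): when a reliable inter mode
    and motion vector are available, the re-encoder only refines the vector
    in a small window rather than re-running full motion estimation.
    """
    if mb.mode == "SKIP":
        return "SKIP", (0, 0), False
    if mb.mode.startswith("INTER"):
        return mb.mode, mb.motion_vector, False   # refine around this MV only
    return "INTRA", None, True                    # intra blocks: fall back to a normal search

# Example: one decoded macroblock drives the re-encoding decision.
print(fast_mode_decision(DecodedMB(mode="INTER_16x16", motion_vector=(3, -1))))
```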