
    Implementation and Characterization of an Advanced Scheduler

    Decoupled-CBQ (D-CBQ), a scheduler derived from CBQ, has proven to be a substantial improvement over CBQ. D-CBQ's main advantages are a new set of rules for distributing excess bandwidth and the ability to guarantee bandwidth and delay separately, whence the name "decoupled". This paper aims at characterizing D-CBQ by means of an extended set of simulations and a real implementation in the ALTQ framework.
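
    As a rough illustration of the link-sharing idea behind CBQ-style schedulers, the sketch below redistributes unused bandwidth among sibling classes in proportion to their guaranteed rates. It is a minimal sketch only: the class setup and rates are hypothetical, and the actual D-CBQ rule set characterized in the paper differs from this simple proportional scheme.

        # Sketch: proportional redistribution of excess bandwidth among sibling
        # classes, in the spirit of CBQ-style link sharing. The exact D-CBQ rules
        # differ; the rates and demands below are illustrative assumptions.

        def share_bandwidth(link_rate, classes):
            """classes: list of (guaranteed_rate, demand) pairs, in Mb/s.
            Returns the rate allocated to each class."""
            alloc = [min(g, d) for g, d in classes]      # honor guarantees first
            excess = link_rate - sum(alloc)
            while excess > 1e-9:
                # classes whose demand is not yet satisfied
                hungry = [i for i, (g, d) in enumerate(classes) if alloc[i] < d]
                if not hungry:
                    break                                # excess stays unused
                weight = sum(classes[i][0] for i in hungry)
                leftover = 0.0
                for i in hungry:
                    share = excess * classes[i][0] / weight
                    take = min(share, classes[i][1] - alloc[i])
                    alloc[i] += take
                    leftover += share - take             # re-offer what was refused
                excess = leftover
            return alloc

        # 10 Mb/s link, three classes given as (guarantee, demand) in Mb/s
        print(share_bandwidth(10.0, [(2.0, 5.0), (3.0, 3.0), (5.0, 1.0)]))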

    Improving Third-Party Relaying for LTE-A: A Realistic Simulation Approach

    In this article we propose solutions to diverse conflicts that result from the deployment of the (still immature) relay node (RN) technology in LTE-A networks. These conflicts and their possible solutions have been observed by implementing standard-compliant relay functionalities in the Vienna simulator. As an original experimental approach, we model realistic RN operation, taking into account that transmitters are not active all the time due to half-duplex RN operation. We have rearranged existing elements in the simulator in a manner that emulates RN behavior, rather than implementing a standalone brand-new component for the simulator. We also study analytically some of the issues observed in the interaction between the network and the RNs, to draw conclusions beyond simulation observation. The main observations of this paper are that: i) additional time-varying interference management steps are needed, because the LTE-A standard employs a fixed time division between eNB-RN and RN-UE transmissions (typical relay capacity or throughput research models balance them optimally, which is unrealistic nowadays); ii) there is a trade-off between the time-division constraints of relaying and multi-user diversity: the stricter the constraints on relay scheduling, the less flexibility schedulers have to exploit channel variation; and iii) the standard contains a variety of parameters for relaying configuration, but not all cases of interest are covered.
    Comment: 17 one-column pages, 9 figures, accepted for publication in IEEE ICC 2014 MW
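
    The fixed time division noted in observation i) has a simple throughput consequence: a half-duplex two-hop relay is limited by the slower of its two time-shared hops. The sketch below contrasts a fixed subframe split with the balanced split assumed in typical relay capacity models; the per-hop rates and the 2-of-10 subframe pattern are illustrative assumptions, not values from the paper.

        # Sketch of the half-duplex relay trade-off: the standard fixes which
        # subframes carry eNB->RN (backhaul) vs RN->UE (access) traffic, while
        # capacity models balance the two hops optimally. Rates and the 2-of-10
        # subframe split are illustrative assumptions.

        def relay_throughput(backhaul_rate, access_rate, backhaul_fraction):
            """End-to-end rate of a two-hop half-duplex relay: limited by the
            slower of the two time-shared hops."""
            return min(backhaul_fraction * backhaul_rate,
                       (1.0 - backhaul_fraction) * access_rate)

        R_bh, R_ac = 40.0, 25.0  # per-hop rates in Mb/s (assumed)

        # Fixed standard-style split: 2 of every 10 subframes for the backhaul
        fixed = relay_throughput(R_bh, R_ac, 0.2)

        # Balanced split solves f * R_bh = (1 - f) * R_ac
        f_opt = R_ac / (R_bh + R_ac)
        optimal = relay_throughput(R_bh, R_ac, f_opt)

        print(f"fixed split: {fixed:.1f} Mb/s, balanced split: {optimal:.1f} Mb/s")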

    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
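
    A minimal sketch of the MTC application model described above: tasks form a graph whose edges are input/output dependencies, and a task becomes dispatchable once all of its inputs are complete. The task names are hypothetical, and a real MTC runtime would add the data movement and low-overhead dispatch the report discusses; here dispatch is just an ordered list produced by Kahn's algorithm.

        # Sketch: an MTC application as a graph of discrete tasks with explicit
        # input/output dependencies; Kahn's algorithm yields a valid dispatch
        # order. Task names are hypothetical.

        from collections import deque

        def dispatch_order(deps):
            """deps: dict mapping task -> set of tasks it depends on."""
            pending = {t: set(d) for t, d in deps.items()}
            ready = deque(t for t, d in pending.items() if not d)
            order = []
            while ready:
                task = ready.popleft()
                order.append(task)        # a real runtime dispatches to a worker
                for t, d in pending.items():
                    if task in d:
                        d.remove(task)
                        if not d:         # all inputs complete: task is ready
                            ready.append(t)
            if len(order) != len(deps):
                raise ValueError("cycle in task graph")
            return order

        # Diamond-shaped graph: two simulations fan out from a preparation task
        print(dispatch_order({"prep": set(),
                              "simA": {"prep"}, "simB": {"prep"},
                              "merge": {"simA", "simB"}}))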

    Fluid flow queue models for fixed-mobile network evaluation

    A methodology for fast and accurate estimation of end-to-end KPIs, such as throughput and delay, is proposed based on service-centric traffic flow analysis and the fluid flow queuing model named CURSA-SQ. Mobile network features, such as the shared medium and mobility, are considered when defining the models to be taken into account, such as the propagation models and the fluid flow scheduling model. The developed methodology provides accurate computation of these KPIs, while performing orders of magnitude faster than discrete event simulators like ns-3. Finally, this methodology, combined with its capability for performance estimation in MPLS networks, enables its application to near-real-time operation of converged fixed-mobile networks, as proven in three use case scenarios.
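
    A minimal sketch of the fluid-flow queuing idea underlying this kind of model (CURSA-SQ itself is considerably more elaborate): traffic is treated as a continuous fluid, the backlog integrates arrival rate minus service rate, and delay follows from the backlog. The on-off source and all rates below are illustrative assumptions.

        # Sketch of a fluid-flow queue: backlog integrates arrival rate minus
        # service rate; delay follows from the backlog. No per-packet events,
        # which is why such models run far faster than discrete event simulation.

        def fluid_queue(arrival_rate, service_rate, horizon=1.0, dt=0.001):
            """arrival_rate: function of time (bits/s); service_rate: bits/s.
            Returns a time series of (t, backlog_bits, delay_s)."""
            q, t, trace = 0.0, 0.0, []
            for _ in range(int(horizon / dt)):
                q = max(0.0, q + (arrival_rate(t) - service_rate) * dt)
                trace.append((t, q, q / service_rate))  # delay = backlog / rate
                t += dt
            return trace

        # On-off source: 150 Mb/s for the first 200 ms into a 100 Mb/s queue
        trace = fluid_queue(lambda t: 150e6 if t < 0.2 else 0.0, 100e6)
        t_pk, q_pk, d_pk = max(trace, key=lambda s: s[1])
        print(f"peak backlog {q_pk/1e6:.1f} Mb at t={t_pk*1e3:.0f} ms, "
              f"delay {d_pk*1e3:.1f} ms")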

    High-Throughput Computing on High-Performance Platforms: A Case Study

    The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan, a DOE leadership facility, in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next-generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.