2,267 research outputs found

    General queuing model for optimal seamless delivery of payload processing in multi-core processors

    This is a pre-print of an article published in The Journal of Supercomputing. The final authenticated version is available online at: https://doi.org/10.1007/s11227-017-2109-4. Recent developments in unmanned aerial systems (UAS) provide new opportunities in remote sensing applications. In contrast to satellite and conventional (manned) aerial tasks, UAS flights can be operated in a very short period of time. UAS can also be focused more specifically on a given task, such as crop reconnaissance or electric line tower inspection. For some applications, the delivery time of the remote sensing results is crucial. The current three-phase procedure of data acquisition, data downloading and data processing, performed sequentially in time, is a drawback that reduces the benefits of using unmanned aerial systems. In this paper, we present a parallel processing strategy, based on queuing theory, in which the data processing phase is performed on board, in parallel with data acquisition. The unmanned aerial system payload has been enlarged with low-cost, lightweight, multi-core boards to facilitate remote sensing data processing during flight. The raw sensing data are also stored for possible further analysis; however, the ultimate decision-support information can be delivered seamlessly to the customer upon landing. Furthermore, text alarms and limited imagery can also be provided during flight.
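    To make the idea of overlapping on-board processing with acquisition concrete, the sketch below runs a producer (data acquisition) and a consumer (payload processing) in parallel, connected by a bounded queue, in the spirit of the strategy described above. It is a minimal illustration, not the authors' implementation; acquire_frames and process_frames are hypothetical placeholders for the sensor read and the actual processing chain.

```python
# Hedged sketch: acquisition and processing overlap via a shared queue,
# so results are ready at landing instead of after a separate ground phase.
import multiprocessing as mp
import time

def acquire_frames(q, n_frames):
    """Producer: stands in for the data-acquisition phase."""
    for i in range(n_frames):
        time.sleep(0.01)        # placeholder for sensor capture time
        q.put(("frame", i))     # raw data would also be stored for later analysis
    q.put(None)                 # sentinel: acquisition finished

def process_frames(q, results):
    """Consumer: on-board processing running in parallel with acquisition."""
    while True:
        item = q.get()
        if item is None:
            break
        _, idx = item
        results.put(("alarm" if idx % 20 == 0 else "ok", idx))
    results.put(None)           # sentinel: processing finished

if __name__ == "__main__":
    q, results = mp.Queue(maxsize=16), mp.Queue()
    producer = mp.Process(target=acquire_frames, args=(q, 60))
    consumer = mp.Process(target=process_frames, args=(q, results))
    producer.start(); consumer.start()
    processed = []
    while (r := results.get()) is not None:
        processed.append(r)
    producer.join(); consumer.join()
    print(f"{len(processed)} results ready for delivery at landing")
```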

    Stochastic scheduling and workload allocation : QoS support and profitable brokering in computing grids

    Abstract: The Grid can be seen as a collection of services, each of which performs some functionality. Users of the Grid seek to use combinations of these services to perform the overall task they need to achieve. In general this can be seen as a set of services with a workflow document describing how these services should be combined. The user may also have certain constraints on the workflow operations, such as execution time or cost to the user, specified in the form of a Quality of Service (QoS) document. The users submit their workflow to a brokering service along with the QoS document. The brokering service's task is to map any given workflow to a subset of the Grid services, taking the QoS and the state of the Grid into account: service availability and performance. We propose an approach for generating constraint equations describing the workflow, the QoS requirements and the state of the Grid. This set of equations may be solved using Mixed-Integer Linear Programming (MILP), which is the traditional method. We further develop a novel 2-stage stochastic MILP which is capable of dealing with the volatile nature of the Grid and adapting the selection of the services during the lifetime of the workflow. We present experimental results comparing our approaches, showing that the 2-stage stochastic programming approach performs consistently better than other traditional approaches. Next we address workload allocation techniques for Grid workflows in a multi-cluster Grid. We model individual clusters as M/M/k queues and obtain a numerical solution for missed deadlines (failures) of tasks of Grid workflows. We also present an efficient algorithm for obtaining workload allocations of clusters. Next we model individual cluster resources as G/G/1 queues and solve an optimisation problem that minimises QoS requirement violation, provides QoS guarantees and outperforms reservation-based scheduling algorithms. Both approaches are evaluated through an experimental simulation, and the results confirm that the proposed workload allocation strategies combined with traditional scheduling algorithms perform considerably better, in terms of satisfying QoS requirements of Grid workflows, than scheduling algorithms that do not employ such workload allocation techniques. Next we develop a novel method for Grid brokers that aims at maximising profit whilst satisfying end-user needs with a sufficient guarantee in a volatile utility Grid. We develop a 2-stage stochastic MILP which is capable of dealing with the volatile nature of the Grid and obtaining cost bounds that ensure that end-user cost is minimised or satisfied and the broker's profit is maximised with sufficient guarantee. These bounds help brokers know beforehand whether the budget limits of end-users can be satisfied and, if not, obtain appropriate future leases from service providers. Experimental results confirm the efficacy of our approach.
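    The cluster model mentioned above treats each cluster as an M/M/k queue and derives the chance that a task misses its deadline. The sketch below computes that kind of estimate using the standard Erlang-C waiting probability and the exponential tail of the queueing delay; it is an illustration of the queueing ingredient only, and the thesis' exact formulation, parameters and allocation algorithm may differ.

```python
# Hedged sketch: deadline-miss estimate for a cluster modelled as an M/M/k queue.
from math import exp, factorial

def erlang_c(k, a):
    """Probability an arriving task must wait (a = lambda/mu offered load, k servers)."""
    rho = a / k
    if rho >= 1.0:
        return 1.0                      # unstable queue: every task waits
    top = a**k / factorial(k)
    bottom = (1 - rho) * sum(a**n / factorial(n) for n in range(k)) + top
    return top / bottom

def p_deadline_miss(lam, mu, k, deadline):
    """P(queueing delay > deadline) = ErlangC * exp(-(k*mu - lambda) * deadline)."""
    return erlang_c(k, lam / mu) * exp(-(k * mu - lam) * deadline)

# Example (made-up numbers): a 16-node cluster, 10 tasks/s arriving,
# 0.8 tasks/s service rate per node, tasks must start within 2 s.
print(f"miss probability ~ {p_deadline_miss(10.0, 0.8, 16, 2.0):.4f}")
```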

    Design of testbed and emulation tools

    The research summarized was concerned with the design of testbed and emulation tools suitable to assist in projecting, with reasonable accuracy, the expected performance of highly concurrent computing systems on large, complete applications. Such testbed and emulation tools are intended for the eventual use of those exploring new concurrent system architectures and organizations, either as users or as designers of such systems. While a range of alternatives was considered, a software-based set of hierarchical tools was chosen to provide maximum flexibility, to ease moving to new computers as technology improves, and to take advantage of the inherent reliability and availability of commercially available computing systems.

    Workload Schedulers - Genesis, Algorithms and Comparisons

    In this article we provide brief descriptions of three classes of schedulers: Operating Systems Process Schedulers, Cluster Systems Jobs Schedulers, and Big Data Schedulers. We describe their evolution from early adoptions to modern implementations, considering both the algorithms used and their features. In summary, we discuss the differences between the presented classes of schedulers and trace their chronological development. In conclusion, we highlight similarities in the focus of scheduling strategy design that apply to both local and distributed systems.

    EbbRT: a customizable operating system for cloud applications

    Efficient use of hardware requires that operating system components be customized to the application workload. Our general-purpose operating systems are ill-suited for this task. We present EbbRT, a new operating system that enables per-application customizations for cloud applications. EbbRT achieves this through a novel heterogeneous distributed structure, a partitioned object model, and an event-driven execution environment. This paper describes the design and prototype implementation of EbbRT and evaluates its ability to improve the performance of common cloud applications. The evaluation of the EbbRT prototype demonstrates that memcached, run within a VM, can outperform memcached run on unvirtualized Linux. The prototype evaluation also demonstrates a 14% performance improvement on a V8 JavaScript engine benchmark, and a node.js webserver that achieves a 50% reduction in 99th-percentile latency compared to running on Linux.
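    As a rough illustration of the event-driven, run-to-completion execution style that the abstract attributes to the system's design, the toy loop below dispatches handlers from a ready queue without blocking. This is a hedged Python sketch of the general execution model only, not EbbRT's actual C++ runtime, object model, or API.

```python
# Hedged sketch: a minimal event loop where handlers run to completion and
# schedule follow-up work by enqueuing further events.
from collections import deque

class EventLoop:
    def __init__(self):
        self.ready = deque()              # callbacks ready to run

    def spawn(self, fn, *args):
        self.ready.append((fn, args))     # schedule a handler, never block

    def run(self):
        while self.ready:                 # run each handler to completion
            fn, args = self.ready.popleft()
            fn(*args)

loop = EventLoop()

def handle_request(req_id):
    # Application-specific handler; may enqueue follow-up events.
    if req_id < 3:
        loop.spawn(handle_request, req_id + 1)
    print("handled request", req_id)

loop.spawn(handle_request, 0)
loop.run()
```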

    Analytical Modeling of High Performance Reconfigurable Computers: Prediction and Analysis of System Performance.

    The use of a network of shared, heterogeneous workstations, each harboring a Reconfigurable Computing (RC) system, offers high performance users an inexpensive platform for a wide range of computationally demanding problems. However, effectively using the full potential of these systems can be challenging without knowledge of the system’s performance characteristics. While some performance models exist for shared, heterogeneous workstations, none thus far account for the addition of Reconfigurable Computing systems. This dissertation develops and validates an analytic performance modeling methodology for a class of fork-join algorithms executing on a High Performance Reconfigurable Computing (HPRC) platform. The model includes the effects of the reconfigurable device, application load imbalance, background user load, basic message passing communication, and processor heterogeneity. Three applications of the fork-join class (a Boolean Satisfiability Solver, a Matrix-Vector Multiplication algorithm, and an Advanced Encryption Standard algorithm) are used to validate the model with homogeneous and simulated heterogeneous workstations. A synthetic load is used to validate the model under various loading conditions, including simulated heterogeneity in which some workstations are made to appear slower than others through background loading. The performance modeling methodology proves to be accurate in characterizing the effects of reconfigurable devices, application load imbalance, background user load and heterogeneity for applications running on shared, homogeneous and heterogeneous HPRC resources. The model error in all cases was found to be less than five percent for application runtimes greater than thirty seconds and less than fifteen percent for runtimes less than thirty seconds. The performance modeling methodology enables us to characterize applications running on shared HPRC resources. Cost functions are used to impose system usage policies, and the results of the modeling methodology are utilized to find the optimal (or near-optimal) set of workstations to use for a given application. The usage policies investigated include determining the computational costs for the workstations and balancing the priority of the background user load with the parallel application. The applications studied fall within the Master-Worker paradigm and are well suited for a grid computing approach. A method for using NetSolve, a grid middleware, with the model and cost functions is introduced, whereby users can produce optimal workstation sets and schedules for Master-Worker applications running on shared HPRC resources.
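    The core intuition behind a fork-join model on shared, heterogeneous nodes is that the join completes only when the slowest worker finishes, and background load effectively dilutes each node's speed. The sketch below captures just that intuition; it is an illustrative simplification, not the dissertation's validated HPRC model, which also accounts for the reconfigurable device and message-passing costs in more detail. All parameter names and numbers are hypothetical.

```python
# Hedged sketch: fork-join completion-time estimate on shared, heterogeneous nodes.
def fork_join_time(work_units, speeds, background_load, comm_overhead):
    """work_units[i]: work assigned to node i; speeds[i]: units/s when idle;
    background_load[i]: average number of competing background processes."""
    per_node = [
        w / (s / (1.0 + b)) + comm_overhead   # CPU sharing slows the node down
        for w, s, b in zip(work_units, speeds, background_load)
    ]
    return max(per_node)                      # the join waits for the slowest node

# Example: 4 nodes, uneven work split, one node slowed by a background user.
print(fork_join_time([250, 250, 300, 200],    # work per node (units)
                     [100, 100, 120, 80],     # node speeds (units/s)
                     [0.0, 1.0, 0.0, 0.0],    # background load per node
                     0.05))                   # per-node communication cost (s)
```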

    Strong Temporal Isolation among Containers in OpenStack for NFV Services

    In this paper, the problem of temporal isolation among containerized software components running in shared cloud infrastructures is tackled, proposing an approach based on hierarchical real-time CPU scheduling. This allows for reserving a precise share of the available computing power for each container deployed on a multi-core server, so as to provide it with stable performance, independently of the load of other co-located containers. The proposed technique enables the use of reliable modeling techniques for end-to-end service chains that are effective in controlling application-level performance. An implementation of the technique within the well-known OpenStack cloud orchestration software is presented, focusing on a use case framed in the context of network function virtualization. The modified OpenStack is capable of leveraging the special real-time scheduling features made available in the underlying Linux operating system through a patch to the in-kernel process scheduler. The effectiveness of the technique is validated by gathering performance data from two applications running in a real test-bed with the mentioned modifications to OpenStack and the Linux kernel. A performance model is developed that closely captures the application behavior under a variety of conditions. Extensive experimentation shows that the proposed mechanism succeeds in guaranteeing isolation of individual containerized activities on the platform.
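    The bookkeeping behind this kind of CPU reservation can be pictured as giving each container a (runtime, period) budget and admitting a new container only if the total reserved utilisation still fits on the host's cores. The sketch below shows that utilisation-based admission test as a hedged illustration; it is not the paper's OpenStack extension, its kernel patch, or any real scheduler interface, and the budgets shown are made up.

```python
# Hedged sketch: utilisation-based admission test for per-container CPU reservations.
def admit(reserved, new_budget, n_cores):
    """reserved: list of (runtime_us, period_us) budgets already granted."""
    def util(budget):
        runtime_us, period_us = budget
        return runtime_us / period_us          # fraction of one core reserved
    total = sum(util(b) for b in reserved) + util(new_budget)
    return total <= n_cores                    # admit only if it fits on the host

reserved = [(20_000, 100_000), (50_000, 100_000)]      # 20% and 50% of one core
print(admit(reserved, (40_000, 100_000), n_cores=1))   # 1.1 > 1  -> rejected
print(admit(reserved, (40_000, 100_000), n_cores=2))   # 1.1 <= 2 -> admitted
```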