118,069 research outputs found

    Real time resource scheduling within a distributed collaborative design environment

    Get PDF
    Operational design co-ordination is provided by a Virtual Integration Platform (VIP) capable of scheduling and allocating design activities to organisationally and geographically distributed designers. To achieve this, the platform consists of a number of components that contribute to the engineering management and co-ordination of data, resources, activities, requirements and processes. The information required to schedule and allocate activities to designers is defined in terms of: the designers' capability to perform particular design activities; their commitment, in terms of the design activities they are currently performing; and their capacity to perform more than one design activity at the same time, as well as the effect of increased capacity on capability. Previous approaches developed by the authors to automatically allocate resources to activities [1-3] have generally been applied either to the real-time allocation of computational resources using automated design tools or to the planning of human resources within future design projects, not to the real-time allocation of activities to a combination of human and computational resources. The procedure presented here builds on this previous research and involves: determining the design activities that need to be undertaken on the basis of the goals to be achieved; identifying the resources that can undertake these design activities; and using a genetic algorithm to optimally allocate the activities to the resources. Since the focus of the procedure is the real-time allocation of design activities to designers, additional human issues with respect to scheduling are considered. These include: the improvement in performance that results from the experience gained in undertaking an activity; the provision of a training period that allows inexperienced designers to improve their performance without being assessed; and the course of action to take when a designer is either unwilling or unable to perform an activity.
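    A minimal sketch of the kind of genetic-algorithm allocation described above; the designers, activities, capability scores and capacity limits below are invented for illustration and are not the VIP's actual model:

```python
# Toy GA that assigns design activities to designers by capability,
# penalising assignments that exceed a designer's capacity.
import random

ACTIVITIES = ["layout", "stress", "cfd", "drafting"]
DESIGNERS = {  # hypothetical capability scores (0..1) and capacity limits
    "alice": {"capability": {"layout": 0.9, "stress": 0.6, "cfd": 0.2, "drafting": 0.7}, "capacity": 2},
    "bob":   {"capability": {"layout": 0.4, "stress": 0.8, "cfd": 0.9, "drafting": 0.5}, "capacity": 1},
    "carol": {"capability": {"layout": 0.7, "stress": 0.5, "cfd": 0.6, "drafting": 0.9}, "capacity": 2},
}

def fitness(assignment):
    """Reward total capability; penalise overloading any designer."""
    load = {d: 0 for d in DESIGNERS}
    score = 0.0
    for activity, designer in assignment.items():
        load[designer] += 1
        score += DESIGNERS[designer]["capability"][activity]
    for d, n in load.items():
        if n > DESIGNERS[d]["capacity"]:
            score -= 1.0 * (n - DESIGNERS[d]["capacity"])
    return score

def random_assignment():
    return {a: random.choice(list(DESIGNERS)) for a in ACTIVITIES}

def crossover(a, b):
    return {act: (a if random.random() < 0.5 else b)[act] for act in ACTIVITIES}

def mutate(assignment, rate=0.1):
    return {a: (random.choice(list(DESIGNERS)) if random.random() < rate else d)
            for a, d in assignment.items()}

def evolve(generations=50, pop_size=30):
    population = [random_assignment() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())
```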

    Survey of dynamic scheduling in manufacturing systems

    Get PDF

    Modeling and Real-Time Scheduling of DC Platform Supply Vessel for Fuel Efficient Operation

    Full text link
    DC marine architecture integrated with variable-speed diesel generators (DGs) has garnered the attention of researchers primarily because of its ability to deliver fuel-efficient operation. This paper aims to model and autonomously perform real-time load scheduling of a dc platform supply vessel (PSV) with the objective of minimizing specific fuel oil consumption (SFOC) for better fuel efficiency. The focus is on the modeling of the various components and control routines that are envisaged to be an integral part of dc PSVs. Integration with a photovoltaic-based energy storage system (ESS) is considered as an option to cater for short-time load transients. In this context, the paper proposes a real-time transient simulation scheme comprising optimized generation scheduling of the generators and the ESS using a dc optimal power flow algorithm. The framework considers the real dynamics of the dc PSV during various marine operations under possible contingency scenarios, such as outage of generation systems, abrupt load changes, and unavailability of the ESS. The proposed modeling and control routines, together with the real-time transient simulation scheme, have been validated on a real-time marine simulation platform. The results indicate that the coordinated treatment of the renewable-based ESS with DGs operating at optimized speed yields better fuel savings, observed as an improved SFOC operating trajectory for critical marine missions. Furthermore, SFOC minimization at multiple suboptimal points and its treatment in the real-time marine system is also highlighted.
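    For illustration only, a toy dispatch that splits a fixed load between two generators and an ESS to minimize a fuel proxy; the quadratic SFOC curves, limits and demand are assumed values, and this simple optimisation stands in for, rather than reproduces, the paper's dc optimal power flow and transient simulation:

```python
# Toy SFOC-aware dispatch: two DGs plus an ESS covering part of the load.
# Energy limits of the ESS are ignored (it only covers short transients).
from scipy.optimize import minimize

def sfoc(p, a, b, c):
    """Quadratic specific-fuel-oil-consumption proxy (g/kWh) at load p (kW)."""
    return a * p**2 + b * p + c

def fuel(p, coeffs):
    """Fuel rate ~ SFOC * power (scaled)."""
    return sfoc(p, *coeffs) * p / 1000.0

GEN_COEFFS = [(2e-5, -0.02, 210.0), (3e-5, -0.03, 220.0)]  # hypothetical DG curves
DEMAND = 900.0   # kW, current vessel load (assumed)
ESS_MAX = 150.0  # kW, maximum ESS discharge (assumed)

def total_fuel(x):
    p1, p2, _p_ess = x
    return fuel(p1, GEN_COEFFS[0]) + fuel(p2, GEN_COEFFS[1])

constraints = [{"type": "eq", "fun": lambda x: x[0] + x[1] + x[2] - DEMAND}]
bounds = [(100.0, 600.0), (100.0, 600.0), (-ESS_MAX, ESS_MAX)]

res = minimize(total_fuel, x0=[450.0, 450.0, 0.0], bounds=bounds,
               constraints=constraints)
print("DG1 %.0f kW, DG2 %.0f kW, ESS %.0f kW" % tuple(res.x))
```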

    A Modeling Framework for Schedulability Analysis of Distributed Avionics Systems

    Get PDF
    This paper presents a modeling framework for schedulability analysis of distributed integrated modular avionics (DIMA) systems that consist of spatially distributed ARINC-653 modules connected by a unified AFDX network. We model a DIMA system as a set of stopwatch automata (SWA) in UPPAAL and analyze its schedulability by classical model checking (MC) and statistical model checking (SMC). The framework has been designed to enable three types of analysis: global SMC, global MC, and compositional MC. This allows an effective methodology including (1) quick schedulability falsification using global SMC analysis, (2) direct schedulability proofs using global MC analysis in simple cases, and (3) strict schedulability proofs using compositional MC analysis for larger state spaces. The framework is applied to the analysis of a concrete DIMA system.

    Comment: In Proceedings MARS/VPT 2018, arXiv:1803.0866
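    The paper's analysis relies on UPPAAL stopwatch-automata models, which cannot be shown here directly; as a much simpler stand-in, the sketch below runs a classical fixed-priority response-time test on a hypothetical task set for a single partition, only to illustrate what a schedulability verdict looks like:

```python
# Classical response-time analysis (not the paper's UPPAAL/SWA approach).
# Task parameters are hypothetical; tasks are listed in priority order.
import math

TASKS = [  # (period_ms, wcet_ms), rate-monotonic priority order
    (25, 5),
    (50, 10),
    (100, 20),
]

def response_time(i, tasks):
    """Iterate R = C_i + sum_{j<i} ceil(R / T_j) * C_j to a fixed point."""
    c_i = tasks[i][1]
    r = c_i
    while True:
        interference = sum(math.ceil(r / t_j) * c_j for t_j, c_j in tasks[:i])
        r_next = c_i + interference
        if r_next == r:
            return r
        r = r_next

for i, (period, wcet) in enumerate(TASKS):
    r = response_time(i, TASKS)
    verdict = "schedulable" if r <= period else "DEADLINE MISS"
    print(f"task {i}: R={r} ms, deadline={period} ms, {verdict}")
```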

    A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures

    Full text link
    Scientific problems that depend on processing large amounts of data require overcoming challenges in multiple areas: managing large-scale data distribution, co-placement and scheduling of data with compute resources, and storing and transferring large volumes of data. We analyze the ecosystems of the two prominent paradigms for data-intensive applications, hereafter referred to as the high-performance computing and the Apache-Hadoop paradigms. We propose a basis, a common terminology and functional factors upon which to analyze the two paradigms. We discuss the concept of "Big Data Ogres" and their facets as a means of understanding and characterizing the most common application workloads found across the two paradigms. We then discuss the salient features of the two paradigms, and compare and contrast the two approaches. Specifically, we examine common implementations of these paradigms, shed light upon the reasons for their current "architecture", and discuss some typical workloads that utilize them. In spite of the significant software distinctions, we believe there is architectural similarity, and we discuss the potential integration of different implementations across the different levels and components. Our comparison progresses from a fully qualitative examination of the two paradigms to a semi-quantitative methodology. We use a simple and broadly used Ogre (K-means clustering) and characterize its performance on a range of representative platforms, covering several implementations from both paradigms. Our experiments provide an insight into the relative strengths of the two paradigms. We propose that the set of Ogres will serve as a benchmark to evaluate the two paradigms along different dimensions.

    Comment: 8 pages, 2 figures
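    A minimal NumPy version of the K-means Ogre used above as the cross-paradigm benchmark; the data, cluster count and convergence test are illustrative and unrelated to the paper's experimental setup:

```python
# Plain Lloyd's-algorithm K-means on synthetic 2-D data.
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its cluster.
        new_centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

points = np.random.default_rng(1).normal(size=(1000, 2))
centers, labels = kmeans(points, k=3)
print(centers)
```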

    A Taxonomy of Workflow Management Systems for Grid Computing

    Full text link
    With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by projects world-wide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of the state of the art in Grid workflow systems, but also identifies the areas that need further research.

    Comment: 29 pages, 15 figures
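    As a rough sketch of the workflow composition and execution these systems automate, the toy example below runs an invented task graph in topological order; real Grid workflow systems add resource mapping, data movement and fault tolerance on top of this:

```python
# Toy DAG workflow: each task maps to the set of tasks it depends on.
from graphlib import TopologicalSorter

WORKFLOW = {  # hypothetical task graph
    "stage_data": set(),
    "preprocess": {"stage_data"},
    "simulate_a": {"preprocess"},
    "simulate_b": {"preprocess"},
    "aggregate":  {"simulate_a", "simulate_b"},
}

def run(task):
    print(f"running {task}")

# Execute tasks in an order that respects every dependency.
for task in TopologicalSorter(WORKFLOW).static_order():
    run(task)
```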

    Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies

    Full text link
    Grid is an infrastructure that involves the integrated and collaborative use of computers, networks, databases and scientific instruments owned and managed by multiple organizations. Grid applications often involve large amounts of data and/or computing resources that require secure resource sharing across organizational boundaries. This makes Grid application management and deployment a complex undertaking. Grid middleware provides users with seamless computing ability and uniform access to resources in the heterogeneous Grid environment. Several software toolkits and systems have been developed, most of them the results of academic research projects, all over the world. This chapter focuses on four of these middleware systems: UNICORE, Globus, Legion and Gridbus. It also presents our implementation of a resource broker for UNICORE, as this functionality was not supported in it. A comparison of these systems on the basis of their architecture, implementation model and several other features is included.

    Comment: 19 pages, 10 figures
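    A toy illustration of the matching step a resource broker performs; the job requirements, resource catalogue and cost field are hypothetical and unrelated to the UNICORE broker described in the chapter:

```python
# Pick the cheapest resource that satisfies a job's requirements.
JOBS = [{"name": "render", "cpus": 8, "mem_gb": 16}]
RESOURCES = [  # invented catalogue
    {"site": "siteA", "cpus": 4,  "mem_gb": 32, "cost": 1.0},
    {"site": "siteB", "cpus": 16, "mem_gb": 64, "cost": 2.5},
    {"site": "siteC", "cpus": 8,  "mem_gb": 16, "cost": 1.5},
]

def match(job, resources):
    """Return the cheapest resource meeting the job's CPU and memory needs."""
    feasible = [r for r in resources
                if r["cpus"] >= job["cpus"] and r["mem_gb"] >= job["mem_gb"]]
    return min(feasible, key=lambda r: r["cost"]) if feasible else None

for job in JOBS:
    choice = match(job, RESOURCES)
    print(job["name"], "->", choice["site"] if choice else "no match")
```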

    The Cathedral and the bazaar: (de)centralising certitude in river basin management

    Get PDF