
    Unified multiobjective optimization scheme for aeroassisted vehicle trajectory planning

    In this work, a multiobjective aeroassisted trajectory optimization problem with mission priority constraints is constructed and studied. To embed the priority requirements in the optimization model, a specific transformation technique is applied and the original problem is transcribed into a single-objective formulation. The resulting single-objective programming model is solved via an evolutionary optimization algorithm. This design differs from most traditional approaches, which must perform a nondominated sorting procedure to rank all the objectives. Moreover, to enhance the local search ability of the optimization process, a hybrid gradient-based operator is introduced. Simulation results indicate that the proposed design can produce feasible and high-quality flight trajectories. Comparative simulations with other typical methods show that the proposed approach performs better in terms of satisfying the prespecified priority requirements.
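
    The abstract does not give the exact transformation, but the core idea of replacing nondominated sorting with a single scalarized objective can be sketched as follows. This is a minimal illustration, not the paper's method: both objective functions, the priority-derived weights, and the bounds are invented, and scipy's differential evolution stands in for the paper's hybrid evolutionary/gradient algorithm.

```python
# A minimal sketch, not the paper's transformation: priorities are encoded
# as scalarization weights and the problem is handed to an off-the-shelf
# evolutionary optimizer. Both objectives and all numbers are hypothetical.
from scipy.optimize import differential_evolution

def heat_load(x):   # hypothetical high-priority objective
    return (x[0] - 1.0) ** 2 + 0.5 * x[1] ** 2

def fuel_use(x):    # hypothetical low-priority objective
    return (x[0] + x[1] - 2.0) ** 2

def scalarized(x, w=(10.0, 1.0)):
    # Single-objective transcription: the larger weight makes the solver
    # favor the high-priority objective, so no nondominated sorting is needed.
    return w[0] * heat_load(x) + w[1] * fuel_use(x)

result = differential_evolution(scalarized, bounds=[(-5, 5), (-5, 5)], seed=0)
print(result.x, result.fun)
```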

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected by a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, the various topologies proposed for them, their traffic properties, and the general traffic control challenges and objectives in datacenters. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating traffic control mechanisms. We discuss various aspects of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of the paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems.
    Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
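
    Of the mechanisms listed, traffic shaping is perhaps the easiest to illustrate in a few lines. Below is a minimal token-bucket shaper; the rate and burst values are illustrative and not taken from the paper.

```python
# Minimal token-bucket traffic shaper, one of the mechanisms the survey
# names. Rate and burst parameters below are illustrative, not from the paper.
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps          # refill rate (bytes per second)
        self.capacity = burst_bytes   # maximum burst size (bytes)
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        """Admit the packet if enough tokens have accumulated, else drop/queue."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

shaper = TokenBucket(rate_bps=1_250_000, burst_bytes=15_000)  # ~10 Mbit/s
print(shaper.allow(1500))  # a 1500-byte packet passes within the burst
```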

    A User-Friendly Wrapper for DSIDES (Decision Support in the Design of Engineering Systems)

    When dealing with complex systems, we must consider that these systems exhibit behaviors that are hard to predict or control, and that uncertainties are always present, since computational models are abstractions of reality. It is recognized that in many situations it may not be possible to optimize all objectives simultaneously, due to inherent conflicts, resource limitations, or uncertainty. As George E. P. Box said: "All models are wrong, but some are useful." The consequences of these observations are significant. We must accept that our models might not capture everything and that uncertainties are part of the picture. Hence, we must deal with uncertainty rather than ignore it, and find solutions that are relatively insensitive to the uncertainties. When choosing a method to work with, we also need to consider the quality of our data. To make this work, we need a method that finds solutions achieving a reasonable compromise among the objectives, identifies a set of solutions that are relatively insensitive to uncertainties, and facilitates the exploration of the solution space to support human decision-making. This ties into the problems we face in supporting decisions for complex systems, which involve choosing between options and making compromises. The compromise Decision Support Problem (cDSP) construct and the Adaptive Linear Programming algorithm, first introduced by Mistree and co-authors (1993), were developed as a result. The cDSP is a domain-independent, multiobjective decision model based on mathematical and goal programming. It effectively handles multiobjective problems involving bounds, linear and nonlinear constraints, goals, and both Boolean and continuous variables. The requirements for this construct are: 1) identify a set of solutions that are relatively insensitive to uncertainties, and 2) facilitate the exploration of the solution space to support human decision-making. Mistree and co-authors also designed a computer program, written in FORTRAN, that implements the cDSP construct and identifies robust, satisficing solutions to design problems when the models are abstractions of reality. It is called DSIDES (Decision Support in the Design of Engineering Systems), a software tool that provides decision support to help engineers and designers make better decisions in the design of complex engineering systems. In this thesis, our primary objective is to enhance the accessibility and user-friendliness of DSIDES by designing a user-friendly wrapper. The thesis has three key areas of focus: 1) exploration of the cDSP construct, including its structural components and the formulation of problem statements within the cDSP framework; 2) a comprehensive analysis of the DSIDES wrapper, with a step-by-step walkthrough of its functionalities; and 3) DSIDES software program manuals, created as resources for individuals seeking to enhance, expand, or modify the software. Accordingly, the thesis has three parts: 1) Part One: DSIDES Software and cDSP Construct: An Introduction; 2) Part Two: Designing the User-Friendly Wrapper for DSIDES; and 3) Part Three: Program Manuals and Improvement of DSIDES. In the following sections, all three parts and their related details are discussed, respectively.
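
    As a rough illustration of the cDSP idea, not of DSIDES itself, the sketch below formulates a tiny Archimedean-form compromise problem as a linear goal program in Python: each goal gets under- and over-achievement deviation variables, and a weighted sum of deviations is minimized. The goals, targets, and weights are invented, and scipy stands in for DSIDES's FORTRAN Adaptive Linear Programming implementation.

```python
# A minimal Archimedean-form sketch of a compromise DSP as a linear goal
# program. The two goals, targets, and weights are illustrative only.
from scipy.optimize import linprog

# Variables: [x1, x2, d1m, d1p, d2m, d2p]  (d*m/d*p = under/over-achievement)
# Goal 1:  x1 + 2*x2 + d1m - d1p = 10   (hypothetical performance target)
# Goal 2: 3*x1 +   x2 + d2m - d2p = 12  (hypothetical cost target)
A_eq = [[1, 2, 1, -1, 0, 0],
        [3, 1, 0, 0, 1, -1]]
b_eq = [10, 12]
A_ub = [[1, 1, 0, 0, 0, 0]]   # system constraint: x1 + x2 <= 8
b_ub = [8]
# Deviation function: minimize the weighted sum of deviations; the larger
# weight on goal 1's deviations expresses that goal 1 matters more.
c = [0, 0, 2, 2, 1, 1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6, method="highs")
print("design variables:", res.x[:2], "deviations:", res.x[2:])
```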

    Optimization modeling for the operation of closed-loop supply chains.

    Environmentally conscious manufacturing and the remanufacturing/recycling of end-of-life products are steadily growing in importance. The problem of managing the waste generated by the disposal of many types of products has many aspects. The main driving forces for solving this growing problem are the rapid diminishment of raw material resources, decreasing space in landfills, and increasing levels of pollution. Associated drivers are governmental regulations that require manufacturers to take back end-of-life products, and customer perspectives on environmental issues. This research considers the problem of increasing levels of electronic and electrical equipment waste. Implementing closed-loop supply chains can be both economically and ecologically beneficial for these problems. Relevant literature is reviewed to understand the various issues involved in the operation of reverse logistics systems and closed-loop supply chains. Upon reviewing these issues, the problem is considered ill-structured, and a problem structuring technique called Why-What's Stopping Analysis is used to analyze it from various perspectives. Also, since a closed-loop supply chain involves multiple objectives, two techniques for categorizing the objectives into fundamental and means objectives are presented: the Fundamental Objective Hierarchy and the Means Objective Network, respectively. A Goal Programming (GP) modeling approach is used to handle many of the objectives identified by these techniques. In this research, a consolidated objective function is defined that includes all of the deviational variables appearing in the various goals of the model; the consolidated goal is to minimize the weighted sum of all deviational variables. A non-preemptive goal programming approach is used, with goals assigned different weights according to their priorities. The values of the deviational variables help the decision maker see which goals are satisfied with the existing parameter values and which are not. The goal program has been run with both uniform and variable demand values across all periods. In the absence of real data, all parameter values in this research have been assumed. The major contributions of the research are as follows: each member of the supply chain has its own objective and related constraints, which is a more realistic approach; the model considers multiple products; and the model considers operations at the product, subassembly, part, and material levels. These contributions make this research the first approach of its kind (based on the literature reviewed), and goal programming is a well-accepted methodology among multi-objective programming approaches. Results show the effect of varying the priority/weight associated with a goal, and that the values of the deviational variables (positive or negative) help a decision maker analyze the model. The goal programming approach proved effective for defining the mathematical model, analyzing the output, and modifying the model when needed.
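
    A minimal sketch of the non-preemptive weighted goal program described above, with invented data: the consolidated objective minimizes a weighted sum of the deviational variables, and re-running the model with different weights shows how the priorities determine which goals are met.

```python
# Tiny non-preemptive weighted goal program, with invented coefficients:
# two goals (profit, returns collection) compete for shared capacity, and
# the weights decide whose shortfall the solver tolerates.
from scipy.optimize import linprog

def solve(w_profit, w_collect):
    # Variables: [make, recover, dP-, dP+, dC-, dC+]
    A_eq = [[5, 2, 1, -1, 0, 0],   # profit goal:     5m + 2r -> 100
            [0, 1, 0, 0, 1, -1]]   # collection goal:        r -> 30
    b_eq = [100, 30]
    A_ub = [[1, 1, 0, 0, 0, 0]]    # shared capacity: m + r <= 32
    b_ub = [32]
    c = [0, 0, w_profit, 0, w_collect, 0]  # penalize underachievement only
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 6, method="highs")
    return res.x

for w in [(10, 1), (1, 10)]:       # vary the priority between the two goals
    m, r, dPm, dPp, dCm, dCp = solve(*w)
    print(w, f"make={m:.1f} recover={r:.1f} "
             f"profit_short={dPm:.1f} collect_short={dCm:.1f}")
```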

    A resource allocation mechanism based on cost function synthesis in complex systems

    While the management of resources in computer systems can greatly impact the usefulness and integrity of the system, finding an optimal solution to the management problem is unfortunately NP-hard. Adding to the complexity, today's 'modern' systems, such as multimedia, medical, and military systems, may be, and often are, comprised of interacting real-time and non-real-time components. In addition, these systems can be driven by a host of non-functional objectives, often differing not only in nature, importance, and form, but also in dimensional units and range, and themselves interacting in complex ways. We refer to systems exhibiting such characteristics as Complex Systems (CS). We present a method for handling the multiple non-functional system objectives in CS by addressing decomposition, quantification, and evaluation issues. Our method results in better allocations, improved objective satisfaction, improved overall system performance, and reduced cost, in a global sense. Moreover, we consider the problem of formulating the cost of an allocation driven by system objectives. We start by discussing issues and relationships among global objectives, their decomposition, and the cost functions used to evaluate system objectives. Then, as an example of objective and cost function development, we introduce the concept of deadline balancing. Next, we prove the existence of combining models and their underlying conditions, and we describe a hierarchical model for system objective function synthesis. This synthesis is performed solely to measure the level of objective satisfaction in a proposed hardware-to-software allocation, not to design individual software modules. Examples are given to show how the model applies to actual multi-objective problems. In addition, the concept of deadline balancing is extended to a new scheduling concept, namely Inter-Completion-Time Scheduling (ICTS). Finally, simulation experiments capture various properties of the synthesis approach as well as of ICTS. A prototype implementation of the cost function synthesis and evaluation environment is described, highlighting the applicability and usefulness of the synthesis in realistic applications.
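
    The synthesis itself is hierarchical and developed formally in the work; the sketch below shows only the basic normalization-and-combination step needed when objectives differ in units and range. All objective names, ranges, and weights are hypothetical.

```python
# A minimal sketch of cost-function synthesis for allocation evaluation:
# each system objective yields a cost in its own units and range, so each
# is normalized to [0, 1] before being combined into one global cost.
# The objectives, ranges, and weights below are illustrative, not the paper's.
def normalize(value, lo, hi):
    """Map a raw objective value onto [0, 1] (0 = best, 1 = worst)."""
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))

def allocation_cost(alloc, weights=(0.5, 0.3, 0.2)):
    # Hypothetical per-objective cost functions of a proposed allocation.
    lateness = normalize(alloc["worst_lateness_ms"], 0.0, 50.0)
    power    = normalize(alloc["power_watts"],       5.0, 30.0)
    balance  = normalize(alloc["load_stddev"],       0.0, 0.5)  # deadline balancing proxy
    return weights[0] * lateness + weights[1] * power + weights[2] * balance

candidate = {"worst_lateness_ms": 10.0, "power_watts": 12.0, "load_stddev": 0.2}
print(allocation_cost(candidate))  # lower global cost = better allocation
```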

    Composition and synchronization of real-time components upon one processor

    Many industrial systems have various hardware and software functions for controlling mechanics. If these functions act independently, as they do in legacy situations, their overall performance is not optimal. There is a trend towards optimizing the overall system performance and creating synergy between the different functions in a system, achieved by replacing more and more dedicated, single-function hardware with software components running on programmable platforms. This increases the re-usability of the functions, but their synergy also requires that (parts of) the multiple software functions share the same embedded platform. In this work, we look at the composition of inter-dependent software functions on a shared platform from a timing perspective. We consider platforms comprising one preemptive processor resource and, optionally, multiple non-preemptive resources. Each function is implemented by a set of tasks; the group of tasks of a function that executes on the same processor, along with its scheduler, is called a component. The tasks of a component typically have hard timing constraints, and fulfilling these timing constraints requires analysis. Looking at a single function, co-operative scheduling of the tasks within a component has already proven to be a powerful tool for making the implementation of a function more predictable. For example, co-operative scheduling can accelerate the execution of a task (making it easier to satisfy timing constraints), it can reduce the cost of arbitrary preemptions (leading to more realistic execution-time estimates), and it can guarantee access to other resources without the need for arbitration by other protocols. Since timeliness is an important functional requirement, (re-)use of a component for composition and integration on a platform must deal with timing. To enable us to analyze and specify the timing requirements of a particular component in isolation from other components, we reserve and enforce the availability of all its specified resources during run-time. The real-time systems community has proposed hierarchical scheduling frameworks (HSFs) to implement this isolation between components. After admission onto a shared platform, a component in an HSF keeps meeting its timing constraints as long as it behaves as specified; if it violates its specification, it may be penalized, but other components are temporally isolated from the harmful effects. A component in an HSF is said to execute on a virtual platform with a dedicated processor at a speed proportional to its reserved processor supply. Three effects disturb this point of view. Firstly, processor time is supplied discontinuously. Secondly, the actual processor is faster. Thirdly, the HSF no longer guarantees the isolation of an individual component when two arbitrary components violate their specification during access to non-preemptive resources, even when access is arbitrated via well-defined real-time protocols. The scientific contributions of this work focus on these three issues, and our solutions cover the system design from component requirements to run-time allocation. Firstly, we present a novel scheduling method that enables us to integrate a component into an HSF. It guarantees that each integrated component executes its tasks in exactly the same order regardless of a continuous or a discontinuous supply of processor time. Using our method, the component executes on a virtual platform and merely experiences a processor speed that differs from the actual processor speed. As a result, we can focus on the traditional scheduling problem of meeting the deadline constraints of tasks on a uni-processor platform. For such platforms, we show how scheduling tasks co-operatively within a component helps to meet the deadlines of that component, and we compare the strength of these co-operative scheduling techniques to theoretically optimal schedulers. Secondly, we standardize the way of computing the resource requirements of a component, even in the presence of non-preemptive resources. We can therefore apply the same timing analysis to the components in an HSF as to the tasks inside them, regardless of their scheduling or the protocol used for non-preemptive resources. This increases the re-usability of the timing analysis of components. We also make non-preemptive resources transparent during the development cycle of a component, i.e., the developer of a component can be unaware of the actual protocol being used in an HSF, and components can therefore be unaware that access to non-preemptive resources requires arbitration. Finally, we complement the existing real-time protocols for arbitrating access to non-preemptive resources with mechanisms that confine temporal faults to those components in the HSF that share the same non-preemptive resources. We compare the overheads of sharing non-preemptive resources between components with and without these confinement mechanisms, by means of experiments within an HSF-enabled real-time operating system.
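
    The virtual-platform view rests on bounding the processor supply a component receives. As background, below is a minimal sketch of the supply bound function of the classic periodic resource model (budget Q every period P) commonly used in HSF analysis; the parameter values are illustrative, and this is standard background rather than the thesis's exact model.

```python
# Supply bound function of a periodic resource (P, Q): in the worst case
# the component sees an initial blackout of 2*(P - Q), then Q units of
# processor time in every period of length P.
def sbf(t: float, P: float, Q: float) -> float:
    """Worst-case processor supply guaranteed to the component over any
    interval of length t."""
    blackout = 2.0 * (P - Q)
    if t <= blackout:
        return 0.0
    s = t - blackout                # time elapsed after the worst-case blackout
    k = int(s // P)                 # completed server periods since then
    return k * Q + min(s - k * P, Q)

# With P=5, Q=2 the component may be starved for up to 6 time units, then
# accumulates 2 units of budget per period of length 5.
for t in (6, 7, 9, 12):
    print(t, sbf(t, P=5, Q=2))
```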

    A graph based process model measurement framework using scheduling theory

    Software development processes, as a means of ensuring software quality and productivity, have been widely accepted within the software development community; software process modeling, on the other hand, continues to be a subject of interest in the research community. Even in organizations that have achieved higher SEI maturity levels, processes are by and large described in documents and reinforced as guidelines or laws governing software development activities. The lack of industry-wide adoption of software process modeling as part of development activities can be attributed to two major causes: the lack of forecasting power in (software) process modeling, and the lack of an integration mechanism for the described process to interact seamlessly with daily development activities. This dissertation describes research through which a framework has been established where processes can be manipulated, measured, and dynamically modified by interacting with project management techniques and activities in an integrated process modeling environment, thus closing the gap between process modeling and software development. In this research, processes are described using directed graphs, similar to the techniques used with CPM. The graphs can thus be manipulated visually while their properties are used to check their validity. The partial ordering and the precedence relationships of the tasks in the graphs are similar to those studied in other research [Delcambre94] [Mills96]. This research adds measurements of the effectiveness of the processes, which provide a basis for judgment when manipulating the graphs to produce or modify a process. Software development can be considered as activities relating three sets: a set of tasks (τ), a set of resources (ρ), and a set of constraints (γ). The process P is then a function of these sets interacting with each other: P = f(τ, ρ, γ). The interactions of these sets can be described in terms of different machine models using scheduling theory. While trying to produce an optimal solution satisfying a set of prescribed conditions analytically leads to a practically infeasible formulation, many heuristic algorithms from scheduling theory, combined with manual manipulation of the tasks, can help produce a reasonably good process, whose effectiveness is reflected through a set of measurement criteria, in particular the make-span, the float, and the bottlenecks. Through an integrated process modeling environment, these measurements can be obtained in real time, providing a feedback loop during process execution. This feedback loop is essential for risk management and control.
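
    The make-span, float, and bottleneck measurements named above can be computed with a standard CPM-style forward/backward pass over the task graph. The sketch below uses an invented five-task process; it illustrates the measurements, not the dissertation's framework.

```python
# CPM-style forward/backward passes over an invented five-task process
# graph, yielding the make-span, per-task float, and zero-float bottlenecks.
durations = {"spec": 3, "design": 5, "code": 7, "docs": 2, "test": 4}
preds = {"spec": [], "design": ["spec"], "code": ["design"],
         "docs": ["design"], "test": ["code", "docs"]}
order = ["spec", "design", "code", "docs", "test"]  # a topological order

early = {}                                          # earliest start times
for t in order:
    early[t] = max((early[p] + durations[p] for p in preds[t]), default=0)
makespan = max(early[t] + durations[t] for t in order)

succs = {t: [s for s in order if t in preds[s]] for t in order}
late = {}                                           # latest start times
for t in reversed(order):
    late[t] = min((late[s] for s in succs[t]), default=makespan) - durations[t]

floats = {t: late[t] - early[t] for t in order}     # schedule slack per task
critical = [t for t in order if floats[t] == 0]     # zero-float bottleneck chain
print("make-span:", makespan, "floats:", floats, "critical:", critical)
```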

    Using DEA Factor Efficiency Scores to Eliminate Subjectivity in Goal Programming

    Many real-world problems require decision makers to consider multiple criteria when performing an analysis. One popular method for analyzing multicriteria decision problems is goal programming. When applying goal programming, it is often difficult, if not impossible, to determine the target values and unit penalty weights with any level of confidence. Thus, in many situations, managers and decision makers may be forced to specify these parameters subjectively. In this paper, we present a model framework designed to eliminate the arbitrary assignment of target values and unit penalty weights when applying goal programming to multicriteria decision problems. In particular, when neither of these parameters is available, we show how to integrate factor efficiency scores determined from data envelopment analysis (DEA) into the model. We discuss an application of the methodology to ambulatory surgery centers and demonstrate the model framework via a product mix example.
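
    As a rough sketch of the DEA ingredient only: the classic input-oriented CCR multiplier model can be solved per decision-making unit (DMU) as a linear program. The paper's integration of the resulting efficiency scores into the goal program is not shown here, and the DMU data are invented rather than the surgery-center data.

```python
# Input-oriented CCR (multiplier form) DEA, solved per DMU with scipy.
# The input/output data below are illustrative, not from the paper.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])   # inputs,  one row per DMU
Y = np.array([[5.0], [4.0], [6.0]])                  # outputs, one row per DMU

def ccr_efficiency(o: int) -> float:
    n_out, n_in = Y.shape[1], X.shape[1]
    # Variables: output weights u (n_out), then input weights v (n_in).
    c = np.concatenate([-Y[o], np.zeros(n_in)])      # maximize u . y_o
    A_eq = [np.concatenate([np.zeros(n_out), X[o]])] # normalize: v . x_o = 1
    b_eq = [1.0]
    A_ub = np.hstack([Y, -X])                        # u.y_j - v.x_j <= 0, all j
    b_ub = np.zeros(len(X))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n_out + n_in), method="highs")
    return -res.fun                                  # efficiency score in (0, 1]

print([round(ccr_efficiency(o), 3) for o in range(len(X))])
```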

    Languages and Tools for Real-Time Systems: Problems, Solutions and Opportunities

    This report summarizes two talks I gave at the ACM SIGPLAN Workshop on Language, Compiler, and Tool Support for Real-Time Systems, which took place on June 21, 1994, in Orlando, Florida. The workshop was held in concert with the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). The first talk ("Statements about Real-Time: Truth or Bull?") was given in the early morning. At the behest of the workshop's organizers, its primary function was to seed the ongoing discourse and provoke some debate. Besides asking controversial questions and positing opinions, the talk also identified several fertile research areas that might interest PLDI attendees. The second talk ("Languages and Transformations: Some Solutions") was more technical; it reviewed our research on program optimizations for real-time domains. However, I tried as much as possible to revisit the research problems raised in the morning talk and present some possible approaches to them. The following paragraphs contain the text from my viewgraphs, laced with some commentary. Since so much work has been done in real-time systems, and even more in programming languages, my references are by necessity incomplete. (Also cross-referenced as UMIACS-TR-94-117.)

    Comparative Evaluation of Generalized River/Reservoir System Models

    This report reviews user-oriented generalized reservoir/river system models. The terms reservoir/river system, reservoir system, reservoir operation, and river basin management "model" or "modeling system" are used synonymously to refer to computer modeling systems that simulate the storage, flow, and diversion of water in a system of reservoirs and river reaches. Generalized means that a modeling system is designed for application to a range of concerns dealing with river basin systems of various configurations and locations, rather than being customized to one particular site. User-oriented implies that the modeling system is designed for use by professional practitioners (model-users) other than the original model developers, and is thoroughly tested and well documented. User-oriented generalized modeling systems should be convenient to obtain, understand, and use, and should work correctly, completely, and efficiently. Modeling applications often involve a system of several simulation models, utility software products, and databases used in combination. A reservoir/river system model is itself a modeling system, and it often serves as a component of a larger modeling system that may include watershed hydrology and river hydraulics models, water quality models, databases, and various software tools for managing time series, spatial, and other types of data. Reservoir/river system models are based on volume-balance accounting procedures for tracking the movement of water through a system of reservoirs and river reaches. The model computes reservoir storage contents, evaporation, water supply withdrawals, hydroelectric energy generation, and river flows for specified system operating rules and input sequences of stream inflows and net evaporation rates. The hydrologic period-of-analysis and the computational time step may vary greatly depending on the application. Storage and flow hydrograph ordinates for a flood event occurring over a few days may be determined at intervals of an hour or less, whereas water supply capabilities may be modeled with a monthly time step and a several-decade-long period-of-analysis capturing the full range of fluctuating wet and dry periods, including extended drought. Stream inflows are usually generated outside of the reservoir/river system model and provided as input; however, such models may also include capabilities for modeling watershed precipitation-runoff processes to generate inflows to the river/reservoir system. Some reservoir/river system models simulate water quality constituents along with water quantities, and some include features for economic evaluation of system performance based on cost and benefit functions of flow and storage.
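
    The volume-balance accounting these models share reduces, per time step, to updating storage by inflow minus evaporation, withdrawals, and spill, subject to capacity. A minimal sketch with invented numbers:

```python
# Minimal reservoir volume-balance step: storage is updated by inflow minus
# evaporation, demand withdrawals, and spill, bounded by capacity.
# All quantities below are illustrative, not from any reviewed model.
CAPACITY = 1000.0        # reservoir capacity (e.g., acre-feet)

def step(storage, inflow, net_evap_rate, demand):
    evap = net_evap_rate * storage          # crude area-storage proxy
    available = max(0.0, storage + inflow - evap)
    withdrawal = min(demand, available)     # shortfall when demand > available
    storage = available - withdrawal
    spill = max(0.0, storage - CAPACITY)    # excess released downstream
    return storage - spill, withdrawal, spill

storage = 600.0
for month, (inflow, demand) in enumerate([(120, 80), (40, 90), (300, 70)], 1):
    storage, withdrawal, spill = step(storage, inflow, 0.01, demand)
    print(f"month {month}: storage={storage:.1f} "
          f"withdrawal={withdrawal:.1f} spill={spill:.1f}")
```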