438 research outputs found

    Scheduling with processing set restrictions: a survey

    Get PDF
    2008-2009 > Academic research: refereed > Publication in refereed journal (Accepted Manuscript)

    Distributionally robust views on queues and related stochastic models

    Get PDF
    This dissertation explores distribution-free methods for stochastic models. Traditional approaches operate on the premise of complete knowledge about the probability distributions of the underlying random variables that govern these models. In contrast, this work adopts a distribution-free perspective, assuming only partial knowledge of these distributions, often limited to generalized moment information. Distributionally robust analysis seeks to determine the worst-case model performance. It involves optimization over a set of probability distributions that comply with this partial information, a task tantamount to solving a semi-infinite linear program. To address such an optimization problem, a solution approach based on the concept of weak duality is used. Through the proposed weak-duality argument, distribution-free bounds are derived for a wide range of stochastic models. Further, these bounds are applied to various distributionally robust stochastic programs and used to analyze extremal queueing models, central themes in applied probability and mathematical optimization.
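As a minimal illustration of this kind of moment-based worst-case analysis (a classical example, not code from the dissertation itself): Cantelli's one-sided inequality gives the tight worst-case tail probability over the set of all distributions with a given mean and variance.

```python
def cantelli_upper_bound(mean: float, var: float, threshold: float) -> float:
    """Worst-case P(X >= threshold) over ALL distributions with the given
    mean and variance (Cantelli's one-sided inequality).

    The bound is tight: a two-point distribution attains it, which is why
    such extremal distributions appear in distributionally robust analysis.
    """
    t = threshold - mean
    if t <= 0:
        # Threshold at or below the mean: no nontrivial distribution-free bound.
        return 1.0
    return var / (var + t * t)


# Example: zero-mean, unit-variance demand; worst-case chance of exceeding 3.
print(cantelli_upper_bound(0.0, 1.0, 3.0))  # 0.1
```

The same worst-case value can be recovered by maximising `P(X >= threshold)` over distributions matching the first two moments, which is the semi-infinite linear program the abstract refers to; Cantelli's bound is its closed-form solution.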

    Temporal analysis and scheduling of hard real-time radios running on a multi-processor

    Get PDF
    On a multi-radio baseband system, multiple independent transceivers must share the resources of a multi-processor, while each meeting its own hard real-time requirements. Not all possible combinations of transceivers are known at compile time, so a solution must be found that either allows for independent timing analysis or relies on runtime timing analysis. This thesis proposes a design flow and software architecture that meets these challenges, while enabling features such as independent transceiver compilation and dynamic loading, and taking into account other challenges such as ease of programming, efficiency, and ease of validation. We take data flow as the basic model of computation, as it fits the application domain, and several static variants (such as Single-Rate, Multi-Rate and Cyclo-Static) have been shown to possess strong analytical properties. Traditional temporal analysis of data flow can provide minimum throughput guarantees for a self-timed implementation of data flow. Since transceivers may need to guarantee strictly periodic execution and meet latency requirements, we extend the analysis techniques to show that we can enforce strict periodicity for an actor in the graph; we also provide maximum latency analysis techniques for periodic, sporadic and bursty sources. We propose a scheduling strategy and an automatic scheduling flow that enable the simultaneous execution of multiple transceivers with hard real-time requirements, described as Single-Rate Data Flow (SRDF) graphs. Each transceiver has its own execution rate and starts and stops independently from other transceivers, at times unknown at compile time, on a multiprocessor. We show how to combine scheduling and mapping decisions with the input application data flow graph to generate a worst-case temporal analysis graph.
We propose algorithms to find a mapping per transceiver in the form of clusters of statically-ordered actors, and a budget for either a Time Division Multiplex (TDM) or Non-Preemptive Non-Blocking Round Robin (NPNBRR) scheduler per cluster per transceiver. The budget is computed such that if the platform can provide it, then the desired minimum throughput and maximum latency of the transceiver are guaranteed, while minimizing the required processing resources. We illustrate the use of these techniques to map a combination of WLAN and TDS-CDMA receivers onto a prototype Software-Defined Radio platform. The functionality of transceivers for standards with very dynamic behavior, such as WLAN, cannot be conveniently modeled as an SRDF graph, since SRDF is not capable of expressing variations of actor firing rules depending on the values of input data. Because of this, we propose a restricted, customized data flow model of computation, Mode-Controlled Data Flow (MCDF), which can capture the data-value dependent behavior of a transceiver, while allowing rigorous temporal analysis and tight resource budgeting. We develop a number of analysis techniques to characterize the temporal behavior of MCDF graphs, in terms of maximum latencies and throughput. We also provide an extension to MCDF of our scheduling strategy for SRDF. The capabilities of MCDF are then illustrated with a WLAN 802.11a receiver model. Having computed budgets for each transceiver, we propose a way to use these budgets for run-time resource mapping and admissibility analysis. During run-time, at transceiver start time, the budget for each cluster of statically-ordered actors is allocated by a resource manager to platform resources. The resource manager enforces strict admission control, to restrict transceivers from interfering with each other's worst-case temporal behaviors.
We propose algorithms adapted from Vector Bin-Packing to enable the mapping at start time of transceivers to the multi-processor architecture, considering also the case where the processors are connected by a network-on-chip with resource reservation guarantees, in which case we also find routing and resource allocation on the network-on-chip. In our experiments, our resource allocation algorithms can keep 95% of the system resources occupied, while suffering from an allocation failure rate of less than 5%. An implementation of the framework was carried out on a prototype board. We present performance and memory utilization figures for this implementation, as they provide insights into the costs of adopting our approach. It turns out that the scheduling and synchronization overhead of an unoptimized implementation of the framework, with no hardware support for synchronization, is 16.3% of the cycle budget for a WLAN receiver on an EVP processor at 320 MHz. However, this overhead is less than 1% for mobile standards such as TDS-CDMA or LTE, which have lower rates, and thus larger cycle budgets. Considering that clock speeds will increase and that the synchronization primitives can be optimized to exploit the addressing modes available in the EVP, these results are very promising.
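For context, the minimum-throughput guarantee for a self-timed SRDF implementation mentioned above is classically derived from the maximum cycle ratio of the graph: the largest ratio, over all cycles, of total actor execution time to initial tokens on the cycle's edges. The sketch below is illustrative only (the actor names, timings, and brute-force cycle enumeration are assumptions for a toy example, not the thesis's algorithms, which use far more efficient techniques):

```python
def max_cycle_ratio(exec_time, edges):
    """Maximum cycle ratio of an SRDF graph, by brute-force cycle enumeration.

    exec_time: dict actor -> firing duration (time units)
    edges:     list of (src, dst, initial_tokens)
    Returns the max over simple cycles of (sum of actor times / sum of tokens);
    the self-timed throughput guarantee is 1 / max_cycle_ratio. Returns 0.0
    if the graph has no cycle with at least one token.
    """
    adj = {}
    for src, dst, tok in edges:
        adj.setdefault(src, []).append((dst, tok))

    best = 0.0

    def dfs(start, node, tsum, ksum, seen):
        nonlocal best
        for dst, tok in adj.get(node, []):
            if dst == start:
                if ksum + tok > 0:  # zero-token cycles mean deadlock, not a ratio
                    best = max(best, tsum / (ksum + tok))
            elif dst not in seen:
                dfs(start, dst, tsum + exec_time[dst], ksum + tok, seen | {dst})

    for actor in exec_time:
        dfs(actor, actor, exec_time[actor], 0, {actor})
    return best


# Toy graph: actor A (2 time units) feeds B (3 units); the back edge B->A
# carries one initial token. The only cycle has ratio (2+3)/1 = 5, so the
# self-timed schedule is guaranteed at least one graph iteration per 5 units.
mcr = max_cycle_ratio({"A": 2, "B": 3}, [("A", "B", 0), ("B", "A", 1)])
print(mcr, 1.0 / mcr)  # 5.0 0.2
```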

    A new hybrid meta-heuristic algorithm for solving single machine scheduling problems

    Get PDF
    A dissertation submitted in partial fulfilment of the degree of Master of Science in Engineering (Electrical) (50/50) in the Faculty of Engineering and the Built Environment, Department of Electrical and Information Engineering, May 2017. Numerous applications in a wide variety of fields have resulted in a rich history of research into optimisation for scheduling. Although it is a fundamental form of the problem, the single machine scheduling problem with two or more objectives is known to be NP-hard. For this reason we consider the single machine problem a good test bed for solution algorithms. While there is a plethora of research into various aspects of scheduling problems, little has been done in evaluating the performance of the Simulated Annealing algorithm for the fundamental problem, or using it in combination with other techniques. Specifically, this has not been done for minimising total weighted earliness and tardiness, which is the optimisation objective of this work. If we consider a mere ten jobs for scheduling, this results in over 3.6 million possible solution schedules. It is thus of definite practical necessity to reduce the search space in order to find an optimal or acceptable suboptimal solution in a shorter time, especially when scaling up the problem size. This is of particular importance in the application area of packet scheduling in wireless communications networks, where the tolerance for computational delays is very low. The main contribution of this work is to investigate the hypothesis that inserting a step of pre-sampling by Markov Chain Monte Carlo methods before running the Simulated Annealing algorithm on the pruned search space can result in overall reduced running times. The search space is divided into a number of sections and Metropolis-Hastings Markov Chain Monte Carlo is performed over the sections in order to reduce the search space for Simulated Annealing by a factor of 20 to 100.
Trade-offs are found between the run time and number of sections of the pre-sampling algorithm, and the run time of Simulated Annealing for minimising the percentage deviation of the final result from the optimal solution cost. Algorithm performance is determined both by computational complexity and the quality of the solution (i.e. the percentage deviation from the optimal). We find that the running time can be reduced by a factor of 4.5 to ensure a 2% deviation from the optimal, as compared to the basic Simulated Annealing algorithm on the full search space. More importantly, we are able to reduce the complexity of finding the optimal from O(n·n!) for a complete search, to O(nN_S) for Simulated Annealing, to O(n(N_M·r + N_S) + m) for the input variables n jobs, N_S SA iterations, N_M Metropolis-Hastings iterations, r inner samples and m sections.
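To make the objective concrete, here is a minimal Simulated Annealing sketch for total weighted earliness/tardiness on a single machine. This is the plain baseline SA over the full permutation space, not the thesis's hybrid with Metropolis-Hastings pre-sampling, and the instance data, iteration count, and cooling schedule are illustrative assumptions:

```python
import math
import random


def weighted_et(seq, p, d, w):
    """Total weighted earliness + tardiness: sum_j w[j] * |C_j - d_j|,
    where C_j is the completion time of job j in sequence seq."""
    cost, t = 0.0, 0
    for j in seq:
        t += p[j]
        cost += w[j] * abs(t - d[j])
    return cost


def anneal(p, d, w, iters=20000, t0=10.0, alpha=0.9995, seed=0):
    """Baseline SA: random pairwise swaps, geometric cooling."""
    rng = random.Random(seed)
    seq = list(range(len(p)))
    cur = best = weighted_et(seq, p, d, w)
    best_seq = seq[:]
    temp = t0
    for _ in range(iters):
        i, j = rng.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]          # propose a swap
        cand = weighted_et(seq, p, d, w)
        # Accept improvements always; worsenings with Boltzmann probability.
        if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
            cur = cand
            if cur < best:
                best, best_seq = cur, seq[:]
        else:
            seq[i], seq[j] = seq[j], seq[i]      # reject: undo the swap
        temp *= alpha
    return best, best_seq


# Tiny instance: processing times p, due dates d, weights w.
# Sequencing job 0, then 2, then 1 completes them at t = 2, 3, 6,
# exactly on their due dates, so the optimal cost is 0.
best, seq = anneal([2, 3, 1], [2, 6, 3], [1, 1, 1])
print(best, seq)
```

The hybrid described in the abstract would first restrict `seq` to a promising section of the permutation space chosen by Metropolis-Hastings sampling before running this inner loop.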

    Job-shop scheduling with approximate methods

    Get PDF
    Imperial Users only

    Optimal state estimation and control of space systems under severe uncertainty

    Get PDF
    This thesis presents novel methods and algorithms for state estimation and optimal control under generalised models of uncertainty. Tracking, scheduling, conjunction assessment, as well as trajectory design and analysis, are typically carried out either considering the nominal scenario only or under assumptions and approximations of the underlying uncertainty to keep the computation tractable. However, neglecting uncertainty or not quantifying it properly may result in lengthy design iterations, mission failures, inaccurate estimation of the satellite state, and poorly assessed risk metrics. To overcome these challenges, this thesis proposes approaches to incorporate proper uncertainty treatment in state estimation, navigation and tracking, and trajectory design. First, epistemic uncertainty is introduced as a generalised model to describe partial probabilistic models, ignorance, scarce or conflicting information, and, overall, a larger umbrella of uncertainty structures. Then, new formulations for state estimation, optimal control, and scheduling under mixed aleatory and epistemic uncertainties are proposed to generalise and robustify their current deterministic or purely aleatory counterparts. Practical solution approaches are developed to numerically solve such problems efficiently. Specifically, a polynomial reinitialisation approach for efficient uncertainty propagation is developed to mitigate the stochastic dimensionality in multi-segment problems. For state estimation and navigation, two robust filtering approaches are presented: a generalisation of particle filtering to epistemic uncertainty that exploits samples' precomputations, and a sequential filtering approach employing a combination of variational inference and importance sampling. For optimal control under uncertainty, direct shooting-like transcriptions with a tunable high-fidelity polynomial representation of the dynamical flow are developed.
Uncertainty quantification, orbit determination, and navigation analysis are incorporated in the main optimisation loop to design trajectories that are simultaneously optimal and robust. The methods developed in this thesis are finally applied to a variety of novel test cases, ranging from LEO to deep-space missions, from trajectory design to space traffic management. The epistemic state estimation is employed in the robust estimation of debris conjunction analyses and incorporated in a robust Bayesian framework capable of autonomous decision-making. An optimisation-based scheduling method is presented to efficiently allocate resources to heterogeneous ground stations and fuse information coming from different sensors; it is applied to the optimal tracking of a satellite in a highly perturbed very-low Earth orbit and of a low-resource deep-space spacecraft. The optimal control methods are applied to the robust optimisation of an interplanetary low-thrust trajectory to Apophis, and to the robust redesign of a leg of the Europa Clipper tour with an initial infeasibility on the probability of impact with Jupiter's moon.
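For reference, the purely aleatory baseline that such epistemic filters generalise is the standard bootstrap particle filter. The sketch below runs it on a scalar random-walk model with Gaussian observations; the model, noise levels, and particle count are illustrative assumptions, not taken from the thesis:

```python
import math
import random


def bootstrap_pf(ys, n=500, q=0.1, r=0.5, seed=1):
    """Bootstrap particle filter for the toy model
        x_t = x_{t-1} + N(0, q^2),   y_t = x_t + N(0, r^2).
    Returns the posterior-mean state estimate after each observation in ys.
    """
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n)]   # prior draw x_0 ~ N(0,1)
    estimates = []
    for y in ys:
        parts = [x + rng.gauss(0.0, q) for x in parts]             # propagate
        ws = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in parts]  # likelihood
        total = sum(ws) or 1.0                                     # degeneracy guard
        ws = [wt / total for wt in ws]
        estimates.append(sum(wt * x for wt, x in zip(ws, parts)))  # posterior mean
        parts = rng.choices(parts, weights=ws, k=n)                # resample
    return estimates


# Twenty observations all at 1.0: the estimate should settle near 1.0.
print(bootstrap_pf([1.0] * 20)[-1])
```

An epistemic generalisation, as described in the abstract, would replace the single prior and noise distributions with sets of distributions and report bounds on the estimate rather than a single posterior mean.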