
    Bayesian Search Under Dynamic Disaster Scenarios

    Search and Rescue (SAR) is a hard decision-making context in which a limited amount of resources must be strategically allocated over the search region in order to find missing people in time. In this thesis, we consider SAR scenarios where the search region is affected by a dynamic threat such as a wildfire or a hurricane. Despite the large number of SAR missions that take place under these circumstances, and although Search Theory is a research area dating back more than half a century, to the best of our knowledge this kind of search problem has not been considered in any previous research. We propose a bi-objective mathematical optimization model and three solution methods for the problem: (1) epsilon-constraint; (2) lexicographic; and (3) an ant-colony-based heuristic. The first objective of our model favors allocating resources to the riskiest zones, aiming to find victims located in the regions closest to the threat, which are at high risk of being reached by the disaster. The second objective instead allocates resources to the regions where the victim is most likely to be found. Furthermore, we implemented a receding-horizon approach that gives our planning methodology the ability to adapt to the disaster's behavior based on updated information gathered during the mission. All our products were validated through computational experiments. (Maestría, Magister en Ingeniería Industrial)
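
    As a rough illustration of the Bayesian machinery such a planner relies on, the Python sketch below combines the classical unsuccessful-search posterior update from Search Theory with a greedy bi-criteria scoring of grid cells. The `risk` vector, the weight `w`, and the greedy loop are illustrative assumptions, not the thesis's actual bi-objective model or its three solution methods.

```python
import numpy as np

def bayes_update_after_miss(p, k, d):
    """Posterior target-location probabilities after an unsuccessful
    search of cell k, where d[k] is the conditional detection
    probability (the standard Bayesian search-theory update)."""
    p = p.copy()
    denom = 1.0 - p[k] * d[k]
    p[k] *= 1.0 - d[k]
    return p / denom

def greedy_allocation(p, risk, d, n_resources, w=0.5):
    """Score cells by a convex mix of the two competing objectives:
    probability of containment (p) and threat risk (risk). All names
    here are placeholders for illustration."""
    chosen = []
    for _ in range(n_resources):
        k = int(np.argmax(w * risk + (1.0 - w) * p))
        chosen.append(k)
        p = bayes_update_after_miss(p, k, d)  # plan assuming a miss
    return chosen
```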

    Mission-Phasing Techniques for Constrained Agents in Stochastic Environments.

    Resource constraints restrict the set of actions that an agent can take, such that the agent might not be able to perform all its desired tasks. Computational time limitations restrict the number of states that an agent can model and reason over, such that the agent might not be able to formulate a policy that can respond to all possible eventualities. This work argues that, in either situation, one effective way of improving the agent's performance is to adopt a phasing strategy. Resource-constrained agents can choose to reconfigure resources and switch action sets to better handle upcoming events when moving from phase to phase; time-limited agents can choose to focus computation on high-value phases and to exploit additional computation time during the execution of earlier phases to improve solutions for future phases. This dissertation consists of two parts, corresponding to the aforementioned resource constraints and computational time limitations. The first part focuses on the development of automated resource-driven mission-phasing techniques for agents operating in resource-constrained environments. We designed a suite of algorithms that can not only find solutions that optimize the use of predefined phase-switching points, but can also automatically determine where to establish such points, accounting for the cost of creating them, in complex stochastic environments. By casting the coupled problems of mission decomposition, resource configuration, and policy formulation as a single compact mathematical formulation, the presented algorithms can effectively exploit problem structure and often considerably reduce the computational cost of finding exact solutions. The second part of this dissertation is the design of computation-driven mission-phasing techniques for time-critical systems. We developed a new deliberation scheduling approach, which can simultaneously solve the coupled problems of deciding when to deliberate, given its cost, and which phase decision procedures to execute during deliberation intervals. Meanwhile, we designed a heuristic search method to effectively utilize the allocated time within each phase. As illustrated in the experimental results, the computation-driven mission-phasing techniques, which extend problem decomposition techniques with the across-phase deliberation scheduling and inner-phase heuristic search methods mentioned above, can help an agent generate a better policy within the time limit. (Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/60650/1/jianhuiw_1.pd)
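
    To make the phasing idea concrete, here is a minimal, assumption-laden sketch of value iteration on an MDP whose state is augmented with a resource configuration, where the agent may spend a step and pay `switch_cost` to change configurations. This toy stands in for, and is far simpler than, the dissertation's compact joint formulation of mission decomposition, resource configuration, and policy formulation.

```python
import numpy as np

def phased_value_iteration(P, R, switch_cost, gamma=0.95, tol=1e-8):
    """Value iteration on a configuration-augmented MDP: P[c][a] is an
    |S|x|S| transition matrix and R[c][a] an |S| reward vector for
    action a under configuration c. Instead of acting, the agent may
    spend one (discounted) step and pay switch_cost to adopt another
    configuration. A toy rendering, not the dissertation's model."""
    C, S = len(P), P[0][0].shape[0]
    V = np.zeros((C, S))
    while True:
        # best value per (config, state) achievable by acting
        act = np.stack([
            np.max([R[c][a] + gamma * (P[c][a] @ V[c])
                    for a in range(len(P[c]))], axis=0)
            for c in range(C)])
        # or reconfigure: pay the cost, keep the state, change config
        switch = gamma * V.max(axis=0) - switch_cost
        V_new = np.maximum(act, switch)
        if np.abs(V_new - V).max() < tol:
            return V_new
        V = V_new
```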

    Datacenter management for on-site intermittent and uncertain renewable energy sources

    In recent years, information and communication technologies (ICT) have become a major energy consumer, with the associated harmful ecological consequences. Indeed, the emergence of Cloud computing and massive Internet companies has increased the importance and number of datacenters around the world. In order to mitigate economical and ecological costs, powering datacenters with renewable energy sources (RES) has begun to appear as a sustainable solution. Some of the commonly used RES, such as solar and wind energies, depend directly on weather conditions; hence they are both intermittent and partly uncertain. Batteries or other energy storage devices (ESD) are often considered to relieve these issues, but they incur additional energy losses and are too costly to be used alone without further integration. The power consumption of a datacenter is closely tied to its computing resource usage, which in turn depends on its workload and on the algorithms that schedule it. To use RES as efficiently as possible while preserving the quality of service of a datacenter, a coordinated management of computing resources, electrical sources and storage is required. A wide variety of datacenters exists, each with different hardware, workload and purpose. Similarly, each electrical infrastructure is modeled and managed uniquely, depending on the kind of RES used, ESD technologies and operating objectives (cost or environmental impact). Some existing works successfully address this problem by considering a specific pair of electrical and computing models; however, because of this combined diversity, the existing approaches cannot be extrapolated to other infrastructures. This thesis explores novel ways to deal with this coordination problem. A first contribution revisits the batch task scheduling problem by introducing an abstraction of the power sources. A scheduling algorithm is proposed that takes the preferences of the electrical sources into account while remaining independent of the type of sources and of the goal of the electrical infrastructure (cost, environmental impact, or a mix of both). A second contribution addresses the joint power planning coordination problem in a totally infrastructure-agnostic way. The datacenter computing resources and workload management are encapsulated in a black box implementing scheduling under a variable power constraint. The same goes for the electrical sources and storage management system, which acts as a source commitment optimization algorithm meeting a power demand. A cooperative multiobjective power planning optimization, based on a multi-objective evolutionary algorithm (MOEA), dialogues with the two black boxes to find the best trade-offs between electrical and computing internal objectives. Finally, a third contribution focuses on RES production uncertainties in a more specific infrastructure. Based on a Markov Decision Process (MDP) formulation, the structure of the underlying decision problem is studied. For several variants of the problem, tractable methods are proposed to find optimal policies or bounded approximations of them.
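
    As a hedged sketch of the infrastructure-agnostic coordination idea, the Python fragment below samples candidate power profiles, queries two opaque callables for the computing and electrical objectives, and keeps the Pareto-efficient trade-offs. Plain random sampling stands in for the MOEA, and all names (`it_blackbox`, `power_blackbox`, `qos`, `cost`) are placeholders rather than the thesis's interfaces.

```python
import random

def dominates(a, b):
    """a dominates b when it is no worse on both objectives
    (maximize qos, minimize cost) and strictly better on one."""
    return (a["qos"] >= b["qos"] and a["cost"] <= b["cost"]
            and (a["qos"] > b["qos"] or a["cost"] < b["cost"]))

def coordinate(it_blackbox, power_blackbox, horizon, p_max, n_iter=1000):
    """Infrastructure-agnostic loop: propose a power profile (one cap
    per time slot), ask the IT black box how well it schedules under
    that cap, ask the electrical black box what supplying the profile
    costs, and archive the non-dominated trade-offs."""
    cands = []
    for _ in range(n_iter):
        profile = [random.uniform(0.0, p_max) for _ in range(horizon)]
        cands.append({"profile": profile,
                      "qos": it_blackbox(profile),       # e.g. jobs served on time
                      "cost": power_blackbox(profile)})  # e.g. grid energy bought
    return [p for p in cands
            if not any(dominates(q, p) for q in cands)]
```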

    Lot-Sizing Problem for a Multi-Item Multi-level Capacitated Batch Production System with Setup Carryover, Emission Control and Backlogging using a Dynamic Program and Decomposition Heuristic

    Wagner and Whitin (1958) developed an algorithm to solve the dynamic Economic Lot-Sizing Problem (ELSP), which is widely applied in inventory control, production planning, and capacity planning. The original algorithm runs in O(T^2) time, where T is the number of periods of the problem instance. Afterward, a few linear-time algorithms were developed for problems under the Wagner-Whitin (WW) cost structure; examples include the ELSP and the equivalent Single-Machine Batch-Sizing Problem (SMBSP). This dissertation revisits the algorithms for ELSPs and SMBSPs under the WW cost structure, presents a new efficient linear-time algorithm, and compares the developed algorithm against comparable ones in the literature. The developed algorithm employs both list and stack data structures, a completely different approach from the rest of the algorithms for ELSPs and SMBSPs. Analysis shows that it executes fewer basic actions and hence improves the CPU time by up to 51.40% for ELSPs and 29.03% for SMBSPs. It can be concluded that the new algorithm is faster than existing algorithms for both ELSPs and SMBSPs. Lot-sizing decisions are crucial because they help the manufacturer determine the quantity and time to produce an item at minimum cost; the efficiency and productivity of a system depend completely on the right choice of lot sizes, so developing and improving solution procedures for lot-sizing problems is key. This dissertation also addresses the classical Multi-Level Capacitated Lot-Sizing Problem (MLCLSP) and an extension of the MLCLSP with setup carryover, backlogging and emission control. An item-based Dantzig-Wolfe (DW) decomposition technique with an embedded Column Generation (CG) procedure is used to solve the problem. The original problem is decomposed into a master problem and a number of subproblems, which are solved using a dynamic programming approach. Since the subproblems are solved independently, their solutions often become infeasible for the master problem. A multi-step iterative Capacity Allocation (CA) heuristic is used to tackle this infeasibility, and a Linear Programming (LP) based improvement procedure refines the solutions obtained from the heuristic. A comparative study of the proposed heuristic for the first problem (MLCLSP) shows that it yields smaller optimality gaps than those reported in the literature. The Setup Carryover Assignment Problem (SCAP), which consists of determining the setup carryover plan of multiple items for a given lot size over a finite planning horizon, is modelled as the problem of finding a Maximum Weighted Independent Set (MWIS) in a chain of cliques. The SCAP is formulated using clique constraints, and it is proved that the incidence matrix of the SCAP is totally unimodular, so the LP relaxation of the proposed SCAP formulation always yields an integer optimal solution. Moreover, an alternative proof that the relaxed ILP guarantees an integer solution is presented. Thus, the SCAP and this special case of the MWIS in a chain of cliques are solvable in polynomial time.
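
    For context, here is a minimal sketch of the classic O(T^2) Wagner-Whitin recursion that the dissertation's linear-time algorithm improves upon; this is a textbook rendering with constant setup and holding costs, not a reproduction of the thesis's list-and-stack algorithm.

```python
def wagner_whitin(demand, setup_cost, hold_cost):
    """Classic O(T^2) Wagner-Whitin dynamic program for the
    uncapacitated single-item lot-sizing problem. F[t] is the minimum
    cost of meeting demand in periods 1..t."""
    T = len(demand)
    F = [0.0] + [float("inf")] * T
    for t in range(1, T + 1):
        holding = 0.0  # holding cost if the last setup is in period j
        tail = 0.0     # demand accumulated between j and t
        for j in range(t, 0, -1):
            # candidate: last setup in period j covers demand for j..t
            F[t] = min(F[t], F[j - 1] + setup_cost + holding)
            # moving the setup one period earlier holds d_j..d_t longer
            tail += demand[j - 1]
            holding += hold_cost * tail
    return F[T]

# Example: 4 periods, setup cost 90, holding cost 1/unit/period -> 360.0
print(wagner_whitin([60, 100, 140, 200], 90, 1.0))
```

    The quadratic cost comes from trying every candidate last-setup period j for each horizon t; the linear-time algorithms discussed in the dissertation avoid re-examining candidates that can no longer be optimal.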

    Condition-based hazard rate estimation and optimal maintenance scheduling for electrical transmission system

    The effectiveness of expending maintenance resources can vary dramatically depending on the target and timing of the maintenance activities. The objective of this work is to develop a method of allocating economic resources and scheduling maintenance tasks among bulk transmission system equipment, so as to optimize the effect of maintenance with respect to the mitigation of component failure consequences. Techniques including condition-based failure rate estimation of electric transmission system components, analysis of failure consequences in the power system, probabilistic modeling and risk assessment, and optimization are integrated in this work. A hidden Markov model is a good tool to estimate the instantaneous status of deteriorating components. The maintenance selection and scheduling approach for bulk transmission equipment is based on the cumulative long-term risk caused by the failure of each piece of equipment. This approach accounts not only for equipment failure probability and equipment damage, but also for the outage consequences in terms of system-related security problems. Various types of maintenance activities are studied, and their relationships to failure modes and system security improvement are investigated. An optimizer is developed to select and schedule maintenance for large networks with various types of resource constraints, together with methods of resource reallocation. Finally, a strategy for coordinating maintenance activities among different transmission owners is developed. The objective of our work is to allocate resources economically and strategically so as to obtain the best maintenance performance for the electrical transmission system. These strategies can also be applied to problems inherent to resource-intensive asset management in many similar types of infrastructure, such as gas pipelines, airlines, and telecommunications.
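
    As a hedged illustration of how a hidden Markov model can track a deterioration state from periodic inspections, consider the standard forward filter below; the matrices `A` and `B` and the inspection encoding are generic assumptions for illustration, not the thesis's fitted model.

```python
import numpy as np

def hmm_filter(A, B, pi, observations):
    """Forward filtering for an HMM of component deterioration:
    A[i, j] = P(next state j | state i) per period,
    B[i, o]  = P(inspection outcome o | state i),
    pi       = initial state distribution.
    Returns P(state | outcomes so far) after each inspection; the
    filtered belief is what a condition-based hazard rate estimate
    would be built on."""
    belief = pi.astype(float).copy()
    beliefs = []
    for o in observations:
        belief = belief @ A        # predict one deterioration step
        belief = belief * B[:, o]  # weight by the inspection outcome
        belief /= belief.sum()     # renormalize
        beliefs.append(belief.copy())
    return beliefs
```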

    Smooth path planning with Pythagorean-hodograph spline curves: geometric design and motion control

    This thesis addresses two significant problems regarding autonomous systems, namely path and trajectory planning. Path planning deals with finding a suitable path from a start to a goal position by exploiting a given representation of the environment. Trajectory planning schemes govern the motion along the path by generating appropriate reference (path) points. We propose a two-step approach for the construction of planar smooth collision-free navigation paths. Obstacle avoidance techniques that rely on classical data structures are initially considered for the identification of piecewise linear paths that do not intersect the obstacles of a given scenario. In the second step of the scheme we rely on spline interpolation algorithms with tension parameters to provide a smooth planar control strategy. In particular, we consider Pythagorean-hodograph (PH) curves, since they allow exact computation of fundamental geometric quantities. The vertices of the previously produced piecewise linear paths are interpolated using a G1 or G2 interpolation scheme with tension based on PH splines. In both cases, a strategy based on the asymptotic analysis of the interpolation scheme is developed to select the tension parameters automatically. To completely describe the motion along the path, we present a configurable trajectory planning strategy for the offline definition of time-dependent C2 piecewise-quintic feedrates. When PH spline curves are considered, the corresponding accurate and efficient CNC interpolator algorithms can be exploited.
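
    For reference, the defining property that makes PH curves attractive here, stated in the standard form for planar curves (a well-known characterization, not specific to this thesis): the magnitude of the hodograph is itself a polynomial.

```latex
% A planar polynomial curve r(t) = (x(t), y(t)) is Pythagorean-hodograph if
\[
  x'(t)^2 + y'(t)^2 = \sigma(t)^2
\]
% for some polynomial sigma(t), the parametric speed. Equivalently, for
% polynomials u(t), v(t):
\[
  x'(t) = u(t)^2 - v(t)^2, \qquad
  y'(t) = 2\,u(t)\,v(t), \qquad
  \sigma(t) = u(t)^2 + v(t)^2 .
\]
```

    Because the parametric speed is polynomial, arc length, offsets, bending energy and similar quantities admit exact closed-form evaluation, which is what the CNC feedrate interpolators mentioned above exploit.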

    New Product Introduction in the Pharmaceutical Industry


    Activity Report: Automatic Control 2013


    Improvement of Geometric Quality Inspection and Process Efficiency in Additive Manufacturing

    Additive manufacturing (AM) is known for its ability to produce complex geometries in flexible production environments. In recent decades, it has attracted increasing attention and interest across industrial sectors. However, some technical challenges still hinder the wide application of AM. One major barrier is the limited dimensional accuracy of AM-produced parts, especially for industrial sectors such as aerospace and biomedical engineering, where high geometric accuracy is required; moreover, traditional quality inspection techniques may not perform well due to the complexity and flexibility of AM-fabricated parts. Another issue, arising from the growing demand for large-scale 3D printing in these sectors, is the limited fabrication speed of AM processes, and how to improve fabrication efficiency without sacrificing geometric quality remains a challenging problem that has not been well addressed. In this work, new geometric inspection methods are proposed for both offline and online inspection paradigms, and a layer-by-layer toolpath optimization model is proposed to further improve the fabrication efficiency of AM processes without degrading the resolution. First, a novel Location-Orientation-Shape (LOS) distribution derived from 3D scanning output is proposed to improve offline inspection in detecting and distinguishing positional and dimensional non-conformities of features. Second, online geometric inspection is improved by a multi-resolution alignment and inspection framework based on wavelet decomposition and design of experiments (DOE); the new framework improves alignment accuracy and distinguishes different sources of error based on the shape deviation of each layer. In addition, a quickest change point detection method is used to identify the layer where the earliest change in the systematic deviation distribution occurs during the printing process. Third, to further improve printing efficiency without sacrificing the quality of each layer, a toolpath allocation and scheduling optimization model is proposed based on a concurrent AM process that allows multiple extruders to work collaboratively on the same layer. For each improvement, numerical studies demonstrate the theoretical and practical value of the proposed methodologies.
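
    As a hedged aside on the quickest-detection step, a one-sided Gaussian CUSUM over per-layer deviation summaries might look like the sketch below; the mean-shift parameters and the per-layer statistic are assumptions for illustration, not the thesis's exact procedure.

```python
import numpy as np

def cusum_first_change(deviations, mu0, mu1, sigma, h):
    """One-sided CUSUM for quickest detection of a shift in the mean
    per-layer deviation from mu0 to mu1, assuming Gaussian noise with
    std sigma. Returns the index of the first layer whose statistic
    crosses the threshold h, or None if no change is flagged."""
    x = np.asarray(deviations, dtype=float)
    # log-likelihood-ratio increment for a Gaussian mean shift
    llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2.0)
    s = 0.0
    for layer, z in enumerate(llr):
        s = max(0.0, s + z)  # CUSUM recursion: reset at zero
        if s > h:
            return layer      # earliest layer with a detected shift
    return None
```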

    Sample Path Analysis of Integrate-and-Fire Neurons

    Computational neuroscience is concerned with answering two intertwined questions that are based on the assumption that spatio-temporal patterns of spikes form the universal language of the nervous system. First, what function does a specific neural circuitry perform in the elaboration of a behavior? Second, how do neural circuits process behaviorally relevant information? Non-linear system analysis has proven instrumental in understanding the coding strategies of early neural processing in various sensory modalities. Yet, at higher levels of integration, it fails to help in deciphering the response of assemblies of neurons to complex naturalistic stimuli. While neural activity can be assumed to be primarily driven by the stimulus at early stages of processing, at the cortical level the intrinsic activity of neural circuits interacts with their high-dimensional input and transforms it in a stochastic non-linear fashion. As a consequence, any attempt to fully understand the brain through a system-analysis approach becomes illusory. However, it is increasingly advocated that neural noise plays a constructive role in neural processing, facilitating information transmission. This prompts us to seek insight into the neural code by studying the stochasticity of neuronal activity, which is viewed as biologically relevant. Such an endeavor requires the design of guiding theoretical principles to assess the potential benefits of neural noise. In this context, meeting the requirements of biological relevance and computational tractability, while providing a stochastic description of neural activity, prescribes the adoption of the integrate-and-fire model. In this thesis, building on the path-wise description of neuronal activity, we propose to further the stochastic analysis of the integrate-and-fire model through a combination of numerical and theoretical techniques. To begin, we expand upon the path-wise construction of linear diffusions, which offers a natural setting to describe leaky integrate-and-fire neurons, as inhomogeneous Markov chains. Based on the theoretical analysis of the first-passage problem, we then explore the interplay between the internal neuronal noise and the statistics of injected perturbations at the single-unit level, and examine its implications for neural coding. At the population level, we also develop an exact event-driven implementation of a Markov network of perfect integrate-and-fire neurons with both time-delayed and instantaneous interactions and arbitrary topology. We hope our approach will provide new paradigms to understand how sensory inputs perturb neural intrinsic activity and accomplish the goal of developing a new technique for identifying relevant patterns of population activity. From a perturbative perspective, our study shows how injecting frozen noise in different flavors can help characterize internal neuronal noise, which is presumably functionally relevant to information processing. From a simulation perspective, our event-driven framework is amenable to scrutinizing the stochastic behavior of simple recurrent motifs as well as the temporal dynamics of large-scale networks under spike-timing-dependent plasticity.
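
    To ground the model class, a minimal Euler-Maruyama sample path of a noisy leaky integrate-and-fire neuron is sketched below; this is a generic textbook discretization with assumed parameter names, not the thesis's path-wise construction or its exact event-driven implementation.

```python
import numpy as np

def simulate_lif(T, dt, tau, v_th, v_reset, mu, sigma, rng=None):
    """Euler-Maruyama sample path of a leaky integrate-and-fire neuron
    driven by white noise: dV = (-V/tau + mu) dt + sigma dW, with a
    spike and instantaneous reset whenever V crosses v_th."""
    if rng is None:
        rng = np.random.default_rng()
    n = int(T / dt)
    v, spikes = 0.0, []
    path = np.empty(n)
    for i in range(n):
        v += (-v / tau + mu) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:             # threshold crossing = spike
            spikes.append(i * dt)
            v = v_reset           # reset the membrane potential
        path[i] = v
    return path, spikes

# Example run: 1 s at 0.1 ms resolution
path, spikes = simulate_lif(T=1.0, dt=1e-4, tau=0.02,
                            v_th=1.0, v_reset=0.0, mu=60.0, sigma=0.5)
```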