
    Towards Prescriptive Analytics in Cyber-Physical Systems

    More and more of our physical world today is being monitored and controlled by so-called cyber-physical systems (CPSs). These are compositions of networked autonomous cyber and physical agents such as sensors, actuators, computational elements, and humans in the loop. Today, CPSs are still relatively small-scale and very limited compared to the CPSs we will witness in the future. Future CPSs are expected to be far more complex, large-scale, widespread, and mission-critical, and to be found in a variety of domains such as transportation, medicine, manufacturing, and energy, where they will bring many advantages such as increased efficiency, sustainability, reliability, and security. To unleash their full potential, CPSs need to be equipped with, among other features, support for automated planning and control, where computing agents collaboratively and continuously plan and control their actions in an intelligent and well-coordinated manner to secure and optimize a physical process, e.g., electricity flow in the power grid. In today’s CPSs, control is typically automated, but planning is performed solely by humans. Unfortunately, it is intractable and infeasible for humans to plan every action in a future CPS due to the complexity, scale, and volatility of the physical process. Because of these properties, control and planning have to be continuous and automated in future CPSs. Humans may only analyse and tweak the system’s operation using a set of prescriptive analytics tools that allow them (1) to make predictions, (2) to get suggestions for the most promising set of actions (decisions) to take, and (3) to analyse the implications as if such actions were taken. This thesis considers planning and control in the context of a large-scale multi-agent CPS. Based on a smart-grid use case, it presents a so-called PrescriptiveCPS, which is (the conceptual model of) a multi-agent, multi-role, and multi-level CPS that automatically and continuously takes and realizes decisions in near real-time and provides (human) users with prescriptive analytics tools to analyse and manage the performance of the underlying physical system (or process). Acknowledging the complexity of CPSs, this thesis provides contributions at three levels of scale: (1) the level of a (full) PrescriptiveCPS, (2) the level of a single PrescriptiveCPS agent, and (3) the level of a component of a CPS agent software system. At the CPS level, the contributions include the definition of PrescriptiveCPS as a system of interacting physical and cyber (sub-)systems. Here, the cyber system consists of hierarchically organized, inter-connected agents that collectively manage instances of so-called flexibility, decision, and prescription models, which are short-lived, focus on the future, and represent, respectively, a capability, a (user’s) intention, and actions to change the behaviour (state) of a physical system. At the agent level, the contributions include a three-layer architecture of an agent software system, integrating a number of components specially designed or enhanced to support the functionality of PrescriptiveCPS.
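
To make the three model kinds just mentioned concrete, the following is a minimal, hypothetical sketch (in Python) of what flexibility, decision, and prescription model instances might look like in the smart-grid case; the field names are illustrative assumptions, not the thesis's actual schema.

from dataclasses import dataclass
from typing import List

# Illustrative data structures only; field names are assumptions, not the thesis's schema.

@dataclass
class FlexibilityModel:
    """A capability: how far an agent's load can deviate from its baseline per time slot."""
    agent_id: str
    time_slots: List[int]      # e.g. hourly slots over the planning horizon
    baseline_kw: List[float]   # planned consumption per slot
    min_kw: List[float]        # lowest consumption the agent can shift down to
    max_kw: List[float]        # highest consumption the agent can shift up to

@dataclass
class DecisionModel:
    """A (user's) intention: the objective and horizon chosen for planning."""
    objective: str             # e.g. "minimize_cost" or "flatten_peak"
    horizon_slots: int

@dataclass
class PrescriptionModel:
    """Actions: the per-slot set-points an agent is asked to realize."""
    agent_id: str
    time_slots: List[int]
    prescribed_kw: List[float]
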
Most of the thesis contributions are provided at the component level. They include the description, design, and experimental evaluation of (1) a unified multi-dimensional schema for storing flexibility and prescription models (and related data), (2) techniques to incrementally aggregate flexibility model instances and disaggregate prescription model instances, (3) a database management system (DBMS) with built-in optimization problem solving capability that allows formulating optimization problems using SQL-like queries and solving them “inside a database”, (4) a real-time data management architecture for processing instances of flexibility and prescription models under (soft or hard) timing constraints, and (5) a graphical user interface (GUI) to visually analyse flexibility and prescription model instances. Additionally, the thesis discusses and exemplifies (but provides no evaluations of) (1) domain-specific and generic in-DBMS forecasting techniques that allow forecasting instances of flexibility models based on historical data, and (2) powerful ways to analyse the past, present, and future based on so-called hypothetical what-if scenarios and the flexibility and prescription model instances stored in a database. Most of the contributions at this level are based on the smart-grid use case. In summary, the thesis provides (1) the model of a CPS with planning capabilities, (2) the design and experimental evaluation of prescriptive analytics techniques that allow effectively forecasting, aggregating, disaggregating, visualizing, and analysing complex models of the physical world, and (3) a use case from the energy domain, showing how the introduced concepts are applicable in the real world. We believe that these contributions constitute a significant step towards developing planning-capable CPSs in the future.
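
Continuing the sketch above, the following is a hedged illustration of what incrementally aggregating flexibility and disaggregating prescriptions can mean in the simplest additive case; the slot-wise summation and proportional split below are assumptions chosen for clarity, not the techniques evaluated in the thesis.

from typing import Dict, List

def aggregate_flexibility(instances: List[FlexibilityModel]) -> FlexibilityModel:
    """Slot-wise sum of per-agent flexibility (one plausible additive aggregation rule)."""
    n = len(instances[0].time_slots)
    return FlexibilityModel(
        agent_id="aggregate",
        time_slots=instances[0].time_slots,
        baseline_kw=[sum(m.baseline_kw[i] for m in instances) for i in range(n)],
        min_kw=[sum(m.min_kw[i] for m in instances) for i in range(n)],
        max_kw=[sum(m.max_kw[i] for m in instances) for i in range(n)],
    )

def disaggregate_prescription(total: PrescriptionModel,
                              instances: List[FlexibilityModel]) -> Dict[str, PrescriptionModel]:
    """Split an aggregate prescription among agents in proportion to their flexible range."""
    result: Dict[str, PrescriptionModel] = {}
    for m in instances:
        share = []
        for i in range(len(total.time_slots)):
            total_range = sum(x.max_kw[i] - x.min_kw[i] for x in instances) or 1.0
            total_min = sum(x.min_kw[i] for x in instances)
            weight = (m.max_kw[i] - m.min_kw[i]) / total_range
            share.append(m.min_kw[i] + weight * (total.prescribed_kw[i] - total_min))
        result[m.agent_id] = PrescriptionModel(m.agent_id, total.time_slots, share)
    return result
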

    Branching strategies for mixed-integer programs containing logical constraints and decomposable structure

    Decision-making optimisation problems can include discrete selections, e.g. selecting a route, arranging non-overlapping items, or designing a network of items. Branch-and-bound (B&B), a widely applied divide-and-conquer framework, often solves such problems by considering a continuous approximation, e.g. replacing discrete variable domains with a continuous superset. Such approximations weaken the logical relations, e.g. for discrete variables corresponding to Boolean variables. Branching in B&B reintroduces logical relations by dividing the search space. This thesis studies the design of B&B branching strategies, i.e. how to divide the search space, for optimisation problems that contain both a logical and a continuous structure. We begin our study with a large-scale, industrially relevant optimisation problem whose objective consists of machine-learnt gradient-boosted trees (GBTs) and convex penalty functions. GBT functions contain if-then queries, which introduce a logical structure to this problem. We propose decomposition-based rigorous bounding strategies and an iterative heuristic that can be embedded into a B&B algorithm. We approach branching with two strategies: a pseudocost initialisation and strong branching, which target the structure of the GBT and convex penalty aspects of the optimisation objective, respectively. Computational tests show that our B&B approach outperforms state-of-the-art solvers in deriving rigorous bounds on optimality. Our second project investigates how unsatisfiable cores derived by satisfiability modulo theories (SMT) solvers may be utilised in a B&B context. Unsatisfiable cores are subsets of constraints that explain an infeasible result. We study two-dimensional bin packing (2BP) and develop a B&B algorithm that branches on SMT unsatisfiable cores. We use the unsatisfiable cores to derive cuts that break 2BP symmetries. Computational results show that our B&B algorithm solves 20% more of the tested instances than commercial solvers. Finally, we study convex generalized disjunctive programming (GDP), a framework that supports logical variables and operators. Convex GDP includes disjunctions of mathematical constraints, which motivate branching by partitioning the disjunctions. We investigate separation by branching, i.e. eliminating solutions that prevent rigorous bound improvement, and propose a greedy algorithm for building the branches. We propose three scoring methods for selecting the next branching disjunction. We also analyse how to leverage infeasibility to expedite the B&B search. Computational results show that our scoring methods can reduce the number of explored B&B nodes by an order of magnitude compared with scoring methods proposed in the literature. Our infeasibility analysis further reduces the number of explored nodes.
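
As a rough illustration of the kind of branching-candidate scoring discussed above, the sketch below shows the classic pseudocost "product rule" used in many B&B codes; it is a generic textbook example under stated assumptions, not the thesis's GBT-, SMT-, or GDP-specific strategy.

from typing import Dict

def pseudocost_score(down_gain: float, up_gain: float, eps: float = 1e-6) -> float:
    """Product rule: favour candidates whose two branches both improve the bound.
    Illustrative only; not the scoring methods proposed in the thesis."""
    return max(down_gain, eps) * max(up_gain, eps)

def select_branching_candidate(gains: Dict[str, tuple]) -> str:
    """Pick the candidate (variable or disjunction) with the best score.
    `gains` maps a candidate name to (estimated_down_gain, estimated_up_gain),
    e.g. obtained from pseudocosts or strong-branching LP probes."""
    return max(gains, key=lambda name: pseudocost_score(*gains[name]))

# Example: three fractional candidates with estimated bound improvements.
candidates = {"x3": (0.8, 0.1), "x7": (0.4, 0.5), "x9": (0.05, 2.0)}
print(select_branching_candidate(candidates))   # -> "x7" under the product rule
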

    New benchmarking techniques in resource allocation problems: theory and applications in cloud systems

    Motivated by e-commerce applications such as allocating virtual machines to servers and online ad placement, we study new models that aim to capture unstudied tensions faced by decision-makers. In online/sequential models, future information is often unavailable to decision-makers, e.g., the exact demand for a product next week. Sometimes these unknowns have regularity, and decision-makers can fit probabilistic models; other times, decision-makers must be prepared for any possible outcome. In practice, several solutions are based on classical models that do not fully consider these unknowns, partly because of present technical limitations. Exploring new models with adequate sources of uncertainty could be beneficial for both the theory and the practice of decision-making. For example, cloud companies such as Amazon Web Services face highly unpredictable resource demands. New management planning that considers these tensions has improved capacity and cut costs for cloud providers. As a result, cloud companies can now offer new services at lower prices, benefiting thousands of users. In this thesis, we study three different models, each motivated by an application in cloud computing or online advertising. From a technical standpoint, we apply either worst-case analysis with limited information from the system or adaptive analysis with stochastic results learned after making an irrevocable decision. A central aspect of this work is the use of dynamic benchmarks as opposed to static or offline ones. Static and offline viewpoints are too conservative and have limited interpretation in some dynamic settings; a dynamic criterion, such as the value of an optimal sequential policy, allows comparisons with the best one could do in dynamic scenarios. Another aspect of this work is multi-objective criteria in dynamic settings, where two or more competing goals must be satisfied under an uncertain future. We tackle the challenges introduced by these new perspectives with fresh theoretical analyses, drawing inspiration from linear and nonlinear optimization and stochastic processes.
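
As a toy illustration of comparing an online policy against a benchmark (offline or dynamic), the sketch below estimates an empirical competitive ratio over sampled demand sequences; the greedy policy, the hindsight benchmark, and the demand sampler are illustrative assumptions, not the models studied in the thesis.

import random
from typing import Callable, List, Sequence

def empirical_competitive_ratio(online_policy: Callable[[Sequence[float]], float],
                                benchmark: Callable[[Sequence[float]], float],
                                demand_sampler: Callable[[], List[float]],
                                trials: int = 1000) -> float:
    """Worst observed benchmark/online value ratio over sampled demand sequences
    (maximisation objective). The benchmark could be an offline optimum or, as
    advocated above, the value of an optimal sequential (dynamic) policy."""
    worst = 1.0
    for _ in range(trials):
        demands = demand_sampler()
        worst = max(worst, benchmark(demands) / max(online_policy(demands), 1e-9))
    return worst

CAPACITY = 10.0

def greedy_online(demands: Sequence[float]) -> float:
    """Accept requests in arrival order while capacity remains."""
    used = value = 0.0
    for d in demands:
        if used + d <= CAPACITY:
            used += d
            value += d
    return value

def hindsight_benchmark(demands: Sequence[float]) -> float:
    """A simple hindsight heuristic (smallest-first packing) standing in for the true optimum."""
    used = value = 0.0
    for d in sorted(demands):
        if used + d <= CAPACITY:
            used += d
            value += d
    return value

random.seed(0)
ratio = empirical_competitive_ratio(greedy_online, hindsight_benchmark,
                                    lambda: [random.uniform(0.5, 6.0) for _ in range(8)])
print(f"empirical competitive ratio ~= {ratio:.2f}")
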

    Optimizing Resource Allocation in Surgery Delivery Systems

    This thesis focuses on developing mathematical models to optimize processes related to surgery delivery systems. Surgical services account for a large portion of hospital revenue and expenses; moreover, increased demand is expected in the future, due in part to the aging population in many countries. Achieving high efficiency in this system is challenging due to uncertain service durations, the interaction of different stages of the system (e.g., surgery, recovery), and competing criteria (e.g., patient wait time, employee satisfaction, and the availability and utilization of healthcare professionals, operating rooms (ORs), and recovery beds). Moreover, solutions must overcome an enormous barrier of computational complexity. Considering the complexity of the problem and the numerous resources involved in delivering surgical care, this thesis focuses on three aspects of surgery delivery systems: short-term scheduling (operational-level decisions, e.g., daily sequencing of surgeries), service group team design and staff allocation (strategic-level team design decisions on the order of years, and tactical-level shift allocation decisions, e.g., monthly), and OR capacity reservation (strategic-level decisions, e.g., what OR capacity reservation policy to use in the following years). To optimize scheduling policies at the operational level, we developed a two-phase approximation method in which the first phase determines the number of ORs to open for the day and assigns surgeons to ORs. The second phase performs surgical case sequencing while considering recovery resource availability. For both phases of the approximation, we provide provable worst-case performance guarantees; furthermore, we use numerical experiments to show the methods also have excellent average-case performance. We further developed a mixed integer programming (MIP) model for comparison to the approximation method. We evaluated the performance of the approximation compared to the MIP model in deterministic and stochastic settings, using a discrete event simulation (DES) for the latter. At the strategic and tactical levels, we focus on staffing decisions for surgical nurses. These decisions present a challenge due to nurse availability, skill requirements, hospital regulations, and stochastic surgical demand. We present a MIP that groups services into teams, achieving fairness in training time and overnight surgical volume and balancing team sizes. Once teams are created, we use a MIP-based heuristic to assign shifts to services and teams to ensure coverage of surgical demand. We analyze the performance of the heuristic and present results that provide insight into optimal surgical nurse staff planning decisions. We show that the newly designed teams are more balanced with respect to the performance metrics, and that coverage of surgical demand can be improved. Finally, at the strategic level, we use DES to evaluate OR capacity reservation heuristics. OR capacity reservation is a challenging problem due to uncertain demand for surgery and uncertain surgery durations. Using our DES model, we evaluate two categories of approximation methods to gain insights into the problem: first-come, first-served heuristics, which are used as benchmarks, and appointment slot reservation heuristics, similar to those used in outpatient clinics. We compare the heuristics based on the mean percentage of patients who exceed a predefined surgery access target, mean patient wait time, and mean OR utilization.
This research was conducted in collaboration with hospitals, and the problems considered are common to many hospitals. Based on data from these hospitals, we provide evidence that significant improvements could be achieved at the three major decision-making levels. (PhD, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies; https://deepblue.lib.umich.edu/bitstream/2027.42/137041/1/mbam_1.pd)
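
The following is a heavily simplified, hypothetical sketch of a two-phase heuristic in the spirit of the approach described above (open rooms and assign surgeons, then sequence cases with recovery load in mind); the parameters and rules are illustrative assumptions and do not reproduce the thesis's method or its worst-case guarantees.

import math
from typing import Dict, List, Tuple

def phase1_open_rooms_and_assign(surgeon_hours: Dict[str, float],
                                 or_day_hours: float = 8.0) -> Tuple[int, Dict[str, int]]:
    """Open enough ORs to cover total expected surgical hours, then assign each
    surgeon's block to the currently least-loaded room (greedy, LPT-style)."""
    n_rooms = max(1, math.ceil(sum(surgeon_hours.values()) / or_day_hours))
    load = [0.0] * n_rooms
    assignment: Dict[str, int] = {}
    for surgeon, hours in sorted(surgeon_hours.items(), key=lambda kv: -kv[1]):
        room = min(range(n_rooms), key=lambda r: load[r])
        assignment[surgeon] = room
        load[room] += hours
    return n_rooms, assignment

def phase2_sequence_cases(cases: List[Tuple[str, float, float]]) -> List[str]:
    """Sequence cases (id, surgery_hours, recovery_hours) so that long-recovery
    cases start earlier, reducing end-of-day pressure on recovery beds."""
    return [cid for cid, _, rec in sorted(cases, key=lambda c: -c[2])]

rooms, plan = phase1_open_rooms_and_assign({"Dr.A": 6.0, "Dr.B": 5.0, "Dr.C": 3.5})
order = phase2_sequence_cases([("c1", 2.0, 3.0), ("c2", 1.5, 1.0), ("c3", 2.5, 2.0)])
print(rooms, plan, order)
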

    Combinatorial optimisation for sustainable cloud computing

    Enabled by both software and hardware advances, cloud computing has emerged as an efficient way to leverage economies of scale for building large computational infrastructures over a global network. While the cost of computation has dropped significantly for end users, the infrastructure supporting cloud computing systems has considerable economic and ecological costs. A key challenge for sustainable cloud computing systems in the near future is to maintain control over these costs. Amid the complexity of cloud computing systems, a cost analysis reveals a complex relationship between the infrastructure supporting actual computation on a physical level and how these physical assets are utilised. The central question tackled in this dissertation is how to best utilise these assets through efficient workload management policies. In recent years, workload consolidation has emerged as an effective approach to increase the efficiency of cloud systems. We propose to address aspects of this challenge by leveraging techniques from mathematical modelling and combinatorial optimisation. We introduce a novel combinatorial optimisation problem, extending the well-known bin packing problem, that is suitable for modelling core consolidation problems arising in workload management in data centres. We develop competing models and optimisation techniques to solve this offline packing problem with state-of-the-art solvers. We then cast this newly defined combinatorial optimisation problem in a semi-online setting, for which we propose an efficient assignment policy that produces solutions in competitive computational time. Stochastic aspects, which cloud providers often face, are introduced in a richer model, and we show how predictive methods can help decision makers deal with uncertainty in such dynamic and heterogeneous systems. We also explore a similar but relaxed problem falling within the scope of proactive consolidation, in which one decides which workload should be migrated, when, and where, in order to minimise energy cost. Finally, we discuss ongoing efforts to model and characterise the combinatorial hardness of bin packing instances, which in turn will be useful for studying the various packing problems found in cloud computing environments.
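
For readers unfamiliar with the baseline problem being extended, the sketch below shows classic first-fit-decreasing bin packing applied to VM-to-server consolidation; it illustrates only this starting point, not the dissertation's richer model, semi-online policy, or stochastic extensions.

from typing import List

def first_fit_decreasing(vm_demands: List[float], server_capacity: float) -> List[List[float]]:
    """Classic first-fit-decreasing bin packing: place each VM (sorted by decreasing
    demand) into the first server with enough residual capacity, opening a new
    server otherwise."""
    servers: List[List[float]] = []
    residual: List[float] = []
    for demand in sorted(vm_demands, reverse=True):
        for i, free in enumerate(residual):
            if demand <= free:
                servers[i].append(demand)
                residual[i] -= demand
                break
        else:
            servers.append([demand])
            residual.append(server_capacity - demand)
    return servers

# Example: normalised CPU demands packed onto unit-capacity servers.
print(first_fit_decreasing([0.6, 0.5, 0.4, 0.3, 0.2], 1.0))
# -> [[0.6, 0.4], [0.5, 0.3, 0.2]]
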

    Distributed and Multiprocessor Scheduling

    This chapter discusses CPU scheduling in parallel and distributed systems. CPU scheduling is part of a broader class of resource allocation problems, and is probably the most carefully studied such problem. The main motivation for multiprocessor scheduling is the desire for increased speed in the execution of a workload. Parts of the workload, called tasks, can be spread across several processors and thus be executed more quickly than on a single processor. In this chapter, we examine techniques for providing this facility. The scheduling problem for multiprocessor systems can be stated generally as: "How can we execute a set of tasks T on a set of processors P subject to some set of optimizing criteria C?" The most common goal of scheduling is to minimize the expected runtime of a task set. Examples of other scheduling criteria include minimizing cost, minimizing communication delay, giving priority to certain users' processes, or meeting the needs of specialized hardware devices. The scheduling policy for a multiprocessor system usually embodies a mixture of several of these criteria. Section 2 outlines general issues in multiprocessor scheduling and gives background material, including issues specific to either parallel or distributed scheduling. Section 3 describes best practices from prior work in the area, including a broad survey of existing scheduling algorithms and mechanisms. Section 4 outlines research issues and gives a summary. Section 5 lists the terms defined in this chapter, while Sections 6 and 7 give references to important research publications in the area.
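
To make the generic scheduling question above concrete, here is a textbook longest-processing-time (LPT) list-scheduling sketch for minimising makespan on identical processors; it is a standard heuristic given purely for illustration, not a policy proposed in this chapter.

import heapq
from typing import Dict, List

def lpt_schedule(task_runtimes: List[float], num_processors: int) -> Dict[int, List[float]]:
    """Longest-processing-time-first list scheduling: repeatedly give the next
    longest task to the currently least-loaded processor."""
    # Min-heap of (current_load, processor_id)
    heap = [(0.0, p) for p in range(num_processors)]
    heapq.heapify(heap)
    schedule: Dict[int, List[float]] = {p: [] for p in range(num_processors)}
    for runtime in sorted(task_runtimes, reverse=True):
        load, proc = heapq.heappop(heap)
        schedule[proc].append(runtime)
        heapq.heappush(heap, (load + runtime, proc))
    return schedule

# Example: six tasks scheduled on three processors.
print(lpt_schedule([7, 5, 4, 3, 2, 2], 3))
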