290 research outputs found

    Proceedings of the 1994 Monterey Workshop, Increasing the Practical Impact of Formal Methods for Computer-Aided Software Development: Evolution Control for Large Software Systems; Techniques for Integrating Software Development Environments

    Office of Naval Research, Advanced Research Projects Agency, Air Force Office of Scientific Research, Army Research Office, Naval Postgraduate School, National Science Foundation

    Performance Analysis and Capacity Planning of Multi-stage Stochastic Order Fulfilment Systems with Levelled Order Release and Order Deadlines

    Customer-oriented order fulfilment processes in logistics and production systems face a continuously growing volume of increasingly small orders, high customer expectations regarding short, individually agreed delivery deadlines, and strongly stochastic fluctuations in customer demand. To ensure efficient order processing and compliance with customer-specific deadlines despite this volatile demand, the workload of such processes must be smoothed in a suitable way. To compensate for fluctuations in production systems, Hopp and Spearman (2004) distinguish the dimensions inventory, time, and capacity; these also provide a good starting point for developing smoothing concepts for stochastic, customer-oriented fulfilment processes. This work combines the potential of the dimensions time and capacity in the Strategy of Levelled Order Release in order to smooth, at the tactical level, the workload of multi-stage stochastic order fulfilment processes with customer-specific deadlines. The goals of this work are (1) the development of a smoothing concept, the so-called Strategy of Levelled Order Release, (2) the development of a discrete-time analytical model for performance analysis, and (3) the development of a capacity planning algorithm that guarantees given performance requirements for multi-stage stochastic order fulfilment processes with levelled order release and customer-specific deadlines. The Strategy of Levelled Order Release is characterised by a processing capacity that is constant over time and by processing orders in ascending order of their deadlines.
In this way, the time buffer of each order between its arrival and its deadline is systematically used to compensate for stochastic demand fluctuations. The remaining variability is absorbed, depending on the customers' performance requirements, by the amount of capacity provided. The analytical model for the performance analysis of such processes represents order processing as a discrete-time Markov chain and computes various stochastic and deterministic performance measures from its asymptotic state distribution. These measures, such as throughput, service level, utilisation, number of lost sales, and an order's time buffer and backlog duration, enable a comprehensive and exact performance analysis. The relationship between the capacity provided and the achievable performance cannot be expressed explicitly as a mathematical equation; it is given only implicitly by the analytical model. The capacity planning decision problem under performance constraints is therefore a black-box optimisation problem. Problem-specific configurations of the black-box optimisation algorithms Mesh Adaptive Direct Search and Surrogate Optimisation Integer allow a targeted determination of the minimal process-specific capacity that must be provided to meet the customers' performance requirements, which are specified in terms of one or more performance measures of the fulfilment process.
Numerical studies of the Strategy of Levelled Order Release show that, in systems with a utilisation above 0.6, it achieves considerably higher α- and β-service levels than first come, first served. Moreover, the capacity required to guarantee a given α-service level under levelled order release is at most as high as under first come, first served.
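The core of the levelling strategy, a capacity that is constant over time plus processing in ascending deadline order, can be illustrated with a small discrete-time simulation. This is only a sketch, not the dissertation's exact Markov-chain model: the arrival rate, capacity, and slack distribution below are invented, and the α/β-service distinction is collapsed into a single on-time fraction.

```python
import heapq
import math
import random

def simulate(discipline="edd", periods=2000, lam=3.0, capacity=4,
             slacks=(1, 2, 3), seed=1):
    """Toy discrete-time model of order release with per-order deadlines.

    Each period a Poisson(lam) number of orders arrives; each order gets
    a deadline = arrival period + a random slack.  A constant capacity of
    `capacity` orders is processed per period, either in arrival order
    ("fcfs") or by ascending deadline ("edd", the levelled-order-release
    rule).  Returns the fraction of processed orders finished on time.
    """
    rng = random.Random(seed)
    queue = []                       # heap of (sort key, tiebreak, deadline)
    done = late = seq = 0
    thresh = math.exp(-lam)
    for t in range(periods):
        # Poisson arrivals via Knuth's inversion method
        n, p = 0, rng.random()
        while p > thresh:
            n += 1
            p *= rng.random()
        for _ in range(n):
            deadline = t + rng.choice(slacks)
            key = deadline if discipline == "edd" else seq
            heapq.heappush(queue, (key, seq, deadline))
            seq += 1
        for _ in range(capacity):    # constant capacity every period
            if not queue:
                break
            _, _, deadline = heapq.heappop(queue)
            done += 1
            if t > deadline:
                late += 1
    return (done - late) / done if done else 1.0
```

Comparing `simulate("edd")` against `simulate("fcfs")` at a utilization of λ/capacity = 0.75, inside the above-0.6 regime highlighted by the numerical studies, illustrates the kind of service-level comparison the abstract describes.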

    Performance Analysis and Capacity Planning of Multi-stage Stochastic Order Fulfilment Systems with Levelled Order Release and Order Deadlines

    Order fulfilment systems must manage volatile customer demand while meeting short, customer-required order deadlines. To handle these challenges, we introduce the Strategy of Levelled Order Release (LOR) for workload balancing over time. The contributions of this work are (1) the workload balancing concept LOR, (2) a discrete-time Markov chain for performance analysis, and (3) an algorithm for capacity planning under performance constraints in order fulfilment systems with LOR.

    Tolerùncia a falhas em sistemas de comunicação de tempo-real flexíveis (Fault Tolerance in Flexible Real-Time Communication Systems)

    Distributed embedded systems (DES) have been widely used in the last few decades in several application fields, ranging from industrial process control to avionics and automotive systems, and it is expected that this trend will continue over the years to come. In some of these application domains the dependability requirements are of utmost importance, since failing to provide services in a timely and predictable manner may cause important economic losses or even put human lives at risk. The adoption of best practices in the design of distributed embedded systems does not, by itself, avoid the occurrence of faults arising from the nondeterministic behavior of the environment where each particular DES operates. Thus, fault-tolerance mechanisms need to be included in the DES to prevent possible faults from leading to system failure. To be effective, fault-tolerance mechanisms require a priori knowledge of the correct system behavior so as to distinguish correct operating modes from erroneous ones. Traditionally, when designing fault-tolerance mechanisms, this a priori knowledge means that all possible operational modes are known at design time and can neither adapt nor evolve during runtime. As a consequence, systems designed according to this principle are either fully static or allow only a small number of operational modes.
Flexibility, however, is a desired property: it allows a system to support evolving requirements, simplifies maintenance and repair, and improves efficiency by using only the resources that are effectively required at each instant. This efficiency might impact positively on the system cost, because with the same resources one can add more functionality, or one can offer the same functionality with fewer resources. However, flexibility and dependability are often regarded as conflicting concepts, because flexibility implies the ability to deal with evolving requirements that, in turn, can lead to unpredictable and possibly unsafe operating scenarios. Therefore, it is commonly accepted that only a fully static system can be made dependable, meaning that all operating conditions are completely defined at pre-runtime. In the broad sense, and assuming unbounded flexibility, this assessment is true; but if one restricts and controls the ways the system can adapt to evolving requirements, then it might be possible to enforce continuous dependability. This thesis claims that it is possible to provide a bounded degree of flexibility without compromising dependability, and proposes mechanisms to build safety-critical systems based on the Controller Area Network (CAN). In particular, the main focus of this work is the Flexible Time-Triggered CAN protocol (FTT-CAN), which was specifically developed to provide a high level of operational flexibility, not only combining the advantages of the time- and event-triggered paradigms but also providing flexibility for the time-triggered traffic. This fact makes the development of fault-tolerance mechanisms more complex in FTT-CAN than in other protocols, such as TTCAN or FlexRay, in which there is a priori static knowledge of the time-triggered message schedule, shared by all nodes.
Nevertheless, as demonstrated in this work, it is possible to build fault-tolerance mechanisms for FTT-CAN that preserve its high level of operational flexibility, particularly concerning the time-triggered traffic. With such mechanisms, it is argued that FTT-CAN is suitable for safety-critical applications, too. This claim is supported, in the scope of the FTT-CAN protocol, by presenting a fault-tolerant system architecture with replicated masters and fail-silent nodes. The specific problems and mechanisms related to master replication are also addressed, particularly a protocol to enforce consistency during updates of replicated data structures and another protocol to transfer these data structures to an unsynchronized master upon asynchronous startup or restart. Moreover, this thesis discusses the implementation of fail-silence in FTT-CAN nodes and proposes two solutions, both based on hardware components attached to the node's network interface. One solution relies on bus guardians that enforce fail-silence in the time domain; these bus guardians are adapted to support dynamic traffic scheduling and are fit for use in FTT-CAN slave nodes only. The other solution relies on a special network interface, with a duplicated microprocessor interface, that transparently supports internal replication of the node. In this case, fail-silence can be assured in both the time and value domains, since transmissions are carried out only if both internal replicas agree on the transmission instant and message contents. This solution is well adapted for use in the masters, but it can also be used, if desired, in slave nodes.
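The duplicated-microprocessor solution can be caricatured in a few lines: a frame reaches the bus only when both internal replicas agree in the value and time domains. This is an illustrative sketch, not the proposed hardware design; the frame layout, function names, and the 50 µs skew tolerance are invented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    ts_us: int        # intended transmission instant, in microseconds
    payload: bytes    # message contents

def gate_transmission(a: Frame, b: Frame, max_skew_us: int = 50) -> Optional[bytes]:
    """Release a frame to the bus only if both internal replicas agree
    in the value domain (identical payload) and in the time domain
    (transmission instants within max_skew_us).  On any disagreement
    the node stays silent, i.e. it fails silently rather than sending
    a possibly erroneous frame."""
    if a.payload != b.payload:
        return None               # value-domain disagreement -> no transmission
    if abs(a.ts_us - b.ts_us) > max_skew_us:
        return None               # time-domain disagreement -> no transmission
    return a.payload              # agreement: frame may go on the bus
```

The same gating rule, realized in hardware at the network interface, is what makes the replication transparent to the rest of the node.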

    Design and Real-World Evaluation of Dependable Wireless Cyber-Physical Systems

    The ongoing effort for an efficient, sustainable, and automated interaction between humans, machines, and our environment will make cyber-physical systems (CPS) an integral part of the industry and our daily lives. At their core, CPS integrate computing elements, communication networks, and physical processes that are monitored and controlled through sensors and actuators. New and innovative applications become possible by extending or replacing static and expensive cable-based communication infrastructures with wireless technology. The flexibility of wireless CPS is a key enabler for many envisioned scenarios, such as intelligent factories, smart farming, personalized healthcare systems, autonomous search and rescue, and smart cities. High dependability, efficiency, and adaptivity requirements complement the demand for wireless and low-cost solutions in such applications. For instance, industrial and medical systems should work reliably and predictably with performance guarantees, even if parts of the system fail. Because emerging CPS will feature mobile and battery-driven devices that can execute various tasks, the systems must also quickly adapt to frequently changing conditions. Moreover, as applications become ever more sophisticated, featuring compact embedded devices that are deployed densely and at scale, efficient designs are indispensable to achieve desired operational lifetimes and satisfy high bandwidth demands. Meeting these partly conflicting requirements, however, is challenging due to imperfections of wireless communication and resource constraints along several dimensions, for example, computing, memory, and power constraints of the devices. More precisely, frequent and correlated message losses paired with very limited bandwidth and varying delays for the message exchange significantly complicate the control design. 
In addition, since communication ranges are limited, messages must be relayed over multiple hops to cover larger distances, such as an entire factory. Although the resulting mesh networks are more robust against interference, efficient communication is a major challenge as wireless imperfections get amplified, and significant coordination effort is needed, especially if the networks are dynamic. CPS combine various research disciplines, which are often investigated in isolation, ignoring their complex interaction. However, to address this interaction and build trust in the proposed solutions, evaluating CPS using real physical systems and wireless networks paired with formal guarantees of a system’s end-to-end behavior is necessary. Existing works that take this step can only satisfy a few of the abovementioned requirements. Most notably, multi-hop communication has only been used to control slow physical processes while providing no guarantees. One of the reasons is that the current communication protocols are not suited for dynamic multi-hop networks. This thesis closes the gap between existing works and the diverse needs of emerging wireless CPS. The contributions address different research directions and are split into two parts. In the first part, we specifically address the shortcomings of existing communication protocols and make the following contributions to provide a solid networking foundation:
• We present Mixer, a communication primitive for the reliable many-to-all message exchange in dynamic wireless multi-hop networks. Mixer runs on resource-constrained low-power embedded devices and combines synchronous transmissions and network coding for a highly scalable and topology-agnostic message exchange. As a result, it supports mobile nodes and can serve any possible traffic patterns, for example, to efficiently realize distributed control, as required by emerging CPS applications.
• We present Butler, a lightweight and distributed synchronization mechanism with formally guaranteed correctness properties to improve the dependability of synchronous transmissions-based protocols. These protocols require precise time synchronization provided by a specific node. Upon failure of this node, the entire network cannot communicate. Butler removes this single point of failure by quickly synchronizing all nodes in the network without affecting the protocols’ performance.
In the second part, we focus on the challenges of integrating communication and various control concepts using classical time-triggered and modern event-based approaches. Based on the design, implementation, and evaluation of the proposed solutions using real systems and networks, we make the following contributions, which in many ways push the boundaries of previous approaches:
• We are the first to demonstrate and evaluate fast feedback control over low-power wireless multi-hop networks. Essential for this achievement is a novel co-design and integration of communication and control. Our wireless embedded platform tames the imperfections impairing control, for example, message loss and varying delays, and considers the resulting key properties in the control design. Furthermore, the careful orchestration of control and communication tasks enables real-time operation and makes our system amenable to an end-to-end analysis. Due to this, we can provably guarantee closed-loop stability for physical processes with linear time-invariant dynamics.
• We propose control-guided communication, a novel co-design for distributed self-triggered control over wireless multi-hop networks. Self-triggered control can save energy by transmitting data only when needed. However, there are no solutions that bring those savings to multi-hop networks and that can reallocate freed-up resources, for example, to other agents.
Our control system informs the communication system of its transmission demands ahead of time so that communication resources can be allocated accordingly. Thus, we can transfer the energy savings from the control to the communication side and achieve an end-to-end benefit.
• We present a novel co-design of distributed control and wireless communication that resolves overload situations in which the communication demand exceeds the available bandwidth. As systems scale up, featuring more agents and higher bandwidth demands, the available bandwidth will be quickly exceeded, resulting in overload. While event-triggered control and self-triggered control approaches reduce the communication demand on average, they cannot prevent that potentially all agents want to communicate simultaneously. We address this limitation by dynamically allocating the available bandwidth to the agents with the highest need. Thus, we can formally prove that our co-design guarantees closed-loop stability for physical systems with stochastic linear time-invariant dynamics.
Table of contents: Abstract; Acknowledgements; List of Abbreviations; List of Figures; List of Tables; 1 Introduction (1.1 Motivation, 1.2 Application Requirements, 1.3 Challenges, 1.4 State of the Art, 1.5 Contributions and Road Map); 2 Mixer: Efficient Many-to-All Broadcast in Dynamic Wireless Mesh Networks (2.1 Introduction, 2.2 Overview, 2.3 Design, 2.4 Implementation, 2.5 Evaluation, 2.6 Discussion, 2.7 Related Work); 3 Butler: Increasing the Availability of Low-Power Wireless Communication Protocols (3.1 Introduction, 3.2 Motivation and Background, 3.3 Design, 3.4 Analysis, 3.5 Implementation, 3.6 Evaluation, 3.7 Related Work); 4 Feedback Control Goes Wireless: Guaranteed Stability over Low-Power Multi-Hop Networks (4.1 Introduction, 4.2 Related Work, 4.3 Problem Setting and Approach, 4.4 Wireless Embedded System Design, 4.5 Control Design and Analysis, 4.6 Experimental Evaluation, 4.A Control Details); 5 Control-Guided Communication: Efficient Resource Arbitration and Allocation in Multi-Hop Wireless Control Systems (5.1 Introduction, 5.2 Problem Setting, 5.3 Co-Design Approach, 5.4 Wireless Communication System Design, 5.5 Self-Triggered Control Design, 5.6 Experimental Evaluation); 6 Scaling Beyond Bandwidth Limitations: Wireless Control With Stability Guarantees Under Overload (6.1 Introduction, 6.2 Problem and Related Work, 6.3 Overview of Co-Design Approach, 6.4 Predictive Triggering and Control System, 6.5 Adaptive Communication System, 6.6 Integration and Stability Analysis, 6.7 Testbed Experiments, 6.A Proof of Theorem 4, 6.B Usage of the Network Bandwidth for Control); 7 Conclusion and Outlook (7.1 Contributions, 7.2 Future Directions); Bibliography; List of Publications.
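Returning to the first contribution above: Mixer's combination of synchronous transmissions and network coding rests on random linear coding over GF(2), where packets carry XOR combinations of messages plus their coefficient vectors, and a node decodes once it holds a full-rank set. The sketch below shows only this encode/decode core in plain Python, with coefficient vectors as bitmasks; everything protocol-specific (in-network recombination, synchronous transmissions, scheduling) is omitted, and all names are invented.

```python
import random

def encode(messages, n_coded, seed=0):
    """Each coded packet is the XOR of a random nonempty subset of the
    original messages (integers), tagged with its GF(2) coefficient
    vector as a bitmask."""
    rng = random.Random(seed)
    coded = []
    for _ in range(n_coded):
        mask = rng.randrange(1, 1 << len(messages))  # nonzero coefficients
        payload = 0
        for i, m in enumerate(messages):
            if mask >> i & 1:
                payload ^= m
        coded.append((mask, payload))
    return coded

def decode(coded, k):
    """Gaussian elimination over GF(2).  Returns the k original messages,
    or None if the received combinations do not have full rank."""
    pivots = {}                      # pivot bit -> (mask, payload)
    for mask, payload in coded:
        for p in sorted(pivots, reverse=True):   # eliminate known pivots
            if mask >> p & 1:
                pm, pp = pivots[p]
                mask ^= pm
                payload ^= pp
        if mask:                     # new independent combination
            pivots[mask.bit_length() - 1] = (mask, payload)
    if len(pivots) < k:
        return None
    solved = {}
    for p in sorted(pivots):         # back-substitute, lowest pivot first
        mask, payload = pivots[p]
        for q in list(solved):
            if mask >> q & 1:
                mask ^= 1 << q
                payload ^= solved[q]
        solved[p] = payload
    return [solved[i] for i in range(k)]
```

The topology-agnostic property follows from the algebra: any set of combinations that spans the space suffices, regardless of which neighbors the packets came from.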

    Optimal and heuristic repairable stocking and expediting in a fluctuating demand environment

    We consider a single stock point for a repairable item. The repairable item is a critical component that is used in a fleet of technical systems such as trains, planes or manufacturing equipment. A number of spare repairables is purchased at the same time as the technical systems they support. Demand for those items is a Markov modulated Poisson process of which the underlying Markov process can be observed. Backorders occur when demand for a ready-for-use item cannot be fulfilled immediately. Since backorders render a system unavailable for use, there is a penalty per backorder per unit time. Upon failure, defective items are sent to a repair shop that offers the possibility of expediting repair. Expedited repairs have shorter lead times than regular repairs but are also more costly. For this system, two important decisions have to be taken: How many spare repairables to purchase initially and when to expedite repairs. We formulate the decision to use regular or expedited repair as a Markov decision process and characterize the optimal repair expediting policy for the infinite horizon average and discounted cost criteria. We find that the optimal policy may take two forms. The first form is to never expedite repair. The second form is a type of threshold policy. We provide necessary and sufficient closed-form conditions that determine what form is optimal. We also propose a heuristic repair expediting policy which we call the world driven threshold (WDT) policy. This policy is optimal in special cases and shares essential characteristics with the optimal policy otherwise. Because of its simpler structure, the WDT policy is fit for use in practice. We show how to compute optimal repairable stocking decisions in combination with either the optimal or a good WDT expediting policy. In a numerical study, we show that the WDT heuristic performs very close to optimal with an optimality gap below 0.76% for all instances in our test bed. 
We also compare it to more naive heuristics that do not explicitly use information about demand fluctuations, and find that the WDT heuristic outperforms these naive heuristics by 11.85% on average and by as much as 63.67% in some cases. This shows there is great value in leveraging knowledge about demand fluctuations when making repair expediting decisions.
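The flavour of the expediting problem can be conveyed by a toy discounted-cost Markov decision process with an observable two-state world modulating demand. The dynamics below are a crude Bernoulli discretization with invented parameters, not the paper's model or its optimality proof; value iteration merely illustrates how an expedite/regular decision per state emerges.

```python
def solve_expedite_mdp(S=4, p=(0.2, 0.7), P=((0.9, 0.1), (0.2, 0.8)),
                       mu=0.3, mu_exp=0.8, c_exp=2.0, c_back=20.0,
                       gamma=0.95, iters=500):
    """Value iteration for a toy repair-expediting MDP.

    State (w, r): observable world state w modulating demand (Bernoulli
    with prob. p[w] per period, world transition matrix P) and r items
    in repair out of S spares.  Per period, at most one item completes
    repair, with prob. mu (regular) or mu_exp (expedited, extra cost
    c_exp); a backorder penalty c_back accrues while all S spares are
    in repair.  Returns the value function and the expedite policy.
    """
    worlds = range(len(p))
    states = [(w, r) for w in worlds for r in range(S + 1)]
    V = {s: 0.0 for s in states}
    policy = {}
    for _ in range(iters):
        newV = {}
        for (w, r) in states:
            def q(expedite):
                rep_p = (mu_exp if expedite else mu) if r > 0 else 0.0
                cost = (c_exp if expedite and r > 0 else 0.0)
                cost += c_back if r == S else 0.0
                ev = 0.0
                for w2 in worlds:
                    for dem, pd in ((1, p[w]), (0, 1.0 - p[w])):
                        for rep, pr in ((1, rep_p), (0, 1.0 - rep_p)):
                            r2 = min(S, max(0, r + dem - rep))
                            ev += P[w][w2] * pd * pr * V[(w2, r2)]
                return cost + gamma * ev
            q_reg, q_exp = q(False), q(True)
            newV[(w, r)] = min(q_reg, q_exp)
            policy[(w, r)] = q_exp < q_reg
        V = newV
    return V, policy
```

In such a toy instance, the computed policy can then be inspected for the threshold structure described above: expediting only once enough items are in repair, with a threshold that depends on the world state.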

    Proceedings of Monterey Workshop 2001 Engineering Automation for Software Intensive System Integration

    The 2001 Monterey Workshop on Engineering Automation for Software Intensive System Integration was sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Office and the Defense Advanced Research Projects Agency. It is our pleasure to thank the workshop advisors and sponsors for their vision of a principled engineering solution for software and for their tireless, multi-year effort in supporting a series of workshops to bring everyone together. This workshop is the 8th in a series of international workshops, and was held at the Monterey Beach Hotel, Monterey, California, during June 18-22, 2001. The general theme of the workshop series has been to present and discuss research that aims at increasing the practical impact of formal methods for software and systems engineering. The particular focus of this workshop was "Engineering Automation for Software Intensive System Integration". Previous workshops have focused on issues including "Real-time & Concurrent Systems", "Software Merging and Slicing", "Software Evolution", "Software Architecture", "Requirements Targeting Software" and "Modeling Software System Structures in a fastly moving scenario". Sponsors: Office of Naval Research; Air Force Office of Scientific Research; Army Research Office; Defense Advanced Research Projects Agency. Approved for public release; distribution unlimited.

    SALSA: A Formal Hierarchical Optimization Framework for Smart Grid

    The smart grid, by integrating advanced control and optimization technologies, offers the traditional grid an indisputable opportunity to deliver and utilize electricity more efficiently. Building smart grid applications is a challenging task that requires a formal modeling, integration, and validation framework spanning the various smart grid domains. The design flow of such applications must adapt to the grid requirements and ensure the security of supply and demand. By proposing a formal framework for the customers and operations domains of the smart grid, this dissertation aims to deliver a smooth way of: i) formalizing their interactions and functionalities, ii) upgrading their components independently, and iii) evaluating their performance quantitatively and qualitatively. The framework follows an event-driven demand response program that takes no historical data or forecasting services into account. A scalable neighborhood of prosumers (inside the customers domain), equipped with smart appliances, photovoltaics, and battery energy storage systems, is considered. The prosumers individually schedule their appliances and sell/purchase their surplus/demand to/from the grid with the purpose of maximizing their comfort and profit at each instant of time. To orchestrate such trade relations, a bilateral multi-issue negotiation approach between a virtual power plant (on behalf of the prosumers) and an aggregator (inside the operations domain) in a non-cooperative environment is employed. The aggregator, with the objectives of maximizing its profit and minimizing grid purchases, intends to match the prosumers' supply with demand. As a result, this framework particularly addresses the challenges of: i) scalable and hierarchical load demand scheduling, and ii) matching the large penetration of renewable energy sources being produced and consumed.
The framework comprises two generic multi-objective mixed-integer nonlinear programming models, one for the prosumers and one for the aggregator. These models support different scheduling mechanisms and electricity consumption threshold policies. The effectiveness of the framework is evaluated through various case studies based on economic and environmental assessment metrics. An interactive web service for the framework has also been developed and demonstrated.
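The bilateral negotiation between the virtual power plant and the aggregator can be caricatured as an alternating-offers exchange with fixed concessions, where a deal is struck at the first crossing of bid and ask. The sketch below is single-issue (price only) with invented reserve prices and step size; the actual framework negotiates multiple issues on top of MINLP scheduling models.

```python
def bilateral_negotiation(seller_reserve=0.10, buyer_reserve=0.30,
                          step=0.02, max_rounds=50):
    """Alternating-offers price negotiation with fixed concessions.

    The seller (think: virtual power plant) opens above the buyer's
    reserve, the buyer (think: aggregator) opens below the seller's
    reserve, and each side concedes `step` per round without crossing
    its own reserve.  Returns the midpoint price at the first crossing,
    or None if no agreement is reached within max_rounds."""
    seller_ask = buyer_reserve + 0.10   # opening ask, deliberately high
    buyer_bid = seller_reserve - 0.05   # opening bid, deliberately low
    for _ in range(max_rounds):
        if buyer_bid >= seller_ask:     # offers crossed: strike a deal
            return round((buyer_bid + seller_ask) / 2, 4)
        seller_ask = max(seller_reserve, seller_ask - step)
        buyer_bid = min(buyer_reserve, buyer_bid + step)
    return None
```

In a non-cooperative setting, the concession schedule would itself be strategic; the fixed step here only illustrates the mechanics of converging offers.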