
    Scenario Approach for Parametric Markov Models

    In this paper, we propose an approximating framework for analyzing parametric Markov models. Instead of computing complex rational functions encoding the reachability probability and the reward values of the parametric model, we exploit the scenario approach to synthesize a relatively simple polynomial approximation. The approximation is probably approximately correct (PAC), meaning that, with high confidence, the approximating function is close to the actual function within an allowable error. With the PAC approximations, one can check properties of the parametric Markov models. We show that the scenario approach can also be used to check PRCTL properties directly, without synthesizing the polynomial first. We have implemented our algorithm in a prototype tool and conducted thorough experiments. The experimental results demonstrate that our tool is able to compute polynomials for more benchmarks than state-of-the-art tools such as PRISM and Storm, confirming the efficacy of our PAC-based synthesis.
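
    As a concrete illustration of the approach sketched in the abstract, the snippet below is a minimal, hypothetical example rather than the paper's tool: it assumes a toy parametric Markov chain whose reachability probability happens to be the rational function 2p/(1+p), draws sampled parameter values (the scenarios), and fits a fixed-degree polynomial by minimizing the worst-case error over the samples as a linear program (using SciPy). The degree, sample count, and parameter interval are arbitrary choices made for the sketch.

```python
# Minimal sketch of scenario-based polynomial approximation for a toy
# parametric Markov chain (states and transitions chosen purely for illustration).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def reach_prob(p):
    # Toy pMC: from the initial state, reach the target with probability p,
    # otherwise retry via an intermediate state that loops back with
    # probability 0.5; solving the linear system gives 2p / (1 + p).
    return 2.0 * p / (1.0 + p)

degree = 3          # degree of the approximating polynomial
n_samples = 500     # number of sampled parameter values (scenarios)
n_coef = degree + 1

# Draw scenarios from the parameter region (here the interval [0.1, 0.9]).
ps = rng.uniform(0.1, 0.9, size=n_samples)
fs = reach_prob(ps)

# Scenario program: minimize lam subject to |poly(p_i) - f(p_i)| <= lam for
# every sampled scenario.  Decision variables: [c_0 .. c_degree, lam].
V = np.vander(ps, n_coef, increasing=True)          # V @ c = polynomial values
A_ub = np.block([[ V, -np.ones((n_samples, 1))],
                 [-V, -np.ones((n_samples, 1))]])
b_ub = np.concatenate([fs, -fs])
obj = np.zeros(n_coef + 1)
obj[-1] = 1.0                                        # minimize lam
res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (n_coef + 1))
coeffs, lam = res.x[:n_coef], res.x[-1]
print("polynomial coefficients:", coeffs)
print("worst-case error over the sampled scenarios:", lam)
```

    In the scenario-approach literature, the number of samples is typically tied to the target error and confidence through bounds of the form N >= (2/epsilon)(ln(1/beta) + d), where d is the number of decision variables; the paper's own PAC statement should be consulted for the exact guarantee it provides.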

    Modeling and Optimizing Space Networks for Improved Communication Capacity.

    There are a growing number of individual and constellation small-satellite missions seeking to download large quantities of science, observation, and surveillance data. The existing ground station infrastructure to support these missions constrains the potential data throughput: the stations are low-cost, are not always available because they are independently owned and operated, and often collect data inefficiently. The constraints of the small satellite form factor (e.g. mass, size, power), coupled with the ground network limitations, lead to significant operational and communication scheduling challenges. Faced with these challenges, our goal is to maximize capacity, defined as the amount of data that is successfully downloaded from space to ground communication nodes. In this thesis, we develop models, tools, and optimization algorithms for spacecraft and ground network operations. First, we develop an analytical modeling framework and a high-fidelity simulation environment that capture the interaction of on-board satellite energy and data dynamics, ground stations, and the external space environment. Second, we perform capacity-based assessments to identify excess and deficient resources for comparison to mission-specific requirements. Third, we formulate and solve communication scheduling problems that maximize communication capacity for a satellite downloading to a network of globally and functionally heterogeneous ground stations. Numerical examples demonstrate the applicability of the models and tools to assess and optimize real-world existing and upcoming small satellite mission scenarios that communicate with global ground station networks, as well as generic communication scheduling problem instances. We study properties of optimal satellite communication schedules and the sensitivity of communication capacity to various deterministic and stochastic satellite vehicle and network parameters. The models, tools, and optimization techniques we develop lay the groundwork for our larger goals: optimal satellite vehicle design and autonomous real-time operational scheduling of heterogeneous satellite missions and ground station networks.
    PhD, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/97912/1/saracs_1.pd
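
    As a toy illustration of the communication scheduling problems mentioned above (and not the thesis' models, which also track on-board energy and data dynamics), the sketch below selects a set of non-overlapping ground-station contact windows that maximizes the total data volume downloaded, using weighted-interval-scheduling dynamic programming; the contact times and rates are invented.

```python
# Simplified sketch: pick non-overlapping contact windows to maximize the
# downloaded data volume, ignoring energy and buffer dynamics.
from bisect import bisect_right

def max_download(contacts):
    """contacts: list of (start, end, data_rate); returns max data volume."""
    contacts = sorted(contacts, key=lambda c: c[1])      # sort by end time
    ends = [c[1] for c in contacts]
    best = [0.0] * (len(contacts) + 1)                   # best[i]: first i contacts
    for i, (s, e, rate) in enumerate(contacts, start=1):
        volume = rate * (e - s)                          # data in this pass
        j = bisect_right(ends, s, 0, i - 1)              # last compatible contact
        best[i] = max(best[i - 1], best[j] + volume)
    return best[-1]

# Hypothetical contact plan: (start [s], end [s], downlink rate [Mb/s]).
print(max_download([(0, 600, 2.0), (300, 900, 4.0), (1000, 1500, 1.5)]))
```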

    Bidding Strategy for Networked Microgrids in the Day-Ahead Electricity Market

    In recent years, microgrids have drawn increasing attention from both academic and industrial sectors due to their enormous potential benefits to power systems. Microgrids are essentially highly customized small-scale power systems. Their islanding capability enables more flexible and energy-efficient operation, and microgrids have proved able to provide reliable and environmentally friendly electricity to quality-sensitive or off-grid consumers. In addition, in grid-connected operation they can also support the utility grid. Continuing microgrid deployments worldwide indicate a paradigm shift from traditional centralized large-scale systems toward more distributed and customized small-scale systems. However, microgrids can cause as many problems as they solve, and more effort is needed to address the problems raised by microgrid integration. Since future power systems will contain multiple microgrids, the coordination problems among individual microgrids remain to be solved. To facilitate the adoption of microgrids, this thesis investigates system-level modeling methods for coordinating multiple microgrids that participate in the electricity market. First, the thesis reviews the background and recent development of microgrid coordination models; the shortcomings of existing studies are identified, and the research objectives and structure of the thesis are presented. Second, the thesis examines and compares the most common frameworks for optimization under uncertainty, and an improved unit commitment model considering uncertain sub-hourly wind power ramp behavior is presented to illustrate the reformulation and solution of optimization models under uncertainty. Next, a price-maker bidding strategy for collaborative networked microgrids is presented: multiple microgrids are coordinated as a single dispatchable entity that participates in the market as a price-maker, the market-clearing process is modeled using system residual supply/demand price-quota curves, and the multiple uncertainty sources in the bidding model are mitigated with a hybrid stochastic-robust optimization framework. The thesis further considers the privacy concerns of individual microgrids in the coordination process, and a privacy-preserving solution method based on Dantzig-Wolfe decomposition is therefore proposed to solve the bidding problem; both the computational and the economic performance of the proposed model are compared with those of a conventional centralized coordination framework. Finally, the thesis suggests future research directions for coordination problems among multiple microgrids.
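
    To make the price-maker mechanism concrete, here is a minimal sketch assuming a stepwise residual-demand price-quota curve and an aggregate marginal cost, both invented for illustration: the quantity offered by the coordinated microgrids moves the clearing price along the curve, and the most profitable bid quantity is chosen by enumeration. The thesis' actual bidding model additionally handles multiple uncertainty sources and unit constraints.

```python
# Toy price-maker bid selection against a stepwise price-quota curve.
# Each step: (max cumulative quantity in MWh, clearing price in $/MWh).
price_quota_curve = [(50, 80.0), (120, 65.0), (200, 50.0), (300, 35.0)]
marginal_cost = 40.0   # aggregate marginal generation cost of the microgrids

def clearing_price(q):
    for q_max, price in price_quota_curve:
        if q <= q_max:
            return price
    return price_quota_curve[-1][1]   # beyond the curve: last (lowest) price

def profit(q):
    return (clearing_price(q) - marginal_cost) * q

# Evaluate candidate bid quantities and keep the most profitable one.
candidates = range(0, 301, 10)
best_q = max(candidates, key=profit)
print(f"bid {best_q} MWh at {clearing_price(best_q)} $/MWh, "
      f"profit {profit(best_q)} $")
```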

    Digital system bus integrity

    This report summarizes and describes the results of a study of current and emerging multiplex data buses as applicable to digital flight systems, particularly in civil aircraft. Technology for the pre-1995 and post-1995 timeframes has been delineated and critiqued relative to the requirements envisioned for those periods. The primary emphasis has been on the assured airworthiness of the more prevalent bus types, with attention to attributes such as fault tolerance, environmental susceptibility, and problems under continuing investigation. Additionally, the ability to certify systems relying on such buses has been addressed.

    Power and Reliability Management of SoCs

    Today's embedded systems integrate multiple IP cores for processing, communication, and sensing on a single die as systems-on-chip (SoCs). Aggressive transistor scaling, decreased voltage margins, and increased processor power and temperature have made reliability assessment a much more significant issue. Although the reliability of devices and interconnect has been broadly studied, in this work we study the tradeoff between reliability and power consumption for component-based SoC designs. We focus specifically on hard error rates, since hard errors cause a device to permanently stop operating. We also present a joint reliability and power management optimization problem whose solution is an optimal management policy. When careful joint policy optimization is performed, we obtain a significant improvement in energy consumption (40%) while meeting a reliability constraint across all SoC operating temperatures.
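
    A toy sketch of the tradeoff being optimized, with invented numbers rather than the paper's SoC model or policy optimization: each candidate power-management policy comes with an average power and an estimated hard-failure rate, and the selected policy is the lowest-power one that still meets the reliability constraint.

```python
# Toy joint power/reliability policy selection (illustrative numbers only).
# Each policy: (average power [W], hard-failure rate in FIT, i.e. failures
# per 10^9 device-hours).
policies = {
    "always_on":        (2.0, 120.0),
    "clock_gating":     (1.4, 150.0),
    "power_gating":     (0.9, 260.0),   # power cycling stresses the device
    "aggressive_sleep": (0.6, 400.0),
}
fit_budget = 300.0   # reliability constraint (maximum allowed FIT)

# Keep only policies that satisfy the reliability constraint, then pick the
# one with the lowest average power.
feasible = {name: power for name, (power, fit) in policies.items()
            if fit <= fit_budget}
best = min(feasible, key=feasible.get)
print("selected policy:", best, "- average power:", feasible[best], "W")
```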

    Improved performance for network simulation

    Over the course of designing and implementing two discrete event simulators, the commercial simulator packages CSIM and DesmoJ were leveraged to allow rapid development of both wired and wireless network models. However, the two resulting simulators demonstrated poor scalability due to the use of multi-threading to maintain state for simulation elements. By using a simple single-process discrete event simulation engine, the running time showed a marked decrease compared to the multi-threaded simulators.
    In one case study, we simulate a simple two-link MPLS network which employs two congestion control mechanisms for inelastic traffic, namely preemption and adaptation. Performance metrics measured include the per-class blocking probability, the customer-average fraction of time streams travel on the preferred path, the customer-average fraction of time at the maximum subscription rate, the customer-average rate of adaptation, and the time-average rate of preemption. We compare the performance of preemption and adaptation individually and collectively against the base case where neither congestion mechanism is used. At the cost of an increased number of rate adaptations and preemption events over a range of regimes, we show that the combined use of preemption and adaptation improves the quality of service and alignment of high-priority traffic while increasing the effective network capacity. As a performance enhancement to the simulator developed to conduct these experiments, we switched to a single-process discrete event simulation engine in place of the multi-threaded simulator. We note a large improvement in running time as the simulation time and capacity increase.
    A second case study was conducted on a wireless simulator. In an effort to simplify the simulator and improve performance, we again moved from a commercial thread-based simulator (CSIM) to a single-process discrete event simulation engine. Results of running time versus network size for the single-process simulator showed a constant-time improvement over the thread-based simulator. To further improve performance, a complementary technique known as model abstraction is also applied. Model abstraction reduces execution time by removing unnecessary simulation detail. In this thesis we propose three abstractions of the IEEE 802.11 protocol. The goodput ratio versus transmission power and end-to-end delay versus offered load performance metrics are compared against the OPNET commercial simulator.
    M.S., Computer Engineering -- Drexel University, 200
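
    The single-process engine referred to above can be captured in a few lines. The sketch below is illustrative rather than the thesis code: a heap-ordered event list replaces per-element threads, and the toy usage models a packet source feeding a link with a fixed service time; names and timings are invented.

```python
# Minimal single-process discrete-event engine: one heap-ordered event list
# instead of one thread per simulation element.
import heapq

class Simulator:
    def __init__(self):
        self.now = 0.0
        self._queue = []   # heap of (time, sequence, callback)
        self._seq = 0      # tie-breaker so callbacks are never compared

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, callback = heapq.heappop(self._queue)
            callback()

# Example: a packet source feeding a single link with fixed service time.
sim = Simulator()

def arrival():
    print(f"{sim.now:6.2f}  packet arrives")
    sim.schedule(0.5, departure)   # service time
    sim.schedule(1.0, arrival)     # next arrival

def departure():
    print(f"{sim.now:6.2f}  packet departs")

sim.schedule(0.0, arrival)
sim.run(until=3.0)
```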

    Adaptive methods for linear dynamic systems in the frequency domain with application to global optimization

    Designers often seek to improve their designs by considering several discrete modifications. These modifications may require changes in materials and geometry, as well as the addition or removal of individual components. In general, if the modifications are applied one at a time, none of them may sufficiently improve the performance. Also, the total number of modifications that may be included in the final design is often limited due to cost or other constraints. The designer must therefore determine the optimal combination of modifications in order to complete the design. While this design challenge arises fairly commonly in practice, very little research has studied it in its full generality. This work assumes that the mathematical descriptions of the design and its modifications are frequency-dependent matrices. Such matrices typically arise from finite element analysis as well as other modeling techniques. Computing performance metrics related to the steady-state forced response, also known as performing a frequency sweep, involves factorizing these matrices many times. Additionally, determining the globally optimal design in this case involves an exhaustive search over the combinations of modifications. These factors lead to prohibitively long run times, particularly as the size of the system grows. The research presented here seeks to reduce these costs, making such a search feasible. Several innovative techniques have been developed and tested over the course of the research, focused on two primary areas: adaptive frequency sweeps and efficient combinatorial optimization. The frequency sweep methods rely on an adaptive bisection of the frequency range and either a subspace approximation based on implicit interpolatory model order reduction or an elementwise approximation using piecewise multi-point Padé interpolants. Additionally, a strategy for augmenting the adaptive methods with the system's modal information is presented. For combinatorial optimization, an approximation algorithm is developed that capitalizes on any dynamic uncoupling between modifications. The net effect of this work is to allow designers and researchers to develop new dynamic systems and perform analyses faster and more efficiently than ever before.
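
    A minimal sketch of an adaptive-bisection frequency sweep in the spirit described above (not the dissertation's algorithms, which use interpolatory model order reduction and Padé interpolants rather than the simple linear midpoint check used here): the exact response is computed at interval endpoints, a midpoint evaluation decides whether interpolation across the interval is accurate enough, and intervals that fail the check are bisected. The 2-DOF system matrices are invented.

```python
# Adaptive bisection of a frequency range with a midpoint accuracy check.
import numpy as np

def response(w, M, C, K, f, c):
    """Exact steady-state response magnitude |c^T x| with (K + i w C - w^2 M) x = f."""
    A = K + 1j * w * C - (w ** 2) * M
    return abs(c @ np.linalg.solve(A, f))

def adaptive_sweep(w_lo, w_hi, exact, tol=1e-3, max_depth=12):
    pts = {w_lo: exact(w_lo), w_hi: exact(w_hi)}
    def refine(a, b, depth):
        mid = 0.5 * (a + b)
        interp = 0.5 * (pts[a] + pts[b])      # linear interpolation at midpoint
        pts[mid] = exact(mid)                 # exact evaluation at midpoint
        if depth < max_depth and abs(pts[mid] - interp) > tol * max(abs(pts[mid]), 1e-12):
            refine(a, mid, depth + 1)
            refine(mid, b, depth + 1)
    refine(w_lo, w_hi, 0)
    return sorted(pts.items())

# Toy 2-DOF system (illustrative matrices only).
M = np.eye(2)
K = np.array([[4.0, -2.0], [-2.0, 4.0]])
C = 0.05 * K
f = np.array([1.0, 0.0])
c = np.array([0.0, 1.0])
sweep = adaptive_sweep(0.1, 4.0, lambda w: response(w, M, C, K, f, c))
print(len(sweep), "frequency points evaluated")
```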

    An admission control scheme for IEEE 802.11e wireless local area networks

    Includes bibliographical references (leaves 80-84).
    Recent years have seen a tremendous increase in the deployment and use of IEEE 802.11 Wireless Local Area Networks (WLANs). These networks are easy to deploy and maintain, while providing reasonably high data rates at low cost. In the paradigm of Next-Generation Networks (NGNs), WLANs can be seen as an important access network technology to support IP multimedia services. However, a traditional WLAN does not provide Quality of Service (QoS) support, since it was originally designed for best-effort operation. The IEEE 802.11e standard was introduced to overcome the lack of QoS support in legacy IEEE 802.11 WLANs. It enhances the Media Access Control (MAC) layer operations to incorporate service differentiation. However, there is a need to prevent overloading of wireless channels, since the QoS experienced by traffic flows degrades on heavily loaded channels. An admission control scheme for IEEE 802.11e WLANs would be the best solution for limiting the amount of multimedia traffic so that channel overloading is prevented. Some of the work in the literature proposes admission control solutions to protect the QoS of real-time traffic for IEEE 802.11e Enhanced Distributed Channel Access (EDCA); however, these solutions often under-utilize the resources of the wireless channels. A measurement-aided, model-based admission control scheme for IEEE 802.11e EDCA WLANs is proposed to provide reasonable bandwidth guarantees to all existing flows. The scheme makes use of bandwidth estimations, obtained from a developed analytical model of IEEE 802.11e EDCA channels, that allow the bandwidth guarantees of all flows admitted into the network to be protected, while also aiming to accept the maximum number of flows that the network's resources can accommodate. The performance of the proposed admission control scheme is evaluated through simulations in NS-2. Results show that accurate bandwidth estimations are obtained when comparing the estimated achievable bandwidth to the actual simulated bandwidth, and they validate that the bandwidth needs of all admitted traffic are always satisfied when the admission control scheme is applied. It was also found that the scheme admits the maximum number of flows that the network's capacity allows.
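
    The admission rule itself can be sketched compactly. The snippet below is illustrative only: estimated_achievable_bandwidth is a stand-in for the dissertation's analytical EDCA channel model, and a new flow is admitted only if the estimated achievable bandwidth still covers the requirements of every already-admitted flow plus the newcomer.

```python
# Toy bandwidth-based admission control for an 802.11e-style channel.

def estimated_achievable_bandwidth(flows):
    # Stand-in for the analytical EDCA model: assume a nominal 30 Mb/s of
    # usable throughput that degrades slightly with each contending flow.
    return 30.0 * (0.97 ** len(flows))

def admit(existing_flows, new_flow_demand):
    """Admit the new flow only if every admitted flow keeps its guarantee."""
    candidate = existing_flows + [new_flow_demand]
    return sum(candidate) <= estimated_achievable_bandwidth(candidate)

admitted = []
for demand in [4.0, 6.0, 5.0, 8.0, 3.0]:   # requested rates in Mb/s
    if admit(admitted, demand):
        admitted.append(demand)
print("admitted flows (Mb/s):", admitted)
```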

    Methodologies for the analysis of value from delay-tolerant inter-satellite networking

    In a world that is becoming increasingly connected, both in the sense of people and of devices, it is no surprise that users of the data enabled by satellites are exploring the potential brought about by a more connected Earth-orbit environment. Lower data latency, higher revisit rates and higher volumes of information are the order of the day, and inter-connectivity is one of the ways in which this could be achieved. Within this dissertation, three main topics are investigated and built upon. First, the process of routing data through intermittently connected delay-tolerant networks is examined and a new routing protocol, called Spae, is introduced. The consideration of downstream resource limitations forms the heart of this novel approach, which is shown to provide improvements in data routing that closely match those of a theoretically optimal scheme. Next, the value of inter-satellite networking is derived in such a way that removes the difficult task of costing the enabling inter-satellite link technology. Instead, value is defined as the price one should be willing to pay for the technology while retaining a mission value greater than that of its non-networking counterpart. This is achieved through the use of multi-attribute utility theory, trade-space analysis and system modelling, and is demonstrated in two case studies. Finally, the effects of uncertainty in the form of sub-system failure are considered. Inter-satellite networking is shown to increase a system's resilience to failure through the introduction of additional, partially failed states made possible by data relay. The lifetime value of a system is then captured using a semi-analytical approach exploiting Markov chains, validated with a numerical Monte Carlo simulation approach. It is evident that while inter-satellite networking may offer more value in general, it does not necessarily result in a decrease in the loss of utility over the lifetime.
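
    The semi-analytical lifetime-value idea can be illustrated with a toy three-state degradation chain (fully operational, partially failed, failed) whose monthly transition probabilities and per-state utilities are invented for this sketch: the expected utility accumulated over the mission is obtained by propagating the state distribution through the transition matrix, and a small Monte Carlo run provides the kind of numerical cross-check mentioned above.

```python
# Toy lifetime-utility calculation for a degrading satellite system.
import numpy as np

# States: 0 = fully operational, 1 = partially failed (relay keeps some
# utility), 2 = failed (absorbing).  Monthly transition probabilities.
P = np.array([[0.97, 0.02, 0.01],
              [0.00, 0.95, 0.05],
              [0.00, 0.00, 1.00]])
utility = np.array([1.0, 0.6, 0.0])   # utility delivered per month in each state

def expected_lifetime_utility(months, start=0):
    dist = np.zeros(3)
    dist[start] = 1.0
    total = 0.0
    for _ in range(months):
        total += dist @ utility       # utility accrued this month
        dist = dist @ P               # propagate the state distribution
    return total

print("semi-analytical expected utility over 60 months:",
      expected_lifetime_utility(60))

# Monte Carlo cross-check of the same quantity.
rng = np.random.default_rng(0)

def mc_estimate(months, runs=5000):
    total = 0.0
    for _ in range(runs):
        s = 0
        for _ in range(months):
            total += utility[s]
            s = rng.choice(3, p=P[s])
    return total / runs

print("Monte Carlo estimate:                           ", mc_estimate(60))
```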