143 research outputs found

    On implementing dynamically reconfigurable architectures

    Dynamically reconfigurable architectures have the ability to change their structure at each step of a computation. This dissertation studies various aspects of implementing dynamic reconfiguration, ranging from hardware building blocks and low-level architectures to modeling issues and high-level algorithm design. First we derive conditions under which classes of communication sets can be optimally scheduled on the circuit-switched tree (CST). Then we present a method to configure the CST to perform in constant time all communications scheduled for a step. This results in a constant-time implementation of a step of a segmentable bus, a fundamental dynamically reconfigurable structure. We introduce a new bus delay measure (bends-cost) and define the bends-cost LR-Mesh; the LR-Mesh is a widely used reconfigurable model. Unlike the (idealized) LR-Mesh, which ignores bus delay, the bends-cost LR-Mesh uses the number of bends in a bus to estimate its delay. We present an implementation for which the bends-cost is an accurate estimate of the actual delay. We present algorithms to simulate various LR-Mesh configuration classes on the bends-cost LR-Mesh. For semimonotonic configurations, a Θ(N) × Θ(N) bends-cost LR-Mesh with bus delay at most D can simulate a step of the idealized N × N LR-Mesh in O((log N/(log D − log Δ))²) time (where Δ is the delay of an N-element segmentable bus), while employing about the same number of processors. For some special cases this time reduces to O(log N/(log D − log Δ)). If D = N^ε, for an arbitrarily small constant ε > 0, then the running times of bends-cost LR-Mesh algorithms are within a constant factor of their idealized counterparts. We also prove that with a polynomial blowup in the number of processors and D = N^ε, the bends-cost LR-Mesh can simulate any step of an idealized LR-Mesh in constant time, thereby establishing that these models have the same power.
    We present an implementation (in VHDL) of the Enhanced Self-Reconfigurable Gate Array (E-SRGA) architecture and perform a cost-benefit study of different dynamic reconfiguration features. This study shows our approach to be feasible.
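As a rough numerical illustration (not part of the dissertation), the simulation-time bound above can be evaluated for sample parameters; here `n`, `d`, and `delta` stand in for the mesh side length N, the permitted bus delay D, and the segmentable-bus delay Δ, with the hidden constant taken as 1:

```python
import math

def simulation_time_bound(n, d, delta):
    """The O((log N / (log D - log Delta))^2) bound from the abstract,
    with the constant factor taken as 1 for illustration."""
    return (math.log2(n) / (math.log2(d) - math.log2(delta))) ** 2

# When D = N**eps for a constant eps and Delta is constant, the bound
# stays bounded as N grows, matching the constant-factor claim above.
for n in (2**10, 2**20, 2**30):
    t = simulation_time_bound(n, d=n**0.5, delta=2)   # eps = 1/2
    print(n, round(t, 2))
```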

    Time-Optimal Algorithms on Meshes With Multiple Broadcasting

    The mesh-connected computer architecture has emerged as a natural choice for solving a large number of computational tasks in image processing, computational geometry, and computer vision. However, due to its large communication diameter, the mesh tends to be slow when it comes to handling data transfer operations over long distances. In an attempt to overcome this problem, mesh-connected computers have recently been augmented by the addition of various types of bus systems. One such system, known as the mesh with multiple broadcasting, involves enhancing the mesh architecture by the addition of row and column buses. The mesh with multiple broadcasting has proven to be feasible to implement in VLSI, and is used in the DAP family of computers. In recent years, efficient algorithms to solve a number of computational problems on meshes with multiple broadcasting have been proposed in the literature. The problems considered in this thesis are semigroup computations, sorting, multiple search, various convexity-related problems, and some tree problems. Based on the size of the input data for the problem under consideration, existing results can be broadly classified into sparse and dense. Specifically, for a given √n × √n mesh with multiple broadcasting, we refer to problems involving m ∈ O(√n) items as sparse, while the case m ∈ Θ(n) will be referred to as dense. Finally, the case corresponding to 2 ≤ m ≤ n will be termed general. The motivation behind the current work is twofold. First, time-optimal solutions are proposed for the problems listed above. Second, an attempt is made to remove the artificial limitation of problems studied to sparse and dense cases. To establish the time-optimality of the algorithms presented in this work, we use some existing lower bound techniques along with new ones that we develop. We solve the semigroup computation problem for the general case and present a novel lower bound argument.
    We solve the multiple search problem in the general case and present some surprising applications to computational geometry. In the case of sorting, the general case is defined to be slightly different. For the specified range of the input size, we present a time- and VLSI-optimal algorithm. We also present time lower bound results and matching algorithms for a number of convexity-related and tree problems in the sparse case.
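The architecture described above can be illustrated with a toy sequential simulation of a semigroup computation (here, max) on a √n × √n mesh with row and column broadcast buses. This is a sketch of the machine model only, not the time-optimal algorithm from the thesis; the row reduction is collapsed into one abstract step for brevity:

```python
# Toy sequential simulation of a semigroup computation (max) on a
# sqrt(n) x sqrt(n) mesh with row and column broadcast buses.
# Each abstract "bus step" counts as one unit of time here.

def mesh_semigroup_max(grid):
    steps = 0
    # Step 1: each row combines its items; the row leader (column 0)
    # ends up holding the row's semigroup value. The whole row
    # reduction is collapsed into one abstract step in this sketch.
    row_max = [max(row) for row in grid]
    steps += 1
    # Step 2: the column-0 processors combine their values over the
    # column bus, yielding the global result.
    result = max(row_max)
    steps += 1
    return result, steps

grid = [[7, 2, 9], [4, 8, 1], [3, 6, 5]]
print(mesh_semigroup_max(grid))   # (9, 2)
```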

    A Computational Paradigm on Network-Based Models of Computation

    The maturation of computer science has strengthened the need to consolidate isolated algorithms and techniques into general computational paradigms. The main goal of this dissertation is to provide a unifying framework which captures the essence of a number of problems in seemingly unrelated contexts in database design, pattern recognition, image processing, VLSI design, computer vision, and robot navigation. The main contribution of this work is a computational paradigm comprising the unifying framework, referred to as the Multiple Query problem, along with a generic solution to the Multiple Query problem. To demonstrate the applicability of the paradigm, a number of problems from different areas of computer science are solved by formulating them in this framework. Also, to show practical relevance, two fundamental problems were implemented in the C language using MPI. The code can be ported onto many commercially available parallel computers; in particular, the code was tested on an IBM-SP2 and on a network of workstations.
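The dissertation describes the Multiple Query framework only abstractly here. As a hypothetical instance (names and problem choice are ours, not from the dissertation), batched membership queries against a preprocessed sorted structure have the same preprocess-once, answer-many shape:

```python
import bisect

# Hypothetical instance of the preprocess-once / answer-many shape:
# build a structure for the data set once, then answer a batch of
# queries against it.
def preprocess(data):
    return sorted(data)

def multiple_query(structure, queries):
    out = []
    for q in queries:
        i = bisect.bisect_left(structure, q)
        out.append(i < len(structure) and structure[i] == q)
    return out

s = preprocess([5, 1, 9, 3])
print(multiple_query(s, [3, 4, 9]))   # [True, False, True]
```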

    Simulating a Mesh with Separable Buses by a Mesh with Partitioned Buses

    This paper studies the simulation problem of meshes with separable buses (MSB) by meshes with multiple partitioned buses (MMPB). The MSB and the MMPB are mesh-connected computers enhanced by the addition of broadcasting buses along every row and column. The broadcasting buses of the MSB, called separable buses, can be dynamically sectioned into smaller bus segments under program control, while those of the MMPB, called partitioned buses, are statically partitioned in advance. In the MSB model, each row/column has only one separable bus, while in the MMPB model, each row/column has L partitioned buses (L ≥ 2). We consider the simulation and the scaling simulation of the MSB by the MMPB, and show that the MMPB of size n × n can simulate the MSB of size n ×

    Efficient Algorithms for a Mesh-Connected Computer with Additional Global Bandwidth

    This thesis shows that adding additional global bandwidth to a mesh-connected computer can greatly improve its performance. The goal of this project is to design algorithms for mesh-connected computers augmented with limited global bandwidth, so that we can further enhance our understanding of the parallel/serial nature of problems on evolving parallel architectures. We do this by first solving several problems associated with fundamental data movement, then summarizing ways to resolve different situations one may observe in data movement in parallel computing. This can help us to understand whether a problem is easily parallelizable on different parallel models. We give efficient algorithms to solve several fundamental problems, which include sorting, counting, fast Fourier transform, finding a minimum spanning tree, finding a convex hull, etc. We show that adding a small amount of global bandwidth makes a practical design that combines aspects of mesh and fully connected models to achieve the benefits of each. Most of the algorithms are optimal. For future work, we believe that algorithms with peak-power constraints can make our model well adapted to recent architectures in high performance computing.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/150001/1/anyujie_1.pd
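The counting problem mentioned above admits a simple toy model of the mesh-plus-global-bandwidth idea (our illustration, not the thesis's algorithm): each processor counts its own block locally, and only the p partial counts, rather than all n items, cross the limited global channel:

```python
# Toy illustration of local computation plus limited global bandwidth:
# p processors each count matches in their local memory; only p words
# of partial counts (not n data items) use the global channel.

def count_matches(data, p, pred):
    blocks = [data[i::p] for i in range(p)]        # local memories
    local_counts = [sum(1 for x in b if pred(x)) for b in blocks]
    global_traffic = p                             # words sent globally
    return sum(local_counts), global_traffic

data = list(range(100))
total, traffic = count_matches(data, p=4, pred=lambda x: x % 3 == 0)
print(total, traffic)   # 34 4
```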

    Architectures and algorithms for voltage control in power distribution systems

    In this thesis, we propose a hierarchical control architecture for voltage in power distribution networks where there is a separation between the slow time-scale, in which the settings of conventional voltage regulation devices are adjusted, and the fast time-scale, in which voltage regulation through active/reactive power injection shaping is accomplished. Slow time-scale devices will generally be existing hardware, e.g., voltage regulation transformers, which will be dispatched at appropriate time intervals to reduce the wear on their mechanical parts. In contrast, fast time-scale devices are considered to be devices that connect to the grid through power electronics, e.g., photovoltaic (PV) installations. In the slow time-scale control, we propose a method to optimally set the tap position of voltage regulation transformers. We formulate a rank-constrained semidefinite program (SDP), which is then relaxed to obtain a convex optimization that is solved distributively with the Alternating-Direction Method of Multipliers (ADMM). In the fast time-scale control, we propose the following schemes: (i) a feedback-based approach to regulate system voltages, and (ii) an optimization-based approach that maintains the desired operating state through a quadratic program developed from a linear distribution system model. Finally, we showcase the operation of the two time-scale control architecture in an unbalanced three-phase distribution system. The test system in the case studies is derived from the IEEE 123-bus test system and has a high penetration of residential PV installations and electric vehicles (EVs). We provide several examples that demonstrate the interaction between the two time-scales and the impact of the proposed control on component behaviors.
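The feedback-based fast time-scale scheme can be sketched for a single bus with a linearized voltage model; the model constants, gain, and convergence behavior below are illustrative assumptions of ours, not values from the thesis:

```python
# Minimal single-bus sketch of feedback-based voltage regulation via
# reactive power injection, using a linearized model v = v0 + x * q.
# V0 (no-injection voltage), X (sensitivity), and ALPHA (gain) are
# illustrative values; with 0 < ALPHA * X < 2 the loop converges.

V_REF, V0, X, ALPHA = 1.0, 1.05, 0.5, 1.0

def regulate(steps=20):
    q = 0.0
    for _ in range(steps):
        v = V0 + X * q            # linearized voltage response
        q -= ALPHA * (v - V_REF)  # proportional feedback on the error
    return V0 + X * q

print(round(regulate(), 6))       # converges to V_REF = 1.0
```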

    Reasoning Under Uncertainty in Cyber-Physical Systems: Toward Efficient and Secure Operation

    The increased sensing, processing, communication, and control capabilities introduced by cyber-physical systems bring many potential improvements to the operation of society's systems, but also introduce questions as to how one can ensure their efficient and secure operation. This dissertation investigates three questions related to decision-making under uncertainty in cyber-physical systems settings. First, in the context of power systems and electricity markets, how can one design algorithms that guide self-interested agents to a socially optimal and physically feasible outcome, subject to the fact that agents only possess localized information of the system and can only react to local signals? The proposed algorithms, investigated in the context of two distinct models, are iterative in nature and involve the exchange of messages between agents. The first model consists of a network of interconnected power systems controlled by a collection of system operators. Each system operator possesses knowledge of its own localized region and aims to prescribe the cost minimizing set of net injections for its buses. By using relative voltage angles as messages, system operators iteratively communicate to reach a social-cost minimizing and physically feasible set of injections for the whole network. The second model consists of a market operator and market participants (distribution, generation, and transmission companies). Using locational marginal pricing, the market operator is able to guide the market participants to a competitive equilibrium, which, under an assumption on the positivity of prices, is shown to be a globally optimal solution to the non-convex social-welfare maximization problem. Common to both algorithms is the use of a quadratic power flow approximation that preserves important non-linearities (power losses) while maintaining desirable mathematical properties that permit convergence under natural conditions. 
    Second, when a system is under attack from a malicious agent, what models are appropriate for performing real-time and scalable threat assessment and response selection when we only have partial information about the attacker's intent and capabilities? The proposed model, termed the dynamic security model, is based on a type of attack graph, termed a condition dependency graph, and describes how an attacker can infiltrate a cyber network. By embedding a state space on the graph, the model is able to quantify the attacker's progression. Consideration of multiple attacker types, corresponding to attack strategies, allows one to model the defender's uncertainty of the attacker's true strategy/intent. Using noisy security alerts, the defender maintains a belief over both the capabilities/progression of the attacker (via a security state) and its strategy (attacker type). An online, tree-based search method, termed the online defense algorithm, is developed that takes advantage of the model's structure, permitting scalable computation of defense policies. Finally, in partially observable sequential decision-making environments, specifically partially observable Markov decision processes (POMDPs), under what conditions do optimal policies possess desirable structure? Motivated by the dynamic security model, we investigate settings where the underlying state space is partially ordered (i.e., settings where one cannot always say whether one state is better or worse than another state). The contribution lies in the derivation of natural conditions on the problem's parameters such that optimal policies are monotone in the belief for a class of two-action POMDPs. The extension to the partially ordered setting requires defining a new stochastic order, termed the generalized monotone likelihood ratio, and a corresponding class of order-preserving matrices, termed generalized totally positive of order 2.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/144026/1/miehling_1.pd
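The belief maintenance described above follows the standard POMDP Bayes update, b'(s') ∝ O(o|s') Σ_s T(s'|s) b(s). A minimal two-state sketch with toy transition and observation matrices (illustrative values, not from the dissertation):

```python
# Generic POMDP belief update (Bayes filter) over a toy two-state
# "attacker progression" space: state 0 = early, state 1 = progressed.
# T[s][sp] and O[o][sp] are illustrative values, not from the thesis.

T = [[0.9, 0.1],              # attacker mostly stays, sometimes progresses
     [0.0, 1.0]]              # progression is not undone
O = {"alert": [0.2, 0.8],     # an alert is likelier when progressed
     "quiet": [0.8, 0.2]}

def belief_update(b, o):
    pred = [sum(T[s][sp] * b[s] for s in range(2)) for sp in range(2)]
    post = [O[o][sp] * pred[sp] for sp in range(2)]
    z = sum(post)             # normalizing constant P(o)
    return [p / z for p in post]

b = belief_update([1.0, 0.0], "alert")
print([round(x, 3) for x in b])   # [0.692, 0.308]
```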

    A World-Class University-Industry Consortium for Wind Energy Research, Education, and Workforce Development: Final Technical Report


    Topological changes in data-driven dynamic security assessment for power system control

    The integration of renewable energy sources into the power system requires new operating paradigms. The higher uncertainty in generation and demand makes operations much more dynamic than in the past. Novel operating approaches that consider these new dynamics are needed to operate the system close to its physical limits and fully utilise the existing grid assets. Otherwise, expensive investments in redundant grid infrastructure become necessary. This thesis reviews the key role of digitalisation in the shift toward a decarbonised and decentralised power system. Algorithms based on advanced data analytic techniques and machine learning are investigated to operate the system assets at full capacity while continuously assessing and controlling security. The impact of topological changes on the performance of these data-driven approaches is studied, and algorithms to mitigate this impact are proposed. The relevance of this study resides in the increasingly high frequency of topological changes in modern power systems and in the need to improve the reliability of digitalised approaches against such changes to reduce the risks of relying on them. A novel physics-informed approach to select the variables (or features) most relevant to the dynamic security of the system is first proposed and then used in two different three-stage workflows. In the first workflow, the proposed feature selection approach makes it possible to train classification models from machine learning (or classifiers) close to real-time operation, improving their accuracy and robustness against uncertainty. In the second workflow, the selected features are used to define a new metric to detect high-impact topological changes and to train new classifiers in response to such changes. Subsequently, the potential of corrective control for a dynamically secure operation is investigated. 
    By using a neural network to learn the safety certificates for the post-fault system, the corrective control is combined with preventive control strategies to maintain system security while reducing operational costs and carbon emissions. Finally, exemplary changes in assumptions for data-driven dynamic security assessment when moving from high-inertia to low-inertia systems are questioned, confirming that machine-learning-based models will make significantly more sense in future systems. Future research directions in terms of data generation and model reliability of advanced digitalised approaches for dynamic security assessment and control are finally indicated.
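The thesis's metric for detecting high-impact topological changes is not specified in the abstract; as a toy stand-in of ours, drift in a selected feature can be flagged by comparing the mean of new operating points against the training mean, in units of the training standard deviation (threshold and data are illustrative):

```python
import statistics

# Toy sketch: flag a potential high-impact topological change as drift
# in a selected feature. Fit mean/stdev on training operating points,
# then flag new samples whose mean shifts beyond a z-score threshold.
# The feature values and threshold are illustrative assumptions.

def fit(feature_samples):
    return statistics.mean(feature_samples), statistics.stdev(feature_samples)

def topology_change(model, new_samples, threshold=3.0):
    mu, sigma = model
    shift = abs(statistics.mean(new_samples) - mu) / sigma
    return shift > threshold

model = fit([1.0, 1.1, 0.9, 1.05, 0.95])
print(topology_change(model, [1.0, 1.02, 0.98]))   # same regime: False
print(topology_change(model, [2.0, 2.1, 1.9]))     # shifted regime: True
```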