
    Multilevel Aggregation Methods for Small-World Graphs with Application to Random-Walk Ranking

    We describe multilevel aggregation in the specific context of using Markov chains to rank the nodes of graphs. More generally, aggregation is a graph coarsening technique with a wide range of possible uses in information retrieval applications. Aggregation successfully generates efficient multilevel methods for solving nonsingular linear systems and various eigenproblems from discretized partial differential equations, which tend to involve mesh-like graphs. Our primary goal is to extend the applicability of aggregation to similar problems on small-world graphs, with a secondary goal of developing these methods for eventual application to many other tasks, such as using the information in the hierarchies for node clustering or pattern recognition. The nature of small-world graphs makes it difficult for many coarsening approaches to obtain useful hierarchies that have complexity on the order of the number of edges in the original graph while retaining its relevant properties. Here, for a set of synthetic graphs with the small-world property, we show how multilevel hierarchies formed with non-overlapping strength-based aggregation have optimal or near-optimal complexity. We also provide an example of how these hierarchies are employed to accelerate convergence of methods that calculate the stationary probability vector of large, sparse, irreducible, slowly-mixing Markov chains on such small-world graphs. The stationary probability vector of a Markov chain allows one to rank the nodes in a graph based on the likelihood that a long random walk visits each node. These ranking approaches have a wide range of applications, including information retrieval and web ranking, performance modeling of computer and communication systems, analysis of social networks, dependability and security analysis, and analysis of biological systems.
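    To make the ranking computation concrete, here is a minimal power-iteration sketch of the baseline the multilevel hierarchy accelerates: computing the stationary probability vector of a random walk. The 4-node graph and tolerances are illustrative choices, not taken from the paper; for the slowly-mixing chains the abstract targets, this plain iteration is exactly what converges too slowly.

        # Minimal sketch (illustrative, not the paper's multilevel method):
        # power iteration for the stationary probability vector of a random
        # walk on a small undirected graph.
        import numpy as np

        def stationary_vector(A, tol=1e-10, max_iter=100_000):
            """Power iteration on the column-stochastic matrix P = A D^{-1}."""
            degrees = A.sum(axis=0)          # out-degree of each node
            P = A / degrees                  # P[i, j] = A[i, j] / deg(j)
            x = np.full(A.shape[0], 1.0 / A.shape[0])
            for _ in range(max_iter):
                x_new = P @ x
                if np.abs(x_new - x).sum() < tol:
                    return x_new
                x = x_new
            return x

        # Hypothetical 4-node undirected graph; ranking follows stationary mass.
        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 1],
                      [1, 1, 0, 1],
                      [0, 1, 1, 0]], dtype=float)
        pi = stationary_vector(A)
        print(pi)  # -> [0.2, 0.3, 0.3, 0.2]; entries sum to 1

    On an undirected graph the stationary mass of a node is proportional to its degree, which the printed output confirms; the interesting (and slow) cases are the large, sparse, slowly-mixing chains described above.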

    Non-intrusive load monitoring solutions for low- and very low-rate granularity

    Large-scale smart energy metering deployment worldwide and the integration of smart meters within the smart grid are enabling two-way communication between the consumer and the energy network, thus ensuring an improved response to demand. Energy disaggregation, or non-intrusive load monitoring (NILM), namely disaggregation of the total metered electricity consumption down to individual appliances using purely algorithmic tools, is gaining popularity as an added value that makes the most of meter data. In this thesis, the first contribution tackles the low-rate NILM problem by proposing an approach based on graph signal processing (GSP) that does not require any training. Note that low-rate NILM refers to NILM of active power measurements only, at rates from 1 second to 1 minute. Adaptive thresholding, signal clustering and pattern matching are implemented via GSP concepts and applied to the NILM problem. Then, for further demonstration of GSP potential, GSP concepts are applied at both the physical signal level, via graph-based filtering, and the data level, via effective semi-supervised GSP-based feature matching. The proposed GSP-based NILM-improving methods are generic and can be used to improve the results of various event-based NILM approaches. NILM solutions for very low data rates (15-60 min) cannot leverage low- to high-rate NILM approaches. Therefore, the third contribution of this thesis comprises three very low-rate load disaggregation solutions, based on (i) supervised K-nearest neighbours relying on features such as statistical measures of the energy signal, time-usage profile of appliances and reactive power consumption (if available); (ii) unsupervised optimisation performing minimisation of the error between the aggregate and the sum of estimated individual loads, where the energy consumed by the always-on load is heuristically estimated prior to further disaggregation and appliance models are built only from manufacturer information; and (iii) GSP, as a variant of the aforementioned GSP-based solution proposed for low-rate load disaggregation, with an additional graph of time-of-day information.
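    As an illustration of the GSP machinery such approaches build on (a sketch under assumed parameters, not the thesis implementation), the following clusters power-change events by treating them as nodes of a graph with Gaussian-kernel edge weights and splitting on the eigenvector of the second-smallest Laplacian eigenvalue; `sigma` and the event magnitudes are hypothetical.

        # Illustrative GSP building block: cluster power-change events via
        # the graph Laplacian of a Gaussian-kernel similarity graph.
        import numpy as np

        events = np.array([120.0, 118.0, 1500.0, 1510.0, 119.0, 1495.0])  # watts
        sigma = 50.0                                 # hypothetical kernel width

        diff = events[:, None] - events[None, :]
        W = np.exp(-(diff ** 2) / sigma ** 2)        # Gaussian-kernel adjacency
        np.fill_diagonal(W, 0.0)
        L = np.diag(W.sum(axis=1)) - W               # combinatorial graph Laplacian

        # The eigenvector of the second-smallest eigenvalue splits the events
        # into two appliance-like clusters by sign.
        eigvals, eigvecs = np.linalg.eigh(L)
        labels = (eigvecs[:, 1] > 0).astype(int)
        print(labels)   # e.g. [0 0 1 1 0 1]: ~120 W events vs ~1500 W events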

    Parallel fast multipole methods for the simulation of extremely large electromagnetic scattering problems


    Site-Based Partitioning and Repartitioning Techniques for Parallel PageRank Computation

    The PageRank algorithm is an important component in effective web search. At the core of this algorithm are repeated sparse matrix-vector multiplications, where the involved web matrices grow in parallel with the growth of the web and are stored in a distributed manner due to space limitations. Hence, the PageRank computation, which is frequently repeated, must be performed in parallel with high efficiency and low preprocessing overhead while considering the initial distributed nature of the web matrices. Our contributions in this work are twofold. We first investigate the application of state-of-the-art sparse matrix partitioning models in order to attain high efficiency in parallel PageRank computations, with a particular focus on reducing the preprocessing overhead they introduce. For this purpose, we evaluate two different compression schemes on the web matrix using the site information inherently available in links. Second, we consider the more realistic scenario of starting with initially distributed data and extend our algorithms to cover the repartitioning of such data for efficient PageRank computation. We report performance results using our parallelization of a state-of-the-art PageRank algorithm on two different PC clusters with 40 and 64 processors. Experiments show that the proposed techniques achieve considerably high speedups while incurring a preprocessing overhead of several iterations (for some instances even less than a single iteration) of the underlying sequential PageRank algorithm. © 2011 IEEE.
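    The computational core described above is repeated sparse matrix-vector multiplication (SpMV). A minimal serial sketch of that kernel follows (the paper's contribution is the partitioning and parallelization, not this kernel); the damping factor and the tiny link matrix are illustrative choices.

        # PageRank as repeated sparse matrix-vector products (serial sketch).
        import numpy as np
        from scipy.sparse import csr_matrix

        def pagerank(P, d=0.85, tol=1e-10):
            """P is column-stochastic: P[i, j] = 1/outdeg(j) if j links to i."""
            n = P.shape[0]
            r = np.full(n, 1.0 / n)
            while True:
                r_new = d * (P @ r) + (1.0 - d) / n   # one SpMV per iteration
                if np.abs(r_new - r).sum() < tol:
                    return r_new
                r = r_new

        # Hypothetical 3-page web: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
        rows = [1, 2, 2, 0]; cols = [0, 0, 1, 2]; vals = [0.5, 0.5, 1.0, 1.0]
        P = csr_matrix((vals, (rows, cols)), shape=(3, 3))
        print(pagerank(P))  # page 2, linked by both others, ranks highest

    In the parallel setting studied in the paper, the rows (or nonzeros) of P are distributed across processors, so the quality of the sparse matrix partitioning directly controls communication volume per SpMV.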

    Broadband Multilevel Fast Multipole Methods

    Numerical simulations of electromagnetic fields are very important for a plethora of modern applications like antenna design, wireless communication systems, optical systems, high-frequency circuits and so on. As a consequence, there is much interest in finding algorithms that make these simulations as computationally efficient as possible. One of the leading classes of algorithms consists of the so-called Fast Multipole Methods. These methods use a subdivision of the geometry into boxes on multiple levels, in combination with a decomposition of the Green function. For high frequency simulations, where the wavelength is smaller than the smallest features of the geometry, a propagating plane wave decomposition leads to a very efficient algorithm. Unfortunately, this decomposition fails when the geometry contains features smaller than the wavelength, which is the case for broadband simulations. Broadband simulations are becoming increasingly important, for example in the simulation of high frequency printed circuit boards and microwave circuits, metamaterials or the scattering of radar waves off complex shapes. Because of the failure of the propagating plane wave decomposition, performing broadband simulations requires the construction of a hybrid algorithm which uses the propagating plane wave decomposition when the boxes are large enough and some low frequency decomposition when they are not. However, the known low frequency decompositions are usually suboptimal compared to the theoretical performance of the propagating plane wave decomposition. In this work, the focus will be on these low frequency decompositions. First, an improvement over a known low frequency decomposition (the spectral decomposition) is presented. Among other techniques, the well-known Beltrami decomposition of electromagnetic fields is shown to significantly reduce the computational burden in this scheme. Secondly, entirely novel ways of decomposing the Green function are developed in both two and three dimensions. These decompositions use evanescent plane waves, so they can handle small boxes. Nevertheless, they have the same convergence characteristics as the propagating plane wave decomposition. Therefore, these decompositions are also very efficient. Finally, the novel techniques are applied in the full-wave homogenization of various metamaterials.
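    For reference, the propagating plane-wave decomposition referred to above is, in the standard notation of the general MLFMM literature (reproduced here for context, not quoted from this thesis),

        \[
        \frac{e^{ik|\mathbf{X}+\mathbf{d}|}}{4\pi|\mathbf{X}+\mathbf{d}|}
        \approx \frac{ik}{16\pi^2} \int_{S^2}
        e^{ik\,\hat{\mathbf{k}}\cdot\mathbf{d}}\,
        T_L(\hat{\mathbf{k}},\mathbf{X})\,\mathrm{d}^2\hat{\mathbf{k}},
        \qquad
        T_L(\hat{\mathbf{k}},\mathbf{X}) = \sum_{l=0}^{L} i^l (2l+1)\,
        h_l^{(1)}(k|\mathbf{X}|)\, P_l(\hat{\mathbf{k}}\cdot\hat{\mathbf{X}}),
        \]

    valid for |d| < |X|. The low-frequency failure mentioned above is visible in this formula: when the boxes, and hence k|X|, become small relative to the required truncation order L, the Hankel functions h_l^{(1)}(k|X|) grow without bound and the numerical evaluation of the translation operator T_L becomes unstable, which is what the evanescent plane-wave decompositions developed in this work avoid.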

    APPLICATION OF GROUP TESTING FOR ANALYZING NOISY NETWORKS

    My dissertation focuses on developing scalable algorithms for analyzing large complex networks and evaluating how the results alter with changes to the network. Network analysis has become a ubiquitous and very effective tool in big data analysis, particularly for understanding the mechanisms of complex systems that arise in diverse disciplines such as cybersecurity [83], biology [15], sociology [5], and epidemiology [7]. However, data from real-world systems are inherently noisy because they are influenced by fluctuations in experiments, subjective interpretation of data, and limitation of computing resources. Therefore, the corresponding networks are also approximate. This research addresses these issues of obtaining accurate results from large noisy networks efficiently. My dissertation has four main components. The first component consists of developing efficient and scalable algorithms for centrality computations that produce reliable results on noisy networks. Two novel contributions I made in this area are the development of a group testing [16] based algorithm for identification of high centrality vertices, which is substantially faster than current methods, and an algorithm for computing the betweenness centrality of a specific vertex. The second component consists of developing quantitative metrics to measure how different noise models affect the analysis results. We implemented a uniform perturbation model based on random addition/deletion of edges of a network. To quantify the stability of a network we investigated the effect that perturbations have on the top-k ranked vertices and the local structure properties of the top ranked vertices. The third component consists of developing efficient software for network analysis. I have been part of the development of a software package, ESSENS (Extensible, Scalable Software for Evolving NetworkS) [76], that effectively supports our algorithms on large networks. The fourth component is a literature review of the various noise models that researchers have applied to networks and the methods they have used to quantify the stability, sensitivity, robustness, and reliability of networks. These four aspects together will lead to efficient, accurate, and highly scalable algorithms for analyzing noisy networks.
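    The group-testing idea in the first component can be illustrated with a small sketch (illustrative only, not the dissertation's algorithm): rather than scoring every vertex, one tests groups of vertices and recurses only into groups that test positive. The `group_oracle` below is a hypothetical stand-in for such a group test, simulated here from known scores.

        # Adaptive group testing via binary splitting (illustrative sketch).
        def find_positives(items, group_oracle):
            """Return all items in a group that the oracle flags as positive."""
            if not items or not group_oracle(items):
                return []                      # group tests negative: discard all
            if len(items) == 1:
                return items                   # single positive item isolated
            mid = len(items) // 2
            return (find_positives(items[:mid], group_oracle)
                    + find_positives(items[mid:], group_oracle))

        # Demo: vertices 3 and 8 are the hidden high-centrality vertices.
        high = {3, 8}
        oracle = lambda group: any(v in high for v in group)
        print(find_positives(list(range(10)), oracle))  # -> [3, 8]

    With k high-centrality vertices among n, binary splitting needs on the order of k log n group tests instead of n individual centrality evaluations, which is the source of the speedup.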

    Techniques for Managing Grid Vulnerability and Assessing Structure

    As power systems increasingly rely on renewable power sources, generation fluctuations play a greater role in operation. These unpredictable changes shift the system operating point, potentially causing transmission lines to overheat and sag. Any attempt to anticipate line thermal constraint violations due to renewable generation shifts must address the temporal nature of temperature dynamics, as well as changing ambient conditions. An algorithm for assessing vulnerability in an operating environment should also have solution guarantees, and scale well to large systems. A method for quantifying and responding to system vulnerability to renewable generation fluctuations is presented. In contrast to existing methods, the proposed temporal framework captures system changes and line temperature dynamics over time. The non-convex quadratically constrained quadratic program (QCQP) associated with this temporal framework may be reliably solved via a proposed series of transformations. Case studies demonstrate the method's effectiveness for anticipating line temperature constraint violations due to small shifts in renewable generation. The method is also useful for quickly identifying optimal generator dispatch adjustments for cooling an overheated line, making it well-suited for use in power system operation. Development and testing of the temporal deviation scanning method involves time series data and system structure. Time series data are widely available, but publicly available data are often synthesized. Well-known time series analysis techniques are used to assess whether given data are realistic. Bounds from signal processing literature are used to identify, characterize, and isolate the quantization noise that exists in many commonly-used electric load profile datasets. Just as straightforward time series analysis can detect unrealistic data and quantization noise, so graph theory may be employed to identify unrealistic features of transmission networks. A small set of unweighted graph metrics is used on a large set of test networks to reveal unrealistic connectivity patterns in transmission grids. These structural anomalies often arise due to network reduction, and are shown to exist in multiple publicly available test networks. The aforementioned study of system structure suggested a means of improving the performance of algorithms that solve the semidefinite relaxation of the optimal power flow problem (SDP OPF). It is well known that SDP OPF performance improves when the semidefinite constraint is decomposed along the lines of the maximal cliques of the underlying network graph. Further improvement is possible by merging some cliques together, trading off between the number of decomposed constraints and their sizes. Potential for improvement over the existing greedy clique merge algorithm is shown. A comparison of clique merge algorithms demonstrates that approximate problem size may not be the most important consideration when merging cliques. The last subject of interest is the ubiquitous load-tap-changing (LTC) transformer, which regulates voltage in response to changes in generation and load. Unpredictable and significant changes in wind cause LTCs to tap more frequently, reducing their lifetimes. While voltage regulation at renewable sites can resolve this issue for nearby sub-transmission LTCs, upstream transmission-level LTCs must then tap more to offset the reactive power flows that result. 
A simple test network is used to illustrate this trade-off between transmission LTC and sub-transmission LTC tap operations as a function of wind-farm voltage regulation and device setpoints. The trade-off calls for more nuanced voltage regulation policies that balance tap operations between LTCs.
    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155266/1/kersulis_1.pd
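    The clique-merge trade-off described above can be sketched as follows (a toy illustration under assumed parameters, not the dissertation's algorithm): chordal-extend the network graph with NetworkX, enumerate the maximal cliques that decompose the semidefinite constraint, then greedily merge overlapping cliques while the merged block stays below a hypothetical size cap `max_size`.

        # Clique decomposition with greedy merging (illustrative sketch).
        import networkx as nx

        def merged_cliques(G, max_size=4):
            H, _ = nx.complete_to_chordal_graph(G)        # chordal extension
            cliques = [set(c) for c in nx.chordal_graph_cliques(H)]
            merged = True
            while merged:
                merged = False
                for i in range(len(cliques)):
                    for j in range(i + 1, len(cliques)):
                        union = cliques[i] | cliques[j]
                        # merge overlapping cliques while the union stays small
                        if cliques[i] & cliques[j] and len(union) <= max_size:
                            cliques[i] = union
                            del cliques[j]
                            merged = True
                            break
                    if merged:
                        break
            return cliques

        G = nx.cycle_graph(6)      # toy 6-bus ring network
        print(merged_cliques(G))

    Raising `max_size` trades fewer semidefinite blocks against larger ones, which is precisely the balance the comparison of clique-merge algorithms examines.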

    Smart Urban Water Networks

    This book presents, in paper form, the Special Issue (SI) on Smart Urban Water Networks. The number and topics of the papers in the SI confirm the growing interest of operators and researchers in the new paradigm of smart networks, as part of the more general smart city. The SI showed that digital information and communication technology (ICT), with the implementation of smart meters and other digital devices, can significantly improve the modelling and the management of urban water networks, contributing to a radical transformation of the traditional paradigm of water utilities. The paper collection in this SI includes different crucial topics such as the reliability, resilience, and performance of water networks, innovative demand management, and the novel challenge of real-time control and operation, along with their implications for cyber-security. The SI collected fourteen papers that provide a wide perspective of solutions, trends, and challenges in the context of smart urban water networks. Some solutions have already been implemented in pilot sites (i.e., for water network partitioning, cyber-security, and water demand disaggregation and forecasting), while further investigations are required for other methods, e.g., the data-driven approaches for real-time control. In all cases, a new deal between academia, industry, and governments must be embraced to start the new era of smart urban water systems.