
    A convex optimization approach for automated water and energy end use disaggregation

    A detailed knowledge of water consumption at the end-use level is an essential requirement to design and evaluate the efficiency of water saving policies. In recent years, this has led to the development of automated tools to disaggregate high-resolution water consumption data at the household level into end-use categories. In this work, a new disaggregation algorithm is presented. The proposed algorithm is based on the assumption that the disaggregated signals to be identified are piecewise constant over time, and it exploits information on the time of day at which a specific water use event might occur. The disaggregation problem is formulated as a convex optimization problem, whose solution can be efficiently computed through numerical solvers. Specifically, the disaggregation problem is treated as a least-squares error minimization problem, with an additional (convex) penalty term aiming at enforcing the disaggregated signals to be piecewise constant over time. The proposed disaggregation algorithm was initially tested against household electricity data available in the literature. The obtained results look promising, and similar results are expected for water data.
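The objective described in this abstract can be illustrated with a small sketch. This is an illustrative reconstruction, not the authors' code; the penalty weight `lam` and the exact total-variation form of the penalty are assumptions:

```python
# Sketch of the disaggregation objective: least-squares fit to the aggregate
# signal plus a convex total-variation penalty that favours piecewise-constant
# end-use signals. Illustrative only; `lam` and the penalty form are assumed.

def disaggregation_cost(aggregate, end_uses, lam=1.0):
    """aggregate: total readings; end_uses: candidate per-category signals."""
    T = len(aggregate)
    # Least-squares data-fit term: sum_t (y_t - sum_i x_{i,t})^2
    fit = sum((aggregate[t] - sum(x[t] for x in end_uses)) ** 2 for t in range(T))
    # Total-variation penalty: sum_i sum_t |x_{i,t+1} - x_{i,t}|
    tv = sum(abs(x[t + 1] - x[t]) for x in end_uses for t in range(T - 1))
    return fit + lam * tv

aggregate = [5, 5, 8, 8]
flat = [[2, 2, 2, 2], [3, 3, 6, 6]]   # piecewise constant, exact fit
noisy = [[2, 3, 2, 3], [3, 2, 6, 5]]  # exact fit but oscillating
```

A convex solver would search over all candidate decompositions; here the penalty simply ranks the piecewise-constant candidate below the oscillating one with the same data fit.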

    Algorithms for energy disaggregation

    In this project we compare three different solutions to the energy disaggregation problem. We test the algorithms on a reference dataset and try to learn about the requirements and feasibility of energy disaggregation as a potential commercial product.

    Designing Artificial Neural Networks (ANNs) for Electrical Appliance Classification in Smart Energy Distribution Systems

    This project will address the energy consumption disaggregation problem through the design of intelligent systems, based on deep artificial neural networks, which would form part of broader energy management and distribution systems. Throughout the algorithm definition, an adequate computational complexity is sought to allow a subsequent low-cost implementation. Specifically, these systems will carry out the classification process based on the changes caused by the different appliances in the electric current. For the evaluation and comparison of the different proposals, the BLUED database will be used.
    Máster Universitario en Ingeniería Industrial (M141
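The abstract above classifies appliances from changes in the electric current. A minimal event-detection front end for such a system might look as follows; this is a generic sketch, not the project's implementation, and the threshold value is an assumption:

```python
def detect_events(power, threshold=30.0):
    """Return (index, delta) for every step change whose magnitude exceeds
    the threshold; the deltas would feed an appliance classifier."""
    events = []
    for t in range(1, len(power)):
        delta = power[t] - power[t - 1]
        if abs(delta) >= threshold:
            events.append((t, delta))
    return events
```

On a toy trace, a 150 W turn-on and the matching turn-off are reported as two signed events.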

    Non-parametric modeling in non-intrusive load monitoring

    Non-intrusive Load Monitoring (NILM) is an approach to the increasingly important task of residential energy analytics. Transparency of energy resources and consumption habits presents opportunities and benefits at all ends of the energy supply-chain, including the end-user. At present, there is no feasible infrastructure available to monitor individual appliances at a large scale. The goal of NILM is to provide appliance monitoring using only the available aggregate data, side-stepping the need for expensive and intrusive monitoring equipment. The present work showcases two self-contained, fully unsupervised NILM solutions: the first featuring non-parametric mixture models, and the second featuring non-parametric factorial Hidden Markov Models with explicit duration distributions. The present implementation makes use of traditional and novel constraints during inference, showing marked improvement in disaggregation accuracy with very little effect on computational cost, relative to the motivating work. To constitute a complete unsupervised solution, labels are applied to the inferred components using a ResNet-based deep learning architecture. Although this preliminary approach to labelling proves less than satisfactory, it is well-founded and several opportunities for improvement are discussed. Both methods, along with the labelling network, make use of block-filtered data: a steady-state representation that removes transient behaviour and signal noise. A novel filter to achieve this steady-state representation that is both fast and reliable is developed and discussed at length. Finally, an approach to monitor the aggregate for novel events during deployment is developed under the framework of Bayesian surprise. The same non-parametric modelling can be leveraged to examine how the predictive and transitional distributions change given new windows of observations.
This framework is also shown to have potential elsewhere, such as in regularizing models against over-fitting, which is an important problem in existing supervised NILM.
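The block filter itself is not specified in the abstract. One simple way to obtain such a steady-state representation, assumed here purely for illustration, is to accept a new level only after it persists for several samples:

```python
def steady_state(signal, hold=3, tol=5.0):
    """Propagate the last confirmed level; accept a new level only after it
    persists (within tol) for `hold` consecutive samples. `hold` and `tol`
    are illustrative parameters, not values from the thesis."""
    out = [signal[0]]
    candidate, count = signal[0], 1
    for x in signal[1:]:
        if abs(x - candidate) <= tol:
            count += 1
        else:
            candidate, count = x, 1
        out.append(candidate if count >= hold else out[-1])
    return out
```

A one-sample 200 W transient is suppressed, while a sustained 100 W level is kept (with a short confirmation delay).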

    Modelling of Electrical Appliance Signatures for Energy Disaggregation

    The rapid development of technology in the electrical sector over the last 20 years has led to growing electric power needs through the increased number of electrical appliances and the automation of tasks. By contrast, a reduction of overall energy consumption as well as efficient energy management are needed in order to reduce global warming and meet the global climate protection goals. These requirements have led to the recent adoption of smart meters and smart grids, as well as to the rise of Non-Intrusive Load Monitoring. Non-Intrusive Load Monitoring aims to extract the energy consumption of individual electrical appliances by disaggregating the total power consumption as measured by a single smart meter at the inlet of a household. Non-Intrusive Load Monitoring is therefore a highly under-determined problem, which aims to estimate multiple variables from a single observation and thus cannot be solved analytically. In order to find accurate estimates of the unknown variables, three fundamentally different approaches, namely deep learning, pattern matching and single-channel source separation, have been investigated in the literature to solve the Non-Intrusive Load Monitoring problem. While Non-Intrusive Load Monitoring has multiple areas of application, including energy reduction through consumer awareness, load scheduling for energy cost optimization, and reduction of peak demands, the focus of this thesis is on the performance of the disaggregation algorithm, the key part of the Non-Intrusive Load Monitoring architecture. In detail, optimizations are proposed for all three architectures, while the focus lies on deep-learning based approaches. Furthermore, the transferability of the deep-learning based approach is investigated and a NILM-specific transfer architecture is proposed. The main contribution of the thesis is threefold.
First, with Non-Intrusive Load Monitoring being a time-series problem, the incorporation of temporal information is crucial for accurate modelling of the appliance signatures and of the change of signatures over time. Previously published deep-learning architectures have therefore focused on regression models which intrinsically incorporate temporal information. In this work, the idea of incorporating temporal information is extended by modelling temporal patterns of appliances not only in the regression stage, but also in the input feature vector, i.e. by using fractional calculus, feature concatenation or high-frequency double Fourier integral signatures. Additionally, multi-variance matching is utilized for Non-Intrusive Load Monitoring in order to provide additional degrees of freedom for a pattern-matching based solution. Second, with Non-Intrusive Load Monitoring systems expected to operate in real time as well as to be low-cost applications, computational complexity as well as storage limitations must be considered. Therefore, an approximation for frequency-domain features is presented in this thesis in order to reduce computational complexity. Furthermore, the impact of reduced sampling frequencies on disaggregation performance has been evaluated. Additionally, different elastic matching techniques have been compared in order to reduce training times and to utilize models without trainable parameters. Third, in order to fully utilize Non-Intrusive Load Monitoring techniques, accurate transfer models, i.e. models which are trained on one data domain and tested on a different data domain, are needed. In this context it is crucial to transfer time-variant and manufacturer-dependent appliance signatures to manufacturer-invariant signatures, in order to ensure accurate transfer modelling.
Therefore, a transfer learning architecture specifically adapted to the needs of Non-Intrusive Load Monitoring is presented. Overall, this thesis contributes to the topic of Non-Intrusive Load Monitoring by improving the performance of the disaggregation stage, while comparing three fundamentally different approaches to the disaggregation problem.
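Among the elastic matching techniques mentioned, dynamic time warping is a classic example of a matcher with no trainable parameters. A textbook version, not the thesis code, is:

```python
def dtw(a, b):
    """Classic dynamic-time-warping distance between two sequences:
    fill a cumulative cost table allowing stretch/compress alignments."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match step.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Time-stretched copies of the same appliance signature match at zero cost, which is why elastic matching suits signatures whose duration varies between activations.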

    On the 3D electromagnetic quantitative inverse scattering problem: algorithms and regularization

    In this thesis, 3D quantitative microwave imaging algorithms are developed with emphasis on the efficiency of the algorithms and the quality of the reconstruction. First, a fast simulation tool has been implemented which makes use of a volume integral equation (VIE) to solve the forward scattering problem. The resulting linear system is solved iteratively. To do this efficiently, two strategies are combined. First, the matrix-vector multiplications needed in every step of the iterative solution are accelerated using a combination of the Fast Fourier Transform (FFT) method and the Multilevel Fast Multipole Algorithm (MLFMA). It is shown that this hybrid MLFMA-FFT method is most suited for large, sparse scattering problems. Second, the number of iterations is reduced by using an extrapolation technique to determine suitable initial guesses, which are already close to the solution. This technique combines a marching-on-in-source-position scheme with a linear extrapolation over the permittivity under the form of a Born approximation. It is shown that this forward simulator indeed exhibits a better efficiency. The fast forward simulator is incorporated in an optimization technique which minimizes the discrepancy between measured data and simulated data by adjusting the permittivity profile. A Gauss-Newton optimization method with line search is employed in this dissertation to minimize a least-squares data-fit cost function with additional regularization. Two different regularization methods were developed in this research. The first regularization method penalizes strong fluctuations in the permittivity by imposing a smoothing constraint, which is a widely used approach in inverse scattering. However, in this thesis, this constraint is incorporated in a multiplicative way instead of in the usual additive way, i.e. its weight in the cost function is reduced as the data fit improves.
The second regularization method is Value Picking regularization, which is a new method proposed in this dissertation. This regularization is designed to reconstruct piecewise homogeneous permittivity profiles. Such profiles are hard to reconstruct since sharp interfaces between different permittivity regions have to be preserved, while other strong fluctuations need to be suppressed. Instead of operating on the spatial distribution of the permittivity, as certain existing methods for edge preservation do, it imposes the restriction that only a few different permittivity values should appear in the reconstruction. The permittivity values just mentioned do not have to be known in advance, however, and their number is also updated in a stepwise relaxed VP (SRVP) regularization scheme. Both regularization techniques have been incorporated in the Gauss-Newton optimization framework and yield significantly improved reconstruction quality. The efficiency of the minimization algorithm can also be improved. In every step of the iterative optimization, a linear Gauss-Newton update system has to be solved. This typically is a large system and therefore is solved iteratively. However, these systems are ill-conditioned as a result of the ill-posedness of the inverse scattering problem. Fortunately, the aforementioned regularization techniques allow for the use of a subspace preconditioned LSQR method to solve these systems efficiently, as is shown in this thesis. Finally, the incorporation of constraints on the permittivity through a modified line search path, helps to keep the forward problem well-posed and thus the number of forward iterations low. Another contribution of this thesis is the proposal of a new Consistency Inversion (CI) algorithm. 
    It is based on the same principles as another well-known reconstruction algorithm, the Contrast Source Inversion (CSI) method, which considers the contrast currents (equivalent currents that generate a field identical to the scattered field) as fundamental unknowns together with the permittivity. In the CI method, however, the permittivity variables are eliminated from the optimization and are only reconstructed in a final step. This avoids alternating updates of permittivity and contrast currents, which may result in faster convergence. The CI method has also been supplemented with VP regularization, yielding the VPCI method. The quantitative electromagnetic imaging methods developed in this work have been validated on both synthetic and measured data, for both homogeneous and inhomogeneous objects, and yield a high reconstruction quality in all these cases. The successful, completely blind reconstruction of an unknown target from measured data, provided by the Institut Fresnel in Marseille, France, demonstrates at once the validity of the forward scattering code, the performance of the reconstruction algorithm and the quality of the measurements. The reconstruction of a numerical MRI-based breast phantom is encouraging for the further development of biomedical microwave imaging, and of microwave breast cancer screening in particular.
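The Value Picking idea, restricting the reconstruction to a few permittivity values, can be illustrated with a toy penalty. This is a simplified stand-in; the dissertation's actual VP functional and its stepwise relaxation differ:

```python
def vp_penalty(eps, values):
    """Toy value-picking penalty: each reconstructed permittivity pays the
    distance to its nearest picked value, so piecewise-homogeneous profiles
    built from the picked values incur little or no cost."""
    return sum(min(abs(e - v) for v in values) for e in eps)
```

A profile using only the picked values costs nothing, while a stray intermediate value is penalized, which is what steers the reconstruction toward piecewise-homogeneous profiles.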

    Parallel software tool for decomposing and meshing of 3D structures

    An algorithm for the automatic parallel generation of three-dimensional unstructured computational meshes based on geometric domain decomposition is proposed in this paper. A software package built upon the proposed algorithm is described. Several practical examples of mesh generation on multiprocessor computational systems are given. It is shown that the developed parallel algorithm reduces mesh generation time significantly (by dozens of times). Moreover, it readily produces meshes with on the order of 5 · 10⁷ elements, whose construction on a single CPU is problematic. Questions of time consumption, computational efficiency and quality of the generated meshes are also considered.
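The geometric domain decomposition at the heart of such a tool can be sketched by recursive bisection of the bounding box. This is an illustrative scheme; the paper's actual decomposition strategy is not detailed in the abstract:

```python
def decompose(box, n):
    """Recursively bisect an axis-aligned box ((xmin, ymin, zmin),
    (xmax, ymax, zmax)) along its longest edge into n subdomains,
    which could then be meshed independently in parallel."""
    if n == 1:
        return [box]
    lo, hi = box
    axis = max(range(3), key=lambda k: hi[k] - lo[k])  # longest edge
    half = n // 2
    split = lo[axis] + (hi[axis] - lo[axis]) * half / n  # proportional cut
    left_hi = list(hi); left_hi[axis] = split
    right_lo = list(lo); right_lo[axis] = split
    return (decompose((lo, tuple(left_hi)), half)
            + decompose((tuple(right_lo), hi), n - half))
```

Splitting an 8x1x1 slab into four parts yields four equal slices along the long axis, each a candidate subdomain for one processor.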

    Application of General Semi-Infinite Programming to Lapidary Cutting Problems

    We consider a volume maximization problem arising in the gemstone cutting industry. The problem is formulated as a general semi-infinite program (GSIP) and solved using an interior-point method developed by Stein. It is shown that the convexity assumption needed for the convergence of the algorithm can be satisfied by appropriate modelling. Clustering techniques are used to reduce the number of container constraints, which is necessary to make the subproblems practically tractable. An iterative process consisting of GSIP optimization and adaptive refinement steps is then employed to obtain an optimal solution which is also feasible for the original problem. Some numerical results based on real-world data are also presented.
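The clustering step that reduces the number of container constraints can be illustrated with a greedy one-dimensional sketch. This is purely illustrative; the paper's clustering operates on geometric container data, and the radius is an assumed parameter:

```python
def cluster_constraints(points, radius):
    """Greedy clustering: each point joins the first representative within
    `radius`; only the representatives are kept as constraints, so many
    nearby constraints collapse to a few."""
    reps = []
    for p in points:
        if not any(abs(p - r) <= radius for r in reps):
            reps.append(p)
    return reps
```

Five nearby constraint points collapse to two representatives, shrinking the subproblems the GSIP solver must handle.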

    Using the Sharp Operator for edge detection and nonlinear diffusion

    In this paper we investigate the use of the sharp function known from functional analysis in image processing. The sharp function gives a measure of the variations of a function and can be used as an edge detector. We extend the classical notion of the sharp function for measuring anisotropic behaviour and give a fast anisotropic edge detection variant inspired by the sharp function. We show that these edge detection results are useful to steer isotropic and anisotropic nonlinear diffusion filters for image enhancement.
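A discrete one-dimensional analogue of the sharp function, the local mean absolute oscillation, can be sketched as follows. This illustrates the idea of measuring local variation, not the paper's actual operator or its anisotropic extension:

```python
def sharp(f, radius=1):
    """Discrete 1-D analogue of the sharp function: at each index, the mean
    absolute deviation of f from its local mean over a sliding window.
    Large values flag strong local variation, i.e. edges."""
    out = []
    for i in range(len(f)):
        win = f[max(0, i - radius): i + radius + 1]
        mean = sum(win) / len(win)
        out.append(sum(abs(v - mean) for v in win) / len(win))
    return out
```

On a step signal the response peaks at the jump and vanishes on the flat regions, which is exactly the edge-detector behaviour described above.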

    Graph Signal Processing: Overview, Challenges and Applications

    Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing. We then summarize recent developments in basic GSP tools, including methods for sampling, filtering and graph learning. Next, we review progress in several application areas of GSP, including the processing and analysis of sensor network data, biological data, and applications to image processing and machine learning. We finish by providing a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas.
    Comment: To appear, Proceedings of the IEEE
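The connection to conventional DSP can be made concrete with a polynomial graph filter, where the adjacency matrix plays the role of the shift operator. This is a standard GSP construction, sketched here for illustration rather than taken from the paper:

```python
def graph_filter(adj, x, taps):
    """Polynomial graph filter y = sum_k taps[k] * A^k x: the GSP analogue
    of an FIR filter, with the adjacency matrix A as the shift operator."""
    def shift(v):
        # One graph shift: each node sums its neighbours' values.
        return [sum(adj[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
    y = [0.0] * len(x)
    z = list(x)
    for h in taps:
        y = [yi + h * zi for yi, zi in zip(y, z)]
        z = shift(z)
    return y
```

On a 3-node path graph, the filter with taps [0, 1] is a pure shift: an impulse at node 0 moves to its neighbour, node 1, just as a one-tap delay moves an impulse along a classical time axis.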