
    Non-intrusive load monitoring solutions for low- and very low-rate granularity

    Strathclyde thesis no. T15573. Large-scale smart energy metering deployment worldwide and the integration of smart meters within the smart grid are enabling two-way communication between the consumer and the energy network, thus ensuring an improved response to demand. Energy disaggregation, or non-intrusive load monitoring (NILM), namely disaggregation of the total metered electricity consumption down to individual appliances using purely algorithmic tools, is gaining popularity as an added value that makes the most of meter data. In this thesis, the first contribution tackles the low-rate NILM problem by proposing an approach based on graph signal processing (GSP) that does not require any training. Note that low-rate NILM refers to NILM of active power measurements only, at rates from 1 second to 1 minute. Adaptive thresholding, signal clustering and pattern matching are implemented via GSP concepts and applied to the NILM problem. Then, to further demonstrate the potential of GSP, GSP concepts are applied at both the physical signal level, via graph-based filtering, and the data level, via effective semi-supervised GSP-based feature matching. The proposed GSP-based NILM-improving methods are generic and can be used to improve the results of various event-based NILM approaches. NILM solutions for very low data rates (15-60 min) cannot leverage low- to high-rate NILM approaches. Therefore, the third contribution of this thesis comprises three very low-rate load disaggregation solutions, based on: (i) supervised K-nearest neighbours, relying on features such as statistical measures of the energy signal, the time-usage profile of appliances and reactive power consumption (if available); (ii) unsupervised optimisation, minimising the error between the aggregate and the sum of estimated individual loads, where the energy consumed by the always-on load is heuristically estimated prior to further disaggregation and appliance models are built only from manufacturer information; and (iii) GSP, as a variant of the aforementioned GSP-based solution proposed for low-rate load disaggregation, with an additional graph of time-of-day information.
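
    As an illustration of the optimisation-based very low-rate solution (ii), the following is a minimal Python sketch, not the thesis implementation: the always-on load is estimated heuristically from the aggregate, and per-interval appliance energies are chosen to minimise the error between the aggregate and the sum of estimated loads, bounded by manufacturer-rated consumption. The interval length, appliance names and ratings are illustrative assumptions.

import numpy as np
from scipy.optimize import lsq_linear

# Metered aggregate energy per 30-minute interval (kWh); illustrative values.
aggregate = np.array([0.12, 0.35, 1.10, 0.80, 0.15, 0.90])

# Heuristic always-on (base) load: the minimum observed aggregate energy.
always_on = aggregate.min()
residual = aggregate - always_on

# Simple appliance models from "manufacturer information": maximum energy each
# appliance can draw in one interval (kWh). Hypothetical names and ratings.
ratings = {"fridge_cycle": 0.10, "washing_machine": 1.00, "kettle": 0.20}
upper = np.array(list(ratings.values()))

estimates = []
for r in residual:
    # Per interval, minimise ||sum(x) - r||^2 subject to 0 <= x <= rating,
    # i.e. the error between the aggregate (minus base load) and the sum of
    # estimated individual loads. The thesis uses further information (e.g.
    # time-of-day graphs) to resolve the split; this shows only the core idea.
    A = np.ones((1, upper.size))
    sol = lsq_linear(A, np.array([r]), bounds=(np.zeros_like(upper), upper))
    estimates.append(sol.x)

estimates = np.array(estimates)
for i, name in enumerate(ratings):
    print(f"{name}: {np.round(estimates[:, i], 3)}")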

    Calibration Uncertainty for Advanced LIGO's First and Second Observing Runs

    Calibration of the Advanced LIGO detectors is the quantification of the detectors' response to gravitational waves. Gravitational waves incident on the detectors cause phase shifts in the interferometer laser light which are read out as intensity fluctuations at the detector output. Understanding this detector response to gravitational waves is crucial to producing accurate and precise gravitational wave strain data. Estimates of binary black hole and neutron star parameters and tests of general relativity require well-calibrated data, as miscalibrations will lead to biased results. We describe the method of producing calibration uncertainty estimates for both LIGO detectors in the first and second observing runs. Comment: 15 pages, 21 figures, LIGO DCC P160013.
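
    To make the role of calibration uncertainty concrete, here is a short illustrative Python sketch, not the paper's pipeline: calibration error is commonly summarised as frequency-dependent amplitude and phase corrections that multiply the true strain in the frequency domain, so a miscalibration biases the amplitude and phase of a recovered signal (and hence the inferred source parameters). The 2% and 2-degree values and the toy signal below are placeholder assumptions, not results from the paper.

import numpy as np

freqs = np.linspace(20.0, 1024.0, 512)               # Hz
h_true = (freqs / 100.0) ** (-7.0 / 6.0) + 0j        # toy inspiral-like spectrum

delta_amp = 0.02 * np.ones_like(freqs)               # assumed 2% amplitude error
delta_phase = np.deg2rad(2.0) * np.ones_like(freqs)  # assumed 2-degree phase error

# A systematic calibration error multiplies the true strain in the frequency
# domain, so the recovered signal inherits an amplitude and phase bias.
h_meas = (1.0 + delta_amp) * np.exp(1j * delta_phase) * h_true

frac_amp_bias = np.abs(np.abs(h_meas) - np.abs(h_true)) / np.abs(h_true)
print(f"maximum fractional amplitude bias: {frac_amp_bias.max():.3f}")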

    Optimal water meter selection system

    The comparison of the particular accuracy envelope of a water meter with a consumer's diurnal demand pattern by means of a common reference facilitates the optimal selection of water meters. The accuracy curve and envelope of a new water meter are governed by the type of water meter and the relevant standards. Water demand patterns vary with time, period, season, consumer and combinations of these factors. The classical accuracy envelope and demand pattern are not directly comparable and require a common comparison reference. The relative frequency of the volume of water passing through a meter at various flow rates, and the weighted accuracies of these measured volumes, play a pivotal role in establishing such a reference. The time unit selected to calculate the volume of water passing through the meter is guided by the type of water reticulation infrastructure within which the meter is installed. However, experience and the literature show that a flow interval of less than 1 min would result in the application of unrealistically high flow rates. A simplified example of determining the weighted accuracy of a water meter monitoring a theoretical demand pattern illustrates the methodology used to establish the common comparison reference. Economic/financial analysis based on an income statement, together with capital budgeting techniques, assists in determining the financial suitability of investing in a new replacement water meter. This financial analysis includes the various potential income and expenditure components that will result from the installation of a new water meter, and sensitivity analysis facilitates the decision-making process. The analysis of flow data by a computer program developed according to the described methodology shows that the savings achieved by the improved accuracy of matching the optimally selected meter to a particular demand profile can finance the costs of such an investment. WaterSA Vol.27(4) 2001: 481-48
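
    The weighted-accuracy idea behind the common comparison reference can be sketched in a few lines of Python; the flow-rate bands, volume shares and per-band accuracies below are illustrative assumptions, not values from the article.

# Flow-rate bands (L/h), the share of total volume delivered in each band
# (the demand pattern), and the meter's registration accuracy per band.
flow_bands_l_per_h = [(0, 15), (15, 60), (60, 300), (300, 1500)]
volume_share = [0.05, 0.20, 0.55, 0.20]
meter_accuracy = [0.60, 0.95, 0.99, 0.98]

for (lo, hi), v, a in zip(flow_bands_l_per_h, volume_share, meter_accuracy):
    print(f"{lo}-{hi} L/h: {v:.0%} of volume registered at {a:.0%} accuracy")

# Weighted accuracy: each band's accuracy weighted by the volume passing
# through the meter in that band -- the common comparison reference.
weighted_accuracy = sum(v * a for v, a in zip(volume_share, meter_accuracy))
under_registration = 1.0 - weighted_accuracy   # unbilled fraction of the volume

print(f"weighted accuracy: {weighted_accuracy:.3f}")
print(f"apparent loss due to metering error: {under_registration:.1%}")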

    REDUCTION OF GIBBS PHENOMENON IN EOG SIGNAL MEASUREMENT USING THE MODIFIED DIGITAL STOCHASTIC MEASUREMENT METHOD

    The digital stochastic measurement method is based on stochastic analog-to-digital conversion with low-resolution A/D converters and accumulation. It has mainly been tested and used for the measurement of stationary signals. This paper presents, analyses and discusses the development of a simulation model for an example of electrooculography (EOG) signal measurement in the time domain. Tests were carried out without added noise and with added noise at various signal-to-noise ratios. For these signal-to-noise ratios, the mean and maximal relative errors are calculated, and a significant influence of the Gibbs phenomenon is observed. In order to eliminate the Gibbs phenomenon and decrease the measurement error, a modified digital stochastic measurement method with overlapping measurement intervals is developed and applied. On the basis of the obtained results, the possibility of designing and realising an instrument with sufficient accuracy, benefiting from the hardware simplicity of the method, is formulated. An idea for future research, developing a simulation model with a lower sampling frequency and implementing the proposed method, is also outlined.
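
    The core of the digital stochastic measurement idea, adding dither before a very coarse quantiser and then accumulating (averaging) over a measurement interval, can be sketched as follows in Python. This is an illustrative toy, not the paper's simulation model; the test signal, dither range, quantiser resolution and window parameters are assumptions, and overlapping intervals are represented simply by a hop smaller than the window length.

import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                                  # Hz, assumed sampling rate
t = np.arange(0.0, 1.0, 1.0 / fs)
eog = 0.3 * np.sin(2 * np.pi * 2.0 * t)      # toy slow EOG-like waveform (V)

full_scale = 1.0
levels = 4                                   # very coarse (2-bit) quantiser
step = 2 * full_scale / levels

# Uniform dither added before quantisation; averaging many dithered,
# coarsely quantised samples recovers the mean well below one quantiser step.
dither = rng.uniform(-step / 2, step / 2, size=t.size)
quantised = np.round((eog + dither) / step) * step

# Accumulate over measurement intervals; a hop smaller than the window gives
# the overlapping intervals used to suppress edge (Gibbs-like) effects.
window, hop = 100, 50
estimates = [quantised[i:i + window].mean()
             for i in range(0, t.size - window + 1, hop)]
print(np.round(estimates[:5], 3))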

    Optical technology Apollo extension system, phase A, volume 2. Section 3 - Experiments

    Optical propagation in the turbulent atmosphere, optical communication diagnostics, spaceborne heterodyne experiments, and ground support requirements

    User data dissemination concepts for earth resources: Executive summary

    The impact of the future capabilities of earth-resources data sensors (both satellite and airborne) and their requirements on the data dissemination network were investigated, and optimum ways of configuring this network were determined. The scope of this study was limited to the continental U.S.A. (including Alaska) and to the 1985-1995 time period. Some of the conclusions and recommendations reached were: (1) Satellites in sun-synchronous polar orbits (700-920 km) will generate most of the earth-resources data in the specified time period. (2) Data from aircraft and shuttle sorties cannot be readily integrated into a data-dissemination network unless already preprocessed in digitized form to a standard geometric coordinate system. (3) Data transmission between readout stations and central preprocessing facilities, and between processing facilities and user facilities, is most economically performed by domestic communication satellites. (4) The effects of the following factors should be studied: cloud cover, expanded coverage, pricing strategies, and multidiscipline missions.