
    Scalable video/image transmission using rate compatible PUM turbo codes

    The robust delivery of video over emerging wireless networks poses many challenges due to the heterogeneity of access networks, the variations in streaming devices, and the expected variations in network conditions caused by interference and coexistence. The proposed approach exploits the joint optimization of a wavelet-based scalable video/image coding framework and a forward error correction method based on PUM turbo codes. The scheme minimizes the reconstructed image/video distortion at the decoder subject to a constraint on the overall transmission bitrate budget. The minimization is achieved by exploiting a rate optimization technique and the statistics of the transmission channel.
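The constrained minimization described above can be illustrated, at toy scale, by a greedy allocation pass that spends a bit budget on the candidate layers offering the largest distortion reduction per bit. This is a generic rate-distortion sketch under invented numbers, not the paper's PUM-turbo-code optimizer; all layer names and values are hypothetical.

```python
layers = [
    # (name, bits_required, distortion_reduction) -- illustrative values only
    ("base", 100, 50.0),
    ("enh1", 80, 20.0),
    ("enh2", 120, 15.0),
    ("fec_extra", 60, 4.0),
]
budget = 260  # total transmission bit budget

chosen, spent = [], 0
# Take the most "efficient" layers (distortion reduction per bit) that still fit.
for name, bits, gain in sorted(layers, key=lambda l: l[2] / l[1], reverse=True):
    if spent + bits <= budget:
        chosen.append(name)
        spent += bits
print(chosen, spent)  # ['base', 'enh1', 'fec_extra'] 240
```

A real scheme would solve this jointly with the channel statistics; the greedy pass only conveys the budget-constrained trade-off.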

    Compressive sampling of binary images

    Compressive sampling is a novel framework that exploits sparsity of a signal in a transform domain to perform sampling below the Nyquist rate. In this paper, we apply compressive sampling to reduce the sampling rate of binary images. A system is proposed whereby the image is split into non-overlapping blocks of equal size and compressive sampling is performed on selected blocks only using the orthogonal matching pursuit technique. The remaining blocks are sampled fully. This way, the complexity and the required sampling time are reduced since the orthogonal matching pursuit operates on a smaller number of samples, and at the same time local sparsity within an image is exploited. Our simulation results show more than 20% saving in acquisition for several binary images.
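A minimal sketch of the per-block recovery step, assuming a random Gaussian sensing matrix and a toy sparse block; the block length, measurement count, and sparsity level are illustrative, not the paper's configuration:

```python
# Block-wise compressive sampling with orthogonal matching pursuit (OMP)
# recovery, at toy scale. All sizes and values are illustrative assumptions.
import numpy as np

def omp(A, y, n_iter):
    """Greedy OMP: recover a sparse x from measurements y = A @ x."""
    residual = y.astype(float).copy()
    support, x = [], np.zeros(A.shape[1])
    for _ in range(n_iter):
        idx = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(0)
n, m = 64, 32                      # block length, number of measurements (< n)
x_true = np.zeros(n)
x_true[rng.choice(n, 3, replace=False)] = [1.5, 1.0, -0.8]  # 3-sparse block
A = rng.standard_normal((m, n)) / np.sqrt(m)                # sensing matrix
y = A @ x_true                                              # sub-Nyquist samples
x_hat = omp(A, y, n_iter=4)        # one spare iteration for robustness
print(float(np.linalg.norm(x_hat - x_true)))
```

With the support correctly identified, the least-squares refit recovers the block to machine precision despite sampling at half the block length.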

    Measuring the energy intensity of domestic activities from smart meter data

    Household electricity consumption can be broken down to appliance end-use through a variety of methods such as modelling, sub-metering, load disaggregation or non-intrusive appliance load monitoring (NILM). We advance and complement this important field of energy research through an innovative methodology that characterises the energy consumption of domestic life by making the linkages between appliance end-use and activities through an ontology built from qualitative data about the household and NILM data. We use activities as a descriptive term for the common ways households spend their time at home. These activities, such as cooking or laundering, are meaningful to households’ own lived experience. Thus, besides strictly technical algorithmic approaches for processing quantitative smart meter data, we also draw on social science time use approaches and interview and ethnography data. Our method disaggregates a household’s total electricity load down to appliance level and provides the start time, duration, and total electricity consumption for each occurrence of appliance usage. We then make inferences about activities occurring in the home by combining these disaggregated data with an ontology that formally specifies the relationships between electricity-using appliances and activities. We also propose two novel standardised metrics to enable easy quantifiable comparison within and across households of the energy intensity and routine of activities of interest. Finally, we demonstrate our results over a sample of ten households with an in-depth analysis of which activities can be inferred from the qualitative and quantitative data available for each household at any time, and the level of accuracy with which each activity can be inferred, unique to each household.
This work has important applications, from providing meaningful energy feedback to households, to comparing the energy efficiency of households’ daily activities and exploring the potential to shift the timing of activities for demand management.
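The ontology step above can be sketched as a simple appliance-to-activity mapping applied to disaggregated NILM events. The appliance-activity pairs, event tuples, and the per-activity energy total below are invented for illustration; the paper uses a formal ontology and its own standardised metrics.

```python
# Map disaggregated appliance events to activities and total energy per
# activity. All names and numbers are hypothetical placeholders.
from collections import defaultdict

ontology = {                     # appliance -> activity (hypothetical)
    "kettle": "cooking",
    "oven": "cooking",
    "washing_machine": "laundering",
}

# Disaggregated NILM output: (appliance, start_hour, duration_min, energy_kWh)
events = [
    ("kettle", 7, 3, 0.11),
    ("oven", 18, 45, 1.20),
    ("washing_machine", 10, 90, 0.80),
]

energy_by_activity = defaultdict(float)
for appliance, _start, _dur, kwh in events:
    energy_by_activity[ontology[appliance]] += kwh

print({k: round(v, 2) for k, v in energy_by_activity.items()})
# {'cooking': 1.31, 'laundering': 0.8}
```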

    High-accuracy real-time microseismic analysis platform: case study based on the Super-Sauze mud-based landslide

    Understanding the evolution of landslides and other subsurface processes via microseismic monitoring and analysis is of paramount importance in predicting or even avoiding an imminent slope failure (via an early warning system). Microseismic monitoring recordings are often continuous, noisy and consist of signals emitted by various sources. Automated analysis of landslide processes comprises detection, localization and classification of microseismic events (with magnitude < 2 on the Richter scale). Previous research has mainly focused on manually tuning signal processing methods for detecting and classifying microseismic signals based on the signal waveform and its spectrum, which is time-consuming, especially for long-term monitoring and big datasets. This paper proposes an automatic analysis platform that performs event detection and classification, after suitable feature selection, in near real time. The platform is evaluated using seismology data from the Super-Sauze mud-based landslide, which is located in the southwestern French Alps, and features earthquake, slidequake and tremor-type events.
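For context, the classic detection baseline on continuous noisy recordings is an STA/LTA (short-term over long-term average) energy trigger. The sketch below is that standard baseline on a synthetic trace, not the paper's feature-selection and classification pipeline; window lengths and the threshold are illustrative assumptions.

```python
# STA/LTA event trigger on a synthetic noisy trace with one embedded event.
import numpy as np

def sta_lta(trace, nsta, nlta):
    """Trailing short-term / long-term mean-energy ratio per sample."""
    energy = trace.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    idx = np.arange(len(energy))
    sta_lo = np.maximum(idx + 1 - nsta, 0)
    lta_lo = np.maximum(idx + 1 - nlta, 0)
    sta = (csum[idx + 1] - csum[sta_lo]) / (idx + 1 - sta_lo)
    lta = (csum[idx + 1] - csum[lta_lo]) / (idx + 1 - lta_lo)
    return sta / np.maximum(lta, 1e-12)

rng = np.random.default_rng(1)
trace = 0.1 * rng.standard_normal(2000)                        # background noise
trace[1000:1100] += np.sin(np.linspace(0.0, 40 * np.pi, 100))  # synthetic event
ratio = sta_lta(trace, nsta=20, nlta=400)
onset = int(np.argmax(ratio > 5.0))   # first sample where the trigger fires
print(onset)                          # fires within a few samples of 1000
```

Per-event features (duration, dominant frequency, spectral shape) computed around such triggers would then feed the classification stage.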

    Understanding usage patterns of electric kettle and energy saving potential

    The availability of smart metering and smart appliances makes it possible to detect and characterise appliance use in a household, quantify energy savings through efficient appliance use, and predict appliance-specific demand from load measurements. With growing electric kettle ownership and usage, the lack of any efficiency labelling guidelines for the kettle, slow technological progress in improving kettle efficiency relative to other domestic appliances, and current consumer attitudes, urgent investigation into consumer kettle usage patterns is warranted. From an efficiency point of view, little can be done about the electric kettle itself, which is already more efficient than alternative methods of heating water, such as the stove-top kettle. However, since a majority of households use the kettle inefficiently by overfilling, in order to meet energy targets, it is imperative to quantify inefficient usage and predict demand. For the purposes of scalability, we propose tools that depend only on load measurement data for quantifying and visualizing kettle usage and energy consumption, assessing energy wastage through overfilling via our proposed electric kettle model, and predicting kettle-specific demand, from which we can estimate potential energy savings in a household and across a housing stock. This is demonstrated using data from a longitudinal study across a sample of 14 UK households for a two-year period.
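The overfilling waste can be illustrated with a back-of-the-envelope physical model: energy to heat water from tap temperature to boiling, divided by an appliance efficiency. The 85% efficiency, tap temperature, and fill volumes below are illustrative assumptions, not the paper's fitted kettle model parameters.

```python
# Energy cost of boiling water in an electric kettle, and the waste from
# overfilling. All parameter values are illustrative assumptions.
C_WATER = 4186.0  # specific heat of water, J/(kg*K); 1 litre ~ 1 kg

def kettle_energy_kwh(litres, t_start=15.0, t_end=100.0, efficiency=0.85):
    joules = litres * C_WATER * (t_end - t_start) / efficiency
    return joules / 3.6e6  # J -> kWh

needed = kettle_energy_kwh(0.3)  # roughly one mug's worth
boiled = kettle_energy_kwh(1.0)  # a typically overfilled boil
waste = boiled - needed
print(round(needed, 3), round(boiled, 3), round(waste, 3))  # 0.035 0.116 0.081
```

Under these assumptions, each overfilled boil wastes more than twice the energy actually needed, which is what makes the aggregate savings across a housing stock worth quantifying.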

    Trends and challenges in smart metering analytics

    With strong policy support globally, it is expected that the total number of smart energy meters installed worldwide will reach 780 million by 2020, including 200 million in the EU and 30 million in the UK alone. Smart metering can improve grid operation and maintenance of distribution networks through load forecasting, improve demand response measures, and enhance end-user experience through accurate billing and appliance-level energy feedback via Non-Intrusive Load Monitoring (NILM). In this paper, we review trends in smart metering applications and challenges in large-scale adoption, and provide case studies to demonstrate the application of NILM for meaningful energy feedback.

    Compressive video sampling

    Compressive sampling is a novel framework that exploits sparsity of a signal in a transform domain to perform sampling below the Nyquist rate. In this paper, we apply compressive sampling to reduce the sampling rate of images/video. The key idea is to exploit the intra- and inter-frame correlation to improve signal recovery algorithms. The image is split into non-overlapping blocks of fixed size, which are independently compressively sampled exploiting sparsity of natural scenes in the Discrete Cosine Transform (DCT) domain. At the decoder, each block is recovered using useful information extracted from the recovery of a neighboring block. In the case of video, a previous frame is used to help the recovery of consecutive frames. An iterative algorithm for signal recovery with side information, which extends the standard orthogonal matching pursuit (OMP) algorithm, is employed. Simulation results are given for Magnetic Resonance Imaging (MRI) and video sequences to illustrate the advantages of the proposed solution compared to the case when side information is not used.
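The side-information idea can be sketched as OMP warm-started with the support recovered for a neighbouring block (or the previous frame), then continued greedily as usual. The matrix sizes, the stale seed index, and the sparsity budget below are toy assumptions illustrating the principle, not the paper's exact algorithm.

```python
# OMP with side information: seed the greedy search with a neighbouring
# block's support estimate, then continue. Toy sizes and values throughout.
import numpy as np

def omp_side_info(A, y, sparsity, seed_support=()):
    support = list(dict.fromkeys(seed_support))  # warm start, de-duplicated
    x = np.zeros(A.shape[1])
    residual = y.astype(float).copy()
    while True:
        if support:  # least-squares refit on the current support
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            x[:] = 0.0
            x[support] = coef
            residual = y - A @ x
        if len(support) >= sparsity:
            return x
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx in support:           # no new atom to add
            return x
        support.append(idx)

rng = np.random.default_rng(3)
n, m = 48, 24
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 17, 40]] = [1.2, -0.9, 0.7]
y = A @ x_true
# Support from the "previous frame": indices 5 and 17 carry over, 33 is stale.
x_hat = omp_side_info(A, y, sparsity=5, seed_support=(5, 17, 33))
print(float(np.linalg.norm(x_hat - x_true)))
```

Because most of the support is inherited, only the changed atoms need to be searched for, which is what makes the side information pay off on correlated blocks and frames.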

    Near surface full waveform inversion via deep learning for subsurface imaging

    In order to meet increasing safety standards and technological requirements for underground construction, the estimation of Earth models is needed to characterize the subsurface. This can be achieved via near-surface or standard Full-Waveform Inversion (FWI) velocity model building, which reconstructs the Earth model parameters (compressional and shear wave velocities, density) from recordings obtained in the field. The wave function characterizing the Earth model parameters is inherently non-linear, rendering this optimization problem complex. With advances in computational power, including graphics processing unit (GPU) computing, data-driven approaches that solve FWI via Deep Neural Networks (DNN) are increasing in popularity due to their ability to solve the FWI problem accurately. In this paper, we leverage DNN-based FWI applied to field data to demonstrate that, instead of depending on observed data collected from multiple boreholes across a large distance, it is possible to obtain accurate Earth model parameters for areas with varied geotechnical characteristics by using geotechnical data as prior knowledge and constraining the training models according to a single borehole to map the large geological earth cross-section. We also propose a methodology to simulate acoustic recordings indirectly from laboratory tests on soil samples obtained from boreholes, which were analysed for compressive strength of intact rock and Geological Strength Index. Layer geometries and properties for a section totalling 3.0 km are used for simulating 15 2D elastic spaces of 200 m width and 50 m depth, assuming receivers and Ricker-wavelet sources. We adopt a Fully Convolutional Neural Network for Velocity Model Building, previously shown to work well with synthetic data, to generate the 2D predicted Earth model. The results of this study show that the velocity model can be accurately predicted via DNN through appropriate training with minimum demands for borehole data.
The performance is evaluated through metrics focused both on image quality and on velocity values, giving a multifaceted understanding of the model’s true ability to predict the subsurface.
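The dual evaluation above can be sketched with one metric from each family: an image-quality metric (PSNR) and a velocity-value metric (mean absolute error) computed on a predicted versus reference 2D velocity model. The synthetic 200 m by 50 m model and the uniform 25 m/s prediction bias below are placeholder assumptions, not the paper's data or chosen metrics.

```python
# Dual evaluation of a predicted 2D velocity model: PSNR (image quality)
# plus mean absolute error on velocities. Synthetic placeholder models.
import numpy as np

def psnr(ref, pred):
    """Peak signal-to-noise ratio in dB, peak taken from the reference."""
    mse = float(np.mean((ref - pred) ** 2))
    return 10.0 * np.log10(float(ref.max()) ** 2 / mse)

def mae(ref, pred):
    """Mean absolute velocity error, in the model's units (m/s here)."""
    return float(np.mean(np.abs(ref - pred)))

depth, width = 50, 200  # 50 m deep x 200 m wide section at 1 m cells
v_true = 300.0 + 40.0 * np.arange(depth)[:, None] + np.zeros((1, width))
v_pred = v_true + 25.0  # a prediction with a uniform 25 m/s bias
print(round(mae(v_true, v_pred), 1), round(psnr(v_true, v_pred), 2))
# 25.0 39.12
```

The two numbers answer different questions: PSNR reflects how plausible the image looks, while MAE reports the physical velocity error that downstream geotechnical use actually depends on.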