
    Human response to vibration in residential environments (NANR209), technical report 3: calculation of vibration exposure

    Technical Report 3 describes the research undertaken to develop a methodology by which human exposure to vibration in residential environments can be calculated. The work was carried out by the University of Salford, supported by the Department for Environment, Food and Rural Affairs (Defra). The overall aim of the project is to derive exposure-response relationships for human vibration in residential environments. This document focuses in particular on the methods used to calculate vibration exposure from measured vibration signals due to different sources. The main objective of this report is to describe the approaches used for calculating source-specific exposure. Reported here are the findings obtained and a description of the feasibility of the methods used for evaluating exposure for different sources. In addition, an evaluation of the uncertainty related to the exposure calculation is included.
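    The report's specific calculation procedures are not reproduced in this abstract. As a rough illustration of the kind of computation involved, the Python sketch below evaluates one standard exposure metric, the vibration dose value (VDV), defined as the fourth root of the time integral of the fourth power of the frequency-weighted acceleration; the signal, sampling rate and amplitude here are illustrative assumptions, not the report's method.

    import numpy as np

    def vibration_dose_value(a_w, fs):
        # VDV = (integral of a_w(t)^4 dt)^(1/4), units m/s^1.75
        # a_w: frequency-weighted acceleration (m/s^2); fs: sampling rate (Hz)
        return (np.sum(a_w ** 4) / fs) ** 0.25

    # Illustrative signal: 30 s of an 8 Hz vibration sampled at 512 Hz
    fs = 512
    t = np.arange(0, 30, 1 / fs)
    a_w = 0.05 * np.sin(2 * np.pi * 8 * t)
    print(f"VDV = {vibration_dose_value(a_w, fs):.4f} m/s^1.75")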

    Coherence methods in mapping AVO anomalies and predicting P-wave and S-wave impedances

    Filters for migrated offset substacks are designed by partial coherence analysis to predict ‘normal’ amplitude variation with offset (AVO) in an anomaly-free area. The same prediction filters generate localized prediction errors when applied in an AVO-anomalous interval. These prediction errors are quantitatively related to the AVO gradient anomalies, against a background level related to the minimum AVO anomaly detectable from the data. The prediction-error section is thus used to define a reliability threshold for the identification of AVO anomalies. Coherence analysis also enables quality control of AVO analysis and inversion. For example, predictions that are non-localized and/or do not show structural conformity may indicate spatial variations in amplitude–offset scaling, seismic wavelet or signal-to-noise (S/N) ratio. Scaling and waveform variations can be identified from inspection of the prediction filters and their frequency responses. S/N ratios can be estimated via multiple coherence analysis. AVO inversion of seismic data is unstable if not constrained. However, the use of a constraint on the estimated parameters has the undesirable effect of introducing biases into the inverted results: an additional bias-correction step is then needed to retrieve unbiased results. An alternative form of AVO inversion that avoids additional corrections is proposed. This inversion is also fast, as it inverts only AVO anomalies. A spectral coherence matching technique is employed to transform a zero-offset extrapolation or near-offset substack into P-wave impedance. The same technique is applied to the prediction-error section obtained by means of partial coherence, in order to estimate S-wave velocity to P-wave velocity (VS/VP) ratios. Both techniques assume that accurate well ties, reliable density measurements and P-wave and S-wave velocity logs are available, and that impedance contrasts are not too strong. A full Zoeppritz inversion is required when impedance contrasts are too strong. An added assumption is made for the inversion to the VS/VP ratio, namely that the Gassmann fluid-substitution theory is valid within the reservoir area. One synthetic example and one real North Sea in-line survey illustrate the application of the two coherence methods.
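    The abstract does not spell out the filter design, so the sketch below only illustrates the general idea under stated assumptions: a least-squares prediction filter is estimated over an anomaly-free background window relating a near-offset trace to a far-offset trace, and its residual elsewhere serves as a prediction-error trace. The array names, filter length and trace pairing are hypothetical, not the paper's implementation.

    import numpy as np

    def design_prediction_filter(near, far, nfilt):
        # Least-squares FIR filter predicting the far-offset trace from
        # the near-offset trace over a background (anomaly-free) window.
        n = len(near)
        A = np.stack([near[nfilt - 1 - k : n - k] for k in range(nfilt)], axis=1)
        coeffs, *_ = np.linalg.lstsq(A, far[nfilt - 1:], rcond=None)
        return coeffs

    def prediction_error(near, far, coeffs):
        # Residual between the actual far-offset trace and its prediction;
        # localized large residuals would flag candidate AVO anomalies.
        predicted = np.convolve(near, coeffs)[: len(far)]
        return far - predicted

    Applied trace by trace, the residual plays the role of the prediction-error section described above, and the filter coefficients and their frequency responses can be inspected for scaling or wavelet variations.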

    Experimental observation of a strong mean flow induced by internal gravity waves

    We report the experimental observation of a robust horizontal mean flow induced by internal gravity waves. A wave beam is forced at the lateral boundary of a tank filled with a linearly stratified fluid initially at rest. After a transient regime, a strong jet appears in the wave beam, with horizontal recirculations outside the wave beam. We present a simple physical mechanism predicting the growth rate of the mean flow and its initial spatial structure. We find good agreement with experimental results.

    On the use of simple dynamical systems for climate predictions: A Bayesian prediction of the next glacial inception

    Over the last few decades, climate scientists have devoted much effort to the development of large numerical models of the atmosphere and the ocean. While there is no question that such models provide important and useful information on complicated aspects of atmosphere and ocean dynamics, skillful prediction also requires a phenomenological approach, particularly for very slow processes such as glacial-interglacial cycles. Phenomenological models are often represented as low-order dynamical systems. These are tractable and a rich source of insights about climate dynamics, but they also ignore large bodies of information on the climate system, and their parameters are generally not operationally defined. Consequently, if they are to be used to predict actual climate system behaviour, then we must take very careful account of the uncertainty introduced by their limitations. In this paper we consider the problem of the timing of the next glacial inception, about which there is ongoing debate. Our model is the three-dimensional stochastic system of Saltzman and Maasch (1991), and our inference takes place within a Bayesian framework that allows both for the limitations of the model as a description of the propagation of the climate state vector, and for parametric uncertainty. Our inference takes the form of a data assimilation with unknown static parameters, which we perform with a variant on a Sequential Monte Carlo technique (a 'particle filter'). Provisional results indicate peak glacial conditions in 60,000 years. Comment: supersedes arXiv:0809.0632 (which was published in European Reviews). The Bayesian section has been significantly expanded. The present version has gone through scientific peer review and has been published in European Physics Special Topics. (Typo in DOI and in Table 1 (psi -> theta) corrected on 25 August 2009.)
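    The authors' scheme is a variant of Sequential Monte Carlo with unknown static parameters; the sketch below shows only a generic bootstrap particle filter for a stochastic state-space model, not the Saltzman and Maasch setup itself. The transition, likelihood and toy data are placeholders; one common way to handle static parameters is to append them to the state vector.

    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_particle_filter(observations, n, propagate, log_lik, init):
        # Generic sequential importance resampling ("bootstrap") filter.
        particles = init(n)                       # shape (n, state_dim)
        for y in observations:
            particles = propagate(particles)      # sample the transition density
            logw = log_lik(y, particles)          # observation log-density
            w = np.exp(logw - logw.max())
            w /= w.sum()
            particles = particles[rng.choice(n, size=n, p=w)]  # resample
        return particles

    # Toy example: 1-D random walk observed with Gaussian noise
    obs = np.cumsum(rng.normal(size=50)) + rng.normal(scale=0.5, size=50)
    post = bootstrap_particle_filter(
        obs, 1000,
        propagate=lambda x: x + rng.normal(scale=1.0, size=x.shape),
        log_lik=lambda y, x: -0.5 * ((y - x[:, 0]) / 0.5) ** 2,
        init=lambda n: rng.normal(size=(n, 1)),
    )
    print("posterior mean of final state:", post.mean())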

    Data-Driven Forecasting of High-Dimensional Chaotic Systems with Long Short-Term Memory Networks

    We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced-order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) on time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks. Comment: 31 pages.
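    The abstract does not give the network architecture or the reduced-order treatment, so the PyTorch sketch below only illustrates the general approach: one-step-ahead forecasting of a Lorenz 96 trajectory with a small LSTM. The integrator, window length and hyperparameters are illustrative assumptions.

    import numpy as np
    import torch
    import torch.nn as nn

    def lorenz96(n_steps, dim=8, forcing=8.0, dt=0.01):
        # Forward-Euler integration of dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
        x = np.random.default_rng(0).standard_normal(dim)
        out = np.empty((n_steps, dim), dtype=np.float32)
        for t in range(n_steps):
            x = x + dt * ((np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing)
            out[t] = x
        return out

    class Forecaster(nn.Module):
        def __init__(self, dim, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, dim)

        def forward(self, x):              # x: (batch, seq_len, dim)
            h, _ = self.lstm(x)
            return self.head(h[:, -1])     # predict the next state

    data = torch.from_numpy(lorenz96(2000))
    seq = 32
    X = torch.stack([data[i:i + seq] for i in range(len(data) - seq)])
    Y = torch.stack([data[i + seq] for i in range(len(data) - seq)])

    model = Forecaster(dim=8)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(5):                 # a few full-batch steps, for illustration
        loss = nn.functional.mse_loss(model(X), Y)
        opt.zero_grad(); loss.backward(); opt.step()
        print(f"epoch {epoch}: mse {loss.item():.4f}")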

    Online Learning Algorithm for Time Series Forecasting Suitable for Low Cost Wireless Sensor Networks Nodes

    Time series forecasting is an important predictive methodology that can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. With that purpose, this paper describes how to implement an Artificial Neural Network (ANN) algorithm on a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An online learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It trains the model with each new data sample that arrives at the system, without saving enormous quantities of data to create a historical database as usual, i.e., without previous knowledge. To validate the approach, a simulation study using a Bayesian baseline model was carried out and compared against a database from a real application, in order to assess performance and accuracy. The core of the paper is a new algorithm, based on BP, which is described in detail; the challenge was how to implement a computationally demanding algorithm in a simple architecture with very few hardware resources. Comment: 28 pages; published 21 April 2015 in MDPI's journal "Sensors".
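    The algorithm itself is not reproduced in this abstract; the numpy sketch below only illustrates the per-sample, history-free flavour of online back-propagation it describes, with a tiny one-hidden-layer network updated on each new reading. The window size, network size, learning rate and synthetic series are assumptions, not the paper's configuration.

    import numpy as np

    class OnlineMLP:
        # Tiny one-hidden-layer network trained sample by sample (online BP).
        def __init__(self, n_in, n_hidden, lr=0.01, seed=0):
            rng = np.random.default_rng(seed)
            self.W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
            self.b1 = np.zeros(n_hidden)
            self.W2 = rng.normal(scale=0.5, size=n_hidden)
            self.b2 = 0.0
            self.lr = lr

        def predict(self, x):
            self.h = np.tanh(self.W1 @ x + self.b1)   # hidden activations
            return self.W2 @ self.h + self.b2

        def update(self, x, target):
            # One back-propagation step on a single (x, target) pair;
            # nothing is stored, so no historical database is needed.
            err = self.predict(x) - target            # output error
            grad_h = err * self.W2 * (1.0 - self.h ** 2)
            self.W2 -= self.lr * err * self.h
            self.b2 -= self.lr * err
            self.W1 -= self.lr * np.outer(grad_h, x)
            self.b1 -= self.lr * grad_h
            return err

    # Stream of temperature readings: predict the next value from the
    # last 4, updating the model as each new sample arrives.
    net = OnlineMLP(n_in=4, n_hidden=8)
    temps = 20 + 2 * np.sin(np.linspace(0, 20, 500))  # synthetic indoor series
    for t in range(4, len(temps)):
        net.update(temps[t - 4:t], temps[t])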