
    Condition Monitoring of Power Cables

    A National Grid funded research project at Southampton has investigated possible methodologies for data acquisition, transmission and processing that will facilitate on-line continuous monitoring of partial discharges in high voltage polymeric cable systems. A method that uses only passive components at the measuring points has been developed and is outlined in this paper. More recent work, funded through the EPSRC Supergen V, UK Energy Infrastructure (AMPerES) grant in collaboration with UK electricity network operators, has concentrated on the development of partial discharge data processing techniques that may ultimately allow the health of transmission assets to be assessed continuously and reliably.

    Comparison of alternatives to amplitude thresholding for onset detection of acoustic emission signals

    Acoustic Emission (AE) monitoring can be used to detect the presence of damage as well as determine its location in Structural Health Monitoring (SHM) applications. Information on the time difference of the signal generated by the damage event arriving at different sensors in an array is essential in performing localisation. Currently, this is determined using a fixed threshold, which is particularly prone to errors when not set to an optimal value. This paper presents three new methods for determining the onset of AE signals without the need for a predetermined threshold. The performance of the techniques is evaluated using AE signals generated during fatigue crack growth and compared to the established Akaike Information Criterion (AIC) and fixed-threshold methods. It was found that the 1D location accuracy of the new methods was within the range of <1–7.1% of the monitored region, compared to 2.7% for the AIC method and a range of 1.8–9.4% for the conventional fixed-threshold method at different threshold levels.
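
    The AIC baseline mentioned above is straightforward to implement. Below is a minimal sketch of the standard AIC onset picker; the function name, loop bounds and the synthetic burst used for the demonstration are illustrative choices, not taken from the paper.

```python
import numpy as np

def aic_onset(signal):
    """Return the sample index minimising the AIC picker function.

    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]));
    the minimum marks the transition from pre-trigger noise to the AE burst.
    """
    x = np.asarray(signal, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):              # avoid degenerate variances at the ends
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))

# Synthetic example: pre-trigger noise followed by a decaying burst at sample 500.
rng = np.random.default_rng(0)
noise = 0.05 * rng.standard_normal(500)
burst = np.sin(np.linspace(0.0, 40.0, 300)) * np.exp(-np.linspace(0.0, 5.0, 300))
print(aic_onset(np.concatenate([noise, burst])))   # expected to be close to 500
```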

    Detecting single-trial EEG evoked potential using a wavelet domain linear mixed model: application to error potentials classification

    Objective. The main goal of this work is to develop a model for multi-sensor signals, such as MEG or EEG signals, that accounts for inter-trial variability and is suitable for the corresponding binary classification problems. An important constraint is that the model be simple enough to handle small and unbalanced datasets, as often encountered in BCI-type experiments. Approach. The method involves a linear mixed-effects statistical model, the wavelet transform and spatial filtering, and aims at the characterization of localized discriminant features in multi-sensor signals. After the discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial-channel subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e. discriminant) and background noise, using a very simple Gaussian linear mixed model. Main results. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class covariance matrices are obtained from small sample sizes, and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data, in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. Significance. The combination of a linear mixed model, the wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves on earlier results on similar problems, and the three main ingredients all play an important role.
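
    The final step described above is a Bayes plug-in classifier built from regularised class-covariance estimates. A minimal sketch of a Gaussian plug-in (quadratic-discriminant) classifier with simple shrinkage is given below; all names are illustrative, and the paper's actual mixed-model estimation after wavelet and spatial projection is not reproduced here.

```python
import numpy as np

def fit_plugin_gaussian(X, y, shrinkage=0.1):
    """Estimate per-class means, shrunk covariances and priors from features X (n, d)."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False)
        # simple shrinkage towards a scaled identity to cope with small sample sizes
        cov = (1.0 - shrinkage) * cov + shrinkage * (np.trace(cov) / cov.shape[0]) * np.eye(cov.shape[0])
        params[c] = (mu, cov, len(Xc) / len(X))
    return params

def predict(params, X):
    """Assign each row of X to the class with the highest Gaussian log-posterior."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, cov, prior = params[c]
        diff = X - mu
        _, logdet = np.linalg.slogdet(cov)
        maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        scores.append(-0.5 * (maha + logdet) + np.log(prior))
    return np.array(classes)[np.argmax(np.column_stack(scores), axis=1)]
```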

    Measuring Financial Cash Flow and Term Structure Dynamics

    Financial turbulence is a phenomenon occurring in anti-persistent markets. In contrast, financial crises occur in persistent markets. A relationship can be established between these two extreme phenomena of long-term market dependence and the older financial concept of financial (il)liquidity. The measurement of the degree of market persistence and the measurement of the degree of market liquidity are related. To accomplish the two research objectives of measurement and simulation of different degrees of financial liquidity, I propose to boldly reformulate and reinterpret the classical laws of fluid mechanics into cash flow mechanics. At first this approach may appear contrived and artificial, but the end results of these reformulations and reinterpretations are useful quantifiable financial quantities, which will assist us with the measurement, analysis and proper characterization of modern dynamic financial markets in ways that classical comparative-static financial-economic analyses do not allow. Keywords: Financial Cash Flow, Term Structure.
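
    The abstract distinguishes anti-persistent from persistent markets but does not specify an estimator of persistence here. A classical choice is the rescaled-range (R/S) Hurst exponent (H < 0.5 anti-persistent, H > 0.5 persistent); the sketch below is illustrative, with function names and window sizes of my own choosing rather than anything taken from the paper.

```python
import numpy as np

def hurst_rs(returns, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate the Hurst exponent of a return series by rescaled-range analysis."""
    x = np.asarray(returns, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):        # non-overlapping windows
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())
            r = dev.max() - dev.min()                    # range of cumulative deviations
            s = w.std(ddof=1)                            # window standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)              # H is the slope of log(R/S) vs log(n)
    return slope
```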

    Time domain analysis of switching transient fields in high voltage substations

    Switching operations of circuit breakers and disconnect switches generate transient currents propagating along the substation busbars. At the moment of switching, the busbars temporarily act as antennae radiating transient electromagnetic fields within the substation. The radiated fields may interfere with and disrupt the normal operation of electronic equipment used within the substation for measurement, control and communication purposes. Hence there is a need to fully characterise the substation electromagnetic environment as early as the design stage of substation planning and operation, to ensure safe operation of the electronic equipment. This paper deals with the computation of transient electromagnetic fields due to switching within a high voltage air-insulated substation (AIS) using the finite-difference time-domain (FDTD) method.
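
    The substation problem is three-dimensional, but the leapfrog structure of the FDTD scheme is easiest to see in one dimension. The sketch below shows only a 1-D Yee update loop; the grid size, constants and Gaussian source are illustrative placeholders, not the paper's substation model.

```python
import numpy as np

c0 = 3.0e8                       # free-space speed of light (m/s)
eps0 = 8.854e-12                 # permittivity of free space
mu0 = 4.0e-7 * np.pi             # permeability of free space

nz, nt = 400, 800                # number of spatial cells and time steps
dz = 0.05                        # cell size (m)
dt = dz / (2.0 * c0)             # time step satisfying the Courant stability limit

ez = np.zeros(nz)                # electric field samples on the grid
hy = np.zeros(nz - 1)            # magnetic field samples, staggered by half a cell

for n in range(nt):
    # update H from the spatial difference of E (leapfrog in time)
    hy += dt / (mu0 * dz) * (ez[1:] - ez[:-1])
    # update E from the spatial difference of H (end cells left as crude boundaries)
    ez[1:-1] += dt / (eps0 * dz) * (hy[1:] - hy[:-1])
    # soft Gaussian pulse source standing in for the switching transient
    ez[50] += np.exp(-((n - 60) / 20.0) ** 2)
```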

    Sparse Bayesian mass-mapping with uncertainties: hypothesis testing of structure

    A crucial aspect of mass-mapping via weak lensing is quantification of the uncertainty introduced during the reconstruction process. Properly accounting for these errors has been largely ignored to date. We present results from a new method that reconstructs maximum a posteriori (MAP) convergence maps by formulating an unconstrained Bayesian inference problem with Laplace-type ℓ1-norm sparsity-promoting priors, which we solve via convex optimization. Approaching mass-mapping in this manner allows us to exploit recent developments in probability concentration theory to infer theoretically conservative uncertainties for our MAP reconstructions, without relying on assumptions of Gaussianity. For the first time these methods allow us to perform hypothesis testing of structure, from which it is possible to distinguish between physical objects and artifacts of the reconstruction. Here we present this new formalism and demonstrate the method on illustrative examples, before applying it to two observational datasets of the Abell 520 cluster. In our Bayesian framework it is found that neither Abell 520 dataset can conclusively determine the physicality of individual local massive substructure at significant confidence. However, in both cases the recovered MAP estimators are consistent with both sets of data.
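
    The MAP reconstruction described above amounts to an l1-regularised convex problem. One standard solver for problems of this form is iterative soft-thresholding (ISTA), sketched below for min_x 0.5*||y - A x||^2 + lam*||x||_1; the operator A and data are placeholders, and the paper's lensing forward model, wavelet dictionary and uncertainty quantification are not reproduced here.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||y - A x||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)               # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Tiny example: recover a sparse vector from noisy random projections.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[[10, 77, 150]] = [3.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.flatnonzero(np.abs(ista(A, y, lam=0.5)) > 0.5))   # should match the true support
```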

    Revisiting the Local Scaling Hypothesis in Stably Stratified Atmospheric Boundary Layer Turbulence: an Integration of Field and Laboratory Measurements with Large-eddy Simulations

    The 'local scaling' hypothesis, first introduced by Nieuwstadt two decades ago, describes the turbulence structure of stable boundary layers in a very succinct way and is an integral part of numerous local closure-based numerical weather prediction models. However, the validity of this hypothesis under very stable conditions is a subject of ongoing debate. In this work, we attempt to address this controversial issue by performing extensive analyses of turbulence data from several field campaigns, wind-tunnel experiments and large-eddy simulations. A wide range of stabilities, diverse field conditions and a comprehensive set of turbulence statistics make this study distinct.

    Detecting Baryon Acoustic Oscillations

    Baryon Acoustic Oscillations (BAOs) are a feature imprinted in the galaxy distribution by acoustic waves traveling in the plasma of the early universe. Their detection at the expected scale in large-scale structure strongly supports current cosmological models with a nearly linear evolution from redshift approximately 1000, and the existence of dark energy. In addition, BAOs provide a standard ruler for studying cosmic expansion. In this paper we focus on methods for BAO detection using the correlation function measurement. For each method, we want to understand the tested hypothesis (the null hypothesis H0 to be rejected) and the underlying assumptions. We first present wavelet methods, which are mildly model-dependent and mostly sensitive to the BAO feature. Then we turn to fully model-dependent methods. We present the most often used method, based on the chi^2 statistic, but we find it has limitations. In general the assumptions of the chi^2 method are not verified, and it only gives a rough estimate of the significance. The estimate can become very inaccurate when considering more realistic hypotheses, where the covariance matrix of the measurement depends on cosmological parameters. Instead we propose a new method based on two modifications: we modify the procedure for computing the significance and make it rigorous, and we modify the statistic to obtain better results in the case of a varying covariance matrix. We verify with simulations that the correct significances differ from those obtained using the classical chi^2 procedure. We also test a simple example of a varying covariance matrix. In this case we find that our modified statistic outperforms the classical chi^2 statistic when both significances are correctly computed. Finally, we find that taking into account variations of the covariance matrix can change both BAO detection levels and cosmological parameter constraints.
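
    The chi^2 recipe that the abstract critiques compares the best fit of a model containing the BAO feature against a no-BAO model and converts the chi^2 difference into a significance. A naive sketch with a fixed covariance matrix is shown below; the model vectors and data are placeholders, and the paper's point is precisely that this simple recipe can mis-state the significance when the covariance depends on cosmological parameters.

```python
import numpy as np
from scipy import stats

def chi2_stat(data, model, cov_inv):
    """chi^2 of a model vector against the measured correlation function."""
    r = data - model
    return float(r @ cov_inv @ r)

def detection_significance(data, model_bao, model_nobao, cov):
    """Naive Delta-chi^2 significance of the BAO feature (one extra degree of freedom)."""
    cov_inv = np.linalg.inv(cov)
    delta = chi2_stat(data, model_nobao, cov_inv) - chi2_stat(data, model_bao, cov_inv)
    p_value = stats.chi2.sf(delta, df=1)       # survival function of chi^2 with 1 dof
    return stats.norm.isf(p_value / 2.0)       # equivalent two-sided Gaussian "sigma"
```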

    Delay-Coordinates Embeddings as a Data Mining Tool for Denoising Speech Signals

    In this paper we utilize techniques from the theory of non-linear dynamical systems to define a notion of embedding threshold estimators. More specifically, we use delay-coordinates embeddings of sets of coefficients of the measured signal (in some chosen frame) as a data mining tool to separate structures that are likely to be generated by signals belonging to some predetermined data set. We describe a particular variation of the embedding threshold estimator implemented in a windowed Fourier frame, and we apply it to speech signals heavily corrupted with the addition of several types of white noise. Our experimental work suggests that, after training on the data sets of interest, these estimators perform well for a variety of white noise processes and noise intensity levels. The method is compared, for the case of Gaussian white noise, to a block thresholding estimator.
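
    The core construction above is the delay-coordinates embedding of a coefficient sequence. A minimal sketch of building the embedded point cloud is given below; the embedding dimension, delay and example signal are illustrative, and the estimator's trained thresholding rule is not reproduced.

```python
import numpy as np

def delay_embed(x, dim=3, delay=2):
    """Map a 1-D sequence to points (x[t], x[t+delay], ..., x[t+(dim-1)*delay])."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * delay
    if n <= 0:
        raise ValueError("sequence too short for the requested embedding")
    return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])

# Example: embed the magnitudes of one windowed-Fourier coefficient track.
rng = np.random.default_rng(0)
coeffs = np.abs(np.sin(np.linspace(0.0, 20.0, 200)) + 0.1 * rng.standard_normal(200))
cloud = delay_embed(coeffs, dim=3, delay=2)
print(cloud.shape)                              # (196, 3)
```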