9 research outputs found

    Phasor Parameter Modeling and Time-Synchronized Calculation for Representation of Power System Dynamics

    The electric power grid is subject to sustained disturbances. In particular, extreme dynamic events disrupt normal electric power transfer, degrade power system operating conditions, and may lead to catastrophic large-scale blackouts. Accordingly, control applications are deployed to detect the inception of extreme dynamic events and mitigate their causes appropriately, so that normal power system operating conditions can be restored. To achieve this, the operating conditions of the power system should be accurately characterized in terms of the electrical quantities that are crucial to control applications. Currently, power system operating conditions are obtained through the SCADA system and the synchrophasor system. Because of its GPS time-synchronized waveform sampling capability and higher measurement reporting rate, the synchrophasor system is better suited to tracking the extreme dynamic operating conditions of the power system. In this Dissertation, a phasor parameter calculation approach is proposed to accurately characterize power system operating conditions during extreme electromagnetic and electromechanical dynamic events in the electric power grid. First, a framework for phasor parameter calculation during both electromagnetic and electromechanical dynamic events is proposed. The framework aims to satisfy both P-class and M-class PMU algorithm design accuracy requirements with a single algorithm. This is achieved by incorporating an adaptive event classification and algorithm model switching mechanism, followed by phasor parameter definition and calculation tailored to each identified event. Then, a phasor estimation technique is designed for electromagnetic transient events. An ambient fundamental frequency estimator based on an unscented Kalman filter (UKF) is introduced and leveraged to adaptively tune the DFT-based algorithm to alleviate spectral leakage. A hybridization algorithm framework is also proposed, which further reduces the negative impact caused by decaying DC components in electromagnetic transient waveforms. Next, a phasor estimation technique for electromechanical dynamics is introduced. A novel wavelet is designed to effectively extract time-frequency features from electromechanical dynamic waveforms. These features are then used to classify input signal types, so that the PMU algorithm modeling can be tailored specifically to match the underlying signal features of the identified event. This adaptability of the proposed algorithm results in higher phasor parameter estimation accuracy. Finally, the Dissertation hypothesis is validated through experimental testing under design and application test use cases. The associated test procedures, test use cases, and test methodologies and metrics are defined and implemented. The impact of algorithm inaccuracy and communication network distortion on application performance is also demonstrated, and the test results are evaluated. Conclusions, Dissertation contributions, and future steps are outlined at the end.
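
For orientation, the sketch below shows the general idea of frequency-adaptive, single-bin DFT phasor estimation that this line of work builds on. It is a minimal illustration, not the dissertation's algorithm: the function, sampling rate, and signal values are all assumed for the example, and the frequency estimate `f_est` stands in for what the dissertation obtains from a UKF-based ambient frequency tracker.

```python
import numpy as np

def dft_phasor(samples, fs, f_est):
    """Single-bin DFT phasor estimate over one cycle at f_est.

    samples : one fundamental cycle of waveform samples
    fs      : sampling rate in Hz
    f_est   : estimated fundamental frequency in Hz (assumed to come
              from a separate ambient-frequency tracker)
    """
    n = np.arange(len(samples))
    # Correlate with a complex exponential at the estimated frequency;
    # tuning f_est toward the true fundamental reduces spectral leakage.
    phasor = np.sqrt(2) / len(samples) * np.sum(
        samples * np.exp(-2j * np.pi * f_est * n / fs))
    return np.abs(phasor), np.angle(phasor)

# Illustrative 60 Hz waveform sampled at 4.8 kHz.
fs, f0 = 4800.0, 60.0
t = np.arange(int(fs / f0)) / fs
x = 100.0 * np.cos(2 * np.pi * f0 * t + 0.5)
mag, ang = dft_phasor(x, fs, f0)   # ~70.7 (RMS magnitude), ~0.5 rad
```

When the input frequency drifts off-nominal, re-tuning `f_est` keeps the correlation window matched to the waveform, which is the leakage-mitigation mechanism the abstract refers to.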

    Optimisation of vibration monitoring nodes in wireless sensor networks

    This PhD research focuses on developing a wireless vibration condition monitoring (CM) node which allows an optimal implementation of advanced signal processing algorithms. Such a node should also meet practical requirements, including high robustness and low investment cost, to achieve predictive maintenance. A number of wireless protocols can be utilised to establish a wireless sensor network (WSN). Protocols like WiFi HaLow, Bluetooth Low Energy (BLE), ZigBee and Thread are more suitable for long-term, non-critical, battery-powered CM nodes as they provide inherent merits such as low cost, self-organising networking, and low power consumption. WirelessHART and ISA100.11a provide more reliable and robust performance, but their solutions are usually more expensive, so they are better suited to strict industrial control applications. Distributed computation uses the limited bandwidth of a wireless network and the battery life of sensor nodes more efficiently, and it has become increasingly popular in wireless CM with the fast development of electronics and wireless technologies in recent years. Therefore, distributed computation is the primary focus of this research, with the aim of developing an advanced sensor node for wireless networks that allows high-performance CM at minimal network traffic and economic cost. On this basis, a ZigBee-based vibration monitoring node is designed for the evaluation of embedded signal processing algorithms. A state-of-the-art Cortex-M4F processor, optimised for implementing complex signal processing algorithms at low power consumption, is employed as the core processor of the wireless sensor node. Envelope analysis is chosen as the main intelligent technique embedded on the node because it is the most effective and general method for characterising impulsive and modulating signatures. Such signatures are commonly found in fault signals generated by key machinery components such as bearings, gears, turbines, and valves. Through a preliminary optimisation of envelope analysis based on the fast Fourier transform (FFT), an envelope spectrum of 2048 points is successfully achieved on a processor with a memory usage of 32 kB. Experimental results show that simulated bearing faults can be clearly identified from the calculated envelope spectrum, while the data throughput requirement is reduced by more than 95% in comparison with raw data transmission. To optimise the performance of the vibration monitoring node, three main techniques have been developed and validated: 1) A new data processing scheme combining three subsequent processing techniques: down-sampling, data frame overlapping, and cascading. On this basis, a frequency resolution of 0.61 Hz in the envelope spectrum is achieved on the same processor. 2) A scheme for selecting the optimal band-pass filter for envelope analysis, in which the computationally demanding fast kurtogram runs on the host computer to select the optimal band-pass filter, while real-time envelope analysis runs on the wireless sensor to extract bearing fault features. Moreover, a frequency band of 16 kHz is analysed, which allows features to be extracted over a wide frequency band covering a broad range of industrial applications. 3) Two new analysis methods, short-time RMS and spectral correlation, proposed for bearing fault diagnosis. They reduce CPU usage by more than a factor of two and consequently achieve much lower power consumption.
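
As a point of reference for the processing chain described above, here is a minimal Python sketch of FFT/Hilbert-based envelope analysis. It is an illustration of the general technique, not the thesis's embedded Cortex-M4F implementation; the band edges, resonance, fault frequency, and signal model are all assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_spectrum(x, fs, band):
    """Envelope spectrum of a vibration signal.

    Band-pass around the resonance excited by bearing impacts, take the
    Hilbert envelope, then an FFT; bearing fault frequencies appear as
    peaks in the resulting spectrum.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, x)))
    env -= env.mean()                         # remove DC before the FFT
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), 1 / fs)
    return freqs, spec

# Synthetic fault: 90 Hz impact train ringing a 3 kHz resonance, plus noise.
fs = 32768
t = np.arange(fs) / fs
impacts = (np.sin(2 * np.pi * 3000 * t)
           * (np.sin(2 * np.pi * 90 * t) > 0.99))
freqs, spec = envelope_spectrum(impacts + 0.1 * np.random.randn(fs),
                                fs, (2000, 4000))
```

Peaks at the assumed fault frequency (90 Hz here) and its harmonics in `spec` indicate the defect; an embedded version of the same chain is what allows the node to transmit a compact spectrum instead of raw samples.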

    Development of temporal phase unwrapping algorithms for depth-resolved measurements using an electronically tuned Ti:Sa laser

    This thesis is concerned with (a) the development of a full-field, multi-axis, phase-contrast wavelength scanning interferometer, using an electronically tuned CW Ti:Sa laser, for depth-resolved measurements in composite materials such as GFRPs, and (b) the development of temporal phase unwrapping algorithms for depth-resolved measurements. Item (a) was part of the ultimate goal of extracting the 3-D, depth-resolved constituent parameters (Young's modulus E, Poisson's ratio ν, etc.) that define the mechanical behaviour of composite materials like GFRPs. Considering the success of OCT as an imaging modality, a wavelength scanning interferometer (WSI) capable of imaging both the intensity and the phase of the interference signal was proposed as the preferred technique to provide the volumetric displacement/strain fields (note that displacement/strain fields are analogous to phase fields, and thus a phase-contrast interferometer is of particular interest in this case). These would then be passed to the VFM to yield the sought parameters, provided the loading scheme is known. As a result, several key pieces of opto-mechanical hardware were developed. First, a multi-channel (x6) tomographic interferometer realised in a Mach-Zehnder arrangement was built. Three of the channels provide the information needed to extract the three orthogonal displacement/strain components, while the other three are complementary and were included in the design to maximise the penetration depth (the sample is illuminated from both sides). Second, a miniature uniaxial (tensile and/or compression) loading machine was designed and built for the introduction of controlled, low-magnitude displacements. Last, a rotation stage for the experimental determination of the sensitivity vectors and the re-registration of the volumetric data from the six channels was also designed and built. Unfortunately, due to the critical failure of the Ti:Sa laser, data collection using the last two items was not possible. However, preliminary results at a single wavelength suggest that these items work as expected. Item (b) involved the development of an optical sensor for the dynamic monitoring of wavenumber changes during a full 100 nm scan. The sensor comprises a set of four wedges in a Fizeau interferometer setup that became part of the multi-axis interferometer (as a seventh channel). Its development became relevant due to the large number of mode hops present during a full scan of the Ti:Sa source. These are associated with the physics of the laser and have the undesirable effect of randomising the signal, thereby preventing successful depth reconstructions. The multi-wedge sensor was designed to provide simultaneously high wavenumber-change resolution and immunity to the large wavenumber jumps of the Ti:Sa. The analysis algorithms for extracting the sought wavenumber changes were based on the 2-D Fourier transform method followed by temporal phase unwrapping. At first, the performance of the sensor was tested against that of a high-end commercial wavemeter for a limited scan of 1 nm. A root mean square (rms) difference in measured wavenumber shift between the two of ∼4 m⁻¹ was achieved, equivalent to an rms wavelength shift error of ∼0.4 pm. Second, by resampling the interference signal and the wavenumber-change axis onto a uniformly sampled k-space, depth resolutions close to the theoretical limits were achieved for scans of up to 37 nm. Access to the full 100 nm range, characterised by wavelength steps down to the picometre level, was achieved by introducing a number of improvements to the original temporal phase unwrapping (TPU) algorithm reported in ref [1], tailored to depth-resolved measurements. These involved the estimation and suppression of intensity background artefacts, improvements to the 2-D Fourier transform phase detection based on a previously developed algorithm in ref [2], and the introduction of two modifications to the original TPU. Both modifications are adaptive and involve signal re-referencing at regular intervals throughout the scan. Their purpose is to compensate for systematic and non-systematic errors owing to a small error in the value of R (a scaling factor applied to the lower-sensitivity wedge phase-change signal, used to unwrap the higher-sensitivity one), or to small changes in R with wavelength due to a possible mismatch in the refractive dispersion curves of the wedges and/or a mismatch in the wedge angles. A hybrid approach combining both methods was proposed and used to analyse the data from each of the four wedges. It was found to give the most robust results of all the techniques considered, with a clear Fourier peak at the expected frequency, significantly reduced spectral artefacts, and identical depth resolutions of 2.2 μm (measured at FWHM) for all four wedges. The ability of the phase unwrapping strategy to resolve the aforementioned issues was demonstrated by measuring the absolute thickness of four fused silica glasses using real experimental data. The results were compared with independent micrometer measurements and showed excellent agreement. Finally, due to the lack of additional experimental data, and in order to validate the proposed temporal phase unwrapping strategy (termed the hybrid approach), a set of simulations closely matching the parameters of the real experimental data set was produced and analysed. The results of this final test confirm that the various fixes included in the hybrid approach were not tuned to the problems of a particular data set but are of a general nature, highlighting the approach's importance for PC-WSI applications involving the processing and analysis of large scans.
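
To make the role of the scaling factor R concrete, the following is a minimal Python sketch of a generic two-sensitivity temporal-phase-unwrapping step, in which a lower-sensitivity (coarse) phase signal unwraps a higher-sensitivity (fine) one. It is not the thesis's full hybrid algorithm, and all names and values are illustrative.

```python
import numpy as np

def unwrap_with_coarse(phi_fine_wrapped, phi_coarse, R):
    """Unwrap a high-sensitivity wrapped phase using a low-sensitivity one.

    phi_fine_wrapped : wrapped phase-change signal of the sensitive wedge
    phi_coarse       : unwrapped phase-change signal of the coarse wedge
    R                : sensitivity ratio mapping coarse phase onto fine;
                       a small error in R is what the re-referencing
                       schemes described above are designed to absorb
    """
    prediction = R * phi_coarse                       # expected fine phase
    k = np.round((prediction - phi_fine_wrapped) / (2 * np.pi))
    return phi_fine_wrapped + 2 * np.pi * k           # restore fringe order

# Illustration: recover a smooth phase ramp from its wrapped version.
rng = np.random.default_rng(1)
true_fine = np.linspace(0.0, 80 * np.pi, 1000)
R = 20.0
phi_coarse = true_fine / R + rng.normal(0.0, 0.01, true_fine.size)
wrapped = np.angle(np.exp(1j * true_fine))            # wrapped to (-pi, pi]
recovered = unwrap_with_coarse(wrapped, phi_coarse, R)
assert np.allclose(recovered, true_fine)
```

The unwrapping succeeds only while the prediction error `R * noise` stays below π, which is why a small bias or drift in R accumulates into fringe-order errors over a long scan and motivates the re-referencing described above.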

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented technological sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms incorporate some form of computational intelligence as part of their core framework for problem solving. These algorithms have the capacity to generalise, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves the mathematical advancement of nonlinear signal processing theory and its applications, extending far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.

    Digital Signal Processing (Second Edition)

    This book provides an account of the mathematical background, computational methods and software engineering associated with digital signal processing. The aim has been to provide the reader with the mathematical methods required for signal analysis, which are then used to develop models and algorithms for processing digital signals, and finally to encourage the reader to design software solutions for Digital Signal Processing (DSP). In this way, the reader is invited to develop a small DSP library that can then be expanded further with a focus on his/her research interests and applications. There are, of course, many excellent books and software systems available in this subject area. However, in many of these publications, the relationship between the mathematical methods associated with signal analysis and the software available for processing data is not always clear. Publications either concentrate on mathematical aspects without focusing on practical programming solutions, or elaborate on the software development of solutions as working ‘black boxes’ without covering the mathematical background and analysis associated with their design. Thus, this book has been written with the aim of giving the reader a technical overview of the mathematics and software associated with the ‘art’ of developing numerical algorithms and designing software solutions for DSP, all of which is built on firm mathematical foundations. For this reason, the work is, by necessity, rather lengthy and covers a wide range of subjects organised into four principal parts. Part I provides the mathematical background for the analysis of signals; Part II considers the computational techniques (principally those associated with linear algebra and the linear eigenvalue problem) required for array processing and associated analysis (error analysis, for example). Part III introduces the reader to the essential elements of software engineering using the C programming language, tailored to those features that are used for developing C functions or modules for building a DSP library. The material associated with Parts I, II and III is then used to build up a DSP system by defining a number of ‘problems’ and then addressing the solutions in terms of presenting an appropriate mathematical model, undertaking the necessary analysis, developing an appropriate algorithm and then coding the solution in C. This material forms the basis for Part IV of this work. In most chapters, a series of tutorial problems is given for the reader to attempt, with answers provided in Appendix A. These problems include theoretical, computational and programming exercises. Part II of this work is relatively long and arguably contains too much material on the computational methods for linear algebra. However, this material and the complementary material on vector and matrix norms form the computational basis for many methods of digital signal processing. Moreover, this important and widely researched subject area forms the foundations not only of digital signal processing and control engineering, for example, but also of numerical analysis in general. The material presented in this book is based on the lecture notes and supplementary material developed by the author for an advanced Masters course ‘Digital Signal Processing’, which was first established at Cranfield University, Bedford, in 1990 and modified when the author moved to De Montfort University, Leicester, in 1994.
The programmes are still operating at these universities, and the material has been used by some 700+ graduates since its establishment and development in the early 1990s. The material was enhanced and developed further when the author moved to the Department of Electronic and Electrical Engineering at Loughborough University in 2003, and it now forms part of the Department’s post-graduate programmes in Communication Systems Engineering. The original Masters programme included a taught component covering a period of six months based on two semesters, each semester being composed of four modules. The material in this work covers the first semester, and its four parts reflect the four modules delivered. The material delivered in the second semester is published as a companion volume to this work, Digital Image Processing (Horwood Publishing, 2005), which covers the mathematical modelling of imaging systems and the techniques that have been developed to process and analyse the data such systems provide. Since the publication of the first edition of this work in 2003, a number of minor changes and some additions have been made. The material on programming and software engineering in Chapters 11 and 12 has been extended, and further solved and supplementary questions are included throughout the text. Nevertheless, it is worth pointing out that, while every effort has been made by the author and publisher to provide a work that is error free, it is inevitable that typing errors and various ‘bugs’ will occur. If so, and in particular if the reader starts to suffer from a lack of comprehension over certain aspects of the material (due to errors or otherwise), then he/she should not assume that there is something wrong with themselves, but with the author.

    River bed sediment surface characterisation using wavelet transform-based methods.

    The primary purpose of this work was to study the morphological change of river-bed sediment surfaces over time using wavelet transform analysis techniques. The wavelet transform is a rapidly developing area of applied mathematics in both science and engineering. As it allows interrogation of the spectral make-up of local signal features, it has superior performance compared to the traditionally used Fourier transform, which provides only signal-averaged spectral information. The main study of this thesis includes the analysis of both synthetically generated sediment surfaces and laboratory experimental sediment bed-surface data. This was undertaken using two-dimensional wavelet transform techniques based on both the discrete and the stationary wavelet transforms. A comprehensive database of surface scans from experimental river-bed sediment surface topographies was included in the study. A novel wavelet-based characterisation measure, the form size distribution (fsd), was developed to quantify the global characteristics of the sediment data. The fsd is based on the distribution of wavelet-based scale-dependent energies. It is argued that this measure will potentially be more useful than the traditionally used particle size distribution (psd), as it is the morphology of the surface, rather than the individual particle sizes, that affects the near-bed flow regime and hence the bed friction characteristics. Amplitude- and scale-dependent thresholding techniques were then studied. It was found that these thresholding techniques could be used to: (1) extract the overall surface structure, and (2) enhance dominant grains and formations of dominant grains within the surfaces. It is shown that assessment of the surface data-sets post-thresholding may allow for the detection of structural changes over time.
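
As a rough illustration of the scale-dependent energy distribution that an fsd-style measure is built on, here is a short Python sketch using a 2-D discrete wavelet transform. The PyWavelets library and the 'db4' wavelet are assumptions made for the example, not necessarily the choices used in the thesis.

```python
import numpy as np
import pywt

def scale_energy_distribution(surface, wavelet="db4", levels=4):
    """Normalised wavelet scale-dependent energies of a surface scan.

    Decompose the surface with a 2-D DWT and report the fraction of
    detail energy at each scale; the distribution of these energies is
    the kind of quantity an fsd-style measure summarises.
    """
    coeffs = pywt.wavedec2(surface, wavelet, level=levels)
    # coeffs[0] is the approximation; coeffs[1:] are (cH, cV, cD) detail
    # tuples per level, ordered from the coarsest to the finest scale.
    energies = np.array([sum(np.sum(d ** 2) for d in detail)
                         for detail in coeffs[1:]])
    return energies / energies.sum()

rng = np.random.default_rng(0)
surface = rng.standard_normal((256, 256))   # stand-in for a bed-surface scan
print(scale_energy_distribution(surface))
```

Plotting these normalised energies against the physical scale of each decomposition level gives a distribution over surface form sizes, analogous in spirit to the fsd described above.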

    Modern Telemetry

    Telemetry draws on knowledge from several disciplines, including electronics, measurement, control, and communication, as well as their combination. These principles therefore need to be studied and understood before telemetry is applied to a given problem. The time spent, however, is often repaid in the form of the data or knowledge that a telemetry system can provide. Telemetry is used in many areas, from military through biomedical to clinical medical applications. The modern approach of creating wireless sensors remotely connected to a central system with artificial intelligence provides many new, sometimes unusual, ways to learn about the behaviour of remote objects. This book presents some up-to-date approaches to solving telemetry problems through new sensor concepts, new wireless transfer and communication techniques, and data collection and processing techniques, as well as several real use-case scenarios describing model examples. Most of the book's chapters deal with real telemetry issues and can be used as cookbooks for the reader's own telemetry-related problems.