197 research outputs found

    Construction of shift invariant m-band tight framelet packets

    Framelets and their promising features in applications have attracted a great deal of interest and effort in recent years. In this paper, we outline a method for constructing shift invariant M-band tight framelet packets by recursively decomposing the multiresolution space V_J, for a fixed scale J, down to level 0 with any combined mask m = [m_0, m_1, ..., m_L] satisfying some mild conditions.
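
    As a rough illustration of the recursive decomposition described above, the sketch below splits a coefficient sequence with an M-band filter bank; shift invariance is approximated by omitting downsampling (an undecimated decomposition), and the three filters are illustrative stand-ins for a combined mask m = [m0, m1, m2] that do not satisfy the paper's tight-frame conditions.

```python
import numpy as np

def decompose(coeffs, masks, levels):
    """Recursively split a coefficient sequence with an M-band filter bank.

    Shift invariance is obtained here by omitting downsampling; the filters
    in `masks` are hypothetical stand-ins for a combined mask m = [m0, ..., mL].
    """
    if levels == 0:
        return [coeffs]
    packets = []
    for m in masks:
        band = np.convolve(coeffs, m, mode="same")   # filter with one mask channel
        packets.extend(decompose(band, masks, levels - 1))
    return packets

# Toy 3-band (L = 2) mask; real framelet masks must satisfy the
# tight-frame (perfect reconstruction) conditions from the paper.
masks = [np.array([1, 1, 1]) / 3.0,    # low-pass m0
         np.array([1, 0, -1]) / 2.0,   # band-pass m1
         np.array([1, -2, 1]) / 4.0]   # high-pass m2
signal = np.random.randn(81)
packets = decompose(signal, masks, levels=2)
print(len(packets))  # 3**2 = 9 packet channels
```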

    Multidimensional Wavelets and Computer Vision

    This report deals with the construction and the mathematical analysis of multidimensional nonseparable wavelets and their efficient application in computer vision. In the first part, the fundamental principles and ideas of multidimensional wavelet filter design, such as the question of the existence of good scaling matrices and sensible design criteria, are presented and extended in various directions. Afterwards, the analytical properties of these wavelets are investigated in some detail. It turns out that they are especially well-suited to represent (discretized) data as well as large classes of operators in a sparse form - a property that directly yields efficient numerical algorithms. The final part of this work is dedicated to the application of the developed methods to the typical computer vision problems of nonlinear image regularization and the computation of optical flow in image sequences. It is demonstrated how the wavelet framework leads to stable and reliable results for these generally ill-posed problems. Furthermore, all the algorithms are of order O(n), leading to fast processing.
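
    The report's nonseparable multidimensional wavelets are not available in standard libraries, so the following is only a generic sketch of the sparsity idea behind wavelet-based image regularization, using the separable 2-D transforms of PyWavelets with soft thresholding; the wavelet name, level and threshold are illustrative assumptions, not the report's construction.

```python
import numpy as np
import pywt  # PyWavelets: separable 2-D wavelets only, unlike the
             # nonseparable constructions developed in the report

def wavelet_regularize(image, wavelet="db2", level=3, threshold=0.1):
    """Simple sparsity-based image regularization via soft thresholding.

    Generic illustration of how a sparse wavelet representation yields
    fast regularization algorithms; not the report's nonseparable scheme.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    shrunk = [approx]
    for (cH, cV, cD) in details:
        shrunk.append(tuple(pywt.threshold(c, threshold, mode="soft")
                            for c in (cH, cV, cD)))
    return pywt.waverec2(shrunk, wavelet)

noisy = np.random.rand(128, 128)
smoothed = wavelet_regularize(noisy)
print(smoothed.shape)
```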

    Combined Industry, Space and Earth Science Data Compression Workshop

    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held on April 4, 1996, in Snowbird, Utah, in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements and the constraints imposed by the data collection, transmission, distribution and archival systems.

    Phasor Parameter Modeling and Time-Synchronized Calculation for Representation of Power System Dynamics

    The electric power grid is undergoing sustained disturbances. In particular, extreme dynamic events disrupt normal electric power transfer, degrade power system operating conditions, and may lead to catastrophic large-scale blackouts. Accordingly, control applications are deployed to detect the inception of extreme dynamic events and mitigate their causes appropriately, so that normal power system operating conditions can be restored. In order to achieve this, the operating conditions of the power system should be accurately characterized in terms of the electrical quantities that are crucial to control applications. Currently, the power system operating conditions are obtained through the SCADA system and the synchrophasor system. Because of its GPS time-synchronized waveform sampling capability and higher measurement reporting rate, the synchrophasor system is better suited to tracking the extreme dynamic operating conditions of the power system. In this Dissertation, a phasor parameter calculation approach is proposed to accurately characterize the power system operating conditions during extreme electromagnetic and electromechanical dynamic events in the electric power grid. First, a framework for phasor parameter calculation during both electromagnetic and electromechanical dynamic events is proposed. The framework aims to satisfy both P-class and M-class PMU algorithm design accuracy requirements with a single algorithm. This is achieved by incorporating an adaptive event classification and algorithm model switching mechanism, followed by phasor parameter definition and calculation tailored to each identified event. Then, a phasor estimation technique is designed for electromagnetic transient events. An ambient fundamental frequency estimator based on the UKF is introduced and leveraged to adaptively tune the DFT-based algorithm to alleviate frequency leakage. A hybridization algorithm framework is also proposed, which further reduces the negative impact caused by decaying DC components in electromagnetic transient waveforms. Next, a phasor estimation technique for electromechanical dynamics is introduced. A novel wavelet is designed to effectively extract time-frequency features from electromechanical dynamic waveforms. These features are then used to classify input signal types, so that the PMU algorithm model can thereafter be tailored to match the underlying signal features of the identified event. This adaptability of the proposed algorithm results in higher phasor parameter estimation accuracy. Finally, the Dissertation hypothesis is validated through experimental testing under design and application test use cases. The associated test procedures, test use cases, and test methodologies and metrics are defined and implemented. The impact of algorithm inaccuracy and communication network distortion on application performance is also demonstrated. The test results are then evaluated. Conclusions, Dissertation contributions, and future steps are outlined at the end.
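
    For context on the phasor calculations discussed above, the following is a minimal, textbook single-cycle DFT phasor estimator; it is only a simplified stand-in for the dissertation's adaptive, event-aware algorithms, and the sampling rate, nominal frequency and test signal are assumptions.

```python
import numpy as np

def dft_phasor(samples, fs, f0=60.0):
    """Estimate the fundamental phasor from one nominal cycle of samples.

    A textbook single-cycle DFT estimator: a simplified stand-in for the
    adaptive algorithms developed in the dissertation.
    """
    n = int(round(fs / f0))            # samples per nominal cycle
    window = samples[-n:]              # most recent cycle
    k = np.arange(n)
    # Correlate with the fundamental; the factor 2/n gives the peak amplitude.
    phasor = (2.0 / n) * np.sum(window * np.exp(-2j * np.pi * k / n))
    magnitude_rms = np.abs(phasor) / np.sqrt(2.0)
    phase_deg = np.degrees(np.angle(phasor))
    return magnitude_rms, phase_deg

fs = 1920.0                            # 32 samples per 60 Hz cycle (illustrative)
t = np.arange(0, 0.1, 1.0 / fs)
signal = 100.0 * np.cos(2 * np.pi * 60.0 * t + np.pi / 6)
print(dft_phasor(signal, fs))          # approximately (70.7, 30.0) for this test signal
```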

    Oil transmissions pipelines condition monitoring using wavelet analysis and ultrasonic techniques

    Proper and sensitive monitoring capability to determine the condition of pipelines is desirable to predict leakage and other failure modes, such as flaws and cracks. Currently, methods used for detecting pipeline damage rely on visual inspection or localized measurements and thus can only be used to detect damage that is on or near the surface of the structure. This thesis offers a reliable, inexpensive and non-destructive technique, based on ultrasonic measurements, to detect faults within carbon steel pipes and to evaluate the severity of these faults. The proposed technique allows inspections in areas where conventionally used inspection techniques are costly and/or difficult to apply. This work started by developing a 3D Finite Element Model (FEM) to describe the dynamic behaviour of ultrasonic wave propagation in the pipe’s structure and to identify the resonance modes. Subsequently, the effects of quantified seeded faults, a 1-mm diameter hole of different depths in the pipe wall, on these resonance modes were examined using the developed model. An experimental test rig was designed and implemented to verify the outcomes of the finite element model. Conventional analysis techniques were applied to detect and evaluate the severity of those quantified faults; however, these signal processing methods were found ineffective for such analysis. Therefore, a more capable signal processing technique, based on the continuous wavelet transform (CWT), was developed. The energy content of certain frequency bands of the CWT was found to be in good agreement with the model-predicted responses and to reveal important information about pipe defects. The developed technique is sensitive to minor structural deficiencies in the pipe and offers a reliable and inexpensive tool for pipeline integrity management programs.
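
    A minimal sketch of the kind of CWT band-energy feature described above, using PyWavelets; the Morlet wavelet, scale range, sampling rate, frequency band and synthetic response are placeholders rather than the values identified in the thesis.

```python
import numpy as np
import pywt

def cwt_band_energy(signal, fs, freq_band, wavelet="morl"):
    """Energy of the continuous wavelet transform in a chosen frequency band.

    Generic sketch: compute the CWT of an ultrasonic response and sum
    |coefficients|^2 over a band assumed to be sensitive to the defects.
    """
    scales = np.arange(1, 128)
    coeffs, freqs = pywt.cwt(signal, scales, wavelet, sampling_period=1.0 / fs)
    mask = (freqs >= freq_band[0]) & (freqs <= freq_band[1])
    return np.sum(np.abs(coeffs[mask, :]) ** 2)

fs = 1.0e6                                   # 1 MHz sampling (illustrative)
t = np.arange(0, 2e-3, 1.0 / fs)
response = np.sin(2 * np.pi * 40e3 * t) * np.exp(-t / 5e-4)   # decaying burst
print(cwt_band_energy(response, fs, freq_band=(30e3, 50e3)))
```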

    Compressive sensing for signal ensembles

    Compressive sensing (CS) is a new approach to simultaneous sensing and compression that enables a potentially large reduction in the sampling and computation costs for acquisition of signals having a sparse or compressible representation in some basis. The CS literature has focused almost exclusively on problems involving single signals in one or two dimensions. However, many important applications involve distributed networks or arrays of sensors. In other applications, the signal is inherently multidimensional and sensed progressively along a subset of its dimensions; examples include hyperspectral imaging and video acquisition. Initial work proposed joint sparsity models for signal ensembles that exploit both intra- and inter-signal correlation structures. Joint sparsity models enable a reduction in the total number of compressive measurements required by CS through the use of specially tailored recovery algorithms. This thesis reviews several different models for sparsity and compressibility of signal ensembles and multidimensional signals and proposes practical CS measurement schemes for these settings. For joint sparsity models, we evaluate the minimum number of measurements required under a recovery algorithm with combinatorial complexity. We also propose a framework for CS that uses a union-of-subspaces signal model. This framework leverages the structure present in certain sparse signals and can exploit both intra- and inter-signal correlations in signal ensembles. We formulate signal recovery algorithms that employ these new models to enable a reduction in the number of measurements required. Additionally, we propose the use of Kronecker product matrices as sparsity or compressibility bases for signal ensembles and multidimensional signals to jointly model all types of correlation present in the signal when each type of correlation can be expressed using sparsity. We compare the performance of standard global measurement ensembles, which act on all of the signal samples; partitioned measurements, which act on a partition of the signal with a given measurement depending only on a piece of the signal; and Kronecker product measurements, which can be implemented in distributed measurement settings. The Kronecker product formulation in the sparsity and measurement settings enables the derivation of analytical bounds for transform coding compression of signal ensembles and multidimensional signals. We also provide new theoretical results for performance of CS recovery when Kronecker product matrices are used, which in turn motivates new design criteria for distributed CS measurement schemes
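
    A small numerical sketch of the Kronecker-product measurement idea described above: a global Kronecker measurement of a 2-D signal equals separate per-dimension (partitioned, distributable) measurements. The matrix sizes and random matrices are illustrative; sparse recovery itself, which would additionally require a sparsifying basis and an l1 or greedy solver, is omitted.

```python
import numpy as np

# Identity exploited by Kronecker-product CS measurements (row-major vec):
#   (Phi1 kron Phi2) vec(X) == vec(Phi1 X Phi2^T)
# so a "global" Kronecker measurement can be implemented as separate
# measurements applied along each dimension of the signal.
rng = np.random.default_rng(0)

n1, n2 = 16, 20          # signal dimensions
m1, m2 = 6, 8            # measurements per dimension
X = rng.standard_normal((n1, n2))          # placeholder multidimensional signal
Phi1 = rng.standard_normal((m1, n1))       # measurement matrix, dimension 1
Phi2 = rng.standard_normal((m2, n2))       # measurement matrix, dimension 2

y_global = np.kron(Phi1, Phi2) @ X.ravel()        # global Kronecker measurement
y_partitioned = (Phi1 @ X @ Phi2.T).ravel()       # per-dimension implementation
print(np.allclose(y_global, y_partitioned))       # True
```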

    Iris Recognition: Robust Processing, Synthesis, Performance Evaluation and Applications

    The popularity of iris biometrics has grown considerably over the past few years, resulting in the development of a large number of new iris processing and encoding algorithms. In this dissertation, we discuss the following aspects of the iris recognition problem: iris image acquisition, iris quality, iris segmentation, iris encoding, performance enhancement and two novel applications. The specific claimed novelties of this dissertation include: (1) a method to generate a large-scale realistic database of iris images; (2) a cross-spectral iris matching method for comparing images in the visible (color) range against images in the Near-Infrared (NIR) range; (3) a method to evaluate iris image and video quality; (4) a robust quality-based iris segmentation method; (5) several approaches to enhance the recognition performance and security of traditional iris encoding techniques; (6) a method to increase the iris capture volume for acquisition of the iris on the move from a distance; and (7) a method to improve the performance of biometric systems using available soft data in the form of links and connections in a relevant social network.

    Seismological data acquisition and signal processing using wavelets

    This work deals with two main fields: a) the design, construction, installation, testing, evaluation, deployment and maintenance of the Seismological Network of Crete (SNC) of the Laboratory of Geophysics and Seismology (LGS) at the Technological Educational Institute (TEI) of Chania; and b) the use of the Wavelet Transform (WT) in several applications during the operation of the aforementioned network. SNC began its operation in 2003. It was designed and built to provide denser network coverage, real-time data transmission to CRC, real-time telemetry, use of wired ADSL lines and dedicated private satellite links, real-time data processing and estimation of source parameters, as well as rapid dissemination of results. All of the above are implemented using commercial hardware and software, modified and, where necessary, supplemented by additional software modules designed and deployed by the author. Up to now (July 2008) SNC has recorded 5500 identified events (around 970 more than those reported by the national bulletin for the same period), and its seismic catalogue is complete for magnitudes over 3.2, whereas the national catalogue was complete only for magnitudes over 3.7 before the operation of SNC. During its operation, several applications at SNC have used the WT as a signal processing tool; these applications benefit from the suitability of the WT for non-stationary signals such as seismic signals. The applications are: the HVSR method, where the WT is used to reveal otherwise undetectable non-stationarities in order to eliminate errors in the estimation of a site’s fundamental frequency; denoising, where several wavelet denoising schemes are compared with the band-pass filtering widely used in seismology, in order to demonstrate the superiority of wavelet denoising and to choose the most appropriate scheme for different signal-to-noise ratios of seismograms; EEWS, where the WT is used to produce magnitude prediction equations and epicentral estimates from the first 5 s of the P-wave arrival; and an alternative analysis tool for detecting significant indicators in temporal patterns of seismicity, where multiresolution wavelet analysis of seismicity is used to estimate (over a period of several years) the time at which the maximum emitted earthquake energy was observed.
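
    A minimal sketch of wavelet denoising compared with conventional band-pass filtering of a seismogram, in the spirit of the comparison described above; the db4 wavelet, universal soft threshold, filter band and synthetic trace are illustrative assumptions, not the schemes selected in the thesis.

```python
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

def wavelet_denoise(trace, wavelet="db4", level=5):
    """Soft-threshold wavelet denoising with the universal threshold."""
    coeffs = pywt.wavedec(trace, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate from finest scale
    thresh = sigma * np.sqrt(2 * np.log(len(trace)))   # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(trace)]

def bandpass(trace, fs, low=1.0, high=10.0, order=4):
    """Conventional Butterworth band-pass filter for comparison."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, trace)

fs = 100.0                                   # 100 Hz seismogram (illustrative)
t = np.arange(0, 60, 1.0 / fs)
trace = np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.random.randn(len(t))
print(np.std(trace - wavelet_denoise(trace)),
      np.std(trace - bandpass(trace, fs)))
```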

    Probabilistic characterization and synthesis of complex driven systems

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2000. Includes bibliographical references (leaves 194-204). Real-world systems that have characteristic input-output patterns but do not provide access to their internal states are as numerous as they are difficult to model. This dissertation introduces a modeling language for estimating and emulating the behavior of such systems given time series data. As a benchmark test, a digital violin is designed by observing the performance of an instrument. Cluster-weighted modeling (CWM), a mixture density estimator built around local models, is presented as a framework for function approximation and for the prediction and characterization of nonlinear time series. The general model architecture and estimation algorithm are presented and extended to system characterization tools such as estimator uncertainty, predictor uncertainty and the correlation dimension of the data set. Furthermore, a real-time implementation, a Hidden Markov architecture, and function approximation under constraints are derived within the framework. CWM is then applied in the context of different problems and data sets, leading to architectures such as cluster-weighted classification, cluster-weighted estimation, and cluster-weighted sampling. Each application relies on a specific data representation, specific pre- and post-processing algorithms, and a specific hybrid of CWM. The third part of this thesis introduces data-driven modeling of acoustic instruments, a novel technique for audio synthesis. CWM is applied along with new sensor technology and various audio representations to estimate models of violin-family instruments. The approach is demonstrated by synthesizing highly accurate violin sounds given off-line input data, as well as cello sounds given real-time input data from a cello player. by Bernd Schoner. Ph.D.
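
    A minimal sketch of the cluster-weighted prediction step described above, a convex combination of local linear models weighted by Gaussian input-domain densities; the cluster parameters below are hypothetical stand-ins for values that CWM would normally fit with EM.

```python
import numpy as np

def cwm_predict(x, weights, means, variances, slopes, intercepts):
    """Cluster-weighted prediction with 1-D Gaussian input domains.

    Implements y(x) = sum_k p(k) p(x|k) (a_k x + b_k) / sum_k p(k) p(x|k),
    the conditional-mean output of a cluster-weighted model.
    """
    px = weights * np.exp(-(x - means) ** 2 / (2 * variances)) \
         / np.sqrt(2 * np.pi * variances)      # cluster responsibilities (unnormalized)
    local = slopes * x + intercepts            # local linear expert predictions
    return np.sum(px * local) / np.sum(px)

# Two hypothetical clusters approximating a piecewise-linear relationship.
weights    = np.array([0.5, 0.5])
means      = np.array([-1.0, 1.0])
variances  = np.array([0.5, 0.5])
slopes     = np.array([2.0, -1.0])
intercepts = np.array([0.0, 3.0])
for x in (-1.5, 0.0, 1.5):
    print(x, cwm_predict(x, weights, means, variances, slopes, intercepts))
```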