737 research outputs found

    On-Line Learning and Wavelet-Based Feature Extraction Methodology for Process Monitoring using High-Dimensional Functional Data

    Get PDF
    Recent advances in information technology, such as automatic data acquisition systems and sensor systems, have created tremendous opportunities for collecting valuable process data. The timely processing of such data into meaningful information remains a challenge. In this research, several data mining methodologies that aid information streaming of high-dimensional functional data are developed. For on-line implementations, two weighting functions for updating support vector regression parameters were developed. The functions use parameters that can be easily set a priori with only slight knowledge of the data involved, and they have provision for lower and upper bounds on the parameters. The functions are applicable to time series predictions, on-line predictions, and batch predictions. In order to apply these functions to on-line predictions, a new on-line support vector regression algorithm that uses adaptive weighting parameters is presented. The new algorithm uses a varying rather than a fixed regularization constant and accuracy parameter. The developed algorithm is more robust to the volume of data available for on-line training, as well as to the relative position of the available data in the training sequence. It improves prediction accuracy by reducing the uncertainty of using fixed values for the regression parameters, and of using parameter values based on experts' knowledge rather than on the characteristics of the incoming training data. The developed functions and algorithm were applied to feedwater flow rate data and two benchmark time series datasets. The results show that adaptive regression parameters perform better than fixed regression parameters.

    In order to reduce the dimension of data with several hundreds or thousands of predictors and enhance prediction accuracy, a wavelet-based feature extraction procedure, called the step-down thresholding procedure, was developed for identifying and extracting significant features from a single curve. The procedure involves transforming the original spectra into wavelet coefficients. It is based on a multiple hypothesis testing approach and controls the family-wise error rate in order to guard against selecting insignificant features, without requiring any assumption about the amount of noise present in the data. The procedure is therefore applicable to data reduction and/or data denoising. It was compared to six other data-reduction and data-denoising methods from the literature, and it is found to consistently perform better than most of the popular methods and at the same level as the others.

    Many real-world data with high-dimensional explanatory variables also have multiple response variables; therefore, selecting the fewest explanatory variables that show high sensitivity to predicting the response variable(s) and low sensitivity to the noise in the data is important for better performance and reduced computational burden. In order to select the fewest explanatory variables that predict each of the response variables well, a two-stage wavelet-based feature extraction procedure is proposed. The first stage uses the step-down procedure to extract significant features from each curve; representative features are then selected from the extracted features across all curves using a voting selection strategy. Other selection strategies, such as union and intersection, are also described and implemented. The essence of the first stage is to reduce the dimension of the data without any consideration of whether the features can predict the response variables accurately. The second stage uses a Bayesian decision theory approach to select those extracted wavelet coefficients that predict each of the response variables accurately. The two-stage procedure was implemented using near-infrared spectroscopy data and shaft misalignment data. The results show that the second stage further reduces the dimension, and the prediction results are encouraging.
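To make the step-down thresholding idea above concrete, here is a minimal sketch of wavelet-coefficient selection via a Holm-style step-down test that controls the family-wise error rate. The wavelet choice, the MAD noise estimate, and the Gaussian null model are illustrative assumptions, not the exact procedure developed in this research.

```python
# Sketch: step-down (Holm-style) thresholding of wavelet coefficients.
import numpy as np
import pywt
from scipy.stats import norm

def step_down_threshold(signal, wavelet="db4", level=4, alpha=0.05):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise scale from the finest detail coefficients (MAD).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    kept = [coeffs[0]]  # always keep the approximation coefficients
    for d in coeffs[1:]:
        # Two-sided p-value for each coefficient under H0: coefficient = 0.
        p = 2.0 * norm.sf(np.abs(d) / sigma)
        order = np.argsort(p)
        m = len(p)
        significant = np.zeros(m, dtype=bool)
        # Holm step-down: reject while the k-th smallest p <= alpha / (m - k).
        for k, idx in enumerate(order):
            if p[idx] <= alpha / (m - k):
                significant[idx] = True
            else:
                break
        kept.append(np.where(significant, d, 0.0))
    return pywt.waverec(kept, wavelet)
```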

    Unsupervised multi-scale change detection from SAR imagery for monitoring natural and anthropogenic disasters

    Get PDF
    Thesis (Ph.D.), University of Alaska Fairbanks, 2017.

    Radar remote sensing can play a critical role in operational monitoring of natural and anthropogenic disasters. Despite its all-weather capabilities and its high performance in mapping and monitoring change, the application of radar remote sensing in operational monitoring activities has been limited. This has largely been due to: (1) the historically high costs associated with obtaining radar data; (2) slow data processing and delivery procedures; and (3) the limited temporal sampling provided by spaceborne radar satellites. Recent advances in the capabilities of spaceborne Synthetic Aperture Radar (SAR) sensors have created an environment that now allows SAR to make significant contributions to disaster monitoring. New SAR processing strategies that can take full advantage of these new sensor capabilities are currently being developed. Hence, with this PhD dissertation, I aim to: (i) investigate unsupervised change detection techniques that can reliably extract signatures from time series of SAR images and provide the necessary flexibility for application to a variety of natural and anthropogenic hazard situations; (ii) investigate effective methods to reduce the effects of speckle and other noise on change detection performance; (iii) automate change detection algorithms using probabilistic Bayesian inferencing; and (iv) ensure that the developed technology is applicable to current and future SAR sensors to maximize temporal sampling of a hazardous event. This is achieved by developing new algorithms that rely on image amplitude information only, the sole image parameter that is available for every SAR acquisition.

    The motivation and implementation of the change detection concept are described in detail in Chapter 3. In the same chapter, I demonstrate the technique's performance using synthetic data as well as a real-data application to mapping wildfire progression. I applied Radiometric Terrain Correction (RTC) to the data to increase the sampling frequency, while the developed multiscale-driven approach reliably identified changes embedded in largely stationary background scenes. With this technique, I was able to identify the extent of burn scars with high accuracy. I then applied the change detection technology to oil spill mapping. The analysis highlights that the approach described in Chapter 3 can be applied to this drastically different change detection problem with only minor modification. While the core of the change detection technique remained unchanged, I modified the pre-processing step to enable change detection from scenes with continuously varying backgrounds. I introduced the Lipschitz regularity (LR) transformation as a technique to normalize the typically dynamic ocean surface, facilitating high-performance oil spill detection independent of environmental conditions during image acquisition. For instance, I show that LR processing reduces the sensitivity of change detection performance to variations in surface winds, a known limitation of oil spill detection from SAR. Finally, I applied the change detection technique to aufeis flood mapping along the Sagavanirktok River. Due to the complex nature of aufeis-flooded areas, I substituted the resolution-preserving speckle filter used in Chapter 3 with curvelet filters. In addition to validating the performance of the change detection results, I also provide evidence of the wealth of information that can be extracted about aufeis flooding events once a time series of change detection information has been extracted from SAR imagery. A summary of the developed change detection techniques and suggested future work are presented in Chapter 6.
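As a concrete illustration of amplitude-only change detection, the sketch below compares two co-registered SAR acquisitions with the standard log-ratio operator and a simple tail threshold. The array names and the Gaussian standardization are assumptions; the dissertation's multi-scale, Bayesian machinery is far richer than this stand-in.

```python
# Sketch: log-ratio change detection between two SAR amplitude images.
import numpy as np

def log_ratio_change_map(img_before, img_after, eps=1e-6):
    # The log-ratio suppresses multiplicative speckle common to both images.
    lr = np.log((img_after + eps) / (img_before + eps))
    # Standardize, then flag pixels in the tails as changed (a simple
    # stand-in for probabilistic Bayesian inferencing over change classes).
    z = (lr - lr.mean()) / lr.std()
    return np.abs(z) > 2.0  # boolean change mask
```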

    Multispectral texture synthesis

    Get PDF
    Synthesizing texture involves ordering pixels in a 2D arrangement so as to display certain known spatial correlations, generally as described by a sample texture. In an abstract sense, these pixels could be gray-scale values, RGB color values, or entire spectral curves. The focus of this work is to develop a practical synthesis framework that maintains this abstract view while synthesizing texture with high spectral dimension, effectively achieving spectral invariance. The principal idea is to use a single monochrome texture synthesis step to capture the spatial information in a multispectral texture. The first step is to use a global color space transform to condense the spatial information in a sample texture into a principal luminance channel. Then, a monochrome texture synthesis step generates the corresponding principal band in the synthetic texture. This spatial information is then used to condition the generation of spectral information. A number of variants of this general approach are introduced. The first uses a multiresolution transform to decompose the spatial information in the principal band into an equivalent scale/space representation. This information is encapsulated into a set of low-order statistical constraints that are used to iteratively coerce white noise into the desired texture. The residual spectral information is then generated using a non-parametric Markov random field (MRF) model. The remaining variants use a non-parametric MRF to generate the spatial and spectral components simultaneously. In this approach, multispectral texture is grown from a seed region by sampling from the set of nearest neighbours in the sample texture, as identified by a template matching procedure in the principal band. The effectiveness of both algorithms is demonstrated on a number of texture examples ranging from greyscale to RGB textures, as well as 16-, 22-, 32- and 63-band spectral images. In addition to the standard visual test that predominates in the literature, effort is made to quantify the accuracy of the synthesis using informative and effective metrics, including first- and second-order statistical comparisons as well as statistical divergence tests.
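A minimal sketch of the first step described above, assuming a global PCA transform stands in for the color space transform that condenses a multispectral sample into a principal luminance band (array names and shapes are illustrative):

```python
# Sketch: extract a principal luminance band from a multispectral texture.
import numpy as np

def principal_band(texture):
    """texture: (H, W, B) multispectral sample; returns an (H, W) band."""
    h, w, b = texture.shape
    flat = texture.reshape(-1, b).astype(float)
    flat -= flat.mean(axis=0)          # center each spectral band
    # Eigen-decomposition of the band covariance; the first component
    # carries most of the spatial (luminance-like) variation.
    cov = np.cov(flat, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    first = eigvecs[:, -1]             # eigenvector of the largest eigenvalue
    return (flat @ first).reshape(h, w)
```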

    Bayesian Image Analysis in Fourier Space

    Full text link
    Bayesian image analysis has played a large role over the last 40+ years in solving problems in image noise-reduction, de-blurring, feature enhancement, and object detection. However, these problems can be complex and lead to computational difficulties, due to the modeled interdependence between spatial locations. The Bayesian image analysis in Fourier space (BIFS) approach proposed here reformulates the conventional Bayesian image analysis paradigm for continuous valued images as a large set of independent (but heterogeneous) processes over Fourier space. The original high-dimensional estimation problem in image space is thereby broken down into (trivially parallelizable) independent one-dimensional problems in Fourier space. The BIFS approach leads to easy model specification with fast and direct computation, a wide range of possible prior characteristics, easy modeling of isotropy into the prior, and models that are effectively invariant to changes in image resolution.
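The following sketch conveys the flavour of the BIFS idea: each Fourier coefficient of a noisy image is shrunk independently under a conjugate Gaussian model. The isotropic, frequency-decaying prior scale is an illustrative assumption, not the priors proposed in the paper.

```python
# Sketch: per-frequency Bayesian shrinkage in Fourier space (BIFS flavour).
import numpy as np

def bifs_denoise(image, noise_var=1.0, prior_strength=10.0):
    h, w = image.shape
    F = np.fft.fft2(image)
    # Isotropic prior variance that decays with spatial frequency.
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx**2 + fy**2) + 1e-3
    prior_var = prior_strength / radius**2
    # Conjugate Gaussian posterior mean: shrink each coefficient
    # independently -- a trivially parallelizable 1-D problem per frequency.
    shrink = prior_var / (prior_var + noise_var)
    return np.real(np.fft.ifft2(shrink * F))
```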

    Seismic Ray Impedance Inversion

    Get PDF
    This thesis investigates a prestack seismic inversion scheme implemented in the ray-parameter domain. Conventionally, most prestack seismic inversion methods are performed in the incidence-angle domain. However, inversion using the concept of ray impedance, as it honours ray path variation following the elastic parameter variation according to Snell's law, shows the capacity to discriminate different lithologies when compared to conventional elastic impedance inversion. The procedure starts with data transformation into the ray-parameter domain and then implements the ray impedance inversion along constant-ray-parameter profiles. For different constant-ray-parameter profiles, mixed-phase wavelets are initially estimated based on the high-order statistics of the data and further refined after a proper well-to-seismic tie. With the estimated wavelets ready, a Cauchy inversion method is used to invert for seismic reflectivity sequences, aiming at recovering seismic reflectivity sequences for blocky impedance inversion. The impedance inversion from reflectivity sequences adopts a standard generalised linear inversion scheme, whose results are used to identify rock properties and facilitate quantitative interpretation. It is also demonstrated that elastic parameters can be further inverted from ray impedance values, without eliminating an extra density term or introducing a Gardner relation to absorb this term. Ray impedance inversion is extended to P-S converted waves by introducing the definition of converted-wave ray impedance. This quantity shows some advantages in connecting prestack converted-wave data with well logs, compared with the shear-wave elastic impedance derived from the Aki and Richards approximation to the Zoeppritz equations. An analysis of P-P and P-S wave data under the framework of ray impedance is conducted on a real multicomponent dataset, which can reduce the uncertainty in lithology identification.

    Inversion is the key method in generating the examples throughout this thesis, as we believe it renders robust solutions to geophysical problems. Apart from the reflectivity sequence, ray impedance and elastic parameter inversions mentioned above, inversion methods are also adopted in transforming the prestack data from the offset domain to the ray-parameter domain, in mixed-phase wavelet estimation, and in the registration of P-P and P-S waves for joint analysis. The ray impedance inversion methods are successfully applied to different types of datasets. For each individual step towards ray impedance inversion, the advantages, disadvantages and limitations of the algorithms adopted are detailed. In conclusion, the ray-impedance-related analyses demonstrated in this thesis compare favourably with classical elastic impedance methods, and the author recommends them for wider application.
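For readers unfamiliar with the reflectivity-to-impedance step mentioned above, the standard blocky recursion is I_{k+1} = I_k (1 + r_k) / (1 - r_k). A minimal sketch follows; the starting impedance and the reflectivity array are illustrative inputs, and the thesis couples this step with Cauchy-regularized reflectivity inversion.

```python
# Sketch: recursive impedance profile from a reflectivity sequence.
import numpy as np

def impedance_from_reflectivity(r, imp0=1.0):
    """r: reflectivity sequence with |r_k| < 1; imp0: starting impedance."""
    imp = np.empty(len(r) + 1)
    imp[0] = imp0
    for k, rk in enumerate(r):
        # Normal-incidence relation r_k = (I_{k+1} - I_k) / (I_{k+1} + I_k),
        # rearranged to march the impedance downward layer by layer.
        imp[k + 1] = imp[k] * (1.0 + rk) / (1.0 - rk)
    return imp
```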

    Noise-Enhanced and Human Visual System-Driven Image Processing: Algorithms and Performance Limits

    Get PDF
    This dissertation investigates image processing based on stochastic resonance (SR) noise and human visual system (HVS) properties. Several novel frameworks and algorithms for object detection in images, image enhancement and image segmentation are developed, as well as a method to estimate the performance limit of image segmentation algorithms. Object detection in images is a fundamental problem whose goal is to decide whether the object of interest is present or absent in a given image. We develop a framework and algorithm that enhance the detection performance of suboptimal detectors using SR noise, adding a suitable dose of noise to the original image data to obtain a performance improvement. Micro-calcification detection is employed in this dissertation as an illustrative example, and comparative experiments with a large number of images verify the efficiency of the presented approach.

    Image enhancement plays an important role in various vision tasks, and we develop two image enhancement approaches. One is based on SR noise, HVS-driven image quality evaluation metrics and a constrained multi-objective optimization (MOO) technique, and aims at refining existing suboptimal image enhancement methods. The other is based on a selective enhancement framework, under which we develop several image enhancement algorithms. The two approaches are applied to many low-quality images and outperform many existing enhancement algorithms.

    Image segmentation is critical to image analysis. We present two segmentation algorithms driven by HVS properties, incorporating human visual perception factors into the segmentation procedure and encoding the prior expectation on the segmentation results into the objective functions through Markov random fields (MRF). Our experimental results show that the presented algorithms achieve higher segmentation accuracy than many representative segmentation and clustering algorithms available in the literature.

    A performance limit, or performance bound, is very useful for evaluating different image segmentation algorithms and for analysing the segmentability of given image content. We formulate image segmentation as a parameter estimation problem and derive a lower bound on the segmentation error, i.e., the mean square error (MSE) of the pixel labels considered in our work, using a modified Cramér-Rao bound (CRB). The derivation is based on a biased-estimator assumption, whose reasonableness is verified in this dissertation. Experimental results demonstrate the validity of the derived bound.
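A minimal sketch of the SR-noise idea described above: inject zero-mean noise ahead of a fixed suboptimal detector and aggregate decisions over noise realizations. The peak detector, Gaussian noise model and majority vote are illustrative assumptions, not the dissertation's detectors.

```python
# Sketch: stochastic-resonance noise-enhanced detection with a fixed,
# suboptimal threshold detector.
import numpy as np

def sr_detect(image, threshold, noise_std, trials=32, rng=None):
    rng = np.random.default_rng(rng)
    votes = 0
    for _ in range(trials):
        # Add a "dose" of zero-mean noise before the suboptimal detector.
        noisy = image + rng.normal(0.0, noise_std, image.shape)
        votes += noisy.max() > threshold   # simple peak detector
    # Majority vote over noise realizations yields the final decision.
    return votes / trials > 0.5
```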

    Machine Learning based IoT Flood Prediction Using Data Modeling and Decision Support System

    Get PDF
    An essential step in supplying data for climate impact studies and evaluations of hydrological processes is rainfall prediction. However, rainfall events are complex phenomena that remain difficult to forecast. In this paper, we present hybrid models for the prediction of monthly precipitation that combine two pre-processing methods, seasonal decomposition and the discrete wavelet transform, with two feed-forward neural network models, a standard artificial neural network (ANN) and a seasonal artificial neural network (SANN). The time series of observed monthly rainfall from Vietnam's Ca Mau hydrological station was decomposed into three subsets by seasonal decomposition and into five sub-signals at four levels by wavelet analysis. The processed data were then fed to the ANN and SANN rainfall prediction models. For model evaluation, the proposed models were contrasted with the classic genetic method and the simulated annealing method backed by an autoregressive integrated moving average model. The results showed that non-stationary, non-linear time series problems such as rainfall forecasting can be satisfactorily simulated. The SANN model was integrated with both the wavelet transform and the seasonal decomposition techniques; however, the wavelet transform method produced the most accurate monthly rainfall predictions. Due to the effects of climate change, nations including Japan, China, the United States of America and Taiwan have recently experienced severe and devastating natural disasters, and flooding is one of the biggest causes of destruction in Asian nations such as China, India, Bangladesh and Sri Lanka. The danger of fatality from these floods is increased by 78%. As information technology advances, there is a demand for simple access to massive amounts of cloud storage and computing capacity.
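As an illustration of the wavelet pre-processing described above, the sketch below decomposes a monthly rainfall series into level + 1 sub-signals (five sub-signals at four levels) that could each feed a feed-forward network. The series name and wavelet choice are assumptions.

```python
# Sketch: split a rainfall series into wavelet sub-signals for NN input.
import numpy as np
import pywt

def wavelet_subsignals(rainfall, wavelet="db4", level=4):
    # Decompose, then reconstruct one sub-signal per coefficient band so
    # that all sub-signals share the original time axis.
    coeffs = pywt.wavedec(rainfall, wavelet, level=level)
    subsignals = []
    for i in range(len(coeffs)):
        isolated = [np.zeros_like(c) for c in coeffs]
        isolated[i] = coeffs[i]
        subsignals.append(pywt.waverec(isolated, wavelet)[: len(rainfall)])
    return subsignals  # level + 1 sub-signals (approximation + details)
```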

    Analysis of Dynamic Magnetic Resonance Breast Images

    Get PDF
    Dynamic Magnetic Resonance Imaging is a non-invasive technique that provides an image sequence based on dynamic information for locating lesions and investigating their structures. In this thesis we develop new methodology for analysing dynamic Magnetic Resonance image sequences of the breast. This methodology comprises an image restoration step that reduces the random distortions affecting the data and an image classification step that identifies normal, benign or malignant tumoral tissues. In the first part of this thesis we present a non-parametric and a parametric approach for image restoration and classification. Both methods are developed within the Bayesian framework. A prior distribution modelling both spatial homogeneity and temporal continuity between neighbouring image pixels is employed. Statistical inference is performed by means of a Metropolis-Hastings algorithm with a specially chosen proposal distribution that outperforms other algorithms of the same family. We also provide novel procedures for estimating the hyper-parameters of the prior models and the normalizing constant, thus making the Bayesian methodology automatic. In the second part of this thesis we present new methodology for image classification based on deformable templates of a prototype shape. This approach uses higher-level knowledge about the tumour structure than the spatio-temporal prior distribution of our Bayesian methodology. The prototype shape is deformed to identify the structure of the malignant tumoral tissue by minimizing a novel objective function over the parameters of a set of non-affine transformations. Since these transformations can destroy the connectivity of the shape, we develop a new filter that restores connectivity without smoothing the shape. The restoration and classification results obtained from a small sample of image sequences are very encouraging. In order to validate these results on a larger sample, in the last part of the thesis we present a user-friendly software package that implements our methodology.
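A minimal sketch of the generic machinery behind the restoration step described above: a single-site Metropolis-Hastings sweep under a Gaussian spatial-smoothness prior. The proposal scale, prior weight and noise model are illustrative assumptions, not the specially chosen proposal distribution of the thesis.

```python
# Sketch: one Metropolis-Hastings sweep for image restoration under a
# Gaussian likelihood and a pairwise spatial-smoothness prior.
import numpy as np

def mh_sweep(x, data, beta=1.0, noise_var=1.0, step=0.5, rng=None):
    rng = np.random.default_rng(rng)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            prop = x[i, j] + rng.normal(0.0, step)  # symmetric random walk
            nbrs = [x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < h and 0 <= b < w]
            def energy(v):
                like = (data[i, j] - v) ** 2 / (2 * noise_var)
                prior = beta * sum((v - n) ** 2 for n in nbrs)
                return like + prior
            # Accept with probability min(1, exp(-energy difference)).
            delta = energy(prop) - energy(x[i, j])
            if delta < 0 or rng.random() < np.exp(-delta):
                x[i, j] = prop
    return x
```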

    Fusion based analysis of ophthalmologic image data

    Get PDF
    The paper presents an overview of the image analysis activities of the Brno DAR group in the medical application area of retinal imaging. In particular, illumination correction and SNR enhancement by registered averaging as preprocessing steps are briefly described; further, mono- and multimodal registration methods developed for specific types of ophthalmological images, and methods for segmentation of the optical disc, the retinal vessel tree and autofluorescence areas, are presented. Finally, the designed methods for neural fibre layer detection and evaluation on retinal images, utilising different combined texture analysis approaches and several types of classifiers, are shown. The results in all areas are briefly commented on in the respective sections. In order to emphasise methodological aspects, the methods and results are ordered according to consequential phases of processing rather than divided according to individual medical applications.
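As a small illustration of the preprocessing mentioned above, averaging N co-registered frames reduces uncorrelated noise by roughly a factor of sqrt(N); a minimal sketch follows, with the registration itself assumed to have been done already.

```python
# Sketch: SNR enhancement by averaging co-registered retinal frames.
import numpy as np

def registered_average(frames):
    """frames: iterable of co-registered 2-D arrays of equal shape."""
    stack = np.stack(list(frames), axis=0).astype(float)
    # Uncorrelated noise averages out; the retinal structure does not.
    return stack.mean(axis=0)
```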

    Hidden Markov Models

    Get PDF
    Hidden Markov Models (HMMs), although known for decades, have risen to great prominence in recent years and are still under active development. This book presents theoretical issues and a variety of HMM applications in speech recognition and synthesis, medicine, neurosciences, computational biology, bioinformatics, seismology, environment protection and engineering. I hope that readers will find this book useful and helpful for their own research.