    Wavelet/shearlet hybridized neural networks for biomedical image restoration

    Recently, new programming paradigms have emerged that combine parallelism and numerical computation with algorithmic differentiation. This approach allows neural network techniques for inverse imaging problems to be hybridized with more traditional methods, such as wavelet-based sparsity modelling. The benefits are twofold: on the one hand, traditional methods with well-known properties can be integrated into neural networks, either as separate layers or tightly coupled with the network; on the other hand, the parameters of traditional methods can be trained end-to-end from datasets in a neural network "fashion" (e.g., using the Adagrad or Adam optimizers). In this paper, we explore these hybrid neural networks in the context of shearlet-based regularization for biomedical image restoration. Due to the reduced number of parameters, this approach seems a promising strategy, especially when dealing with small training data sets.
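
    As a minimal illustration of the hybrid idea, the sketch below implements a fixed orthonormal Haar wavelet transform as network layers with one trainable soft-threshold per detail sub-band, trained end-to-end with Adam. It is an illustrative stand-in, not the paper's architecture: the paper works with shearlet-based regularization, and every name and value here is an assumption.

```python
# Hedged sketch: learnable wavelet shrinkage as a neural network layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedWaveletShrinkage(nn.Module):
    def __init__(self):
        super().__init__()
        s = 2 ** -0.5
        h = torch.tensor([s, s])    # orthonormal Haar low-pass
        g = torch.tensor([s, -s])   # orthonormal Haar high-pass
        # Four orthonormal 2x2 analysis filters: LL, LH, HL, HH.
        filters = torch.stack([
            torch.outer(h, h), torch.outer(h, g),
            torch.outer(g, h), torch.outer(g, g),
        ]).unsqueeze(1)                                # (4, 1, 2, 2)
        self.register_buffer("filters", filters)
        # One trainable soft-threshold per detail sub-band (LL untouched).
        self.thresh = nn.Parameter(torch.full((3,), 0.05))

    def forward(self, x):                              # x: (B, 1, H, W), H, W even
        coeffs = F.conv2d(x, self.filters, stride=2)   # analysis transform
        ll, detail = coeffs[:, :1], coeffs[:, 1:]
        t = self.thresh.view(1, 3, 1, 1)
        detail = torch.sign(detail) * F.relu(detail.abs() - t)  # soft shrinkage
        coeffs = torch.cat([ll, detail], dim=1)
        # Orthonormal filters: the transposed convolution inverts the analysis.
        return F.conv_transpose2d(coeffs, self.filters, stride=2)

# Train the thresholds end-to-end "in a neural network fashion" with Adam.
model = LearnedWaveletShrinkage()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(8, 1, 64, 64)
noisy = clean + 0.1 * torch.randn_like(clean)
for _ in range(100):
    opt.zero_grad()
    loss = F.mse_loss(model(noisy), clean)
    loss.backward()
    opt.step()
```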

    Single frame super-resolution image system

    The estimation of unknown quantities from known observable information can be viewed as a statistical process that requires an extra source of prediction information; image super-resolution is an important application of this idea. In this thesis, we propose a new image interpolation method based on the Redundant Discrete Wavelet Transform (RDWT) and self-adaptive processing, in which edge-direction details are taken into account, to solve the single-frame image super-resolution task. Information about sharp variations in both the horizontal and vertical directions, derived from the wavelet transform sub-bands, is exploited, followed by detection and correction of aliasing in the preliminary output to improve the visual result. By exploiting fundamental image properties such as edge direction, different parts of the source image are treated separately to predict the vertical and horizontal details accurately, completing the framework for reconstructing the high-resolution image. Extensive tests show that both the objective quality (PSNR) and the subjective quality of the proposed method are clearly improved compared with several other state-of-the-art methods. This work also leaves ample room for further research, both theoretical and practical, and some related applications based on this algorithmic strategy are briefly introduced.
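
    The core wavelet-domain interpolation step can be sketched as follows: treat the low-resolution image as the approximation band of a one-level wavelet decomposition and invert the transform. The thesis estimates the detail sub-bands adaptively from RDWT information and edge directions; the sketch below uses the simplest zero-detail baseline with PyWavelets, so it is a starting point rather than the proposed method.

```python
# Hedged sketch: wavelet zero-padding interpolation (baseline, not RDWT).
import numpy as np
import pywt

def wavelet_zero_padding_sr(lr_image, wavelet="db1"):
    """Double the resolution by treating the LR image as the approximation
    band of a one-level DWT and inverting with zero detail bands."""
    lr = lr_image.astype(np.float64)
    zeros = np.zeros_like(lr)
    # The 2-D orthonormal approximation band carries a gain of 2, so the
    # LR image is scaled up before inversion.
    cA = 2.0 * lr
    return pywt.idwt2((cA, (zeros, zeros, zeros)), wavelet)

lr = np.random.rand(64, 64)
hr = wavelet_zero_padding_sr(lr)
print(hr.shape)  # (128, 128)
```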

    Image fusion using multi-resolution decomposition and LMMSE filter

    The subject of data fusion using heterogeneous sensors has received significant attention in recent years. Each sensor provides a limited perspective of the desired information; a heterogeneous sensor environment, combined with a procedure for synergistically combining data from each of the transducers, can potentially lead to a more comprehensive and accurate estimate of that information. An example of a field that can profit from data fusion techniques is nondestructive evaluation (NDE).

    This dissertation is concerned with developing efficient image fusion techniques for NDE applications. It begins with a brief description of several NDE imaging techniques, with special emphasis on eddy current and ultrasonic inspection methods. Signal degradation mechanisms associated with each NDE imaging method are described, together with a discussion of methods to compensate for or reduce the degradation effects.

    The dissertation then presents several image fusion methods, beginning with those employing multilayer perceptron and radial basis function neural networks. It also introduces an optimal approach for fusing images derived from a heterogeneous sensor environment: a linear minimum mean square error (LMMSE) filter is used to fuse multiple images, and the validity of the approach is evaluated using a pair of eddy current and ultrasonic NDE images.

    Finally, the dissertation presents image fusion methods based on multi-resolution decomposition, using both Fourier and two-dimensional wavelet transforms to decompose the NDE images and reconstruct the fused image.
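
    A pixelwise LMMSE-style fusion rule can be sketched in a few lines: model each sensor image as the true scene plus zero-mean, uncorrelated noise, and combine pixels with inverse-noise-variance weights, which minimises the mean square error under that model. The noise variances and the synthetic "eddy current" and "ultrasonic" images below are illustrative assumptions, not the dissertation's data or exact filter.

```python
# Hedged sketch: inverse-variance (LMMSE-style) fusion of two noisy images.
import numpy as np

def lmmse_fuse(img1, img2, noise_var1, noise_var2):
    # Weights are inversely proportional to each sensor's noise variance.
    w1 = noise_var2 / (noise_var1 + noise_var2)
    w2 = noise_var1 / (noise_var1 + noise_var2)
    return w1 * img1 + w2 * img2

rng = np.random.default_rng(0)
scene = rng.random((128, 128))
ec = scene + rng.normal(0, 0.10, scene.shape)   # "eddy current", var 0.01
ut = scene + rng.normal(0, 0.20, scene.shape)   # "ultrasonic", var 0.04
fused = lmmse_fuse(ec, ut, 0.01, 0.04)
```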

    Data mining based learning algorithms for semi-supervised object identification and tracking

    Sensor exploitation (SE) is the crucial step in surveillance applications such as airport security and search and rescue operations. It allows localization and identification of movement in urban settings and can significantly boost knowledge gathering, interpretation and action. Data mining techniques offer the promise of precise and accurate knowledge acquisition in high-dimensional data domains (diminishing the "curse of dimensionality" prevalent in such datasets), coupled with algorithmic design in feature extraction, discriminative ranking, feature fusion and supervised learning (classification). Consequently, data mining techniques and algorithms can be used to refine and process captured data and to detect, recognize, classify, and track objects with predictably high degrees of specificity and sensitivity.

    Automatic object detection and tracking algorithms face several obstacles, such as large and incomplete datasets, ill-defined regions of interest (ROIs), variable scalability, lack of compactness, angular regions, partial occlusions, environmental variables, and unknown potential object classes, all of which work against their ability to achieve accurate real-time results. Methods must produce fast and accurate results by streamlining image processing, data compression and reduction, feature extraction, classification, and tracking algorithms. Data mining techniques can address these challenges by implementing efficient and accurate dimensionality reduction with feature extraction to refine incomplete (ill-partitioned) data spaces, and by addressing challenges related to object classification, intra-class variability, and inter-class dependencies.

    A series of methods has been developed to combat many of these challenges for the purpose of creating a sensor exploitation and tracking framework for real-time image sensor inputs. The framework is broken down into a series of sub-routines, which work both in series and in parallel to accomplish tasks such as image pre-processing, data reduction, segmentation, object detection, tracking, and classification. These methods can be implemented either independently or together to form a synergistic solution to object detection and tracking.

    The main contributions to the SE field include novel feature extraction methods for highly discriminative object detection, classification, and tracking. A new supervised classification scheme is also presented for detecting objects in urban environments; it incorporates both novel features and non-maximal suppression to reduce false alarms, which can be abundant in cluttered environments such as cities. Lastly, a performance evaluation of Graphics Processing Unit (GPU) implementations of the subtask algorithms is presented, providing insight into speed-up gains throughout the SE framework and informing the design of real-time applications. The overall framework provides a comprehensive SE system that can be tailored for integration into a layered sensing scheme to provide the war fighter with automated assistance and support. As sensor technology and integration continue to advance, this SE framework can provide faster and more accurate decision support for both intelligence and civilian applications.
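
    The non-maximal suppression step mentioned above is standard enough to sketch: overlapping detections are pruned so that only the highest-scoring box in each cluster survives. The box format and the IoU threshold below are illustrative assumptions.

```python
# Hedged sketch: greedy non-maximal suppression over axis-aligned boxes.
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences."""
    order = np.argsort(scores)[::-1]            # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection-over-union of the best box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou < iou_thresh]     # drop heavily overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: the second box overlaps the first too much
```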

    Convolutional Deblurring for Natural Imaging

    In this paper, we propose a novel design of image deblurring in the form of one-shot convolution filtering that can be directly convolved with naturally blurred images for restoration. Optical blurring is a common drawback in many imaging applications that suffer from optical imperfections. Numerous deconvolution methods blindly estimate blurring in either inclusive or exclusive forms, but they are practically challenging due to high computational cost and low image reconstruction quality. Both high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving, where deblurring is required after image acquisition and before images are stored, previewed, or processed for high-level interpretation. On-the-fly correction of such images is therefore important to avoid time delays, mitigate computational expense, and increase image perception quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of Finite Impulse Response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the Point Spread Function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for two PSF models, Gaussian and Laplacian, that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods.

    Comment: 15 pages, for publication in IEEE Transactions on Image Processing.
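
    The construction can be sketched for the Gaussian case: since a Gaussian PSF falls off as exp(-sigma^2 w^2 / 2) in the frequency domain, its inverse expands as a Taylor series in w^2, i.e. a weighted sum of even derivatives, realisable as a single small FIR kernel convolved directly with the blurry image. The sketch below follows that textbook expansion; the truncation order and discrete Laplacian stencil are assumptions, and the paper's kernel design and blind PSF estimation are not reproduced.

```python
# Hedged sketch: an even-derivative FIR kernel approximating Gaussian deblur.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def even_derivative_deblur_kernel(sigma, order=3):
    # exp(sigma^2 w^2 / 2) ~ sum_k (sigma^2/2)^k w^(2k) / k!, and w^2
    # corresponds to the negative Laplacian in the spatial domain.
    lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
    size = 2 * order + 1
    kernel = np.zeros((size, size))
    term = np.zeros((size, size))
    term[order, order] = 1.0                   # identity (0th derivative)
    fact = 1.0
    for k in range(order + 1):
        kernel += term / fact * (sigma**2 / 2.0)**k
        term = convolve(term, -lap, mode="constant")  # next even derivative
        fact *= (k + 1)
    return kernel

img = np.random.rand(128, 128)
blurred = gaussian_filter(img, sigma=1.0)
deblurred = convolve(blurred, even_derivative_deblur_kernel(1.0))
```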

    Predictive Maintenance of an External Gear Pump using Machine Learning Algorithms

    The importance of Predictive Maintenance is critical for engineering industries, such as manufacturing, aerospace and energy. Unexpected failures cause unpredictable downtime, which can be disruptive, and incur high costs due to reduced productivity. This forces industries to ensure the reliability of their equipment. In order to increase the reliability of equipment, maintenance actions such as repairs, replacements, equipment updates, and corrective actions are employed. These actions affect flexibility, quality of operation and manufacturing time. It is therefore essential to plan maintenance before failure occurs.

    Traditional maintenance techniques rely on checks conducted routinely based on the running hours of the machine. The drawback of this approach is that maintenance is sometimes performed before it is required. Therefore, conducting maintenance based on the actual condition of the equipment is the optimal solution. This requires collecting real-time data on the condition of the equipment using sensors (to detect events and send information to a computer processor). Predictive Maintenance uses these techniques and analytics to inform about the current and future state of the equipment. In the last decade, with the introduction of the Internet of Things (IoT), Machine Learning (ML), cloud computing and Big Data analytics, the manufacturing industry has moved towards implementing Predictive Maintenance, resulting in increased uptime and quality control, optimisation of maintenance routes, improved worker safety and greater productivity.

    The present thesis describes a novel computational strategy for Predictive Maintenance (fault diagnosis and fault prognosis) with ML and Deep Learning applications for an FG304 series external gear pump, also known as a domino pump. In the absence of a comprehensive set of experimental data, synthetic data generation techniques are implemented by perturbing the frequency content of time series generated using high-fidelity computational techniques. In addition, various feature extraction methods are considered to extract the most discriminatory information from the data. For fault diagnosis, three ML classification algorithms are employed, namely the Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Naive Bayes (NB) algorithms. For prognosis, ML regression algorithms, such as MLP and SVM, are utilised. Although significant work has been reported by previous authors, it remains difficult to optimise the choice of hyper-parameters (important parameters whose values control the learning process) for each specific ML algorithm, for instance the type of SVM kernel function or the selection of the MLP activation function and the optimum number of hidden layers (and neurons).

    It is widely understood that the reliability of ML algorithms is strongly dependent upon the existence of a sufficiently large quantity of high-quality training data. In the present thesis, due to the unavailability of experimental data, a novel high-fidelity in-silico dataset is generated via a Computational Fluid Dynamics (CFD) model, which is used for training the underlying ML metamodel. In addition, a large number of scenarios are recreated, ranging from healthy to faulty ones (e.g. clogging, radial gap variations, axial gap variations, viscosity variations, speed variations). Furthermore, the high-fidelity dataset is re-enacted by using degradation functions to predict the remaining useful life (fault prognosis) of an external gear pump.

    The thesis explores and compares the performance of the MLP, SVM and NB algorithms for fault diagnosis, and of MLP and SVM for fault prognosis. In order to enable fast training and reliable testing of the MLP algorithm, some predefined network architectures, like 2n neurons per hidden layer, are used to speed up the identification of the precise number of neurons (shown to be useful when the sample data set is sufficiently large). Finally, a series of benchmark tests is presented, which lead to the conclusion that, for fault diagnosis, the use of wavelet features with an MLP algorithm provides the best accuracy, and the MLP algorithm also provides the best prediction results for fault prognosis. In addition, benchmark examples are simulated to demonstrate mesh convergence for the CFD model, while quantification analysis and the influence of noise on the training data are examined for the ML algorithms.
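
    The diagnosis pipeline can be sketched end-to-end: wavelet-band energies as features, then MLP and SVM classifiers compared on held-out data. The synthetic "healthy"/"faulty" signals below stand in for the thesis's CFD-generated dataset, and every architectural choice is an illustrative assumption.

```python
# Hedged sketch: wavelet features + MLP/SVM fault classification.
import numpy as np
import pywt
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wavelet_features(signal, wavelet="db4", level=4):
    """Energy of each wavelet decomposition band as a feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c**2) for c in coeffs])

# Synthetic stand-in: a fault shifts the spectral content of the signal.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
X, y = [], []
for label, freq in [(0, 50.0), (1, 180.0)]:
    for _ in range(100):
        sig = np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)
        X.append(wavelet_features(sig))
        y.append(label)
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
for clf in (make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(32,),
                                        max_iter=2000, random_state=0)),
            make_pipeline(StandardScaler(), SVC(kernel="rbf"))):
    clf.fit(Xtr, ytr)
    print(clf.steps[-1][0], clf.score(Xte, yte))
```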

    Invariance transformations for processing NDE signals

    The ultimate objective in nondestructive evaluation (NDE) is the characterization of materials on the basis of information in the response from energy/material interactions. This is commonly referred to as the inverse problem. Inverse problems are in general ill-posed, and full analytical solutions are seldom tractable. Pragmatic approaches to solving them employ a constrained search technique that limits the space of all possible solutions. A more modest goal is therefore to use the received signal to characterize defects in objects in terms of location, size and shape. However, the NDE signal received by the sensors is influenced not only by the defect, but also by the operational parameters associated with the experiment. This dissertation deals with invariant pattern recognition techniques that render NDE signals insensitive to operational variables while preserving or enhancing defect-related information. Such techniques comprise invariance transformations that operate on the raw signals prior to interpretation by subsequent defect characterization schemes. Invariance transformations are studied in the context of the magnetic flux leakage (MFL) inspection technique, which is the method of choice for inspecting natural gas transmission pipelines buried underground.

    The magnetic flux leakage signal received by the scanning device is very sensitive to a number of operational parameters. Factors that have a major impact on the signal include variations in the permeability of the pipe-wall material and in the velocity of the inspection tool. This study describes novel approaches to compensate for the effects of these variables.

    Two types of invariance schemes, feature selection and signal compensation, are studied. In the feature selection approach, the invariance transformation is recast as a problem of interpolating scattered, multi-dimensional data. A variety of interpolation techniques are explored, the most powerful among them being feed-forward neural networks. The second parametric variation is compensated by using restoration filters. The filter kernels are derived using a constrained, stochastic least squares optimization technique or by adaptive methods. Both linear and non-linear filters are studied as tools for signal compensation.

    Results showing the successful application of these invariance transformations to real and simulated MFL data are presented.
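
    Restoration filtering by constrained least squares can be sketched in its standard frequency-domain form, with a Laplacian smoothness constraint and a weight lam trading data fidelity against it. This is the textbook formulation rather than the dissertation's MFL-specific kernel derivation, and the blur model below is an assumption.

```python
# Hedged sketch: constrained least-squares (CLS) restoration in the
# frequency domain.
import numpy as np

def cls_restore(degraded, psf, lam=0.01):
    H = np.fft.fft2(psf, s=degraded.shape)            # blur transfer function
    lap = np.zeros(degraded.shape)
    lap[:3, :3] = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]  # smoothness constraint
    P = np.fft.fft2(lap)
    G = np.fft.fft2(degraded)
    F_hat = np.conj(H) * G / (np.abs(H)**2 + lam * np.abs(P)**2)
    return np.real(np.fft.ifft2(F_hat))

rng = np.random.default_rng(2)
scene = rng.random((128, 128))
psf = np.ones((5, 5)) / 25.0                          # simple box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                               np.fft.fft2(psf, s=scene.shape)))
observed = blurred + rng.normal(0, 0.01, scene.shape)
restored = cls_restore(observed, psf)
```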

    The Contour Extraction of Cup in Fundus Images for Glaucoma Detection

    Glaucoma is the second leading cause of blindness in the world; early detection is therefore important. Glaucoma detection aims to distinguish whether a patient's eye is normal or glaucomatous. An expert observes the structure of the retina in a fundus image to detect glaucoma. In this research, we propose a feature extraction method based on the cup area contour in fundus images to detect glaucoma. Our proposed method has been evaluated on 44 fundus images, consisting of 23 normal and 21 glaucoma cases. The data are divided into two parts: the first is used for the learning phase and the second for the testing phase. To classify the fundus images as normal or glaucoma, we applied the Support Vector Machine (SVM) method. The performance of our method achieves an accuracy of 94.44%.
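
    A sketch of this pipeline might look as follows: segment the bright cup region, extract contour-based shape features with OpenCV, and feed them to an SVM. The Otsu thresholding rule and the particular shape features below are illustrative assumptions, not the paper's exact extraction method.

```python
# Hedged sketch: cup-contour shape features for an SVM glaucoma classifier.
import cv2
import numpy as np
from sklearn.svm import SVC

def cup_contour_features(fundus_bgr):
    green = fundus_bgr[:, :, 1]                  # cup is brightest in green
    _, mask = cv2.threshold(green, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cup = max(contours, key=cv2.contourArea)     # largest bright region
    area = cv2.contourArea(cup)
    perimeter = cv2.arcLength(cup, True)
    circularity = 4 * np.pi * area / (perimeter**2 + 1e-9)
    return [area, perimeter, circularity]

# With the 44-image dataset described above (paths/labels are hypothetical):
# X = [cup_contour_features(cv2.imread(p)) for p in image_paths]
# clf = SVC(kernel="rbf").fit(X[:22], labels[:22])   # learning phase
# accuracy = clf.score(X[22:], labels[22:])          # testing phase
```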

    Combination of global features for the automatic quality assessment of retinal images

    Diabetic retinopathy (DR) is one of the most common causes of visual loss in developed countries. Computer-aided diagnosis systems aimed at detecting DR can reduce the workload of ophthalmologists in screening programs. Nevertheless, a large number of retinal images cannot be analyzed by physicians or automatic methods due to poor quality. Automatic retinal image quality assessment (RIQA) is therefore needed before image analysis. The purpose of this study was to combine novel generic quality features to develop a RIQA method. Several features were calculated from retinal images to achieve this goal. Features derived from the spatial and spectral entropy-based quality (SSEQ) and natural image quality evaluator (NIQE) methods were extracted. They were combined with novel sharpness and luminosity measures based on the continuous wavelet transform (CWT) and the hue-saturation-value (HSV) color model, respectively. A subset of non-redundant features was selected using the fast correlation-based filter (FCBF) method. Subsequently, a multilayer perceptron (MLP) neural network was used to obtain the quality of images from the selected features. Classification results achieved 91.46% accuracy, 92.04% sensitivity, and 87.92% specificity. The results suggest that the proposed RIQA method could be applied in a more general computer-aided diagnosis system aimed at detecting a variety of retinal pathologies, such as DR and age-related macular degeneration.

    Funding: Ministerio de Ciencia, Innovación y Universidades / Fondo Europeo de Desarrollo Regional (projects RTC-2015-3467-1 and DPI2017-84280-R).
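
    The feature-combination idea can be sketched with simple stand-ins: a wavelet-detail sharpness measure and HSV luminosity statistics form the feature vector, and an MLP maps it to a quality label. The SSEQ/NIQE features and FCBF selection used in the study are not reproduced here, and every choice below is an assumption.

```python
# Hedged sketch: global quality features + MLP for retinal image quality.
import cv2
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def quality_features(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(float)
    # Sharpness: energy of the first-level wavelet detail bands.
    _, (cH, cV, cD) = pywt.dwt2(gray, "db2")
    sharpness = np.mean(cH**2) + np.mean(cV**2) + np.mean(cD**2)
    # Luminosity: mean and spread of the HSV value channel.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2].astype(float)
    return [sharpness, v.mean(), v.std()]

# With a labelled set of retinal images (paths/labels are hypothetical):
# X = [quality_features(cv2.imread(p)) for p in paths]
# clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, labels)
```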