
    Exploring extra dimensions to capture saliva metabolite fingerprints from metabolically healthy and unhealthy obese patients by comprehensive two-dimensional gas chromatography featuring Tandem Ionization mass spectrometry

    This study examines the information potential of comprehensive two-dimensional gas chromatography combined with time-of-flight mass spectrometry (GC×GC-TOF MS) and variable ionization energy (i.e., Tandem Ionization™) to study changes in saliva metabolic signatures from a small group of obese individuals. The study presents a proof of concept for effective exploitation of the complementary nature of tandem ionization data. Samples are taken from two sub-populations of severely obese (BMI > 40 kg/m2) patients, named metabolically healthy obese (MHO) and metabolically unhealthy obese (MUO). Untargeted fingerprinting, based on pattern recognition by template matching, is applied to single data streams and to fused data obtained by combining raw signals from the two ionization energies (12 and 70 eV). Results indicate that at the lower energy (12 eV) the total signal intensity is one order of magnitude lower than the reference signal at 70 eV, but the range of variation of 2D peak responses is larger, extending the dynamic range. Fused data combine the benefits of 70 eV and 12 eV, resulting in more comprehensive coverage by sample fingerprints. Multivariate statistics, principal component analysis (PCA), and partial least squares discriminant analysis (PLS-DA) show good patient clustering, with the total variance explained by the first two principal components (PCs) increasing from 54% at 70 eV to 59% at 12 eV and up to 71% for fused data. With PLS-DA, discriminant components are highlighted and putatively identified by comparing retention data and 70 eV spectral signatures. Among the most informative analytes, lactose is present in higher relative amounts in saliva from MHO patients, whereas N-acetyl-D-glucosamine, urea, glucuronic acid γ-lactone, 2-deoxyribose, N-acetylneuraminic acid methyl ester, and 5-aminovaleric acid are more abundant in MUO patients. Visual feature fingerprinting is combined with pattern recognition algorithms to highlight metabolite variations between composite per-class images obtained by combining raw data from individuals belonging to different classes, i.e., MUO vs. MHO.
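    To make the data-fusion step concrete, the following is a minimal sketch (not the study's actual pipeline) of low-level fusion of two ionization-energy data streams followed by PCA. The matrices X_70 and X_12, the sample and peak counts, and the per-stream autoscaling are all illustrative assumptions.

```python
# Sketch of low-level data fusion followed by PCA, as described above.
# Assumes X_70 and X_12 are (n_samples x n_features) matrices of aligned
# 2D-peak responses acquired at 70 eV and 12 eV; all names and sizes are
# illustrative stand-ins, not the study's real data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_features = 12, 200          # e.g. 12 patients, 200 aligned peaks
X_70 = rng.lognormal(mean=3.0, size=(n_samples, n_features))   # 70 eV responses
X_12 = 0.1 * X_70 * rng.lognormal(sigma=0.5, size=X_70.shape)  # ~10x weaker 12 eV signal

# Autoscale each data stream separately so the weaker 12 eV channel is not
# swamped by the 70 eV intensities, then concatenate column-wise ("fusion").
X_fused = np.hstack([StandardScaler().fit_transform(X_70),
                     StandardScaler().fit_transform(X_12)])

pca = PCA(n_components=2)
scores = pca.fit_transform(X_fused)      # per-patient scores for clustering plots
print("explained variance by PC1+PC2: %.0f%%"
      % (100 * pca.explained_variance_ratio_.sum()))
```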

    Sea surface wave reconstruction from marine radar images

    Thesis (S.M.) by Yusheng Qi, Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2012. The X-band marine radar is a remote sensing technology increasingly used to measure sea surface waves. This thesis discusses how to reconstruct sea surface wave elevation maps from X-band marine radar images and how to perform short-term wave field prediction in real time. The key idea of the reconstruction is to use the dispersion relation from linear wave theory to separate the wave-related signal from the non-wave signal in radar images. The reconstruction process involves three-dimensional Fourier analysis and models of the radar imaging mechanism. An improved shadowing simulation model, combined with wave field simulation models for studying the correction function in the reconstruction process, and an improved wave scale estimation model using non-coherent radar data are proposed; both are of great importance to the reconstruction process. A radar image calibration method based on wave field simulation is put forward to improve the quality of the reconstructed sea surface waves. In addition, a theoretical wave scale estimation model using Doppler spectra from coherent radar is proposed as a good alternative to the current wave scale estimation model. The reconstructed sea surface waves can be used for wave field simulation to predict the wave field, which serves not only as an application of the reconstruction process but also as a parameter optimization tool for it.
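    A minimal sketch of the dispersion-relation separation step follows, assuming the deep-water linear relation ω² = gk and an illustrative space-time grid (this is not the thesis's full pipeline, which also includes shadowing correction, calibration, and wave scale estimation).

```python
# Toy sketch of dispersion-relation filtering of a radar image stack.
# A 3-D FFT maps the stack I(t, y, x) to spectral space (omega, ky, kx);
# energy far from the deep-water relation omega^2 = g*k is zeroed out, so
# only the wave-related part of the signal survives the inverse transform.
# Grid sizes, spacings, and the tolerance band are illustrative choices.
import numpy as np

g = 9.81
nt, ny, nx = 64, 128, 128
dt, dy, dx = 1.0, 5.0, 5.0             # s, m, m (illustrative sampling)

stack = np.random.randn(nt, ny, nx)     # stand-in for radar backscatter images

spec = np.fft.fftn(stack)
omega = 2 * np.pi * np.fft.fftfreq(nt, dt)
ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
W, KY, KX = np.meshgrid(omega, ky, kx, indexing="ij")
k = np.hypot(KY, KX)

# Keep a band around |omega| = sqrt(g*k); the band width is a tunable parameter.
band = np.abs(np.abs(W) - np.sqrt(g * k)) < 0.3
eta = np.real(np.fft.ifftn(spec * band))   # wave-related signal component
print("filtered stack shape:", eta.shape)
```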

    Visual quality assessment for super-resolved images: database and method

    Image super-resolution (SR) has been an active research problem which has recently received renewed interest due to the introduction of new technologies such as deep learning. However, the lack of suitable criteria to evaluate SR performance has hindered technology development. In this paper, we fill a gap in the literature by providing the first publicly available database as well as a new image quality assessment (IQA) method specifically designed for assessing the visual quality of super-resolved images (SRIs). In constructing the Quality Assessment Database for SRIs (QADS), we carefully selected 20 reference images and created 980 SRIs using 21 image SR methods. Mean opinion scores (MOS) for these SRIs were collected from 100 individuals participating in a suitably designed psychovisual experiment. Extensive numerical and statistical analysis is performed to show that the MOS of QADS has excellent suitability and reliability. The psychovisual experiment has led to the discovery that, unlike distortions encountered in other IQA databases, artifacts of SRIs degenerate the image structure as well as the image texture. Moreover, the structural and textural degenerations have distinctive perceptual properties. Based on these insights, we propose a novel method to assess the visual quality of SRIs by separately considering the structural and textural components of images. Observing that textural degenerations are mainly attributed to dissimilar texture or checkerboard artifacts, we propose to measure the changes of textural distributions. We also observe that structural degenerations appear as blurring and jaggies artifacts in SRIs and develop separate similarity measures for different types of structural degenerations. A new pooling mechanism is then used to fuse the different similarities together to give the final quality score for an SRI. Experiments conducted on the QADS demonstrate that our method significantly outperforms classical as well as current state-of-the-art IQA methods.
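    To illustrate the structure/texture separation idea, here is a toy stand-in, not the paper's actual measures: a Gaussian low-pass stands in for the structural component, the residual for texture, and simple histogram and gradient similarities with a weighted-product pooling replace the paper's specific measures and pooling mechanism.

```python
# Toy sketch of a structure/texture-split quality score (illustrative only):
# "structure" = low-pass content, "texture" = residual detail; texture change
# is scored by histogram intersection and structure change by a
# gradient-magnitude similarity, then the two scores are pooled.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def split(img, sigma=2.0):
    structure = gaussian_filter(img, sigma)
    return structure, img - structure      # texture = residual detail

def texture_score(t_ref, t_sri, bins=64):
    h1, _ = np.histogram(t_ref, bins=bins, range=(-1, 1), density=True)
    h2, _ = np.histogram(t_sri, bins=bins, range=(-1, 1), density=True)
    return np.minimum(h1, h2).sum() / max(h1.sum(), 1e-12)  # histogram overlap

def structure_score(s_ref, s_sri, c=1e-4):
    g1 = np.hypot(sobel(s_ref, 0), sobel(s_ref, 1))
    g2 = np.hypot(sobel(s_sri, 0), sobel(s_sri, 1))
    return np.mean((2 * g1 * g2 + c) / (g1**2 + g2**2 + c))

def quality(ref, sri, alpha=0.5):
    s_ref, t_ref = split(ref)
    s_sri, t_sri = split(sri)
    # Weighted-product pooling of the two similarity scores.
    return structure_score(s_ref, s_sri) ** alpha * \
           texture_score(t_ref, t_sri) ** (1 - alpha)

ref = np.random.rand(128, 128)
sri = gaussian_filter(ref, 1.5)            # stand-in for a blurry SR result
print("toy quality score:", quality(ref, sri))
```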

    Some Intra-Frame and Inter-Frame Processing Schemes for Efficient Video Compression

    Rapid growth in digital applications, driven by recent advances in digital communication and devices, requires significant video information storage, processing, and transmission. The amount of raw captured video data is huge, which makes every kind of video processing system complex, while applications demand fast transmission to differently sized electronic devices with good quality. Limited bandwidth and storage memory make this challenging. These practical constraints on processing huge amounts of video data make video compression an active and challenging field of research. The aim of video compression is to remove redundancy from raw video while maintaining quality and fidelity. For inter-frame processing, motion estimation is used to reduce temporal redundancy in almost all video coding standards, e.g., MPEG-2, MPEG-4, and H.264/AVC, which use state-of-the-art algorithms to provide higher compression with good perceptual quality. Though motion estimation is the main contributor to higher compression, it is the most computationally complex part of video coding tools. It is therefore always desirable to design an algorithm that is both fast and accurate and provides higher compression with good-quality output. The goal of this project is to propose a motion estimation algorithm that meets these requirements and overcomes the practical limitations. In this thesis we analyze the motion of video sequences, and some novel block-matching-based motion estimation algorithms are proposed to improve video coding efficiency in inter-frame processing. Particle Swarm Optimization and a Differential Evolution model are used for fast and accurate motion estimation and compensation, as sketched below. Spatial and temporal correlation is exploited for the initial population, and strategies for adaptive generations, particle population, and the preservation and exploitation of particle location history are followed. Experimental results show that the proposed algorithm maintains accuracy with a significant reduction in search points, and thus in computational complexity, while achieving comparable video coding performance. Spatial-domain redundancy is reduced by skipping irrelevant or spatially correlated data through different sub-sampling algorithms. The sub-sampled intra-frame is up-sampled at the receiver side, and the up-sampled high-resolution frame must be of good quality. Existing up-sampling or interpolation techniques produce undesirable blurring and ringing artifacts. To alleviate this problem, a novel spatio-temporal pre-processing approach is proposed to improve quality. The proposed method uses the low-frequency DCT (discrete cosine transform) components to sub-sample the frame at the transmitter side. At the receiver side, a pre-processing method is proposed in which the received sub-sampled frame is passed through a Wiener filter that uses local statistics in a 3×3 neighborhood to modify pixel values. The output of the Wiener filter is added to an optimized multiple of the high-frequency component and then passed through a DCT block for up-sampling. Results show that the proposed method outperforms popularly used interpolation techniques in terms of quality measures.
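    As a rough illustration of the swarm-based search, the following is a generic PSO block matcher under a sum-of-absolute-differences (SAD) criterion, not the thesis's exact algorithm; block size, swarm parameters, and search range are illustrative assumptions.

```python
# Minimal sketch of particle-swarm block matching: particles are candidate
# motion vectors, fitness is the SAD between the current block and the
# displaced block in the reference frame. Parameters are illustrative.
import numpy as np

def sad(cur, ref, bx, by, dx, dy, B):
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + B > w or y + B > h:
        return np.inf                      # penalize out-of-frame candidates
    return np.abs(cur[by:by+B, bx:bx+B].astype(int)
                  - ref[y:y+B, x:x+B].astype(int)).sum()

def pso_motion(cur, ref, bx, by, B=16, R=7, n=10, iters=15,
               w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng(0)):
    pos = rng.integers(-R, R + 1, size=(n, 2)).astype(float)  # candidate MVs
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pcost = np.array([sad(cur, ref, bx, by, int(p[0]), int(p[1]), B) for p in pos])
    g = pbest[pcost.argmin()].copy()       # global best motion vector
    for _ in range(iters):
        r1, r2 = rng.random((n, 2)), rng.random((n, 2))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, -R, R)
        cost = np.array([sad(cur, ref, bx, by, int(p[0]), int(p[1]), B) for p in pos])
        better = cost < pcost
        pbest[better], pcost[better] = pos[better], cost[better]
        g = pbest[pcost.argmin()].copy()
    return g.astype(int)                   # best motion vector (dx, dy)

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))  # frame shifted by known motion
print("estimated MV:", pso_motion(cur, ref, bx=16, by=16))
```

    Only a handful of SAD evaluations per iteration are needed, which is the source of the search-point reduction relative to an exhaustive full search over the whole window.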

    Towards robust convolutional neural networks in challenging environments

    Image classification is one of the fundamental tasks in the field of computer vision. Although the Artificial Neural Network (ANN) showed a lot of promise in this field, the lack of efficient computer hardware subdued its potential to a great extent. In the early 2000s, advances in hardware coupled with better network design saw the dramatic rise of the Convolutional Neural Network (CNN). Deep CNNs pushed the State-of-The-Art (SOTA) in a number of vision tasks, including image classification, object detection, and segmentation. Presently, CNNs dominate these tasks. Although CNNs exhibit impressive classification performance on clean images, they are vulnerable to distortions such as noise and blur. Fine-tuning a pre-trained CNN on mutually exclusive or a union set of distortions is a brute-force solution. This iterative fine-tuning process with all known types of distortion is, however, exhaustive, and the network struggles to handle unseen distortions. CNNs are also vulnerable to image translation or shift, partly due to common Down-Sampling (DS) layers, e.g., max-pooling and strided convolution. These operations violate the Nyquist sampling criterion and cause aliasing. The textbook solution is low-pass filtering (blurring) before down-sampling, which can benefit deep networks as well. Even so, non-linearity units, such as ReLU, often re-introduce the problem, suggesting that blurring alone may not suffice. Another important but under-explored issue for CNNs is unknown or Open Set Recognition (OSR). CNNs are commonly designed for closed-set arrangements, where test instances belong only to 'Known Known' (KK) classes used in training. As such, they predict a class label for a test sample based on the distribution of the KK classes. However, when used under the OSR setup (where an input may belong to an 'Unknown Unknown' or UU class), such a network will always classify a test instance as one of the KK classes even if it is from a UU class. Historically, CNNs have struggled with detecting objects in images with large differences in scale, especially small objects, because the DS layers inside a CNN often progressively wipe out the signal from small objects. As a result, the final layers are left with no signature from these objects, leading to degraded performance. In this work, we propose solutions to the above four problems. First, we improve CNN robustness against distortion by proposing DCT-based augmentation, adaptive regularisation, and noise-suppressing Activation Functions (AF). Second, to ensure further performance gain and robustness to image transformations, we introduce anti-aliasing properties inside the AF and propose a novel DS method called blurpool. Third, to address the OSR problem, we propose a novel training paradigm that ensures detection of UU classes and accurate classification of the KK classes. Finally, we introduce a novel CNN that enables a deep detector to identify small objects with high precision and recall. We evaluate our methods on a number of benchmark datasets and demonstrate that they outperform contemporary methods in the respective problem set-ups.
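    The blur-before-subsample idea described above can be sketched in a few lines of numpy. This is a generic illustration of anti-aliased down-sampling, not the thesis's exact blurpool layer; the binomial kernel and stride are illustrative choices.

```python
# Minimal sketch of anti-aliased down-sampling ("blur then subsample"):
# a small binomial low-pass filter is applied before taking every second
# sample, so the stride-2 step no longer aliases high-frequency content.
import numpy as np
from scipy.signal import convolve2d

def blur_downsample(x, stride=2):
    k1 = np.array([1.0, 2.0, 1.0])
    kernel = np.outer(k1, k1) / 16.0       # separable 3x3 binomial filter
    blurred = convolve2d(x, kernel, mode="same", boundary="symm")
    return blurred[::stride, ::stride]

x = np.random.rand(8, 8)
print(blur_downsample(x).shape)            # (4, 4)
```

    Compared with naive strided subsampling, the low-pass step makes the output change much less under a one-pixel input shift, which is the shift-robustness property the thesis targets.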

    Optical techniques for non-destructive detection of flaws in ceramic components

    This thesis primarily concerns the development of a non-destructive inspection method for 3 mol% Yttria-Stabilised Zirconia Polycrystal (3Y-TZP) ceramics used for dental applications, and a scoping study on applying the technique to other ceramic materials used in thermal barrier coatings and other fields. Zirconia ceramics are materials of great interest for various engineering applications, primarily due to their stiffness, hardness, and wear resistance. These factors, in combination with complex manufacturing processes, may reduce the material's strength and durability through induced cracking. Knowledge of the extent of this cracking must be obtained, and often, if each part is unique as in biomedicine, the assessment must be carried out non-destructively for every part so the part can subsequently be used. Only a few techniques are known for the inspection of zirconia ceramics, and these techniques are not able to detect flaws in thick (above 500 μm) parts. The main limitation for optical inspection of 3Y-TZP is the highly scattering nature of the material due to its multicrystalline grain structure (grain size of 500 nm), which, particularly in the visible region, reduces imaging capabilities. However, a transmission window exists in the mid-infrared (between 3 and 8 μm), opening up the potential for inspection at these wavelengths. Mid-Infrared Transmission Imaging (MIR-TI) and Confocal Mid-Infrared Transmission Imaging (CMIR-TI) techniques were developed for the inspection of 3Y-TZP parts, allowing detection of sub-mm-scale cracks. The measured imaging resolution for MIR-TI is 42 ± 5 μm, whereas for CMIR-TI it is below 38.5 ± 5 μm. The maximum sample thickness inspected with MIR-TI and CMIR-TI is 6 mm and 3.5 mm respectively, considerably more than with currently available inspection methods. The MIR-TI technique provides fast inspection of the part due to its large field of view (11 by 7 mm); however, the high cost and limited imaging resolution make this technique less attractive. The CMIR-TI technique, on the other hand, is more cost-effective due to the reduced cost of the infrared sensor, and it provides enhanced imaging capabilities. The promising results achieved with the MIR-TI and CMIR-TI techniques led to the development of reflection equivalents (Camera-MIRI and Confocal-MIRI) for ceramic coating measurements; however, further in-depth experiments are required to determine and quantify the capabilities of both techniques.

    Blind Image Deconvolution using Approximate Greatest Common Divisor and Approximate Polynomial Factorisation

    Images play a significant and important role in diverse areas of everyday modern life. Examples of areas where the use of images is routine include medicine, forensic investigations, engineering applications, and astronomical science. The procedures and methods that depend on image processing would benefit considerably from images that are free of blur. Unfortunately, most images are affected by noise and blur arising from the practical limitations of image sourcing systems, and these effects render the image less useful. An efficient method for image restoration is hence important for many applications. Restoration of a true image from a blurred image is the inverse of the naturally occurring problem of true image convolution with a blurring function. The deconvolution of images from blurred images is a non-trivial task. One challenge is that the computation of the mathematical function that represents the blurring process, known as the point spread function (PSF), is an ill-posed problem, i.e., an infinite number of solutions are possible for the given inexact data. The blind image deconvolution (BID) problem is the central subject of this thesis. There are a number of approaches for solving the BID problem, including statistical methods and linear algebraic methods. The approach adopted in this research study falls within the class of linear algebraic methods. Polynomial linear algebra offers a way of computing the PSF size and its components without requiring any prior knowledge of the true image or the blurring PSF. This research study has developed a BID method for image restoration based on approximate greatest common divisor (AGCD) algorithms, specifically, the approximate polynomial factorisation (APF) algorithm of two polynomials. The developed method uses the Sylvester resultant matrix algorithm in the computation of the AGCD and QR decomposition for computing the degree of the AGCD. It is shown that the AGCD is equal to the PSF, and the deblurred image can be computed from the coprime polynomials. In practice, the PSF can be spatially variant or invariant. PSF spatial invariance means that the blurred image pixels are the convolution of the true image pixels with the same PSF. Some PSF bivariate functions, in particular separable functions, can be further simplified as the multiplication of two univariate polynomials. This research study focuses on the invariant separable and non-separable PSF cases. The performance of state-of-the-art image restoration methods varies in terms of computational speed and accuracy. In addition, most of these methods require prior knowledge of the true image and the blurring function, which in a significant number of applications is an impractical requirement. The development of image restoration methods that require no prior knowledge of the true image or the blurring function is hence desirable. Previous attempts at developing BID methods resulted in methods that perform robustly against noise perturbations; however, their good performance is limited to blurring functions of small size. In addition, even for blurring functions of small size, these methods require the size of the blurring function to be known, along with an estimate of the noise level present in the blurred image.
    The developed method performs better than these state-of-the-art methods: in particular, it determines the correct size and coefficients of the PSF and then uses the PSF to recover the original image. It does not require any prior knowledge of the PSF, which is a prerequisite for all the other methods.
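    The polynomial-GCD view of blind deconvolution can be illustrated with a noise-free 1-D toy. The sketch below is not the thesis's algorithm: the thesis computes AGCDs of inexact polynomials and uses QR decomposition for the degree, whereas this sketch uses exact data with a plain SVD/rank computation. All signals are random stand-ins.

```python
# Toy sketch: two 1-D blurred signals b1 = x1*h and b2 = x2*h are treated
# as polynomials, and the blur h is their greatest common divisor. The GCD
# degree is read off the rank deficiency of the Sylvester matrix; the
# cofactors come from a subresultant null space, and h is recovered by
# least-squares deconvolution.
import numpy as np

def convolution_matrix(a, n):
    """Matrix C with C @ v == np.convolve(a, v) for any v of length n."""
    C = np.zeros((len(a) + n - 1, n))
    for j in range(n):
        C[j:j + len(a), j] = a
    return C

def gcd_degree(p, q):
    """deg gcd(p, q) = rank deficiency of the Sylvester matrix of p and q."""
    m, n = len(p) - 1, len(q) - 1
    S = np.hstack([convolution_matrix(p, n), convolution_matrix(q, m)])
    return S.shape[1] - np.linalg.matrix_rank(S)

rng = np.random.default_rng(1)
h = rng.random(4); h /= h.sum()          # unknown blur (PSF), degree 3
x1, x2 = rng.random(8), rng.random(8)    # two coprime "true" signals
b1, b2 = np.convolve(x1, h), np.convolve(x2, h)   # observed blurred signals

m, n = len(b1) - 1, len(b2) - 1
d = gcd_degree(b1, b2)                   # recovers deg(h) = 3

# The d-th subresultant matrix has a one-dimensional null space holding the
# coprime cofactors q2 and -q1 (where b1 = h*q1 and b2 = h*q2).
S_d = np.hstack([convolution_matrix(b1, n - d + 1),
                 convolution_matrix(b2, m - d + 1)])
v = np.linalg.svd(S_d)[2][-1]            # null vector of S_d
q1 = -v[n - d + 1:]                      # cofactor of b1, up to a common scale

# Least-squares deconvolution: solve conv(q1, h_est) = b1 for h_est.
h_est = np.linalg.lstsq(convolution_matrix(q1, d + 1), b1, rcond=None)[0]
h_est /= h_est.sum()                     # fix the arbitrary common scale
print("deg(PSF):", d, " max error:", np.abs(h_est - h).max())
```

    In the separable 2-D case the same idea is applied along rows and columns, with each univariate PSF factor recovered as a common divisor of the corresponding image polynomials.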