
    Alignment parameter calibration for IMU using the Taguchi method for image deblurring

    Inertial measurement units (IMUs) in smartphones can be used to detect camera motion during exposure, in order to improve image quality degraded by blur from long hand-held exposures. Based on the captured camera motion, blur in images can be removed when an appropriate deblurring filter is used. However, two research issues have not been addressed: (a) the calibration of the alignment parameters for the IMU; when inappropriate alignment parameters are used, the camera motion is not captured accurately and the deblurring effectiveness is degraded; and (b) the selection of a deblurring filter suited to the image; without an appropriate deblurring filter, the resulting image quality cannot be optimal. This paper proposes a systematic approach based on the Taguchi method, a robust and systematic technique for designing reliable and high-precision devices, to perform both the alignment parameter calibration for the IMU and the filter selection. The Taguchi method conducts a small number of systematic experiments based on orthogonal arrays, studying the impact of the alignment parameters and the choice of deblurring filter on deblurring effectiveness. Several widely adopted image quality metrics are used to evaluate the deblurred images generated by the proposed method. Experimental results show that the quality of the deblurred images achieved by the proposed Taguchi method is better than that obtained by deblurring methods that involve no alignment parameter calibration or filter selection. Moreover, the Taguchi method requires much less computational effort than commonly used optimization methods for determining the alignment parameters and the deblurring filter.
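    To illustrate the orthogonal-array machinery behind the Taguchi method, here is a minimal sketch: three hypothetical IMU alignment parameters at two levels each are covered by an L4(2^3) array in four runs instead of eight, and each parameter's main effect on a placeholder quality score picks its better level. All parameter names, levels, and the scoring function are illustrative assumptions, not values from the paper.

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs cover three two-level factors
# instead of the full 2^3 = 8 combinations.
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

# Hypothetical candidate levels for each alignment parameter (degrees).
levels = {
    "roll":  [-0.5, 0.5],
    "pitch": [-0.5, 0.5],
    "yaw":   [-0.5, 0.5],
}
names = list(levels)

def quality_score(roll, pitch, yaw):
    """Placeholder for a real metric (e.g. PSNR of the deblurred image).
    Peaks at roll=0.5, pitch=-0.5, yaw=0.0 so the example has an optimum."""
    return -(roll - 0.5) ** 2 - (pitch + 0.5) ** 2 - yaw ** 2

scores = np.array([
    quality_score(*(levels[n][run[i]] for i, n in enumerate(names)))
    for run in L4
])

# Main effect of each factor: mean score at level 1 minus level 0.
effects = {}
for i, n in enumerate(names):
    effects[n] = scores[L4[:, i] == 1].mean() - scores[L4[:, i] == 0].mean()
    best = levels[n][1] if effects[n] > 0 else levels[n][0]
    print(f"{n}: effect = {effects[n]:+.3f} -> pick level {best}")
```

    The orthogonality of the array is what lets the four runs isolate each factor's main effect despite the other two varying simultaneously.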

    Orbiting Rainbows: Optical Manipulation of Aerosols and the Beginnings of Future Space Construction

    Our objective is to investigate the conditions needed to manipulate and maintain the shape of an orbiting cloud of dust-like matter so that it can function as an ultra-lightweight surface with useful and adaptable electromagnetic characteristics, for instance in the optical, RF, or microwave bands. Inspired by the light-scattering and focusing properties of distributed optical assemblies in Nature, such as rainbows and aerosols, and by recent laboratory successes in optical trapping and manipulation, we propose a unique combination of space optics and autonomous robotic system technology to enable a new vision of space system architecture, with applications to ultra-lightweight space optics and, ultimately, in-situ space system fabrication. Typically, the cost of an optical system is driven by the size and mass of the primary aperture. The ideal system is a cloud of spatially disordered dust-like objects that can be optically manipulated: it is highly reconfigurable, fault-tolerant, and allows very large aperture sizes at low cost. See Figure 1 for an application scenario of this concept. The solution we propose is to construct an optical system in space in which the nonlinear optical properties of a cloud of micron-sized particles are shaped into a specific surface by light pressure, allowing it to form the very large and lightweight aperture of an optical system and hence reducing overall mass and cost. Other potential advantages offered by the cloud's properties as an optical system include the possible combination of functions (combined transmit/receive), variable focal length, combined refractive and reflective lens designs, and hyperspectral imaging. A cloud of highly reflective micron-sized particles acting coherently in a specific electromagnetic band, just like an aerosol in suspension in the atmosphere, would reflect the Sun's light much like a rainbow. The only difference from an atmospheric or industrial aerosol is the absence of the supporting fluid medium.
This new concept is based on recent advances in the physics of optical manipulation of small particles in the laboratory and on the engineering of distributed ensembles of spacecraft to shape an orbiting cloud of micron-sized objects. In the same way that optical tweezers have revolutionized micro- and nano-manipulation of objects, this breakthrough concept could enable new large-scale NASA mission applications and develop new technology in the areas of astrophysical imaging systems and remote sensing, because the cloud can operate as an adaptive optical imaging sensor. While establishing the feasibility of constructing a single aperture out of the cloud is the main topic of this work, multiple orbiting aerosol lenses could also combine their power to synthesize a much larger aperture in space, enabling challenging goals such as exoplanet detection. Furthermore, this effort could resolve key issues related to the material properties, remote manipulation, and autonomy characteristics of a cloud in orbit. Several kinds of science missions could be enabled by this approach: new astrophysical imaging systems, exoplanet searches, large apertures offering unprecedented resolution to discern continents and other important features of other planets, hyperspectral imaging, adaptive systems, spectroscopic imaging through a planetary limb, and stable optical systems at the Lagrange points. Future micro-miniaturization might extend our dust-aperture concept to other, more ambitious smart-dust concepts with additional capabilities.
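    To give a feel for why light pressure can dominate the dynamics of micron-sized grains, here is a back-of-envelope estimate of the radiation-pressure acceleration on a perfectly reflecting silica grain at 1 AU. All material and solar values below are textbook assumptions, not figures from the paper.

```python
import math

S = 1361.0          # solar constant at 1 AU, W/m^2
c = 2.998e8         # speed of light, m/s
r = 1e-6            # grain radius, m (micron-sized particle)
rho = 2200.0        # silica density, kg/m^3

area = math.pi * r ** 2
mass = rho * (4.0 / 3.0) * math.pi * r ** 3
force = 2.0 * S * area / c    # factor 2 for a perfectly reflecting surface
accel = force / mass

# Compare with the Sun's gravitational acceleration at 1 AU.
GM_sun = 1.327e20   # standard gravitational parameter, m^3/s^2
AU = 1.496e11       # m
g_sun = GM_sun / AU ** 2

beta = accel / g_sun   # ratio of radiation pressure to solar gravity
print(f"radiation accel = {accel:.2e} m/s^2, beta = {beta:.2f}")
```

    Under these assumptions beta comes out around one half, i.e. light pressure on a micron grain is comparable to solar gravity itself, which is why light can plausibly sculpt such a cloud.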

    An Optimal Region Of Interest Localization Using Edge Refinement Filter And Entropy-Based Measurement For Point Spread Function Estimation

    The use of edges to determine an optimal region of interest (ROI) location is becoming increasingly popular for image deblurring. Recent studies have shown that regions with strong edges tend to produce better deblurring results. In this study, a direct method for ROI localization based on an edge refinement filter and an entropy-based measurement is proposed. Using this method, the randomness of the grey-level distribution is quantitatively measured, from which the ROI is determined. The method has low computational cost since it contains no matrix operations. The proposed method has been tested using three sets of test images: Datasets I, II, and III. Empirical results suggest that the improved edge refinement filter is competitive with established edge detection schemes and achieves better performance in Pratt's figure of merit (PFoM) and the twofold consensus ground truth (TCGT), averaging 15.7 % and 28.7 %, respectively. The novelty of the proposed approach lies in the use of this improved filtering strategy for accurate estimation of the point spread function (PSF), and hence a more precise image restoration. As a result, the proposed solutions compare favourably against existing techniques, with the peak signal-to-noise ratio (PSNR), kernel similarity (KS) index, and error ratio (ER) averaging 24.8 dB, 0.6, and 1.4, respectively. Additional experiments involving real blurred images demonstrate the competitiveness of the proposed approach in performing restoration in the absence of the PSF.
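    The entropy-based measurement can be sketched as follows: the grey-level histogram entropy of each candidate window quantifies the randomness of its intensity distribution, and the highest-entropy window is selected as the ROI. This is a generic sketch of the idea only; the paper's edge refinement filter stage is not reproduced here, and the window size and stride are arbitrary choices.

```python
import numpy as np

def window_entropy(patch, bins=256):
    """Shannon entropy (bits) of the grey-level distribution in a patch."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def best_roi(image, size=64, stride=32):
    """Slide a window over the image; return the corner of the
    maximum-entropy window and its entropy."""
    h, w = image.shape
    best_e, best_xy = -1.0, (0, 0)
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            e = window_entropy(image[y:y + size, x:x + size])
            if e > best_e:
                best_e, best_xy = e, (y, x)
    return best_xy, best_e

# Toy example: flat background with one textured (noisy) quadrant.
rng = np.random.default_rng(0)
img = np.full((128, 128), 100, dtype=np.uint8)
img[64:128, 64:128] = rng.integers(0, 256, (64, 64), dtype=np.uint8)
(y, x), e = best_roi(img)
print(f"ROI at ({y}, {x}), entropy {e:.2f} bits")
```

    The constant background scores near zero entropy while the textured quadrant scores near 8 bits, so the window lands on the structured region, which is exactly what a deblurring pipeline wants for PSF estimation.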

    Parameters Estimation For Image Restoration

    Image degradation generally occurs due to transmission channel errors, camera mis-focus, atmospheric turbulence, relative object-camera motion, etc. Such degradations are unavoidable while a scene is captured through a camera. As degraded images have less scientific value, restoring them is essential in many practical applications. In this thesis, attempts have been made to recover images from their degraded observations. Various degradations, including out-of-focus blur, motion blur, and atmospheric turbulence blur, along with Gaussian noise, are considered. Image restoration schemes are broadly based on classical approaches, regularisation parameter estimation, and PSF estimation. In this thesis, five different contributions have been made based on various aspects of restoration. Four of them deal with spatially invariant degradation, and one approach attempts the removal of spatially variant degradation. Two different schemes are proposed to estimate the motion blur parameters. A two-dimensional Gabor filter has been used to calculate the direction of the blur, and a radial basis function neural network (RBFNN) has been utilised to find its length. Subsequently, a Wiener filter has been used to restore the images. The noise robustness of the proposed scheme is tested with different noise strengths. The blur parameter estimation problem is also modelled as a pattern classification problem and solved using a support vector machine (SVM): the length parameter of motion blur and the sigma (σ) parameter of Gaussian blur are identified through a multi-class SVM. Support vector regression (SVR) has been utilised to obtain a true mapping of the images from the observed noisy blurred image. The parameters in SVR play a key role in its performance and are optimised through the particle swarm optimisation (PSO) technique. The optimised SVR model is used to restore the noisy blurred images.
Blur in the presence of noise makes the restoration problem ill-conditioned. The regularisation parameter required for the restoration of a noisy blurred image is discussed, and for this purpose a global optimisation scheme, namely PSO, is utilised to minimise the cost function of the generalised cross validation (GCV) measure, which depends on the regularisation parameter. This avoids the problem of falling into a local minimum. The scheme adapts to degradations due to motion and out-of-focus blur, associated with noise of varying strengths. In another contribution, an attempt has been made to restore images degraded by rotational motion. Such a situation is treated as spatially variant blur and handled as a combination of a number of spatially invariant blurs. The proposed scheme divides the blurred image into a number of sub-images using elliptical path modelling. Each sub-image is deblurred separately using a Wiener filter and finally integrated to reconstruct the whole image. Each model is studied separately, and experiments are conducted to evaluate their performances. The visual quality as well as the peak signal-to-noise ratio (PSNR in dB) of the restored images are compared with recent competing schemes.
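    As a concrete illustration of the Wiener-filter restoration step that recurs throughout this thesis, here is a minimal frequency-domain sketch. The uniform motion PSF and the regularisation constant K are simplified stand-ins for the thesis's estimated blur parameters and GCV-tuned regularisation; the example is noiseless and assumes circular convolution.

```python
import numpy as np

def motion_psf(length, angle_deg, size):
    """Uniform linear motion-blur PSF of a given length and direction."""
    psf = np.zeros((size, size))
    cx = cy = size // 2
    a = np.deg2rad(angle_deg)
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, 4 * length):
        x = int(np.rint(cx + t * np.cos(a)))
        y = int(np.rint(cy + t * np.sin(a)))
        psf[y, x] = 1.0
    return psf / psf.sum()

def wiener_deblur(blurred, psf, k=1e-3):
    """Wiener filter in the frequency domain: F_hat = conj(H)/(|H|^2 + K) * G."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))

# Toy check: blur a random image with the same PSF (circular, noiseless)
# and verify the filter brings it back much closer to the original.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
psf = motion_psf(9, 0.0, 64)
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = wiener_deblur(blurred, psf)
err_blur = np.mean((blurred - img) ** 2)
err_rest = np.mean((restored - img) ** 2)
print(f"MSE blurred {err_blur:.4f} -> restored {err_rest:.4f}")
```

    The constant K plays the role of the noise-to-signal power ratio: with noisy inputs it must be raised (in the thesis, chosen via GCV), trading residual blur against noise amplification near the zeros of H.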

    Deep learning-based diagnostic system for malignant liver detection

    Cancer is the second most common cause of death in humans, and liver cancer is the fifth most common cause of mortality. Preventing deadly diseases requires timely, independent, accurate, and robust detection of ailments by a computer-aided diagnostic (CAD) system. Executing such an intelligent CAD requires some preliminary steps, including preprocessing, attribute analysis, and identification. In recent studies, conventional techniques have been used to develop computer-aided diagnosis algorithms. However, such traditional methods can immensely affect the structural properties of processed images, with inconsistent performance due to the variable shape and size of the region of interest. Moreover, the unavailability of sufficient datasets makes the performance of the proposed methods doubtful for commercial use. To address these limitations, I propose novel methodologies in this dissertation. First, I modified a generative adversarial network to perform deblurring and contrast adjustment on computed tomography (CT) scans. Second, I designed a deep neural network with a novel loss function for fully automatic, precise segmentation of the liver and lesions from CT scans. Third, I developed a multi-modal deep neural network that integrates pathological data with imaging data to perform computer-aided diagnosis for malignant liver detection. The dissertation starts with background information that discusses the study objectives and workflow. Chapter 2 then reviews a general schematic for developing a computer-aided algorithm, including image acquisition techniques, preprocessing steps, feature extraction approaches, and machine learning-based prediction methods. The first study, proposed in Chapter 3, discusses blurred images and their possible effects on classification; a novel multi-scale GAN with residual image learning is proposed to deblur images.
The second method, in Chapter 4, addresses the issue of low-contrast CT scan images. A multi-level GAN is utilized to enhance images with well-contrasted regions; the enhanced images in turn improve cancer diagnosis performance. Chapter 5 proposes a deep neural network for the segmentation of the liver and lesions from abdominal CT scan images: a modified U-Net with a novel loss function can precisely segment minute lesions. Similarly, Chapter 6 introduces a multi-modal approach for diagnosing liver cancer variants, integrating pathological data with CT scan images. In summary, this dissertation presents novel algorithms for preprocessing and disease detection, and the comparative analysis validates the effectiveness of the proposed methods in computer-aided diagnosis.
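    The abstract does not state the novel loss function, but a common reason segmentation losses are redesigned for minute lesions is class imbalance: cross-entropy is dominated by the vast background. A generic soft Dice loss illustrates the idea; this is a sketch of the standard technique, not the dissertation's actual loss.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P ∩ T| / (|P| + |T|).

    The overlap is normalised by object size, so a missed 16-pixel
    lesion costs as much as a missed large organ, which cross-entropy
    on a heavily imbalanced mask does not guarantee.
    """
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

# Toy ground truth: a tiny 4x4 lesion in a 64x64 slice (~0.4% of pixels).
target = np.zeros((64, 64))
target[30:34, 30:34] = 1.0

perfect = dice_loss(target, target)                # perfect overlap
missed = dice_loss(np.zeros_like(target), target)  # lesion entirely missed
print(f"perfect overlap: {perfect:.4f}, missed lesion: {missed:.4f}")
```

    In training, `pred` would be the network's soft probability map rather than a hard mask, keeping the loss differentiable.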