25 research outputs found

    Combined denoising filter for fringe pattern in electronic speckle shearing pattern interferometry

    We propose an effective image denoising filter that combines an improved spin filter (ISF) and wave atoms thresholding (WA) to remove noise from fringe patterns in electronic speckle shearing pattern interferometry. The WA is applied first to denoise the fringe pattern and save processing time; the ISF is then applied to the WA-denoised image to further improve the denoising performance. The performance of the proposed approach is evaluated on both numerically simulated and experimental fringes. Three figures of merit for denoised fringes are also calculated to quantify the performance of the combined filter. The denoised results produced by ISF, WA, and bilateral filtering are compared. The comparisons show that the proposed method effectively removes noise, with an improvement of 12 s in processing time and 0.3 in speckle index value relative to ISF alone.
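The two-stage pipeline described above (transform-domain thresholding first, iterative spin filtering second) can be sketched as follows. This is a minimal illustration, not the authors' implementation: wave atoms are not available in common libraries, so an FFT soft-threshold stands in for the WA stage, and the spin filter is reduced to smoothing along the locally least-varying direction; all function names are assumptions.

```python
import numpy as np

def transform_threshold(img, frac=0.9):
    # Transform-domain shrinkage stage. The paper uses wave atoms (WA);
    # here a plain FFT soft-threshold stands in, keeping the same idea of
    # suppressing low-energy, noise-dominated coefficients.
    F = np.fft.fft2(img)
    t = np.quantile(np.abs(F), frac)
    mag = np.maximum(np.abs(F), 1e-12)
    F = F * np.maximum(1.0 - t / mag, 0.0)
    return np.real(np.fft.ifft2(F))

def spin_filter_step(img):
    # One iteration of a (much simplified) spin filter: replace each pixel
    # by the 3-pixel mean along its locally least-varying direction, so
    # smoothing follows the fringes instead of crossing them.
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    best_mean, best_var = img.copy(), np.full((h, w), np.inf)
    for (dy1, dx1), (dy2, dx2) in [((-1, 0), (1, 0)), ((0, -1), (0, 1)),
                                   ((-1, -1), (1, 1)), ((-1, 1), (1, -1))]:
        n1 = p[1 + dy1:1 + dy1 + h, 1 + dx1:1 + dx1 + w]
        n2 = p[1 + dy2:1 + dy2 + h, 1 + dx2:1 + dx2 + w]
        stack = np.stack([n1, img, n2])
        var, mean = stack.var(axis=0), stack.mean(axis=0)
        take = var < best_var
        best_var = np.where(take, var, best_var)
        best_mean = np.where(take, mean, best_mean)
    return best_mean

def combined_denoise(img, spin_iters=5):
    # WA-first, ISF-second ordering from the abstract: the transform stage
    # does cheap bulk denoising, the iterative filter refines the result.
    out = transform_threshold(img)
    for _ in range(spin_iters):
        out = spin_filter_step(out)
    return out
```

The ordering matters for the reported speed-up: the one-shot transform stage removes most of the noise cheaply, so fewer iterations of the expensive spin filter are needed.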

    The development of the quaternion wavelet transform

    The purpose of this article is to review what has been written on what other authors have called quaternion wavelet transforms (QWTs): there is no consensus about what these should look like and what their properties should be. We briefly explain what real continuous and discrete wavelet transforms and multiresolution analysis are and why complex wavelet transforms were introduced; we then go on to detail published approaches to QWTs and to analyse them. We conclude with our own analysis of what it is that should define a QWT as being truly quaternionic, and why all but a few of the “QWTs” we have described do not fit our definition.
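To make the algebraic obstacle concrete: quaternion multiplication is non-commutative, which is a central difficulty in carrying the complex-wavelet construction over to quaternions. A minimal sketch of the Hamilton product (the `qmul` helper is ours, not from the article):

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z) components.
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
print(qmul(i, j))  # i*j = k
print(qmul(j, i))  # j*i = -k: order changes the sign
```

Because ij ≠ ji, any quaternion transform must fix whether its exponential or wavelet factors multiply from the left, the right, or both sides, and this choice is one of the points on which the published QWT definitions differ.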

    Optical Coherence Tomography guided Laser-Cochleostomy

    Despite the high precision of lasers, it remains challenging to control laser-bone ablation without injuring the underlying critical structures. Providing axial resolution on the micrometre scale, optical coherence tomography (OCT) is a promising candidate for imaging microstructures beneath the bone surface and monitoring the ablation process. In this work, a bridge connecting these two technologies is established, and a closed-loop control of laser-bone ablation under OCT monitoring has been successfully realised.
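The closed-loop idea can be sketched as a pulse-by-pulse control loop: image with OCT, compare the measured ablation depth against the plan, and fire only while a safety margin remains. This is a minimal sketch, not the system described in the work; `measure_depth` and `fire_pulse` are hypothetical callbacks standing in for the OCT readout and laser trigger.

```python
def closed_loop_ablation(measure_depth, fire_pulse, target_depth, margin=10e-6):
    # Fire single laser pulses only while the OCT-measured ablation depth
    # is still a safety margin short of the planned depth; the margin is
    # what protects the underlying critical structures from a breach.
    while measure_depth() < target_depth - margin:
        fire_pulse()
    return measure_depth()

# Toy simulation standing in for the real sensor and laser: each pulse
# removes 20 um of bone, and the "OCT" simply reads back the state.
state = {"depth": 0.0}
final = closed_loop_ablation(
    measure_depth=lambda: state["depth"],
    fire_pulse=lambda: state.__setitem__("depth", state["depth"] + 20e-6),
    target_depth=500e-6,
)
```

The essential design point is that the sensor, not a precomputed pulse count, terminates the ablation, so variations in per-pulse removal rate cannot accumulate into an overshoot larger than one pulse plus the margin.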

    Image Restoration Methods for Retinal Images: Denoising and Interpolation

    Retinal imaging provides an opportunity to detect pathological and natural age-related physiological changes in the interior of the eye. Diagnosis of retinal abnormality requires an image that is sharp, clear and free of noise and artifacts. However, to prevent tissue damage, retinal imaging instruments use low illumination levels; hence, the signal-to-noise ratio (SNR) is reduced, meaning the noise power is high relative to the signal. Furthermore, noise is inherent in some imaging techniques. For example, in Optical Coherence Tomography (OCT), speckle noise is produced by the coherent interference of unwanted backscattered light. Improving OCT image quality by reducing speckle noise increases the accuracy of analyses and hence the diagnostic sensitivity. The challenge, however, is to preserve image features while reducing speckle noise: there is a clear trade-off between feature preservation and speckle noise reduction in OCT. Averaging multiple OCT images taken from a single position yields a high-SNR image, but drastically increases the scanning time. In this thesis, we develop a multi-frame image denoising method for Spectral Domain OCT (SD-OCT) images extracted from very close locations in an SD-OCT volume. The proposed denoising method was tested using two dictionaries: a nonlinear (NL) dictionary and a K-SVD-based adaptive dictionary. The NL dictionary was constructed by adding phase, polynomial, exponential and boxcar functions to the conventional Discrete Cosine Transform (DCT) dictionary. The proposed method denoises nearby frames of an SD-OCT volume using sparse representation and combines them by selecting the median-intensity pixel across the denoised nearby frames. The results showed that both dictionaries reduced the speckle noise in the OCT images; however, the adaptive dictionary performed slightly better at the cost of higher computational complexity. The NL dictionary was also used for fundus and OCT image reconstruction.
    The performance of the NL dictionary was always better than that of analytical dictionaries such as DCT and Haar. The adaptive dictionary involves a lengthy dictionary-learning process and therefore cannot be used in practical situations. We dealt with this problem by utilizing a low-rank approximation. In this approach, SD-OCT frames were divided into groups of noisy matrices consisting of non-local similar patches. A noise-free patch matrix was obtained from each noisy patch matrix using a low-rank approximation. The noise-free patches from nearby frames were averaged to enhance the denoising. The denoised image obtained by the proposed approach was better than those obtained by several state-of-the-art methods. The approach was then extended to jointly denoise and interpolate SD-OCT images. The results show that the joint denoising and interpolation method outperforms several state-of-the-art denoising methods combined with bicubic interpolation.
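The low-rank step described above can be sketched with a truncated SVD: stacking vectorised similar patches as columns yields a matrix that is approximately low-rank when clean, so discarding small singular values suppresses the full-rank speckle component. A minimal sketch under that assumption; the function names and the choice of a plain truncated SVD (rather than the thesis' exact estimator) are ours.

```python
import numpy as np

def low_rank_denoise(patch_matrix, rank):
    # Each column is a vectorised patch similar to the others, so the clean
    # patch matrix is approximately low-rank while speckle noise is not;
    # truncating the SVD keeps the former and discards most of the latter.
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt  # broadcasting scales the columns of U by s

def average_nearby(frames):
    # Average the noise-free patch estimates from nearby SD-OCT frames
    # to enhance the denoising, as in the approach above.
    return np.mean(np.stack(frames), axis=0)
```

Compared with dictionary learning, the SVD of each small patch matrix is cheap, which is the source of the speed advantage over the K-SVD route noted above.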

    Machine learning methods for sign language recognition: a critical review and analysis.

    Sign language is an essential tool to bridge the communication gap between hearing and hearing-impaired people. However, the diversity of over 7000 present-day sign languages, with variability in motion, hand shape, and position of body parts, makes automatic sign language recognition (ASLR) a complex problem. In order to overcome this complexity, researchers are investigating better ways of developing ASLR systems, seeking intelligent solutions, and have demonstrated remarkable success. This paper analyses the research published on intelligent systems in sign language recognition over the past two decades. A total of 649 publications related to decision support and intelligent systems for sign language recognition (SLR) were extracted from the Scopus database and analysed using the bibliometric software VOSviewer to (1) obtain the temporal and regional distributions of the publications and (2) map the cooperation networks between affiliations and authors and identify productive institutions in this context. Moreover, reviews of techniques for vision-based sign language recognition are presented, and the various feature extraction and classification techniques used in SLR to achieve good results are discussed. The literature review shows the importance of incorporating intelligent solutions into sign language recognition systems and reveals that a perfect intelligent system for sign language recognition is still an open problem. Overall, it is expected that this study will facilitate knowledge accumulation and the creation of intelligent SLR systems, and will provide readers, researchers, and practitioners a roadmap to guide future work.

    Underwater image restoration: super-resolution and deblurring via sparse representation and denoising by means of marine snow removal

    Underwater imaging has been widely used as a tool in many fields; however, a major issue is the quality of the resulting images and videos. Due to light's interaction with water and its constituents, acquired underwater images and videos often suffer from a significant amount of scatter (blur, haze) and noise. In light of these issues, this thesis considers the problems of low-resolution, blurred and noisy underwater images and proposes several approaches to improve the quality of such images and video frames. Quantitative and qualitative experiments validate the success of the proposed algorithms.

    Neutro-Connectedness Theory, Algorithms and Applications

    Connectedness is an important topological property and has been widely studied in digital topology. However, three main challenges exist in applying connectedness to solve real-world problems: (1) definitions of connectedness based on classic and fuzzy logic cannot model the “hidden factors” that could influence our decision-making; (2) these definitions are too general to be applied to complex problems; and (3) many measurements of connectedness depend heavily on the shape (spatial distribution of vertices) of the graph and violate the intuitive idea of connectedness. This research focused on solving these challenges by redesigning the connectedness theory, developing fast algorithms for computing connectedness, and applying the newly proposed theory and algorithms to real problems. The newly proposed Neutro-Connectedness (NC) generalizes the conventional definitions of connectedness; it can model uncertainty and describe part-whole relationships. By applying a dynamic programming strategy, a fast algorithm was proposed to calculate NC for general datasets. Beyond the NC map itself, the output NC forest can reveal a dataset’s topological structure with respect to connectedness. In the first application, interactive image segmentation, two approaches were proposed to address the two most difficult challenges: dependence on user interaction and the amount of interaction required. The first approach, named NC-Cut, models global topological properties among image regions and reduces the dependence of segmentation performance on the appearance models generated by user interactions. It is less sensitive to the initial region of interest (ROI) than four state-of-the-art ROI-based methods. The second approach, named EISeg, provides the user with visual cues based on NC to guide the interaction process. It greatly reduces user interaction by guiding the user to where interaction can produce the best segmentation results.
    In the second application, NC was utilized to address the weak-boundary problem in breast ultrasound image segmentation. The approach models the indeterminacy resulting from weak boundaries better than fuzzy connectedness, and achieved more accurate and robust results on our dataset of 131 breast tumor cases.
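For context, the fuzzy-connectedness baseline that NC generalizes can be computed with exactly the kind of dynamic programming the abstract mentions: connectedness from a seed is the best path strength, where a path's strength is the minimum affinity along it. The sketch below implements that classic max-min computation, not NC itself (whose definitions are in the work); names are ours.

```python
import heapq
import numpy as np

def fuzzy_connectedness(affinity, seed):
    # Strongest-path (max-min) connectedness from a seed pixel: a path's
    # strength is the minimum per-pixel affinity along it, and a pixel's
    # connectedness is the best strength over all paths from the seed.
    # Computed with a Dijkstra-style dynamic program on the 4-neighbour grid.
    h, w = affinity.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        neg_s, (y, x) = heapq.heappop(heap)
        if -neg_s < conn[y, x]:
            continue  # stale heap entry, already improved
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                ns = min(-neg_s, affinity[ny, nx])
                if ns > conn[ny, nx]:
                    conn[ny, nx] = ns
                    heapq.heappush(heap, (-ns, (ny, nx)))
    return conn
```

A weak tumor boundary shows up as a band of low affinity: every path into the region beyond it is capped at that low value, which is exactly the kind of indeterminacy NC is designed to model more finely.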