
    Understanding and optimising the packing density of perylene bisimide layers on CVD-grown graphene

    The non-covalent functionalisation of graphene is an attractive strategy for altering the surface chemistry of graphene without damaging its superior electrical and mechanical properties. Using the facile method of aqueous-phase functionalisation on large-scale CVD-grown graphene, we investigated the formation of different packing densities in self-assembled monolayers (SAMs) of perylene bisimide derivatives and related this to the amount of substrate contamination. We were able to directly observe wet-chemically deposited SAMs by scanning tunnelling microscopy (STM) on transferred CVD graphene and revealed that the densely packed perylene ad-layers adsorb with the conjugated {\pi}-system of the core perpendicular to the graphene substrate. This elucidation of the non-covalent functionalisation of graphene has major implications for controlling its surface chemistry and opens new pathways for adaptable functionalisation under ambient conditions and at large scale.
    Comment: 27 pages (including SI), 10 figures

    Speech signal enhancement by EMD and the Teager–Kaiser operator

    The authors would like to thank Professor Mohamed Bahoura from Université du Québec à Rimouski for fruitful discussions on time-adaptive thresholding. In this paper, a speech denoising strategy based on time-adaptive thresholding of the intrinsic mode functions (IMFs) of the signal, extracted by empirical mode decomposition (EMD), is introduced. The denoised signal is reconstructed by superposition of its adaptively thresholded IMFs. The adaptive thresholds are estimated using the Teager–Kaiser energy operator (TKEO) of the signal IMFs. More precisely, TKEO identifies the type of frame by expanding the differences between speech and non-speech frames in each IMF. Being based on EMD, the proposed speech denoising scheme is a fully data-driven approach. The method is tested on speech signals with different noise levels, and the results are compared to EMD-shrinkage and the wavelet transform (WT) coupled with TKEO. Speech enhancement performance is evaluated using output signal-to-noise ratio (SNR) and the perceptual evaluation of speech quality (PESQ) measure. On the analyzed speech signals, the proposed enhancement scheme performs better than the WT-TKEO and EMD-shrinkage approaches in terms of output SNR and PESQ. Noise is reduced more effectively by time-adaptive thresholding than by universal thresholding. The study is limited to signals corrupted by additive white Gaussian noise.
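    A minimal sketch of the idea, assuming the PyEMD package for the decomposition; the frame length, TKEO-based weighting and soft-thresholding rule below are illustrative choices, not the paper's exact formulation:

```python
import numpy as np
from PyEMD import EMD  # assumed dependency: pip install EMD-signal

def tkeo(x):
    """Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def denoise_emd_tkeo(signal, frame_len=256, k=0.7):
    """Denoise by soft-thresholding each IMF frame by frame.

    Each frame's threshold is scaled down when its mean TKEO energy is
    high, so speech-like frames are thresholded less aggressively.
    Illustrative scheme, not the paper's exact thresholding rule.
    """
    signal = np.asarray(signal, dtype=float)
    imfs = EMD()(signal)                  # intrinsic mode functions
    out = np.zeros_like(signal)
    for imf in imfs:
        energy = tkeo(imf)
        clean = np.copy(imf)
        for start in range(0, len(imf), frame_len):
            sl = slice(start, start + frame_len)
            frame = imf[sl]
            # frame-adaptive threshold: smaller when TKEO energy is high
            sigma = np.median(np.abs(frame)) / 0.6745
            weight = 1.0 / (1.0 + np.mean(energy[sl]))
            thr = k * sigma * np.sqrt(2 * np.log(len(imf))) * weight
            clean[sl] = np.sign(frame) * np.maximum(np.abs(frame) - thr, 0.0)
        out += clean                      # reconstruct from thresholded IMFs
    return out
```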

    A wavelet based mammographic system

    Mammography's role in the detection of breast cancer at early stages is well known. Although more accurate than other existing techniques, mammography still only finds 80 to 90 percent of breast cancers. It has been suggested that mammograms, as normally viewed, display only about 3% of the total information detected. The general inability to detect small tumors and other salient features within mammograms motivates our investigation of a system we call the Mammogram Display System (MDS). The core technology used for MDS image enhancement is the wavelet transform
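    A minimal sketch of wavelet-domain enhancement in the spirit of MDS, using PyWavelets; the wavelet choice, decomposition depth and per-level gains are illustrative assumptions, not the actual parameters of the system:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_enhance(image, wavelet="db4", levels=3, gains=(2.0, 1.5, 1.2)):
    """Boost detail (high-frequency) subbands to emphasise small structures.

    gains[i] multiplies the detail coefficients at level i, finest level
    first. Illustrative values; assumes an 8-bit grayscale image.
    """
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    # coeffs[0] is the approximation; coeffs[1:] are (cH, cV, cD) tuples
    # ordered from coarsest to finest, so reverse the gain list to match.
    enhanced = [coeffs[0]]
    for details, g in zip(coeffs[1:], reversed(gains)):
        enhanced.append(tuple(g * d for d in details))
    out = pywt.waverec2(enhanced, wavelet)
    return np.clip(out, 0, 255)
```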

    Multi-Frame Quality Enhancement for Compressed Video

    The past few years have witnessed great success in applying deep learning to enhance the quality of compressed image/video. Existing approaches mainly focus on enhancing the quality of a single frame, ignoring the similarity between consecutive frames. In this paper, we observe that heavy quality fluctuation exists across compressed video frames, and thus low-quality frames can be enhanced using neighboring high-quality frames, an idea we term Multi-Frame Quality Enhancement (MFQE). Accordingly, this paper proposes an MFQE approach for compressed video, as a first attempt in this direction. In our approach, we first develop a Support Vector Machine (SVM) based detector to locate Peak Quality Frames (PQFs) in compressed video. Then, a novel Multi-Frame Convolutional Neural Network (MF-CNN) is designed to enhance the quality of compressed video, in which a non-PQF and its two nearest PQFs serve as the input. The MF-CNN compensates motion between the non-PQF and the PQFs through the Motion Compensation subnet (MC-subnet). Subsequently, the Quality Enhancement subnet (QE-subnet) reduces the compression artifacts of the non-PQF with the help of its nearest PQFs. Finally, experiments validate the effectiveness and generality of our MFQE approach in advancing the state of the art in quality enhancement of compressed video. The code of our MFQE approach is available at https://github.com/ryangBUAA/MFQE.git
    Comment: to appear in CVPR 201
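    A small sketch of the frame-pairing step described above; the boolean PQF labels are assumed to come from the SVM-based detector, and the helper name nearest_pqfs is hypothetical:

```python
def nearest_pqfs(is_pqf):
    """For each non-PQF frame, find its nearest preceding and following PQFs.

    is_pqf: list of booleans, one per frame (e.g. the detector's output).
    Returns {non_pqf_index: (prev_pqf, next_pqf)}; at sequence boundaries
    the single available PQF is used on both sides.
    """
    pqf_indices = [i for i, p in enumerate(is_pqf) if p]
    pairs = {}
    for i, p in enumerate(is_pqf):
        if p or not pqf_indices:
            continue
        prev_candidates = [q for q in pqf_indices if q < i]
        next_candidates = [q for q in pqf_indices if q > i]
        prev_pqf = prev_candidates[-1] if prev_candidates else next_candidates[0]
        next_pqf = next_candidates[0] if next_candidates else prev_candidates[-1]
        pairs[i] = (prev_pqf, next_pqf)
    return pairs

# Example: frames 0, 3 and 7 detected as PQFs
# nearest_pqfs([True, False, False, True, False, False, False, True])
# -> {1: (0, 3), 2: (0, 3), 4: (3, 7), 5: (3, 7), 6: (3, 7)}
```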

    Mutual Guidance and Residual Integration for Image Enhancement

    Previous studies show the necessity of both global and local adjustment for image enhancement. However, existing convolutional neural networks (CNNs) and transformer-based models struggle to balance computational efficiency against effective use of global and local information. In particular, existing methods typically adopt a global-to-local fusion mode, ignoring the importance of bidirectional interactions. To address these issues, we propose a novel mutual guidance network (MGN) that performs effective bidirectional global-local information exchange while keeping a compact architecture. In our design, we adopt a two-branch framework in which one branch focuses on modeling global relations while the other is committed to processing local information. We then develop an efficient attention-based mutual guidance mechanism throughout our framework for bidirectional global-local interaction. As a result, both the global and local branches can benefit from aggregating each other's information. Besides, to further refine the results produced by our MGN, we propose a novel residual integration scheme following the divide-and-conquer philosophy. Extensive experiments demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on several public image enhancement benchmarks.
    Comment: 17 pages, 15 figures
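    A toy PyTorch sketch of the two-branch, bidirectional-guidance idea; the channel count, the use of global average pooling as the "global" branch, and the sigmoid gating are simplifying assumptions and do not reproduce the MGN architecture:

```python
import torch
import torch.nn as nn

class TwoBranchMutualGuidance(nn.Module):
    """Toy block: a local conv branch and a pooled global branch exchange
    information through simple sigmoid gates. Illustrative only."""

    def __init__(self, channels=32):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1)
        self.global_fc = nn.Linear(channels, channels)
        self.gate_l2g = nn.Linear(channels, channels)      # local -> global guidance
        self.gate_g2l = nn.Conv2d(channels, channels, 1)   # global -> local guidance

    def forward(self, x):
        b, c, h, w = x.shape
        local_feat = self.local(x)                         # local branch
        global_feat = self.global_fc(x.mean(dim=(2, 3)))   # global branch (pooled)

        # bidirectional exchange: each branch is modulated by the other
        g2l = torch.sigmoid(self.gate_g2l(global_feat.view(b, c, 1, 1)))
        l2g = torch.sigmoid(self.gate_l2g(local_feat.mean(dim=(2, 3))))
        local_out = local_feat * g2l
        global_out = global_feat * l2g
        return local_out + global_out.view(b, c, 1, 1)     # fuse for the next stage
```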

    General Adaptive Neighborhood Image Processing for Biomedical Applications

    In biomedical imaging, image processing techniques using spatially invariant transformations, with fixed operational windows, give efficient and compact computing structures, with the conventional separation between data and operations. Nevertheless, these operators have several strong drawbacks, such as removing significant details, changing some meaningful parts of large objects, and creating artificial patterns. Such approaches are generally not sufficient to help biomedical professionals perform accurate diagnosis and therapy using image processing techniques. Alternative approaches addressing context-dependent processing have been proposed with the introduction of spatially-adaptive operators (Bouannaya and Schonfeld, 2008; Ciuc et al., 2000; Gordon and Rangayyan, 1984; Maragos and Vachier, 2009; Roerdink, 2009; Salembier, 1992), where the adaptive concept results from the spatial adjustment of the sliding operational window. A spatially-adaptive image processing approach implies that operators are no longer spatially invariant but vary over the whole image with adaptive windows, taking the image context into account locally through its geometrical, morphological or radiometric aspects. Nevertheless, most adaptive approaches require a priori or extrinsic information about the image for efficient processing and analysis.
    An original approach, called General Adaptive Neighborhood Image Processing (GANIP), has been introduced and applied in the past few years by Debayle & Pinoli (2006a;b); Pinoli and Debayle (2007). This approach allows the building of multiscale and spatially adaptive image processing transforms using context-dependent intrinsic operational windows. With the help of a specified analyzing criterion (such as luminance or contrast) and of the General Linear Image Processing (GLIP) framework (Oppenheim, 1967; Pinoli, 1997a), such transforms perform a more significant spatial and radiometric analysis. Indeed, they intrinsically take into account the local radiometric, morphological or geometrical characteristics of an image, and are consistent with the physical (transmitted or reflected light or electromagnetic radiation) and/or physiological (human visual perception) settings underlying the image formation processes. The proposed GAN-based transforms are very useful and outperform several classical and modern techniques (Gonzalez and Woods, 2008) - such as linear spatial transforms, frequency noise filtering, anisotropic diffusion, thresholding, and region-based transforms - used for image filtering and segmentation (Debayle and Pinoli, 2006b; 2009a; Pinoli and Debayle, 2007).
    This book chapter aims first to expose the fundamentals of the GANIP approach (Section 2) by introducing the GLIP frameworks, the General Adaptive Neighborhood (GAN) sets and two kinds of GAN-based image transforms: the GAN morphological filters and the GAN Choquet filters. Thereafter, in Section 3, several GANIP processes are illustrated in the fields of image restoration, image enhancement and image segmentation on practical biomedical application examples. Finally, Section 4 gives some conclusions and prospects of the proposed GANIP approach.
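    As an illustration of the central notion, a minimal sketch of one General Adaptive Neighborhood for the luminance criterion in the classical (non-GLIP) setting: the neighborhood of a seed pixel is the connected component, containing that pixel, of the set of pixels whose luminance lies within a tolerance m of the seed's value. SciPy is used for connected-component labelling; the GLIP variants and the GAN morphological/Choquet filters built on top of this are not shown:

```python
import numpy as np
from scipy import ndimage

def gan_luminance(image, seed, m):
    """General Adaptive Neighborhood of `seed` for the luminance criterion.

    seed is a (row, col) tuple and m >= 0 is the homogeneity tolerance.
    Returns a boolean mask: the connected component containing `seed`
    of the pixels whose luminance differs from the seed's by at most m.
    Classical linear setting only (no GLIP framework).
    """
    tol_set = np.abs(image.astype(float) - float(image[seed])) <= m
    labels, _ = ndimage.label(tol_set)    # 4-connected components by default
    return labels == labels[seed]

# A GAN morphological filter would then use gan_luminance(image, (r, c), m)
# as the spatially varying structuring element at each pixel (r, c).
```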

    Single Frame Image Super Resolution Using Learned Directionlets

    In this paper, a new directionally adaptive, learning-based, single-image super-resolution method is presented, based on the multi-directional wavelet transform known as directionlets. The method uses directionlets to effectively capture directional features and to extract edge information along different directions from a set of available high-resolution images. This information is used as the training set for super-resolving a low-resolution input image: the directionlet coefficients at finer scales of its high-resolution counterpart are learned locally from this training set, and the inverse directionlet transform recovers the super-resolved high-resolution image. Simulation results show that the proposed approach outperforms standard interpolation techniques such as cubic-spline interpolation, as well as standard wavelet-based learning, both visually and in terms of mean squared error (MSE). The method also gives good results with aliased images.
    Comment: 14 pages, 6 figures
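    A small sketch of the evaluation protocol mentioned above: upscale with a cubic-spline baseline and compare against the ground truth with MSE, using SciPy; the learned-directionlet reconstruction itself is not reproduced here:

```python
import numpy as np
from scipy import ndimage

def mse(a, b):
    """Mean squared error between two images of the same shape."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def cubic_spline_upscale(low_res, factor=2):
    """Cubic-spline (order-3) interpolation baseline for super resolution."""
    return ndimage.zoom(low_res.astype(float), factor, order=3)

# Usage: given a high-resolution ground truth `hr` and its downsampled
# version `lr` (by the same factor), the baseline score is
# mse(cubic_spline_upscale(lr), hr); a super-resolution method is
# expected to achieve a lower MSE than this baseline.
```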