18,447 research outputs found

    Medical image denoising using convolutional denoising autoencoders

    Image denoising is an important pre-processing step in medical image analysis. Different algorithms have been proposed over the past three decades, with varying denoising performance. More recently, deep learning based models have shown great promise, outperforming all conventional methods. These methods are, however, limited by their requirement for large training sample sizes and high computational costs. In this paper we show that, even with small sample sizes, denoising autoencoders constructed from convolutional layers can denoise medical images efficiently. Heterogeneous images can be combined to boost the sample size and increase denoising performance. Even the simplest of networks can reconstruct images with corruption levels so high that noise and signal are not differentiable to the human eye.
    Comment: 6 pages, to appear at the Fourth Workshop on Data Mining in Biomedical Informatics and Healthcare at ICDM, 201
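    The training setup this abstract describes starts from (noisy, clean) image pairs; a minimal numpy sketch, assuming additive Gaussian corruption and intensities normalized to [0, 1] (the function name and noise level are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_pairs(clean, noise_std=0.5):
    """Corrupt clean images with additive Gaussian noise to form
    (noisy, clean) pairs; a denoising autoencoder is trained to map
    the noisy input back to the clean target."""
    noisy = np.clip(clean + rng.normal(0.0, noise_std, clean.shape), 0.0, 1.0)
    return noisy, clean

# toy batch of 4 images, 32x32, intensities in [0, 1]
clean = rng.random((4, 32, 32))
noisy, target = make_training_pairs(clean)

# at this corruption level the per-pixel error is already substantial
mse_before = float(np.mean((noisy - target) ** 2))
```

    The convolutional encoder/decoder itself is omitted here; any small stack of convolutional layers trained with a mean-squared-error loss on such pairs fits the description in the abstract.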

    A multi-view approach to cDNA micro-array analysis

    Microarray has emerged as a powerful technology that enables biologists to study thousands of genes simultaneously and thereby obtain a better understanding of gene interaction and regulation mechanisms. This paper is concerned with improving the processes involved in the analysis of microarray image data. The main focus is to clarify an image's feature space in an unsupervised manner. In this paper, the Image Transformation Engine (ITE), combined with different filters, is investigated. The proposed methods are applied to a set of real-world cDNA images. The MatCNN toolbox is used during the segmentation process. Quantitative comparisons between different filters are carried out. It is shown that the CLD filter is the best one to be applied with the ITE.
    This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the National Science Foundation of China under Innovative Grant 70621001, the Chinese Academy of Sciences under an Innovative Group Overseas Partnership Grant, the BHP Billiton Cooperation of Australia Grant, the International Science and Technology Cooperation Project of China under Grant 2009DFA32050, and the Alexander von Humboldt Foundation of Germany.
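    Quantitative comparisons between filters of the kind mentioned above are typically done with a full-reference metric such as PSNR; a numpy sketch with a 3x3 mean filter standing in for one of the candidate filters (the ITE and the CLD filter themselves are not reproduced here):

```python
import numpy as np

def psnr(reference, image, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference - image) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 64)
reference = np.outer(x, x)                      # smooth synthetic image
noisy = reference + rng.normal(0.0, 0.1, reference.shape)

# 3x3 mean filter via shifted views of an edge-padded image
padded = np.pad(noisy, 1, mode="edge")
filtered = sum(padded[i:i + 64, j:j + 64]
               for i in range(3) for j in range(3)) / 9.0

psnr_noisy = psnr(reference, noisy)
psnr_filtered = psnr(reference, filtered)
```

    On a smooth image the mean filter raises PSNR by suppressing noise variance; ranking several filters by this score is the kind of comparison the paper reports.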

    Dual-wavelength thulium fluoride fiber laser based on SMF-TMSIF-SMF interferometer as potential source for microwave generation in 100-GHz region

    A dual-wavelength thulium-doped fluoride fiber (TDFF) laser is presented. The generation of the TDFF laser is achieved by incorporating a single mode-multimode-single mode (SMS) interferometer in the laser cavity. The simple SMS interferometer is fabricated by combining a two-mode step index fiber with single-mode fiber. With this proposed design, as many as eight stable laser lines are experimentally demonstrated. Moreover, when a tunable bandpass filter is inserted in the laser cavity, a dual-wavelength TDFF laser can be achieved in the 1.5-µm region. By heterodyning the dual-wavelength laser, simulation results suggest that the generated microwave signals can be tuned from 105.678 to 106.524 GHz with a constant step of ~0.14 GHz. The presented photonics-based microwave generation method could provide an alternative solution for 5G signal sources in the 100-GHz region.
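    The mapping from a dual-wavelength output to a microwave beat note follows from optical heterodyning: two lines at wavelengths lambda1 and lambda2 beat at f = c * |lambda1 - lambda2| / (lambda1 * lambda2). A quick check with illustrative values (not the paper's exact wavelengths) shows that a separation near 0.8 nm in the 1.5-µm region lands in the reported 105-107 GHz range:

```python
# Beat frequency from heterodyning two laser lines:
#   f_beat = c * |lambda1 - lambda2| / (lambda1 * lambda2)
C = 299_792_458.0  # speed of light in vacuum, m/s

def beat_frequency_ghz(lam1_nm, lam2_nm):
    """Beat frequency in GHz for two optical lines given in nanometres."""
    lam1, lam2 = lam1_nm * 1e-9, lam2_nm * 1e-9
    return C * abs(lam1 - lam2) / (lam1 * lam2) / 1e9

# illustrative dual-wavelength pair, 0.8 nm apart in the 1.5-um region
f = beat_frequency_ghz(1500.0, 1500.8)  # ~106.5 GHz
```

    The ~0.14 GHz tuning step quoted above then corresponds to a wavelength step of roughly 1 pm at these wavelengths.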

    Classification and Recovery of Radio Signals from Cosmic Ray Induced Air Showers with Deep Learning

    Radio emission from air showers enables measurements of cosmic particle kinematics and identity. The radio signals are detected with broadband megahertz antennas amid continuous background noise. We present two deep learning concepts and their performance when applied to simulated data. The first network classifies time traces as signal or background. We achieve a true positive rate of about 90% for signal-to-noise ratios larger than three, with a false positive rate below 0.2%. The other network is used to clean the time trace of background and to recover the radio time trace originating from an air shower. Here we achieve a resolution in the energy contained in the trace of about 20%, without bias, for 80% of the traces with a signal. The obtained frequency spectrum is cleaned of radio frequency interference and shows the expected shape.
    Comment: 20 pages, 13 figures, resubmitted to JINS
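    The classification metrics quoted above (a true positive rate at a fixed false positive rate) can be computed from network scores as follows; a sketch on synthetic scores, assuming only that signal traces score higher on average than background (the networks themselves are not reproduced):

```python
import numpy as np

def tpr_fpr(scores, labels, threshold):
    """True/false positive rates for a score threshold.
    labels: 1 = trace contains a signal, 0 = pure background."""
    pred = scores >= threshold
    tpr = np.sum(pred & (labels == 1)) / np.sum(labels == 1)
    fpr = np.sum(pred & (labels == 0)) / np.sum(labels == 0)
    return float(tpr), float(fpr)

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, 1000)
# signal traces score higher on average than background traces
scores = rng.normal(0.0, 1.0, 1000) + 3.0 * labels

tpr, fpr = tpr_fpr(scores, labels, threshold=1.5)
```

    Sweeping the threshold traces out the ROC curve; the paper's working point corresponds to picking the threshold where the false positive rate drops below 0.2%.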

    Generating 3D faces using Convolutional Mesh Autoencoders

    Learned 3D representations of human faces are useful for computer vision problems such as 3D face tracking and reconstruction from images, as well as for graphics applications such as character generation and animation. Traditional models learn a latent representation of a face using linear subspaces or higher-order tensor generalizations. Due to this linearity, they cannot capture extreme deformations and non-linear expressions. To address this, we introduce a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. We introduce mesh sampling operations that enable a hierarchical mesh representation, capturing non-linear variations in shape and expression at multiple scales within the model. In a variational setting, our model samples diverse realistic 3D faces from a multivariate Gaussian distribution. Our training data consists of 20,466 meshes of extreme expressions captured over 12 different subjects. Despite the limited training data, our trained model outperforms state-of-the-art face models with 50% lower reconstruction error, while using 75% fewer parameters. We also show that replacing the expression space of an existing state-of-the-art face model with our autoencoder achieves a lower reconstruction error. Our data, model and code are available at http://github.com/anuragranj/com
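    The variational sampling step mentioned above (drawing diverse faces from a multivariate Gaussian) reduces to sampling a latent code and decoding it to vertex coordinates; a tiny numpy sketch with a random linear map standing in for the trained spectral-convolution decoder (all sizes, names, and weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

latent_dim, n_vertices = 8, 20  # tiny stand-ins for the real model sizes

# a fixed random linear "decoder" standing in for the trained mesh decoder
decoder_weights = rng.normal(0.0, 0.1, (latent_dim, n_vertices * 3))
mean_face = rng.normal(0.0, 1.0, n_vertices * 3)

def sample_face():
    """Draw a latent code from a standard multivariate Gaussian and
    decode it into a mesh given as an (n_vertices x 3) coordinate array."""
    z = rng.standard_normal(latent_dim)
    return (mean_face + z @ decoder_weights).reshape(n_vertices, 3)

faces = [sample_face() for _ in range(5)]
```

    In the actual model the decoder is non-linear (spectral convolutions with hierarchical mesh up-sampling), which is what lets it capture the extreme expressions that linear subspaces miss.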

    Exploiting temporal information for 3D pose estimation

    In this work, we address the problem of 3D human pose estimation from a sequence of 2D human poses. Although the recent success of deep networks has led many state-of-the-art methods for 3D pose estimation to train deep networks end-to-end to predict directly from images, the top-performing approaches have shown the effectiveness of dividing the task into two steps: using a state-of-the-art 2D pose estimator to estimate the 2D pose from images, and then mapping it into 3D space. They also showed that a low-dimensional representation, such as the 2D locations of a set of joints, can be discriminative enough to estimate 3D pose with high accuracy. However, estimating the 3D pose for individual frames leads to temporally incoherent estimates, since independent errors in each frame cause jitter. Therefore, in this work we utilize the temporal information across a sequence of 2D joint locations to estimate a sequence of 3D poses. We designed a sequence-to-sequence network composed of layer-normalized LSTM units, with shortcut connections from the input to the output on the decoder side, and imposed a temporal smoothness constraint during training. We found that exploiting temporal consistency improves the best reported result on the Human3.6M dataset by approximately 12.2% and helps our network recover temporally consistent 3D poses over a sequence of images, even when the 2D pose detector fails.
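    The temporal smoothness constraint imposed during training can be written as a penalty on frame-to-frame differences of the predicted 3D poses; a minimal numpy sketch (this squared-difference form is a common choice, not necessarily the paper's exact formulation):

```python
import numpy as np

def temporal_smoothness_loss(poses):
    """Mean squared difference between consecutive frames of a predicted
    3D pose sequence (frames x joints x 3).  Penalizing this term during
    training discourages frame-to-frame jitter."""
    diffs = poses[1:] - poses[:-1]
    return float(np.mean(diffs ** 2))

rng = np.random.default_rng(4)
# a smooth synthetic pose sequence (random walk with small steps)
smooth = np.cumsum(rng.normal(0.0, 0.01, (50, 17, 3)), axis=0)
# the same sequence with independent per-frame errors, i.e. jitter
jittery = smooth + rng.normal(0.0, 0.1, smooth.shape)

loss_smooth = temporal_smoothness_loss(smooth)
loss_jittery = temporal_smoothness_loss(jittery)
```

    Adding this term to the pose reconstruction loss is what trades a little per-frame accuracy for the temporal coherence described above.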