3 research outputs found

    Quadratic Autoencoder (Q-AE) for Low-dose CT Denoising

    Inspired by the complexity and diversity of biological neurons, our group proposed quadratic neurons, which replace the inner product in conventional artificial neurons with a quadratic operation on the input data, thereby enhancing the capability of an individual neuron. Along this direction, we are motivated to evaluate the power of quadratic neurons in popular network architectures, simulating human-like learning in the form of quadratic-neuron-based deep learning. Our prior theoretical studies have shown important merits of quadratic neurons and networks in representation, efficiency, and interpretability. In this paper, we use quadratic neurons to construct an encoder-decoder structure, referred to as the quadratic autoencoder (Q-AE), and apply it to low-dose CT denoising. Experimental results on the Mayo low-dose CT dataset demonstrate the utility of the quadratic autoencoder in terms of image denoising and model efficiency. To the best of our knowledge, this is the first time that a deep learning approach has been implemented with a new type of neuron and shown significant potential in the medical imaging field.
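    To make the "quadratic operation on input data" concrete, below is a minimal sketch of one published quadratic-neuron form (two linear responses multiplied together, plus a linear map of the squared input). The class and parameter names (QuadraticLayer, linear_r/linear_g/linear_b) are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a quadratic neuron layer in PyTorch, assuming the
# formulation y = (W_r x + b_r) * (W_g x + b_g) + W_b (x * x) + c.
# Names are illustrative, not the authors' reference implementation.
import torch
import torch.nn as nn

class QuadraticLayer(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear_r = nn.Linear(in_features, out_features)
        self.linear_g = nn.Linear(in_features, out_features)
        self.linear_b = nn.Linear(in_features, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The inner product is replaced by a quadratic form of the input:
        # an interaction (product) term plus a power (squared-input) term.
        return self.linear_r(x) * self.linear_g(x) + self.linear_b(x * x)

# Usage: a drop-in replacement for nn.Linear, e.g. inside an encoder-decoder.
layer = QuadraticLayer(64, 32)
y = layer(torch.randn(8, 64))   # -> shape (8, 32)
```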

    Soft-Autoencoder and Its Wavelet Shrinkage Interpretation

    Recently, deep learning has become the main focus of machine learning research and has greatly impacted many fields. However, deep learning is criticized for its lack of interpretability. As a successful unsupervised model in deep learning, the autoencoder embraces a wide spectrum of applications, yet it suffers from model opaqueness as well. In this paper, we propose a new type of convolutional autoencoder, termed the Soft-Autoencoder (Soft-AE), in which the activation functions of the encoding layers are implemented with adaptable soft-thresholding units while the decoding layers are realized with linear units. Consequently, Soft-AE can be naturally interpreted as a learned cascaded wavelet shrinkage system. Our denoising experiments demonstrate that Soft-AE is not only interpretable but also offers competitive performance relative to its counterparts. Furthermore, we propose a generalized linear unit (GeLU) and its truncated variant (tGeLU) to extend the autoencoder to more tasks, from denoising to deblurring.
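    An adaptable soft-thresholding unit of the kind described above can be sketched as follows. Soft thresholding, sign(x) * max(|x| - t, 0), is the shrinkage operator from wavelet denoising; making the threshold t a learnable per-channel parameter gives the "adaptable" unit. The names and the per-channel parameterization here are assumptions for illustration, not the paper's reference implementation.

```python
# Minimal sketch of a learnable soft-thresholding activation in PyTorch,
# paired with a conv layer as a Soft-AE style encoder block. Illustrative
# only; not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftThreshold(nn.Module):
    def __init__(self, channels: int, init: float = 0.1):
        super().__init__()
        # One learnable threshold per channel (kept positive via softplus).
        self.raw_t = nn.Parameter(torch.full((channels,), init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = F.softplus(self.raw_t).view(1, -1, 1, 1)
        # Soft shrinkage sign(x) * max(|x| - t, 0), written with two ReLUs.
        return F.relu(x - t) - F.relu(-x - t)

# Encoder block: convolution followed by learned shrinkage; a Soft-AE
# decoder would use plain (linear) conv layers with no nonlinearity.
encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), SoftThreshold(16))
z = encoder(torch.randn(4, 1, 32, 32))   # -> shape (4, 16, 32, 32)
```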

    On Interpretability of Artificial Neural Networks: A Survey

    Deep learning, as represented by deep neural networks (DNNs), has achieved great success in many important areas that deal with text, images, videos, graphs, and so on. However, the black-box nature of DNNs has become one of the primary obstacles to their wide acceptance in mission-critical applications such as medical diagnosis and therapy. Given the huge potential of deep learning, interpreting neural networks has recently attracted much research attention. In this paper, based on our comprehensive taxonomy, we systematically review recent studies on understanding the mechanism of neural networks, describe applications of interpretability, especially in medicine, and discuss future directions of interpretability research, such as in relation to fuzzy logic and brain science.