Efficient Continuous Manifold Learning for Time Series Modeling
Modeling non-Euclidean data is drawing attention along with the unprecedented successes of deep neural networks in diverse fields. In particular, symmetric positive definite (SPD) matrices are actively studied in computer vision, signal processing, and medical image analysis, thanks to their ability to capture appropriate statistical representations. However, because of their strong constraints, SPD matrices remain challenging to optimize over and computationally expensive, especially within a deep learning framework. In this paper, we propose to exploit a diffeomorphism between the Riemannian manifold of SPD matrices and a Cholesky space, which makes it feasible not only to solve optimization problems efficiently but also to greatly reduce computation costs. Further, to model the dynamics in time series data, we devise a continuous manifold learning method by systematically integrating a manifold ordinary differential equation and a gated recurrent neural network. Notably, thanks to the convenient parameterization of matrices in Cholesky space, our proposed network is straightforward to train with Riemannian geometric metrics equipped. We demonstrate through experiments that the proposed model can be trained efficiently and reliably, and that it outperforms existing manifold methods and state-of-the-art methods in two classification tasks: action recognition and sleep staging classification.
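As a minimal illustration of the Cholesky-space idea, the sketch below maps SPD matrices to their Cholesky factors and compares them under the log-Cholesky metric (strictly lower-triangular parts compared in Euclidean norm, diagonals compared after a logarithm). The choice of this particular metric is an assumption for illustration; the paper's exact parameterization may differ.

    import numpy as np

    def cholesky_map(spd):
        # Diffeomorphism from the SPD manifold to Cholesky space: each SPD matrix
        # has a unique lower-triangular factor with positive diagonal.
        return np.linalg.cholesky(spd)

    def log_cholesky_distance(P, Q):
        # Assumed log-Cholesky geodesic distance between SPD matrices P and Q:
        # Euclidean difference of the strictly lower-triangular parts plus the
        # difference of the log-diagonals.
        L, K = cholesky_map(P), cholesky_map(Q)
        off = np.tril(L, -1) - np.tril(K, -1)
        diag = np.log(np.diag(L)) - np.log(np.diag(K))
        return np.sqrt(np.sum(off ** 2) + np.sum(diag ** 2))

    # Example: distance between two random SPD matrices.
    A = np.random.randn(4, 4); P = A @ A.T + 4 * np.eye(4)
    B = np.random.randn(4, 4); Q = B @ B.T + 4 * np.eye(4)
    print(log_cholesky_distance(P, Q))

Because the Cholesky factor is an unconstrained lower-triangular matrix apart from its positive diagonal, gradient-based training in this space avoids repeated projections back onto the SPD cone.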
Dilated Convolutional Neural Network for Skin Cancer Classification Based on Image Data
Skin cancer is a disorder of cell growth in the skin. It has a large impact, causing visible physical disabilities and high treatment costs, and it can cause death if not treated properly. Dermatologists generally diagnose skin cancer in the human body by means of a biopsy. In this study, the Dilated Convolutional Neural Network method was used to classify skin cancer image data. A Dilated Convolutional Neural Network extends the Convolutional Neural Network by modifying the dilation factor of its convolutions. The method is divided into two stages: feature extraction and a fully connected layer. The data used in this study come from the HAM10000 dataset, a dermoscopic image dataset consisting of 10015 images covering 7 types of skin cancer. Several experimental scenarios were run with the dilation factor d set to 2, 4, 6, and 8 to find the optimal setting, as sketched below. The other parameters were epoch = 100, minibatch size = 8, learning rate = 0.1, and dropout = 0.5. The best results were obtained with d = 2, giving an accuracy of 85.67% and a sensitivity of 65.48%.
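To make the dilation factor concrete, here is a minimal PyTorch-style sketch of the two-stage design (dilated-convolution feature extraction followed by a fully connected classifier) with a configurable factor d. The layer widths and depths are hypothetical and not those reported in the study; only the dilation factor, dropout, and number of classes follow the abstract.

    import torch
    import torch.nn as nn

    class DilatedSkinClassifier(nn.Module):
        # Stage 1: dilated-convolution feature extraction.
        # Stage 2: fully connected classification head.
        def __init__(self, d=2, num_classes=7):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, dilation=d, padding=d),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, dilation=d, padding=d),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Dropout(0.5),            # dropout = 0.5 as in the study
                nn.Linear(64, num_classes), # 7 skin cancer classes
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    # model = DilatedSkinClassifier(d=2)  # d in {2, 4, 6, 8} was explored

With kernel size 3, setting padding equal to the dilation factor keeps the spatial resolution unchanged while the receptive field grows with d.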
Neural Architecture Search for Compressed Sensing Magnetic Resonance Image Reconstruction
Recent works have demonstrated that deep learning (DL) based compressed sensing (CS) implementations can accelerate Magnetic Resonance (MR) Imaging by reconstructing MR images from sub-sampled k-space data. However, the network architectures adopted in previous methods were all designed by hand. Neural Architecture Search (NAS) algorithms can automatically build neural network architectures that have outperformed human-designed ones in several vision tasks. Inspired by this, we propose a novel and efficient network for the MR image reconstruction problem obtained via NAS instead of manual design. In particular, a specific cell structure, integrated into the model-driven MR reconstruction pipeline, was automatically searched from a flexible pre-defined operation search space in a differentiable manner. Experimental results show that the searched network produces better reconstruction results than previous state-of-the-art methods in terms of PSNR and SSIM with 4-6 times fewer computation resources. Extensive experiments were conducted to analyze how hyper-parameters affect reconstruction performance and the searched structures. The generalizability of the searched architecture was also evaluated on MR datasets of different organs. Our proposed method reaches a better trade-off between computation cost and reconstruction performance for the MR reconstruction problem, generalizes well, and offers insights for designing neural networks for other medical image applications. The evaluation code will be available at https://github.com/yjump/NAS-for-CSMRI. (To appear in Computerized Medical Imaging and Graphics.)
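Differentiable search over a pre-defined operation space is commonly realized with a DARTS-style mixed operation, where each candidate operation is weighted by a softmax over learnable architecture parameters; the sketch below assumes that style. The candidate set and channel handling are hypothetical examples, not the paper's actual search space.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MixedOp(nn.Module):
        # Softmax-weighted mixture of candidate operations; the architecture
        # parameters `alpha` are learned jointly with the network weights.
        def __init__(self, channels):
            super().__init__()
            self.ops = nn.ModuleList([
                nn.Conv2d(channels, channels, 3, padding=1),              # plain 3x3 conv
                nn.Conv2d(channels, channels, 3, padding=2, dilation=2),  # dilated 3x3 conv
                nn.Identity(),                                            # skip connection
            ])
            self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

        def forward(self, x):
            weights = F.softmax(self.alpha, dim=0)
            return sum(w * op(x) for w, op in zip(weights, self.ops))

After the search, the operation with the largest architecture weight on each edge is kept, yielding the discrete cell that is plugged into the model-driven reconstruction pipeline.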
Applied Harmonic Analysis and Data Processing
Massive data sets have their own architecture. Each data source has an inherent structure, which we should attempt to detect in order to utilize it for applications such as denoising, clustering, anomaly detection, knowledge extraction, or classification. Harmonic analysis revolves around creating new structures for the decomposition, rearrangement, and reconstruction of operators and functions, in other words, inventing and exploring new architectures for information and inference. Two previous, very successful workshops on applied harmonic analysis and sparse approximation took place in 2012 and 2015. This workshop was an evolution and continuation of those workshops and intended to bring together world-leading experts in applied harmonic analysis, data analysis, optimization, statistics, and machine learning to report on recent developments, and to foster new developments and collaborations.
Stable Invariant Models via Koopman Spectra
Weight-tied models have attracted attention in the modern development of neural networks. The deep equilibrium model (DEQ) represents infinitely deep neural networks with weight-tying, and recent studies have shown the potential of this type of approach. DEQs must iteratively solve root-finding problems during training and are built on the assumption that the underlying dynamics determined by the models converge to a fixed point. In this paper, we present the stable invariant model (SIM), a new class of deep models that in principle approximates DEQs under stability and extends the dynamics to more general ones that converge to an invariant set (not restricted to a fixed point). The key ingredient in deriving SIMs is a representation of the dynamics with the spectra of the Koopman and Perron-Frobenius operators. This perspective approximately reveals the stable dynamics of DEQs and then leads to two variants of SIMs. We also propose an implementation of SIMs that can be trained in the same way as feedforward models. We illustrate the empirical performance of SIMs with experiments and demonstrate that SIMs achieve comparable or superior performance against DEQs in several learning tasks.
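For context, the root-finding view of a DEQ that SIMs build on can be sketched as follows: a weight-tied layer f is iterated until z = f(z, x). This is only an illustration of the DEQ forward pass assumed in the abstract (solved here by plain fixed-point iteration for simplicity), not the SIM construction itself.

    import torch
    import torch.nn as nn

    class DEQLayer(nn.Module):
        # Weight-tied layer: the forward pass seeks z* with z* = f(z*, x),
        # i.e., the root-finding problem the abstract refers to.
        def __init__(self, dim):
            super().__init__()
            self.lin_z = nn.Linear(dim, dim)
            self.lin_x = nn.Linear(dim, dim)

        def f(self, z, x):
            return torch.tanh(self.lin_z(z) + self.lin_x(x))

        def forward(self, x, max_iter=50, tol=1e-4):
            z = torch.zeros_like(x)
            for _ in range(max_iter):
                z_next = self.f(z, x)
                if torch.norm(z_next - z) < tol:   # converged to a fixed point
                    return z_next
                z = z_next
            return z

SIMs relax the fixed-point assumption in this forward pass by representing the dynamics through Koopman and Perron-Frobenius spectra, so that convergence to a more general invariant set suffices.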
- …