Modeling Nonlinear Vector Time Series Data
In this chapter, we review nonlinear models for vector time series data and develop new nonparametric estimation and inference methods for them. Vector time series data arise widely in practice. In financial markets, for example, multiple time series are usually correlated. When analyzing several interdependent time series, one should in general treat them as a single vector time series and fit multivariate models, which provide a useful tool for modeling interdependencies among the component series and for simultaneously analyzing feedback and Granger causality effects. Since nonlinear features are widely observed in time series, we consider nonlinear methodology for modeling vector time series data, which allows flexibility in the model structure and avoids the curse of dimensionality.
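As a hedged illustration of the kind of nonparametric estimation the abstract refers to (not the chapter's actual estimator), a Nadaraya-Watson kernel regression can fit a nonlinear vector autoregression of the form x_t = m(x_{t-1}) + e_t; all function names and the bandwidth below are illustrative choices.

```python
import numpy as np

def nw_var_predict(X, x_query, h=0.5):
    """Nadaraya-Watson estimate of m(x) in the nonlinear VAR
    x_t = m(x_{t-1}) + e_t, where X is a (T, d) series.
    Returns the predicted next d-dimensional observation."""
    lags, targets = X[:-1], X[1:]            # pairs (x_{t-1}, x_t)
    d2 = np.sum((lags - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2 * h ** 2))           # Gaussian kernel weights
    w /= w.sum()
    return w @ targets                       # locally weighted average

# Simulate a simple bivariate nonlinear series and predict one step ahead.
rng = np.random.default_rng(0)
T, d = 500, 2
X = np.zeros((T, d))
for t in range(1, T):
    X[t] = np.tanh(X[t - 1][::-1]) + 0.1 * rng.standard_normal(d)

x_next = nw_var_predict(X, X[-1])
print(x_next.shape)  # (2,)
```

Because the regression function m is estimated locally rather than parameterized, the model stays flexible; restricting the conditioning to one lag of the vector is one common way to keep the effective dimension low.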
GPT-NAS: Neural Architecture Search with the Generative Pre-Trained Model
Neural Architecture Search (NAS) has emerged as an effective method for
designing optimal neural network architectures automatically. Although
neural architectures have achieved human-level performance on several tasks,
few of them have been obtained through NAS methods. The main reason is the huge
search space of neural architectures, which makes NAS algorithms inefficient. This
work presents a novel architecture search algorithm, called GPT-NAS, that
optimizes neural architectures with a Generative Pre-Trained (GPT) model. In
GPT-NAS, we assume that a generative model pre-trained on a large-scale corpus
can learn the fundamental rules of building neural architectures. GPT-NAS
therefore leverages the GPT model to propose reasonable architecture
components given a basic one. Such an approach largely reduces the
search space by introducing prior knowledge into the search process.
Extensive experimental results show that our GPT-NAS method significantly
outperforms seven manually designed neural architectures and thirteen
architectures provided by competing NAS methods. In addition, our ablation
study indicates that the proposed algorithm improves the performance of fine-tuned
neural architectures by up to about 12% compared to those without GPT,
further demonstrating its effectiveness in searching for neural architectures.
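The search-space-reduction idea can be sketched as follows. The component vocabulary, the transition prior, and the `propose` function are all hypothetical stand-ins, since the abstract does not specify GPT-NAS's interface; a real implementation would query the pre-trained generative model instead of the frequency prior used here.

```python
import random

# Hypothetical vocabulary of architecture components (layer types).
VOCAB = ["conv3x3", "conv5x5", "sep_conv3x3", "max_pool", "skip"]

# Stand-in for a pre-trained generative model: a prior over which
# component tends to follow the last component of a partial architecture.
PRIOR = {
    "conv3x3": {"max_pool": 0.5, "conv3x3": 0.3, "skip": 0.2},
    "max_pool": {"conv5x5": 0.6, "sep_conv3x3": 0.4},
}

def propose(prefix, rng):
    """Sample the next component conditioned on the current prefix,
    mimicking how a generative model narrows the search space."""
    dist = PRIOR.get(prefix[-1]) if prefix else None
    if dist is None:                      # no prior: fall back to uniform
        return rng.choice(VOCAB)
    comps, probs = zip(*dist.items())
    return rng.choices(comps, weights=probs)[0]

def sample_architecture(depth=6, seed=0):
    """Grow an architecture one component at a time under the prior."""
    rng = random.Random(seed)
    arch = []
    for _ in range(depth):
        arch.append(propose(arch, rng))
    return arch

print(sample_architecture())
```

The point of the sketch is only the control flow: instead of sampling components uniformly from the full space, each step is conditioned on the architecture built so far, so unlikely continuations are rarely explored.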
WaveDM: Wavelet-Based Diffusion Models for Image Restoration
Recent diffusion-based methods for many image restoration tasks outperform
traditional models, but they suffer from long inference times. To
tackle it, this paper proposes a Wavelet-Based Diffusion Model (WaveDM) with an
Efficient Conditional Sampling (ECS) strategy. WaveDM learns the distribution
of clean images in the wavelet domain conditioned on the wavelet spectrum of
degraded images after wavelet transform, which makes each sampling step
faster than modeling in the spatial domain. In addition, ECS follows the
same procedure as the deterministic implicit sampling in the initial sampling
period and then stops to predict clean images directly, which reduces the
number of total sampling steps to around 5. Evaluations on four benchmark
datasets, including image raindrop removal, defocus deblurring, demoiréing,
and denoising, demonstrate that WaveDM achieves state-of-the-art performance
with efficiency comparable to traditional one-pass methods, running over
100 times faster than existing image restoration methods that use vanilla
diffusion models.
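The ECS idea described above, running a few deterministic implicit (DDIM-style, eta = 0) steps and then stopping to output the model's clean-image prediction directly, can be sketched like this; the toy noise predictor and the linear schedule are illustrative assumptions, not WaveDM's trained network or schedule.

```python
import numpy as np

def ecs_sample(eps_model, x_T, alphas_bar, n_steps=4):
    """Efficient Conditional Sampling sketch: a few deterministic
    implicit steps, then predict the clean image x0 directly instead
    of iterating all the way down to t = 0."""
    T = len(alphas_bar) - 1
    steps = np.linspace(T, 1, n_steps, dtype=int)  # short descending schedule
    x = x_T
    for t, t_prev in zip(steps[:-1], steps[1:]):
        eps = eps_model(x, t)
        a_t, a_prev = alphas_bar[t], alphas_bar[t_prev]
        x0 = (x - np.sqrt(1 - a_t) * eps) / np.sqrt(a_t)      # predicted clean image
        x = np.sqrt(a_prev) * x0 + np.sqrt(1 - a_prev) * eps  # deterministic update
    # Final step: stop and return the direct x0 prediction.
    t = steps[-1]
    eps = eps_model(x, t)
    return (x - np.sqrt(1 - alphas_bar[t]) * eps) / np.sqrt(alphas_bar[t])

# Toy run with a dummy noise predictor and a linear noise schedule.
rng = np.random.default_rng(0)
alphas_bar = np.linspace(1.0, 0.01, 1001)
dummy_eps = lambda x, t: np.zeros_like(x)
sample = ecs_sample(dummy_eps, rng.standard_normal((8, 8)), alphas_bar)
print(sample.shape)  # (8, 8)
```

Truncating the trajectory this way is what brings the total number of sampling steps down to around five, consistent with the abstract's claim; in WaveDM the same loop would operate on wavelet coefficients rather than pixels.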