86 research outputs found

    Inheriting Bayer's Legacy-Joint Remosaicing and Denoising for Quad Bayer Image Sensor

    Pixel-binning-based Quad sensors have emerged as a promising solution to overcome the hardware limitations of compact cameras in low-light imaging. However, binning results in lower spatial resolution and non-Bayer CFA artifacts. To address these challenges, we propose a dual-head joint remosaicing and denoising network (DJRD), which converts noisy Quad Bayer data into noise-free standard Bayer patterns without any resolution loss. DJRD comprises a newly designed Quad Bayer remosaicing (QB-Re) block and integrated denoising modules based on a Swin Transformer and a multi-scale wavelet transform. The QB-Re block constructs its convolution kernel according to the CFA pattern to achieve a periodic color distribution in the receptive field, which is used to extract exact spectral information and reduce color misalignment. The integrated Swin Transformer and multi-scale wavelet transform capture non-local dependencies together with frequency and location information to effectively reduce practical noise. By identifying challenging patches with Moiré and zipper detection metrics, we let the model concentrate on difficult patches during a post-training phase, which improves its performance in hard cases. Our proposed model outperforms competing models by approximately 3 dB without additional hardware or software complexity.
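    As background for the remosaicing task, the simplest Quad Bayer to Bayer conversion is pixel swapping inside each 4×4 tile. A minimal NumPy sketch of this classical non-learned baseline (not the paper's QB-Re block, which learns the conversion):

    ```python
    import numpy as np

    def quad_to_bayer_swap(raw):
        """Convert a Quad Bayer mosaic (2x2 blocks of same-color pixels) into a
        standard RGGB Bayer mosaic by pixel swapping within each 4x4 tile:
        swap columns 1<->2, then rows 1<->2. This restores the Bayer color
        pattern at the cost of spatial-displacement artifacts, which is why
        learned remosaicing outperforms it."""
        out = raw.copy()
        # swap columns 1 and 2 of every 4x4 tile
        out[:, 1::4], out[:, 2::4] = raw[:, 2::4], raw[:, 1::4]
        tmp = out.copy()
        # swap rows 1 and 2 of every 4x4 tile
        out[1::4, :], out[2::4, :] = tmp[2::4, :], tmp[1::4, :]
        return out
    ```

    Verifying the color pattern with symbolic labels (0 = R, 1 = G, 2 = B) shows each 4×4 Quad tile becomes two periods of the RGGB Bayer tile.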

    The transcriptional characteristics of NADC34-like PRRSV in porcine alveolar macrophages

    The widespread and endemic circulation of porcine reproductive and respiratory syndrome virus (PRRSV) causes persistent financial losses to the swine industry worldwide. In 2017, NADC34-like PRRSV-2 emerged in northeastern China and spread rapidly. The dynamics of the immune perturbations associated with this novel PRRSV lineage remain incompletely characterized. This study performed time-course transcriptome sequencing of porcine alveolar macrophages (PAMs) infected with the NADC34-like PRRSV strain YC-2020 and compared them with JXA1-infected PAMs. The results revealed dramatic changes in the host's differentially expressed genes (DEGs) at different timepoints after PRRSV infection, and the expression profile of the YC-2020 group is distinct from that of the JXA1 group. Functional enrichment analysis showed that many inflammatory cytokines were up-regulated following YC-2020 infection, but at a significantly lower magnitude than in the JXA1 group, in line with the trends for most interferon-stimulated genes (ISGs) and their regulators. Meanwhile, numerous components of major histocompatibility complex (MHC) class II and the phagosome pathway showed stronger transcriptional suppression after YC-2020 infection. These results imply that YC-2020 may induce milder inflammatory responses, weaker antiviral processes, and a more severe disturbance of antigen processing and presentation than HP-PRRSV. Additionally, LAPTM4A, GLMP, and LITAF, selected from weighted gene co-expression network analysis (WGCNA), could significantly inhibit PRRSV proliferation. This study provides fundamental data for understanding the biological characteristics of NADC34-like PRRSV and new insights into PRRSV evolution and prevention.

    Recurrent online kernel recursive least square algorithm for nonlinear modeling

    In this paper, we propose a recurrent kernel recursive least squares (RLS) algorithm for online learning. In classical kernel methods, the number of kernel functions grows as the number of training samples increases, which makes the computational cost very high and the algorithm applicable only to offline learning. To make kernel methods suitable for online learning, where the system is updated whenever a new training sample arrives, a compact dictionary (support vector set) should be chosen to represent the whole training data, which in turn reduces the number of kernel functions. For this purpose, a sparsification method based on the Hessian matrix of the loss function is applied to continuously examine the importance of each new training sample and to decide, according to this importance measure, whether the dictionary should be updated. We show that the Hessian matrix is equivalent to the correlation matrix of the training samples in the RLS algorithm, so the sparsification method can be easily incorporated into the RLS algorithm and further reduce the computational cost. Simulation results show that our algorithm is an effective learning method for online chaotic signal prediction and nonlinear system identification.
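    To illustrate the dictionary idea, here is a minimal online kernel RLS sketch with an approximate-linear-dependence (ALD) style novelty test, a standard sparsification baseline. It is an assumption-laden simplification: the paper's Hessian-based criterion differs in detail, and this sketch simply discards non-novel samples rather than folding them into the existing weights.

    ```python
    import numpy as np

    class OnlineKRLS:
        """Online kernel RLS with an ALD-style novelty test (illustrative
        baseline, not the paper's Hessian-based rule). Non-novel samples are
        discarded, a simplification of the full algorithm."""

        def __init__(self, gamma=2.0, nu=1e-3, lam=1e-6):
            self.gamma, self.nu, self.lam = gamma, nu, lam
            self.dict_ = []      # dictionary (support vector set)
            self.ys = []         # targets of dictionary members
            self.Kinv = None     # inverse of the regularized kernel matrix
            self.alpha = None    # dual weights

        def _k(self, x, y):
            return np.exp(-self.gamma * np.sum((x - y) ** 2))

        def predict(self, x):
            if not self.dict_:
                return 0.0
            k = np.array([self._k(x, d) for d in self.dict_])
            return float(k @ self.alpha)

        def update(self, x, y):
            if not self.dict_:
                self.dict_, self.ys = [x], [y]
                self.Kinv = np.array([[1.0 / (1.0 + self.lam)]])
                self.alpha = self.Kinv @ np.array(self.ys)
                return
            k = np.array([self._k(x, d) for d in self.dict_])
            a = self.Kinv @ k
            # distance of the new feature vector from the span of the dictionary
            delta = 1.0 + self.lam - k @ a
            if delta > self.nu:              # novel sample: grow the dictionary
                n = len(self.dict_)
                Kinv_new = np.empty((n + 1, n + 1))
                Kinv_new[:n, :n] = self.Kinv + np.outer(a, a) / delta
                Kinv_new[:n, n] = Kinv_new[n, :n] = -a / delta
                Kinv_new[n, n] = 1.0 / delta  # block-inverse update
                self.Kinv = Kinv_new
                self.dict_.append(x)
                self.ys.append(y)
                self.alpha = self.Kinv @ np.array(self.ys)
    ```

    Trained online on noisy-free samples of a smooth function, the dictionary stays much smaller than the number of samples seen while retaining prediction accuracy.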

    An information theoretic kernel algorithm for robust online learning

    Kernel methods are widely used in nonlinear modeling applications. In this paper, a robust information-theoretic sparse kernel algorithm is proposed for online learning. To reduce the computational cost and make the algorithm suitable for online applications, we investigate an information-theoretic sparsification rule based on the mutual information between the system input and output to determine when the dictionary (support vectors) should be updated. Under this rule, only novel and informative samples are selected to form a sparse and compact dictionary. Furthermore, to improve generalization, a robust learning scheme is proposed that prevents the algorithm from over-learning redundant samples, which assures the convergence of the learning algorithm and makes it reach its steady state much faster. Experiments are conducted on practical and simulated data, and the results validate the effectiveness of the proposed algorithm.
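    The "only novel and informative samples" idea can be sketched with a surprise criterion: admit a sample to the dictionary only when its negative log-likelihood under a simple Gaussian predictive model exceeds a threshold. This is an illustrative assumption, not the paper's exact mutual-information rule, and the nearest-neighbour predictor is a stand-in for the kernel model.

    ```python
    import numpy as np

    def nn_predict(x, kept):
        """Nearest-neighbour predictor over the current dictionary (0 if empty)."""
        if not kept:
            return 0.0
        return min(kept, key=lambda p: abs(p[0] - x))[1]

    def select_informative(xs, ys, sigma=0.1, thresh=2.0):
        """Surprise-based sparsification sketch: keep (x, y) only when the
        residual under the current model is improbable, i.e. the sample
        carries new information about the input-output mapping."""
        kept = []
        for x, y in zip(xs, ys):
            resid = y - nn_predict(x, kept)
            surprise = 0.5 * (resid / sigma) ** 2  # -log N(resid; 0, sigma^2) up to a constant
            if surprise > thresh:                  # novel and informative: keep it
                kept.append((x, y))
        return kept
    ```

    On a smooth signal this keeps a small fraction of the stream, roughly one sample per `sigma * sqrt(2 * thresh)` of output variation.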

    Online prediction of time series data with recurrent kernels

    We propose a robust recurrent kernel online learning (RRKOL) algorithm that allows the exploitation of the kernel trick in an online fashion. The novel RRKOL algorithm achieves guaranteed weight convergence with regularized risk management through recurrent hyper-parameters for superior generalization performance. To select useful data to be learned and remove redundant samples, a sparsification procedure is developed based on a stability analysis of the system. Two time-series prediction examples are presented.

    A training algorithm and stability analysis for recurrent neural networks

    Training recurrent neural networks (RNNs) introduces considerable computational complexity due to the need for gradient evaluations, and how to obtain fast convergence at low computational cost remains a challenging, open topic. Moreover, the transient response of the learning process is a critical issue for RNNs, especially in online applications. Conventional RNN training algorithms such as backpropagation through time (BPTT) and real-time recurrent learning (RTRL) have not adequately satisfied these requirements because they often suffer from slow convergence; if a large learning rate is chosen to speed up training, the process may become unstable in the sense of weight divergence. In this paper, a novel RNN training algorithm, named robust recurrent simultaneous perturbation stochastic approximation (RRSPSA), is developed with a specially designed recurrent hybrid adaptive parameter and adaptive learning rates. RRSPSA is a powerful twin-engine simultaneous perturbation stochastic approximation (SPSA) type of RNN training algorithm. It utilizes three specially designed adaptive parameters to maximize training speed for the recurrent training signal while retaining weight convergence properties and, like the original SPSA algorithm, requiring only two objective function measurements per iteration. RRSPSA is proved to guarantee weight convergence and system stability in the sense of a Lyapunov function. Computer simulations were carried out to demonstrate the applicability of the theoretical results.
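    The two-measurement property that RRSPSA inherits is easiest to see in Spall's base SPSA algorithm; the following sketch is that base algorithm with standard gain sequences, not the paper's RRSPSA with its recurrent hybrid adaptive parameters.

    ```python
    import numpy as np

    def spsa_minimize(f, w0, a=0.1, c=0.1, n_iter=1000, seed=0):
        """Plain SPSA sketch: estimate the full gradient from only two
        evaluations of the objective per iteration, regardless of the
        parameter dimension, via a simultaneous random perturbation."""
        rng = np.random.default_rng(seed)
        w = np.asarray(w0, dtype=float).copy()
        for k in range(1, n_iter + 1):
            ak = a / k ** 0.602                            # standard SPSA gain decay
            ck = c / k ** 0.101                            # perturbation size decay
            delta = rng.choice([-1.0, 1.0], size=w.shape)  # Rademacher perturbation
            # two-measurement gradient estimate; 1/delta_i == delta_i for +-1 entries
            ghat = (f(w + ck * delta) - f(w - ck * delta)) / (2.0 * ck) * delta
            w -= ak * ghat
        return w
    ```

    On a simple quadratic the iterate converges to the minimizer; the stability concern the abstract raises corresponds here to choosing the gain `a` small enough for the curvature of `f`.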