
    NBLDA: Negative Binomial Linear Discriminant Analysis for RNA-Seq Data

    RNA-sequencing (RNA-Seq) has become a powerful technology to characterize gene expression profiles because it is more accurate and comprehensive than microarrays. Although statistical methods developed for microarray data can be applied to RNA-Seq data, they are not ideal due to the discrete nature of RNA-Seq data. The Poisson distribution and the negative binomial distribution are commonly used to model count data. Recently, Witten (2011) proposed a Poisson linear discriminant analysis for RNA-Seq data. The Poisson assumption may not be as appropriate as the negative binomial distribution when biological replicates are available and in the presence of overdispersion (i.e., when the variance is larger than the mean). However, negative binomial variables are more complicated to model because they involve a dispersion parameter that needs to be estimated. In this paper, we propose a negative binomial linear discriminant analysis for RNA-Seq data. By Bayes' rule, we construct the classifier by fitting a negative binomial model, and propose plug-in rules to estimate the unknown parameters in the classifier. The relationship between the negative binomial classifier and the Poisson classifier is explored, with a numerical investigation of the impact of dispersion on the discriminant score. Simulation results show the superiority of our proposed method. We also analyze four real RNA-Seq data sets to demonstrate the advantage of our method in real-world applications.
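    As a rough illustration of the classification rule described in this abstract, the sketch below computes a per-class discriminant score as log prior plus negative binomial log-likelihood. It assumes the per-class means, per-gene dispersions, size factor, and class priors have already been obtained by plug-in estimators; it is a minimal reconstruction from the abstract, not the authors' implementation.

```python
# Minimal negative-binomial discriminant sketch (not the authors' code).
import numpy as np
from scipy.stats import nbinom

def nb_discriminant_scores(x, size_factor, class_means, dispersions, priors):
    """Return one discriminant score per class for a single sample.

    x           : (G,) observed counts for G genes
    size_factor : scalar sequencing-depth factor for this sample
    class_means : (K, G) estimated per-class mean expression lambda_{gk}
    dispersions : (G,) estimated per-gene dispersion phi_g (phi -> 0 recovers Poisson)
    priors      : (K,) estimated class prior probabilities
    """
    scores = []
    for k in range(class_means.shape[0]):
        mu = size_factor * class_means[k]            # NB mean for each gene
        n = 1.0 / np.maximum(dispersions, 1e-8)      # NB "size" parameter
        p = n / (n + mu)                             # scipy success probability
        loglik = nbinom.logpmf(x, n, p).sum()        # log-likelihood under class k
        scores.append(np.log(priors[k]) + loglik)    # Bayes' rule up to a constant
    return np.array(scores)

# The predicted class is the argmax of the returned scores.
```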

    Energy-Efficient Resource Allocation Optimization for Multimedia Heterogeneous Cloud Radio Access Networks

    The heterogeneous cloud radio access network (H-CRAN) is a promising paradigm that incorporates cloud computing into heterogeneous networks (HetNets), thereby taking full advantage of both cloud radio access networks (C-RANs) and HetNets. Characterizing cooperative beamforming under fronthaul capacity and queue stability constraints is critical for improving the energy efficiency (EE) of multimedia applications in H-CRANs. In this paper, an energy-efficient optimization objective function with individual fronthaul capacity and inter-tier interference constraints is presented for queue-aware multimedia H-CRANs. To solve this non-convex objective function, the problem is reformulated as a stochastic optimization problem using the general Lyapunov optimization framework. Under the Lyapunov framework, this optimization problem is equivalent to an optimal network-wide cooperative beamformer design with instantaneous power, average power, and inter-tier interference constraints, which can be regarded as a weighted sum EE maximization problem and solved by a generalized weighted minimum mean square error approach. The mathematical analysis and simulation results demonstrate that a tradeoff between EE and queuing delay can be achieved, and that this tradeoff strictly depends on the fronthaul constraint.
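    For intuition about the Lyapunov framework mentioned above, the toy single-queue sketch below applies the generic drift-plus-penalty recipe: each slot, a transmit power is chosen to balance queue backlog against an energy penalty. The rate model, arrival process, and control weight V are illustrative assumptions and are unrelated to the paper's beamforming problem.

```python
# Toy drift-plus-penalty loop for one queue (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
V = 50.0            # drift-plus-penalty weight: larger V emphasizes the energy objective
P_MAX = 2.0         # instantaneous power constraint (W)
T = 10_000          # number of time slots

def rate(p, h):
    return np.log2(1.0 + h * p)          # achievable rate for power p, channel gain h

Q = 0.0                                  # queue backlog
for t in range(T):
    arrival = rng.poisson(1.0)           # random traffic arrival this slot
    h = rng.exponential(1.0)             # fading channel gain
    # Per-slot subproblem: maximize Q * rate(p) - V * p over feasible powers.
    powers = np.linspace(0.0, P_MAX, 200)
    p_star = powers[np.argmax(Q * rate(powers, h) - V * powers)]
    Q = max(Q + arrival - rate(p_star, h), 0.0)
```

    Increasing V lowers the average transmit power but lets the average backlog (and hence delay) grow, which mirrors the EE-versus-delay tradeoff reported in the abstract.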

    Inter-tier Interference Suppression in Heterogeneous Cloud Radio Access Networks

    Incorporating cloud computing into heterogeneous networks, the heterogeneous cloud radio access network (H-CRAN) has been proposed as a promising paradigm to enhance both spectral and energy efficiencies. Developing interference suppression strategies is critical for suppressing the inter-tier interference between remote radio heads (RRHs) and a macro base station (MBS) in H-CRANs. In this paper, inter-tier interference suppression techniques are considered in the contexts of collaborative processing and cooperative radio resource allocation (CRRA). In particular, interference collaboration (IC) and beamforming (BF) are proposed to suppress the inter-tier interference, and their corresponding performance is evaluated. Closed-form expressions for the overall outage probabilities, system capacities, and average bit error rates under these two schemes are derived. Furthermore, IC- and BF-based CRRA optimization models are presented to maximize the sum rates of RRH-accessed users via power allocation, which is solved with convex optimization. Simulation results demonstrate that the derived expressions for these performance metrics of IC and BF are accurate, and that the relative performance of the IC and BF schemes depends on system parameters such as the number of antennas at the MBS, the number of RRHs, and the target signal-to-interference-plus-noise ratio threshold. Furthermore, the sum rates of the IC and BF schemes increase almost linearly with the transmit power threshold under the proposed CRRA optimization solution.
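    For intuition about the convex power-allocation step, the generic water-filling sketch below maximizes a sum rate under a total power budget; it omits the IC/BF-specific interference and fronthaul constraints of the paper's CRRA models, so it is only a simplified illustration.

```python
# Generic water-filling for sum-rate maximization under a total power budget.
import numpy as np

def water_filling(gains, p_total, tol=1e-9):
    """Allocate p_total across channels with gains g_i to maximize
    sum_i log2(1 + g_i * p_i); the optimum is p_i = max(mu - 1/g_i, 0)."""
    lo, hi = 0.0, p_total + 1.0 / gains.min()      # bracket the water level mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        used = np.maximum(mu - 1.0 / gains, 0.0).sum()
        if used > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - 1.0 / gains, 0.0)

gains = np.array([2.0, 1.0, 0.25])                 # example channel gains
powers = water_filling(gains, p_total=3.0)
print(powers, powers.sum())                        # allocation sums to the budget
```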

    Denoising Diffusion Autoencoders are Unified Self-supervised Learners

    Inspired by recent advances in diffusion models, which are reminiscent of denoising autoencoders, we investigate whether they can acquire discriminative representations for classification via generative pre-training. This paper shows that the networks in diffusion models, namely denoising diffusion autoencoders (DDAE), are unified self-supervised learners: by pre-training on unconditional image generation, DDAE has already learned strongly linearly separable representations within its intermediate layers without auxiliary encoders, making diffusion pre-training emerge as a general approach for generative-and-discriminative dual learning. To validate this, we conduct linear probe and fine-tuning evaluations. Our diffusion-based approach achieves 95.9% and 50.0% linear evaluation accuracies on CIFAR-10 and Tiny-ImageNet, respectively, and is comparable to contrastive learning and masked autoencoders for the first time. Transfer learning from ImageNet also confirms the suitability of DDAE for Vision Transformers, suggesting the potential to scale DDAEs as unified foundation models. Code is available at github.com/FutureXiang/ddae.
    Comment: ICCV 2023 Oral
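    A hedged sketch of a DDAE-style linear probe: intermediate activations of a pretrained denoising network are extracted at a fixed noise level and a linear classifier is trained on top of them. The network handle `unet`, the hooked layer name, and the simple noising scheme are placeholders, not the paper's exact configuration.

```python
# Linear-probe sketch on intermediate features of a pretrained denoiser
# (placeholder model and layer names; not the authors' evaluation code).
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def extract_features(unet, images, t=100, layer="mid_block"):
    feats = {}
    handle = dict(unet.named_modules())[layer].register_forward_hook(
        lambda module, inputs, output: feats.update(h=output)
    )
    noise = torch.randn_like(images)
    # Forward the (lightly) noised images at timestep t; keep only the hooked activation.
    unet(images + 0.1 * noise, torch.full((images.shape[0],), t))
    handle.remove()
    # Global average pooling turns the (B, C, H, W) feature map into one vector per image.
    return feats["h"].mean(dim=(2, 3)).cpu().numpy()

# Assuming train_x, train_y, test_x, test_y and a pretrained `unet` are available:
# probe = LogisticRegression(max_iter=1000).fit(extract_features(unet, train_x), train_y)
# print(probe.score(extract_features(unet, test_x), test_y))
```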

    Patching Weak Convolutional Neural Network Models through Modularization and Composition

    Despite great success in many applications, deep neural networks are not always robust in practice. For instance, a convolutional neural network (CNN) model for classification tasks often performs unsatisfactorily in classifying some particular classes of objects. In this work, we are concerned with patching the weak part of a CNN model instead of improving it through costly retraining of the entire model. Inspired by the fundamental concepts of modularization and composition in software engineering, we propose a compressed modularization approach, CNNSplitter, which decomposes a strong CNN model for N-class classification into N smaller CNN modules. Each module is a sub-model containing a part of the convolution kernels of the strong model. To patch a weak CNN model that performs unsatisfactorily on a target class (TC), we compose the weak CNN model with the corresponding module obtained from a strong CNN model. The ability of the weak CNN model to recognize the TC can thus be improved through patching. Moreover, the ability to recognize non-TCs is also improved, as samples misclassified as the TC can be correctly classified as non-TCs. Experimental results with two representative CNNs on three widely used datasets show that the average improvements on the TC in terms of precision and recall are 12.54% and 2.14%, respectively. Moreover, patching improves the accuracy on non-TCs by 1.18%. The results demonstrate that CNNSplitter can patch a weak CNN model through modularization and composition, thus providing a new solution for developing robust CNN models.
    Comment: Accepted at ASE'22
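    A simplified sketch of the patching-by-composition idea: a weak multi-class model is combined with a target-class (TC) module, and the module's confidence decides whether to predict the TC or the best non-TC class. The decision rule and threshold below are illustrative assumptions, not the exact CNNSplitter composition.

```python
# Illustrative composition of a weak classifier with a target-class module.
import torch

def patched_predict(weak_model, tc_module, x, tc_index, threshold=0.5):
    """Predict labels for a batch x, letting the TC module confirm or veto the TC."""
    with torch.no_grad():
        weak_probs = torch.softmax(weak_model(x), dim=1)     # (B, num_classes)
        tc_score = torch.sigmoid(tc_module(x)).squeeze(1)    # (B,) TC confidence
    preds = weak_probs.argmax(dim=1)
    # If the module is confident the sample is the TC, predict the TC.
    preds = torch.where(tc_score > threshold,
                        torch.full_like(preds, tc_index), preds)
    # If the module is not confident but the weak model predicted the TC,
    # fall back to the best non-TC class.
    non_tc = weak_probs.clone()
    non_tc[:, tc_index] = -1.0
    preds = torch.where((tc_score <= threshold) & (preds == tc_index),
                        non_tc.argmax(dim=1), preds)
    return preds
```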