
    Replication of Marek's Disease Virus Is Dependent on Synthesis of De Novo Fatty Acid and Prostaglandin E2

    Marek’s disease virus (MDV) causes deadly lymphoma and induces an imbalance in lipid metabolism in infected chickens. Here, we discovered that MDV activates the fatty acid synthesis (FAS) pathway in primary chicken embryo fibroblasts (CEFs). In addition, MDV-infected cells contained high levels of fatty acids and showed increased numbers of lipid droplets (LDs). Chemical inhibitors of the FAS pathway (TOFA and C75) reduced MDV titers by approximately 30-fold. Addition of the downstream metabolites malonyl-coenzyme A and palmitic acid completely reversed the inhibitory effects of the FAS inhibitors. Furthermore, we could demonstrate that MDV infection activates the COX-2/prostaglandin E2 (PGE2) pathway, as evidenced by increased levels of arachidonic acid, COX-2 expression, and PGE2 synthesis. Inhibition of the COX-2/PGE2 pathway by chemical inhibitors or by knockdown of COX-2 using short hairpin RNA reduced MDV titers, suggesting that COX-2 promotes virus replication. Exogenous PGE2 completely restored MDV replication when the COX-2/PGE2 pathway was inhibited. Unexpectedly, exogenous PGE2 also partially rescued MDV replication from the inhibitory effects of the FAS inhibitors, suggesting a link between these two pathways in MDV infection. Taken together, our data demonstrate that the FAS and COX-2/PGE2 pathways play an important role in the replication of this deadly pathogen.

    Transient Attacks against the VMG-KLJN Secure Key Exchanger

    The Vadai, Mingesz, and Gingl (VMG) Kirchhoff-Law-Johnson-Noise (KLJN) key exchanger, introduced in Scientific Reports 5 (2015) 13653, is vulnerable to transient attacks. Recently, an effective defense protocol (Appl. Phys. Lett. 122 (2023) 143503) was introduced to counteract mean-square voltage-based (or mean-square current-based) transient attacks targeting the ideal KLJN framework. In the present study, the same mitigation methodology is employed to fortify the security of the VMG-KLJN key exchanger. It is worth noting that the protective measures must be implemented separately for the HL and LH scenarios. The approach is corroborated through computer simulations, which demonstrate that the defensive technique reduces information leakage to an insignificant level.
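    As background for the mean-square-voltage quantity these attacks exploit (and not as a rendering of the paper's defense protocol), the short Python sketch below computes the steady-state mean-square wire voltage of an ideal, single-temperature KLJN loop. The resistor values, temperature, and bandwidth are assumptions chosen for illustration; in the ideal scheme the HL and LH arrangements give identical steady-state values, which is why an attacker must instead target the transient before that steady state is reached.

```python
# Minimal sketch (not the paper's defense protocol): steady-state mean-square
# wire voltage in an ideal, single-temperature KLJN loop.
# All numerical values below are illustrative assumptions.

k_B = 1.380649e-23   # Boltzmann constant [J/K]
T   = 300.0          # effective noise temperature [K] (assumed)
B   = 500.0          # noise bandwidth [Hz] (assumed)
R_L = 1e3            # "low" resistor [ohm] (assumed)
R_H = 10e3           # "high" resistor [ohm] (assumed)

def mean_square_wire_voltage(r_alice: float, r_bob: float) -> float:
    """<U_w^2> = 4 k_B T B * (R_A R_B)/(R_A + R_B) for the ideal KLJN loop."""
    r_parallel = r_alice * r_bob / (r_alice + r_bob)
    return 4.0 * k_B * T * B * r_parallel

# HL (Alice high, Bob low) and LH (Alice low, Bob high) yield identical
# steady-state values, so an eavesdropper measuring only the steady-state
# mean-square voltage learns nothing; transient attacks exploit the brief
# period before this steady state is established.
print(mean_square_wire_voltage(R_H, R_L))
print(mean_square_wire_voltage(R_L, R_H))
```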

    Ensembles of Deep Neural Networks for Action Recognition in Still Images

    Despite notable recent improvements in feature extraction and classification, human action recognition remains challenging, especially in still images, which, unlike videos, contain no motion. Methods proposed for recognizing human actions in videos therefore cannot be applied directly to still images. A major challenge in action recognition in still images is the lack of sufficiently large datasets, which makes training deep Convolutional Neural Networks (CNNs) prone to overfitting. In this paper, by taking advantage of pre-trained CNNs, we employ transfer learning to address the lack of massive labeled action recognition datasets. Furthermore, since the last layer of the CNN contains class-specific information, we apply an attention mechanism to the output feature maps of the CNN to extract more discriminative and powerful features for classifying human actions. Moreover, we use eight different pre-trained CNNs in our framework and investigate their performance on the Stanford 40 dataset. Finally, we propose using ensemble learning to enhance the overall accuracy of action classification by combining the predictions of multiple models. The best setting of our method achieves 93.17% accuracy on the Stanford 40 dataset. Comment: 5 pages, 2 figures, 3 tables, Accepted by ICCKE 201
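    The general recipe described above (transfer learning from pre-trained CNNs, an attention map over the final feature maps, and ensembling of softmax predictions) can be sketched as follows. This is a hedged illustration, not the authors' code: the choice of backbones (ResNet-50 and DenseNet-121), the 1x1-convolution attention head, and the attention-weighted pooling are assumptions; only the 40-class output matches the Stanford 40 setting.

```python
# Hedged sketch: pre-trained CNN backbones + spatial attention over the last
# feature maps + softmax-averaging ensemble. Illustrative assumptions only.
import torch
import torch.nn as nn
from torchvision import models  # assumes torchvision >= 0.13 weights API

NUM_CLASSES = 40  # Stanford 40 action classes

class AttentiveClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, feat_channels: int):
        super().__init__()
        self.backbone = backbone            # pre-trained convolutional part only
        self.attn = nn.Conv2d(feat_channels, 1, kernel_size=1)  # spatial attention
        self.fc = nn.Linear(feat_channels, NUM_CLASSES)

    def forward(self, x):
        fmap = self.backbone(x)                                       # (B, C, H, W)
        weights = torch.softmax(self.attn(fmap).flatten(2), dim=-1)   # (B, 1, H*W)
        pooled = (fmap.flatten(2) * weights).sum(-1)                  # (B, C)
        return self.fc(pooled)

def make_member(name: str) -> AttentiveClassifier:
    # Keep only the convolutional feature extractor of each pre-trained model.
    if name == "resnet50":
        m = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        backbone = nn.Sequential(*list(m.children())[:-2])   # drop avgpool + fc
        return AttentiveClassifier(backbone, 2048)
    if name == "densenet121":
        m = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        return AttentiveClassifier(m.features, 1024)
    raise ValueError(name)

@torch.no_grad()
def ensemble_predict(members, x):
    # Average the members' softmax outputs, then take the arg-max class.
    probs = torch.stack([torch.softmax(m(x), dim=-1) for m in members]).mean(0)
    return probs.argmax(dim=-1)

members = [make_member("resnet50").eval(), make_member("densenet121").eval()]
dummy = torch.randn(2, 3, 224, 224)
print(ensemble_predict(members, dummy))
```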

    Pyramid Transformer for Traffic Sign Detection

    Traffic sign detection is a vital task in the visual systems of self-driving cars and automated driving systems. Recently, novel Transformer-based models have achieved encouraging results on various computer vision tasks. However, we observed that the vanilla ViT does not yield satisfactory results in traffic sign detection, because the available datasets are very small and the class distribution of traffic signs is extremely unbalanced. To overcome this problem, a novel Pyramid Transformer with locality mechanisms is proposed in this paper. Specifically, the Pyramid Transformer has several spatial pyramid reduction layers that shrink and embed the input image into tokens with rich multi-scale context by using atrous convolutions. Moreover, it inherits an intrinsic scale-invariance inductive bias and is able to learn local feature representations for objects at various scales, thereby enhancing the network's robustness against the size discrepancy of traffic signs. Experiments are conducted on the German Traffic Sign Detection Benchmark (GTSDB). The results demonstrate the superiority of the proposed model on the traffic sign detection task. More specifically, the Pyramid Transformer achieves 77.8% mAP on GTSDB when used as the backbone of Cascade R-CNN, surpassing most well-known and widely used state-of-the-art models.
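    A minimal sketch of the core building block named in the abstract, a spatial pyramid reduction layer built from atrous (dilated) convolutions feeding a standard Transformer encoder, is given below. The channel counts, dilation rates, strides, and stage depths are illustrative assumptions, not the paper's configuration, and the toy forward pass only shows the shape of the features a detector such as Cascade R-CNN could consume.

```python
# Hedged sketch: atrous-convolution pyramid reduction + Transformer stages.
# All sizes are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

class SpatialPyramidReduction(nn.Module):
    """Reduce spatial resolution while mixing several receptive-field sizes."""
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 3), stride=2):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch // len(dilations), kernel_size=3,
                      stride=stride, padding=d, dilation=d)
            for d in dilations
        ])
        self.norm = nn.BatchNorm2d(out_ch)
        self.act = nn.GELU()

    def forward(self, x):
        # Each dilated branch sees a different context size; concatenating the
        # branches gives downsampled tokens with fused multi-scale information.
        x = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(self.norm(x))

class PyramidStage(nn.Module):
    def __init__(self, in_ch, out_ch, depth=2, heads=4):
        super().__init__()
        self.reduce = SpatialPyramidReduction(in_ch, out_ch)
        layer = nn.TransformerEncoderLayer(d_model=out_ch, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        x = self.reduce(x)                       # (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # (B, H*W, C) token sequence
        tokens = self.encoder(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)

# Toy forward pass: two stages produce a feature map a detection head
# (e.g. Cascade R-CNN) could consume as backbone features.
backbone = nn.Sequential(PyramidStage(3, 96), PyramidStage(96, 192))
print(backbone(torch.randn(1, 3, 128, 128)).shape)
```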

    Traffic Sign Recognition Using Local Vision Transformer

    Recognition of traffic signs is a crucial aspect of self-driving cars and driver assistance systems, and machine vision tasks such as traffic sign recognition have gained significant attention. CNNs have frequently been used in machine vision, but the introduction of vision transformers has provided an alternative approach to global feature learning. This paper proposes a novel model that blends the advantages of convolutional and transformer-based networks for traffic sign recognition. The proposed model includes convolutional blocks for capturing local correlations and transformer-based blocks for learning global dependencies. Additionally, a locality module is incorporated to enhance local perception. The performance of the proposed model is evaluated on the Persian Traffic Sign Dataset and the German Traffic Sign Recognition Benchmark and compared with state-of-the-art convolutional and transformer-based models. The experimental evaluations demonstrate that the hybrid network with the locality module outperforms pure transformer-based models and some of the best convolutional networks in accuracy. Specifically, the proposed final model reaches 99.66% accuracy on the German Traffic Sign Recognition Benchmark and 99.8% on the Persian Traffic Sign Dataset, higher than the best convolutional models. Moreover, it outperforms existing CNNs and ViTs while maintaining fast inference speed. Consequently, the proposed model proves to be significantly faster and more suitable for real-world applications.
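    In the spirit of the hybrid design described above, the following sketch combines a convolutional stem (local correlations), a simple locality module (a residual depthwise convolution), and a Transformer encoder (global dependencies) into a small classifier. All layer sizes, the depthwise-convolution form of the locality module, and the 43-class GTSRB head are assumptions for illustration, not the proposed architecture.

```python
# Hedged sketch: convolutional stem + locality module + Transformer encoder.
# Layer sizes and the locality-module design are illustrative assumptions.
import torch
import torch.nn as nn

class LocalityModule(nn.Module):
    """Residual depthwise 3x3 convolution on the feature grid to inject locality."""
    def __init__(self, dim: int):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, x):          # x: (B, C, H, W)
        return x + self.dw(x)      # residual local mixing

class HybridTrafficSignNet(nn.Module):
    def __init__(self, num_classes: int = 43, dim: int = 128, depth: int = 4):
        super().__init__()
        # Convolutional stem: local feature extraction and 4x downsampling.
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.GELU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.BatchNorm2d(dim), nn.GELU(),
        )
        self.locality = LocalityModule(dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                      # x: (B, 3, H, W)
        x = self.locality(self.stem(x))        # local features, (B, dim, H/4, W/4)
        tokens = x.flatten(2).transpose(1, 2)  # (B, N, dim) token sequence
        tokens = self.transformer(tokens)      # global dependencies
        return self.head(tokens.mean(dim=1))   # mean-pool tokens, classify

model = HybridTrafficSignNet()
print(model(torch.randn(2, 3, 64, 64)).shape)  # expected: torch.Size([2, 43])
```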