277 research outputs found

    Comparative Study of Reinforcement Learning Algorithms: Deep Q-Networks, Deep Deterministic Policy Gradients and Proximal Policy Optimization

    Get PDF
    The advancement of Artificial Intelligence (AI), particularly in the field of Reinforcement Learning (RL), has led to significant breakthroughs in numerous domains, ranging from autonomous systems to complex game environments. Amid this progress, the emergence and evolution of algorithms like Deep Q-Networks (DQN), Deep Deterministic Policy Gradients (DDPG), and Proximal Policy Optimization (PPO) have been pivotal. These algorithms, each with unique approaches and strengths, have become fundamental in tackling diverse RL challenges. This study dissects and compares these three influential algorithms to provide a clearer understanding of their mechanics, efficiency, and applicability. We delve into the theoretical underpinnings of DQN, DDPG, and PPO, and assess their performance across a variety of standard benchmarks. Through this comparative analysis, we seek to offer valuable insights for choosing the right algorithm for a given environment and to highlight potential pathways for future research in the field of Reinforcement Learning.
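    The abstract names PPO without detailing it; as a point of reference (standard PPO, not necessarily the exact formulation benchmarked in this study), its well-known clipped surrogate objective can be sketched in a few lines:

    ```python
    import numpy as np

    def ppo_clip_objective(ratio, advantage, eps=0.2):
        # PPO's clipped surrogate objective: taking the elementwise minimum of
        # the unclipped and clipped terms removes the incentive for the policy
        # to push the probability ratio far outside [1 - eps, 1 + eps].
        return np.minimum(ratio * advantage,
                          np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)
    ```

    For a positive advantage, the objective stops growing once the ratio exceeds 1 + eps; for a negative advantage, the minimum keeps the pessimistic (more negative) term, so large policy updates are never rewarded.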

    Calling Cards For DNA-Binding Proteins

    Get PDF
    Organisms respond to their environment by altering patterns of gene expression. This process is orchestrated by transcription factors, which bind to specific DNA sequences near genes. In order to understand the regulatory networks that control transcription, the genomic targets of all transcription factors under various conditions and in different cell types must be identified. This remains a distant goal, mainly due to the lack of a high-throughput, in vivo method to study protein-DNA interactions. To fill this gap, I developed transposon Calling Cards for DNA-binding proteins. I endowed DNA-binding proteins with the ability to direct the insertion of a transposon into the genome near where they bind. The transposon becomes a Calling Card that marks the visit of a DNA-binding protein to the genome. I demonstrated that the Calling Card method is accurate and robust. I combined Calling Cards with next-generation DNA sequencing technology to increase the sensitivity, specificity, and resolution of the method. This improved method, Calling Card-Seq, allows multiple transcription factors to be analyzed in a single experiment, greatly increasing sample throughput. I used Calling Card-Seq to study transcription factors of the yeast S. cerevisiae that have not been well characterized, and I successfully identified DNA sequence recognition motifs and target genes for many of them. Calling Card-Seq will enable a systematic exploration of transcription factor binding under many different environments and growth conditions in a way that has heretofore not been possible. This dissertation describes my work developing this method, as well as several interesting results obtained using this method to study the gene regulatory networks of the yeast S. cerevisiae.

    Towards Higher Speed Next Generation Passive Optical Networks

    Get PDF
    The abstract is in the attachment.

    Business Analysis and Future Development of an Electric Vehicle Company -- Tesla

    Get PDF
    The boom in electric vehicles in recent years has caught the attention of many companies that are investing, or will be investing, in the industry due to the increasing demand for electric cars. As a leader of the electric vehicle (EV) industry, Tesla's development is of vital referential significance. Previous research on electric vehicle acceptance and behavioral purchase intention is comprehensive and could enable the EV industry to understand consumer psychology. However, there is little analysis of the business strategy and future development of specific companies. When it comes to sustainability, almost every company has a path best suited to it. This paper presents a comprehensive review of Tesla's historical background, followed by an in-depth account of its current strategy and an analysis of its future. As recommendations for its future development, Tesla could engage more in other industries to diversify its sources of revenue and invest more in the development of autonomous public transportation, such as electric car-sharing (ECS) services. These steps would help Tesla move steadily into its next stage.

    Age-related facial analysis with deep learning

    Get PDF
    Age, as an important soft biometric trait, can be inferred from the appearance of human faces. However, compared to other facial attributes like race and gender, age is rather subtle due to the underlying conditions of individuals (i.e., their upbringing environment and genes). These uncertainties leave age-related facial analysis (including age estimation, age-oriented face synthesis, and age-invariant face recognition) still unsolved. In this thesis, we study these age-related problems and propose several deep learning-based methods, each tackling a problem from a specific aspect. We first propose a customised Convolutional Neural Network architecture called FusionNet, along with its extension, to study the age estimation problem. Although faces are composed of numerous facial attributes, most deep learning-based methods still treat a face as a generic object and do not pay enough attention to the facial regions that carry age-specific features for this particular task. Therefore, the proposed methods take several age-specific facial patches as part of the input to emphasise the learning of age-specific features. Through extensive evaluation, we show that these methods outperform existing methods on age estimation benchmark datasets under various evaluation metrics. Then, we propose a Generative Adversarial Network (GAN) model for age-oriented face synthesis. Specifically, to ensure that the synthesised images fall within the target age groups, this method tackles the mode collapse issue in vanilla GANs with a novel Conditional Discriminator Pool (CDP), which consists of multiple discriminators, each targeting one particular age category. To ensure the identity information is unaltered in the synthesised images, our method uses a novel Adversarial Triplet loss. This loss, which is based on the Triplet loss, adds a ranking operation to further pull the positive embedding towards the anchor embedding, resulting in significantly reduced intra-class variance in the feature space. Through extensive experiments, we show that our method can precisely transform input faces into the target age category while preserving the identity information in the synthesised faces. Last but not least, we propose disentangled contrastive learning (DCL) for unsupervised age-invariant face recognition. Different from existing AIFR methods, DCL, which aims to learn disentangled identity features, can be trained on any facial dataset and further tested on age-oriented datasets. Moreover, by utilising a set of three augmented samples derived from the same input image, DCL can be trained directly on small datasets with promising performance. We further modify the conventional contrastive loss function to fit this training strategy with three augmented samples. We show that our method dramatically outperforms previous unsupervised methods and other contrastive learning methods.
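    The Adversarial Triplet loss is described only at a high level here; a rough NumPy sketch of the idea of adding a pull term on top of the standard triplet loss might look as follows (the exact ranking term and the weight `alpha` are assumptions, not the thesis's formulation):

    ```python
    import numpy as np

    def adversarial_triplet_loss(anchor, positive, negative, margin=0.2, alpha=0.5):
        # Standard triplet term: the positive must sit closer to the anchor
        # than the negative by at least `margin`.
        d_ap = np.sum((anchor - positive) ** 2)
        d_an = np.sum((anchor - negative) ** 2)
        triplet = max(d_ap - d_an + margin, 0.0)
        # Hypothetical ranking term: additionally penalise the raw
        # anchor-positive distance so positives keep being pulled toward the
        # anchor even once the margin is satisfied, shrinking intra-class
        # variance. `alpha` is an assumed weighting.
        return triplet + alpha * d_ap
    ```

    Note that with a plain triplet loss the gradient vanishes once the margin holds, while the extra pull term keeps tightening each identity cluster.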

    Face.evoLVe: A High-Performance Face Recognition Library

    Full text link
    In this paper, we develop face.evoLVe -- a comprehensive library that collects and implements a wide range of popular deep learning-based methods for face recognition. First of all, face.evoLVe is composed of key components that cover the full process of face analytics, including face alignment, data processing, various backbones, losses, and alternatives with bags of tricks for improving performance. Furthermore, face.evoLVe supports multi-GPU training on top of different deep learning platforms, such as PyTorch and PaddlePaddle, which enables researchers to work on both large-scale datasets with millions of images and low-shot counterparts with limited well-annotated data. More importantly, along with face.evoLVe, images before and after alignment in the common benchmark datasets are released, with source code and trained models provided. All these efforts lower the technical burden of reproducing existing methods for comparison, so users of our library can focus on developing advanced approaches more efficiently. Last but not least, face.evoLVe is well designed and vibrantly evolving, so that new face recognition approaches can be easily plugged into our framework. Note that we have used face.evoLVe to participate in a number of face recognition competitions and secured first place. The version that supports PyTorch is publicly available at https://github.com/ZhaoJ9014/face.evoLVe.PyTorch and the PaddlePaddle version is available at https://github.com/ZhaoJ9014/face.evoLVe.PyTorch/tree/master/paddle. Face.evoLVe has been widely used for face analytics, receiving 2.4K stars and 622 forks. Comment: A short version is accepted by Neurocomputing (https://www.sciencedirect.com/science/article/pii/S0925231222005057?via%3Dihub). Primary corresponding author is Dr. Jian Zha

    TiC: Exploring Vision Transformer in Convolution

    Full text link
    While models derived from Vision Transformers (ViTs) have been phenomenally surging, pre-trained models cannot seamlessly adapt to arbitrary-resolution images without altering the architecture and configuration, such as resampling the positional encoding, limiting their flexibility for various vision tasks. For instance, the Segment Anything Model (SAM) based on ViT-Huge requires all input images to be resized to 1024×1024. To overcome this limitation, we propose the Multi-Head Self-Attention Convolution (MSA-Conv), which incorporates Self-Attention within generalized convolutions, including standard, dilated, and depthwise ones. MSA-Conv enables transformers to handle images of varying sizes without retraining or rescaling, and further reduces computational cost compared to global attention in ViT, which grows costly as image size increases. We then present the Vision Transformer in Convolution (TiC) as a proof of concept for image classification with MSA-Conv, in which two capacity-enhancing strategies, namely the Multi-Directional Cyclic Shifted Mechanism and the Inter-Pooling Mechanism, are proposed to establish long-distance connections between tokens and enlarge the effective receptive field. Extensive experiments have been carried out to validate the overall effectiveness of TiC. Additionally, ablation studies confirm the performance improvements made by MSA-Conv and the two capacity-enhancing strategies separately. Note that our proposal aims at studying an alternative to the global attention used in ViT, and MSA-Conv meets this goal by making TiC comparable to the state of the art on ImageNet-1K. Code will be released at https://github.com/zs670980918/MSA-Conv.
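    The core idea of restricting self-attention to a convolution-style sliding window can be illustrated with a toy, single-head NumPy sketch (this is an analogy for intuition, not the paper's MSA-Conv, which the abstract only outlines; using the centre pixel as the query is an assumption):

    ```python
    import numpy as np

    def local_attention_2d(x, k=3):
        # x: (C, H, W). Each output pixel attends over its zero-padded k x k
        # neighbourhood, so the operation slides over the image like a
        # convolution and works at any input resolution.
        C, H, W = x.shape
        pad = k // 2
        xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
        out = np.zeros_like(x)
        for i in range(H):
            for j in range(W):
                window = xp[:, i:i + k, j:j + k].reshape(C, k * k)  # keys/values
                q = x[:, i, j]                                      # centre-pixel query
                scores = q @ window / np.sqrt(C)                    # (k*k,) similarities
                w = np.exp(scores - scores.max())                   # stable softmax
                w /= w.sum()
                out[:, i, j] = window @ w                           # weighted value sum
        return out
    ```

    Because the attention window is fixed at k×k, the cost scales linearly with the number of pixels rather than quadratically as in global ViT attention, which is the efficiency argument the abstract makes.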