
    Safe DreamerV3: Safe Reinforcement Learning with World Models

    The widespread application of Reinforcement Learning (RL) in real-world situations is yet to come to fruition, largely as a result of its failure to satisfy the essential safety demands of such systems. Existing safe reinforcement learning (SafeRL) methods, employing cost functions to enhance safety, fail to achieve zero-cost in complex scenarios, including vision-only tasks, even with comprehensive data sampling and training. To address this, we introduce Safe DreamerV3, a novel algorithm that integrates both Lagrangian-based and planning-based methods within a world model. Our methodology represents a significant advancement in SafeRL as the first algorithm to achieve nearly zero-cost in both low-dimensional and vision-only tasks within the Safety-Gymnasium benchmark. Our project website can be found at: https://sites.google.com/view/safedreamerv3
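    The Lagrangian-based half of the approach can be illustrated with the generic dual-ascent update used throughout constrained RL: a multiplier grows when episode costs exceed the budget and shrinks toward zero otherwise. The sketch below is a minimal illustration of that mechanism, not the paper's implementation; the function names, learning rate, and cost values are all made up for demonstration.

    ```python
    # Minimal sketch of the Lagrangian mechanism in Lagrangian-based SafeRL.
    # All names and numbers here are illustrative, not the paper's code.

    def lagrangian_penalty_update(lmbda, episode_cost, cost_limit, lr=0.05):
        """Dual ascent: raise the multiplier when cost exceeds the limit,
        relax it (clipped at zero) when the agent stays within budget."""
        lmbda = lmbda + lr * (episode_cost - cost_limit)
        return max(0.0, lmbda)

    def penalized_objective(reward, cost, lmbda):
        """The actor maximizes reward minus the lambda-scaled cost penalty."""
        return reward - lmbda * cost

    # Hypothetical episode costs trending toward zero as training progresses.
    lmbda = 0.0
    for episode_cost in [5.0, 3.0, 0.5, 0.0]:
        lmbda = lagrangian_penalty_update(lmbda, episode_cost, cost_limit=1.0)
    print(round(lmbda, 3))  # → 0.225
    ```

    In the paper this penalty is applied inside imagined world-model rollouts; here it is shown on scalar episode costs only to make the update rule concrete.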

    Screening of Potential Hub Genes in Glioma Progression Based on Bioinformatics Analysis

    Objectives: Glioma is the most common primary tumor of the central nervous system, and its therapeutic outcomes remain unsatisfactory. Although related therapeutic technologies have developed rapidly in recent years, the improvement in clinical outcomes has been limited. Besides conventional therapies, there are attractive alternatives such as biological therapy (immunotherapy) and gene therapy [1]. Searching for potential target genes of glioma is therefore of great significance for developing new therapeutic directions and designing new biomarkers [2]. Methods: The gene expression datasets GSE137902 and GSE13790 were downloaded from the NCBI GEO database to screen overlapping differentially expressed genes (DEGs). To identify hub genes among them, a protein-protein interaction (PPI) network was constructed. To further explore the potential mechanisms of the hub genes in glioma, Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses were performed. Key genes were then obtained by intersecting the results of five ranking algorithms (Closeness, Degree, EPC, MCC, and Stress), further intersected with the GEO datasets GSE117423, GSE188256, and GSE90598, and finally verified with Receiver Operating Characteristic (ROC) curves. Results: A total of 1274 differentially expressed genes were identified, of which 309 overlapped between the two datasets. Sixteen hub genes were obtained; their intersection with GSE117423, GSE188256, and GSE90598 identified TIMP1 as the key gene of glioma, and its ROC curve was plotted. Conclusion: The DEGs, hub genes, and signaling pathways found in this study confirm that the key gene TIMP1 is closely related to the occurrence and evolution of glioma, and provide candidate targets for the diagnosis and treatment of glioma.
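    The final validation step, an ROC curve for a candidate marker gene, amounts to ranking samples by the gene's expression and sweeping a classification threshold. The sketch below shows that computation from scratch; the expression values and labels are entirely made up for illustration and are not the study's data.

    ```python
    # Illustrative ROC/AUC computation for validating a candidate marker gene
    # (e.g. TIMP1). Data below are fabricated purely for demonstration.

    def roc_points(scores, labels):
        """Return (FPR, TPR) points swept over descending score thresholds."""
        pairs = sorted(zip(scores, labels), reverse=True)
        pos = sum(labels)
        neg = len(labels) - pos
        tp = fp = 0
        pts = [(0.0, 0.0)]
        for _, label in pairs:
            if label:
                tp += 1
            else:
                fp += 1
            pts.append((fp / neg, tp / pos))
        return pts

    def auc(points):
        """Trapezoidal area under the ROC curve."""
        return sum((x2 - x1) * (y1 + y2) / 2
                   for (x1, y1), (x2, y2) in zip(points, points[1:]))

    # Hypothetical expression values: tumor samples (1) vs. normal (0).
    expr = [9.1, 8.4, 7.9, 6.5, 5.2, 4.8, 3.3, 2.9]
    label = [1, 1, 1, 0, 1, 0, 0, 0]
    print(round(auc(roc_points(expr, label)), 3))  # → 0.938
    ```

    An AUC near 1.0 indicates that expression of the gene separates tumor from normal samples well, which is the evidence the study uses to support TIMP1 as a key gene.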

    Learning Raw Image Denoising with Bayer Pattern Unification and Bayer Preserving Augmentation

    In this paper, we present new data pre-processing and augmentation techniques for DNN-based raw image denoising. Compared with traditional RGB image denoising, performing this task on direct camera sensor readings presents new challenges, such as how to effectively handle various Bayer patterns from different data sources, and subsequently how to perform valid data augmentation with raw images. To address the first problem, we propose a Bayer pattern unification (BayerUnify) method to unify different Bayer patterns. This allows us to fully utilize a heterogeneous dataset to train a single denoising model instead of training one model for each pattern. Furthermore, while it is essential to augment the dataset to improve model generalization and performance, we discovered that it is error-prone to modify raw images by adapting augmentation methods designed for RGB images. To this end, we present a Bayer preserving augmentation (BayerAug) method as an effective approach for raw image augmentation. Combining these data processing techniques with a modified U-Net, our method achieves a PSNR of 52.11 and an SSIM of 0.9969 in the NTIRE 2019 Real Image Denoising Challenge, demonstrating state-of-the-art performance. Our code is available at https://github.com/Jiaming-Liu/BayerUnifyAug. Comment: Accepted by CVPRW 2019
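    The core idea behind unifying Bayer patterns can be shown with the cropping variant: removing the first row and/or column of a raw mosaic shifts its phase, so any of the four common patterns can be converted to RGGB. This is a minimal sketch of that idea under the assumption of a plain 2-D list mosaic; the function name and offset table are illustrative, not the released code.

    ```python
    # Sketch of cropping-based Bayer pattern unification: dropping the first
    # row/column shifts the 2x2 phase so the top-left block reads R,G / G,B.
    # Offsets (rows, cols) to crop for each source pattern:
    CROP = {"RGGB": (0, 0), "GRBG": (0, 1), "GBRG": (1, 0), "BGGR": (1, 1)}

    def unify_to_rggb(raw, pattern):
        """Crop a 2-D mosaic (list of rows) so it becomes RGGB,
        trimming to even dimensions to keep whole 2x2 blocks."""
        dr, dc = CROP[pattern]
        rows = [row[dc:] for row in raw[dr:]]
        h = len(rows) // 2 * 2
        w = len(rows[0]) // 2 * 2
        return [row[:w] for row in rows[:h]]

    # Demo: encode colors numerically (R=0, G=1, B=2) in a 6x6 BGGR mosaic.
    bggr = [[[[2, 1], [1, 0]][r % 2][c % 2] for c in range(6)]
            for r in range(6)]
    uni = unify_to_rggb(bggr, "BGGR")
    print(uni[0][:2], uni[1][:2])  # → [0, 1] [1, 2]  (R,G / G,B)
    ```

    The paper also describes a padding-based variant for inference so that no pixels are lost; the cropping form above is simply the easiest to verify.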

    TextNet: Irregular Text Reading from Images with an End-to-End Trainable Network

    Reading text from images remains challenging due to multi-orientation, perspective distortion, and especially the curved nature of irregular text. Most existing approaches attempt to solve the problem in two or more stages, which is considered the bottleneck for optimizing overall performance. To address this issue, we propose an end-to-end trainable network architecture, named TextNet, which is able to simultaneously localize and recognize irregular text from images. Specifically, we develop a scale-aware attention mechanism to learn multi-scale image features as a backbone network, sharing fully convolutional features and computation for localization and recognition. In the text detection branch, we directly generate text proposals as quadrangles, covering oriented, perspective, and curved text regions. To preserve text features for recognition, we introduce a perspective RoI transform layer, which can align quadrangle proposals into small feature maps. Furthermore, in order to extract effective features for recognition, we propose to encode the aligned RoI features with an RNN into context information, combined with a spatial attention mechanism to generate text sequences. This overall pipeline is capable of handling both regular and irregular cases. Finally, text localization and recognition tasks can be jointly trained in an end-to-end fashion with the designed multi-task loss. Experiments on standard benchmarks show that the proposed TextNet can achieve state-of-the-art performance, and outperforms existing approaches on irregular datasets by a large margin. Comment: Asian Conference on Computer Vision, 2018, oral presentation
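    The alignment step, mapping a quadrangle proposal onto a small rectangular feature map, can be sketched by generating a sampling grid inside the quadrangle. The sketch below uses bilinear interpolation of the four corners as a simplified stand-in; the paper uses a true perspective transform, and the function name is an assumption.

    ```python
    # Simplified sketch of aligning a quadrangle proposal to a rectangular
    # grid. Bilinear corner interpolation approximates the paper's
    # perspective RoI transform; it is not the actual layer.

    def quad_sample_grid(quad, out_h, out_w):
        """Map each cell of an out_h x out_w grid into the quadrangle
        (corners ordered TL, TR, BR, BL); returns (x, y) sample points."""
        (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
        grid = []
        for i in range(out_h):
            v = i / (out_h - 1) if out_h > 1 else 0.0
            row = []
            for j in range(out_w):
                u = j / (out_w - 1) if out_w > 1 else 0.0
                x = (1-u)*(1-v)*x0 + u*(1-v)*x1 + u*v*x2 + (1-u)*v*x3
                y = (1-u)*(1-v)*y0 + u*(1-v)*y1 + u*v*y2 + (1-u)*v*y3
                row.append((x, y))
            grid.append(row)
        return grid

    # Axis-aligned square: grid corners coincide with the quad corners.
    g = quad_sample_grid([(0, 0), (10, 0), (10, 10), (0, 10)], 3, 3)
    print(g[0][0], g[1][1], g[2][2])  # → (0.0, 0.0) (5.0, 5.0) (10.0, 10.0)
    ```

    In the network, feature values would then be bilinearly sampled at these (x, y) points to produce the small aligned feature map fed to the recognition branch.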

    Guiding Corpus-based Set Expansion by Auxiliary Sets Generation and Co-Expansion

    Given a small set of seed entities (e.g., ``USA'', ``Russia''), corpus-based set expansion aims to induce an extensive set of entities sharing the same semantic class (Country in this example) from a given corpus. Set expansion benefits a wide range of downstream applications in knowledge discovery, such as web search, taxonomy construction, and query suggestion. Existing corpus-based set expansion algorithms typically bootstrap the given seeds by incorporating lexical patterns and distributional similarity. However, because no negative sets are provided explicitly, these methods suffer from semantic drift caused by expanding the seed set freely without guidance. We propose a new framework, Set-CoExpan, that automatically generates auxiliary sets as negative sets closely related to the target set of the user's interest, and then performs multiple-set co-expansion, extracting discriminative features by comparing the target set with the auxiliary sets to form multiple cohesive sets that are distinctive from one another, thus resolving the semantic drift issue. In this paper we demonstrate that by generating auxiliary sets, we can guide the expansion process of the target set away from the ambiguous areas bordering the auxiliary sets, and we show that Set-CoExpan outperforms strong baseline methods significantly. Comment: WWW 2020
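    The discriminative effect of an auxiliary negative set can be shown with a toy scoring rule: a candidate joins the target set only if it is closer to the target centroid than to every auxiliary centroid. The embeddings, entities, and the margin criterion below are all fabricated for illustration; the actual system mines patterns and embeddings from a corpus.

    ```python
    # Toy sketch of co-expansion with auxiliary (negative) sets.
    # Embeddings and the acceptance rule are made up for demonstration.

    def centroid(vecs):
        return [sum(xs) / len(xs) for xs in zip(*vecs)]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def expand(candidates, target_seed, aux_seeds, emb, margin=0.0):
        """Keep candidates scoring higher against the target centroid than
        against any auxiliary centroid by at least `margin`."""
        t = centroid([emb[e] for e in target_seed])
        aux = [centroid([emb[e] for e in s]) for s in aux_seeds]
        picked = []
        for c in candidates:
            s_t = dot(emb[c], t)
            if all(s_t - dot(emb[c], a) > margin for a in aux):
                picked.append(c)
        return picked

    # Hypothetical 2-d embeddings: countries vs. US states (auxiliary set).
    emb = {"USA": [1.0, 0.1], "Russia": [0.9, 0.0],
           "Texas": [0.1, 1.0], "Ohio": [0.0, 0.9],
           "France": [0.95, 0.05], "California": [0.05, 0.95]}
    picked = expand(["France", "California"], ["USA", "Russia"],
                    [["Texas", "Ohio"]], emb)
    print(picked)  # → ['France']
    ```

    Without the auxiliary set, "California" would be accepted on target similarity alone; the negative centroid is what blocks it, which is the drift-prevention idea in miniature.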