
    The Role of Sulfur-Related Species in Oxygen Reduction Reactions

    Heteroatom (metal and nonmetal) doping is essential for achieving excellent oxygen reduction reaction (ORR) activity in carbon materials. Among the heteroatoms studied to date, sulfur (S) doping, encompassing both metal sulfides and S atoms, has attracted tremendous attention. Because S-doping modifies the spin density distribution around the metal centers and produces a synergistic effect between S and other doped heteroatoms, S-C bonds and metal sulfides can serve as important ORR active sites. Furthermore, S-doped hybrid samples show a small charge-transfer resistance. S-doping therefore contributes to superior ORR performance. This chapter describes recent advances in S-doped carbon materials and their development for the ORR with regard to the components, structures, and ORR activities of S-related species.

    Active Sites Derived from Heteroatom Doping in Carbon Materials for Oxygen Reduction Reaction

    The oxygen reduction reaction (ORR) is a key cathode reaction in fuel cells. Because of the sluggish kinetics of the ORR, various kinds of catalysts have been developed to compensate for the shortcomings of the cathode reaction. Carbon materials are considered ideal cathode catalysts. In particular, heteroatom doping is essential for achieving excellent ORR activity. Interestingly, doping trace amounts of metals into carbon materials plays an important role in enhancing electrocatalytic activity. This chapter describes recent advances in heteroatom-doped carbons and discusses the active sites decorating the carbon matrix in terms of their configurations and contents, as well as their effectiveness in boosting ORR performance. Furthermore, trace metal residues and metal-free catalysts for the ORR are clarified.

    Institutional ownership and liquidity commonality: evidence from Australia

    We study the impact of local and foreign institutional investment on liquidity commonality in the Australian equity market, both in the cross-section and over time. We find that commonality in liquidity is higher for large stocks than for small stocks in the cross-section, and that the spread between the two has widened over the past two decades. We show that this divergence can be explained by foreign institutional ownership. This finding suggests that foreign institutional investment increases the exposure of large stocks to unexpected liquidity events in the local market. We find a positive association between foreign institutional ownership and commonality in liquidity across all stocks, particularly large- and mid-cap stocks; correlated trading by foreign institutions explains this association. Local institutional ownership, by contrast, is positively related to commonality in liquidity for large-cap stocks only.
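A common way to quantify "commonality in liquidity" of the kind this abstract measures is the R-squared from regressing changes in a stock's liquidity on changes in market-wide liquidity. The sketch below is an illustrative assumption about the general approach, not the paper's exact specification (the function name, log-difference transform, and single-factor regression are all choices of this sketch):

```python
import numpy as np

def liquidity_commonality(stock_liq, market_liq):
    """Commonality proxy: R^2 from regressing a stock's log liquidity
    changes on market-wide log liquidity changes (single-factor OLS
    with intercept). Inputs are positive liquidity level series."""
    x = np.diff(np.log(np.asarray(market_liq, dtype=float)))
    y = np.diff(np.log(np.asarray(stock_liq, dtype=float)))
    # OLS with intercept via least squares
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = (resid ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot
```

A stock whose liquidity moves in lockstep with the market scores near 1; a stock with idiosyncratic liquidity scores near 0.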

    Nucleic Acid Encoding Human REV1 Protein

    The present invention relates to a human cDNA homologous to the yeast REV1 gene. The sequence of the human REV1 (hREV1) gene is described.

    WLST: Weak Labels Guided Self-training for Weakly-supervised Domain Adaptation on 3D Object Detection

    In the field of domain adaptation (DA) for 3D object detection, most work is dedicated to unsupervised domain adaptation (UDA). Yet, without any target annotations, the performance gap between UDA approaches and the fully supervised approach remains noticeable, which is impractical for real-world applications. Weakly supervised domain adaptation (WDA), on the other hand, is an underexplored yet practical task that requires only a small labeling effort on the target domain. To improve DA performance in a cost-effective way, we propose WLST, a general weak-labels-guided self-training framework designed for WDA on 3D object detection. By incorporating an autolabeler, which can generate 3D pseudo labels from 2D bounding boxes, into the existing self-training pipeline, our method generates more robust and consistent pseudo labels that benefit the training process on the target domain. Extensive experiments demonstrate the effectiveness, robustness, and detector-agnosticism of our WLST framework. Notably, it outperforms previous state-of-the-art methods on all evaluation tasks.
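The core idea of guiding self-training with weak 2D labels can be sketched as a pseudo-label filter: keep a detection only if it is confident and agrees with some weak 2D annotation. This is a minimal illustrative sketch, not the paper's autolabeler; the function name, thresholds, and the caller-supplied overlap function are all assumptions of this sketch:

```python
def filter_pseudo_labels(detections, scores, weak_boxes, iou_fn,
                         score_thr=0.7, iou_thr=0.5):
    """Keep a detector's pseudo labels on the target domain only when
    they are (a) confident and (b) consistent with at least one weak
    box, as judged by a caller-supplied overlap function `iou_fn`."""
    kept = []
    for det, score in zip(detections, scores):
        if score < score_thr:
            continue  # drop low-confidence predictions
        if any(iou_fn(det, wb) >= iou_thr for wb in weak_boxes):
            kept.append(det)  # weak label corroborates this detection
    return kept
```

The surviving pseudo labels would then be fed back as training targets in the next self-training round.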

    Late Fusion with Triplet Margin Objective for Multimodal Ideology Prediction and Analysis

    Prior work on ideology prediction has largely focused on a single modality, i.e., text or images. In this work, we introduce the task of multimodal ideology prediction, in which a model predicts binary or five-point-scale ideological leanings given a text-image pair with political content. We first collect five new large-scale datasets of English documents and images together with their ideological leanings, covering news articles from a wide range of US mainstream media and social media posts from Reddit and Twitter. We conduct in-depth analyses of news articles and reveal differences in image content and usage across the political spectrum. Furthermore, we perform extensive experiments and ablation studies demonstrating the effectiveness of targeted pretraining objectives on different model components. Our best-performing model, a late-fusion architecture pretrained with a triplet objective over multimodal content, outperforms the state-of-the-art text-only model by almost 4% and a strong multimodal baseline with no pretraining by over 3%.
    Comment: EMNLP 202
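The two ingredients named in the abstract, late fusion and a triplet margin objective, can be illustrated compactly. This sketch shows the standard forms of both (concatenating per-modality embeddings, and the usual max-margin triplet loss); the function names and NumPy formulation are assumptions of this sketch, not the paper's implementation:

```python
import numpy as np

def late_fusion(text_emb, image_emb):
    """Late fusion: encode each modality separately, then concatenate
    the per-example embeddings along the feature axis."""
    return np.concatenate([text_emb, image_emb], axis=1)

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet objective: pull the anchor toward the positive
    and push it from the negative by at least `margin`.
    loss = mean(max(0, d(a, p) - d(a, n) + margin))."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()
```

In a pretraining setup of this kind, the anchor, positive, and negative would be fused text-image embeddings chosen so that the positive shares the anchor's ideological leaning and the negative does not.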

    Fair Robust Active Learning by Joint Inconsistency

    Fair Active Learning (FAL) uses active learning techniques to achieve high model performance with limited data and to reach fairness between sensitive groups (e.g., genders). However, the impact of adversarial attacks, which is vital for various safety-critical machine learning applications, has not yet been addressed in FAL. Observing this, we introduce a novel task, Fair Robust Active Learning (FRAL), integrating conventional FAL and adversarial robustness. FRAL requires ML models to leverage active learning techniques to jointly achieve equalized performance on benign data and equalized robustness against adversarial attacks across groups. In this new task, previous FAL methods generally suffer from an excessive computational burden and ineffectiveness. We therefore develop a simple yet effective FRAL strategy based on Joint INconsistency (JIN). To efficiently find samples whose labeling can boost the performance and robustness of disadvantaged groups, our method exploits the prediction inconsistency between benign and adversarial samples, as well as between standard and robust models. Extensive experiments on diverse datasets and sensitive groups demonstrate that our method not only achieves fairer performance on benign samples but also obtains fairer robustness under white-box PGD attacks compared with existing active learning and FAL baselines. We are optimistic that FRAL will pave a new path for developing safe and robust ML research and applications, such as facial attribute recognition in biometric systems.
    Comment: 11 pages, 3 figures
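An acquisition score built from the two inconsistencies the abstract names (benign vs. adversarial predictions, and standard vs. robust models) might look like the sketch below. The L1 distance between softmax outputs, the equal weighting of the two terms, and the function names are illustrative assumptions of this sketch, not the paper's exact JIN formulation:

```python
import numpy as np

def joint_inconsistency(p_std_ben, p_std_adv, p_rob_ben, p_rob_adv):
    """Per-sample acquisition score combining two disagreements.
    Each input is an (n_samples, n_classes) array of softmax outputs:
    standard/robust model on benign/adversarial versions of each input."""
    # (a) benign vs. adversarial disagreement, averaged over both models
    d_attack = 0.5 * (np.abs(p_std_ben - p_std_adv).sum(axis=1)
                      + np.abs(p_rob_ben - p_rob_adv).sum(axis=1))
    # (b) standard vs. robust model disagreement, averaged over both inputs
    d_model = 0.5 * (np.abs(p_std_ben - p_rob_ben).sum(axis=1)
                     + np.abs(p_std_adv - p_rob_adv).sum(axis=1))
    return d_attack + d_model

def select_for_labeling(scores, budget):
    """Spend the labeling budget on the most inconsistent samples."""
    return np.argsort(scores)[::-1][:budget]
```

Samples where the attack flips the prediction, or where the standard and robust models disagree, score highest and are sent to the annotator first.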