
    How does the creditor conflict affect bond IPO underpricing?

    In this paper, we find that the conflict of interest between loan holders and bondholders is positively related to bond IPO underpricing, which serves as compensation to the initial bond investors. We construct four proxies for this conflict: a loan covenant index, the outstanding loan amount, the number of lead banks, and the remaining loan maturity. Our empirical tests show that all four variables are positively related to bond IPO underpricing, indicating that a firm's loan structure has a real impact on the pricing of its bond IPOs.
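
    A minimal sketch of the kind of empirical test described above: regress the underpricing measure on the four creditor-conflict proxies. The DataFrame, file name, and column names are hypothetical placeholders, not the paper's actual data or specification.

    ```python
    # Hypothetical sketch of the cross-sectional regression described in the abstract.
    # Column names and the input file are illustrative assumptions.
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per bond IPO: the underpricing measure plus the four conflict proxies.
    bonds = pd.read_csv("bond_ipos.csv")  # hypothetical dataset

    model = smf.ols(
        "underpricing ~ covenant_index + outstanding_loan + num_lead_banks"
        " + loan_remaining_maturity",
        data=bonds,
    ).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

    # Positive, significant coefficients on all four proxies would correspond
    # to the relationship reported above.
    print(model.summary())
    ```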

    Perfect Alignment May be Poisonous to Graph Contrastive Learning

    Graph Contrastive Learning (GCL) aims to learn node representations by aligning positive pairs and separating negative ones. However, limited research has examined the principles behind the specific augmentations used in graph-based learning: what kind of augmentation helps downstream performance, how contrastive learning actually influences downstream tasks, and why the magnitude of augmentation matters. This paper addresses these questions by establishing a connection between augmentation and downstream performance, and by investigating the generalization of contrastive learning. Our findings reveal that GCL contributes to downstream tasks mainly by separating different classes rather than by gathering nodes of the same class, so perfect alignment and augmentation overlap, which would make all intra-class samples identical, cannot explain the success of contrastive learning. To understand how augmentation aids the contrastive learning process, we further investigate its generalization and find that perfect alignment, which makes positive pairs identical, can improve the contrastive loss but is poisonous to generalization; on the contrary, imperfect alignment enhances the model's generalization ability. We analyse these results through information theory and graph spectral theory, and propose two simple but effective methods to verify the theories. Both methods can be easily applied to various GCL algorithms, and extensive experiments confirm their effectiveness.
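
    To make the alignment-versus-separation trade-off discussed above concrete, here is a minimal sketch of a standard InfoNCE-style contrastive loss over node embeddings from two augmented views. This is the generic GCL objective, not the paper's specific method; shapes and the temperature value are assumptions.

    ```python
    # Generic InfoNCE-style graph contrastive loss (illustrative sketch).
    import torch
    import torch.nn.functional as F

    def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        sim = z1 @ z2.t() / tau              # [N, N] cosine similarities between views
        pos = sim.diag()                     # alignment term: same node under two augmentations
        # logsumexp over all candidates acts as the separation (uniformity) term
        return (torch.logsumexp(sim, dim=1) - pos).mean()

    # "Perfect alignment" would maximize every entry of `pos`; the argument above is
    # that this lowers the contrastive loss yet can hurt downstream generalization.
    z1, z2 = torch.randn(128, 64), torch.randn(128, 64)
    loss = info_nce(z1, z2)
    ```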

    Can Large Language Models Empower Molecular Property Prediction?

    Molecular property prediction has gained significant attention due to its transformative potential in multiple scientific disciplines. Conventionally, a molecule can be represented either as graph-structured data or as a SMILES string. Recently, the rapid development of Large Language Models (LLMs) has revolutionized the field of NLP. Although it is natural to use LLMs to assist in understanding molecules represented by SMILES, the exploration of how LLMs will impact molecular property prediction is still at an early stage. In this work, we advance towards this objective from two perspectives: zero/few-shot molecular classification, and using the new explanations generated by LLMs as representations of molecules. Specifically, we first prompt LLMs to perform in-context molecular classification and evaluate their performance. We then employ LLMs to generate semantically enriched explanations for the original SMILES strings and leverage them to fine-tune a small-scale language model for multiple downstream tasks. The experimental results highlight the superiority of text explanations as molecular representations across multiple benchmark datasets and confirm the immense potential of LLMs in molecular property prediction tasks. Code is available at https://github.com/ChnQ/LLM4Mol.
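
    An illustrative sketch of the first step, zero-shot in-context classification of a molecule given as SMILES. It uses the OpenAI Python SDK as a generic chat-LLM interface; the model name, prompt wording, and target property (blood-brain-barrier permeability) are assumptions and not taken from the linked repository.

    ```python
    # Hypothetical zero-shot SMILES classification prompt (not the repository's code).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def zero_shot_classify(smiles: str) -> str:
        prompt = (
            "You are an expert chemist. Given the SMILES string of a molecule, "
            "answer only 'Yes' or 'No': is it blood-brain-barrier permeable?\n"
            f"SMILES: {smiles}"
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return resp.choices[0].message.content.strip()

    print(zero_shot_classify("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
    ```

    The second stage described above would instead ask the LLM for a free-text explanation of the SMILES string and use that explanation as the input when fine-tuning a small language model.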

    DCPT: Darkness Clue-Prompted Tracking in Nighttime UAVs

    Existing nighttime unmanned aerial vehicle (UAV) trackers follow an "Enhance-then-Track" architecture: they first use a light enhancer to brighten the nighttime video and then employ a daytime tracker to locate the object. This separation of enhancement and tracking fails to build an end-to-end trainable vision system. To address this, we propose a novel architecture called Darkness Clue-Prompted Tracking (DCPT), which achieves robust UAV tracking at night by efficiently learning to generate darkness clue prompts. Without a separate enhancer, DCPT directly encodes anti-dark capabilities into prompts using a darkness clue prompter (DCP). Specifically, DCP iteratively learns emphasizing and undermining projections for darkness clues. It then injects these learned visual prompts into a daytime tracker with fixed parameters across transformer layers. Moreover, a gated feature aggregation mechanism enables adaptive fusion between prompts, and between prompts and the base model. Extensive experiments show state-of-the-art performance of DCPT on multiple dark-scenario benchmarks. The unified end-to-end learning of enhancement and tracking in DCPT yields a more trainable system, and the darkness clue prompting efficiently injects anti-dark knowledge without extra modules. Code and models will be released.
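
    A minimal sketch of the gated feature aggregation idea: learned prompt tokens are fused with features from the frozen daytime tracker through a sigmoid gate. This is a generic illustration of gated prompt fusion, not the released DCPT implementation; module names and tensor shapes are assumptions.

    ```python
    # Generic gated fusion between frozen-tracker features and learned prompts (sketch).
    import torch
    import torch.nn as nn

    class GatedPromptFusion(nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

        def forward(self, feat: torch.Tensor, prompt: torch.Tensor) -> torch.Tensor:
            # feat, prompt: [batch, tokens, dim]; the gate decides, per channel,
            # how much darkness-clue information to inject into the frozen features.
            g = self.gate(torch.cat([feat, prompt], dim=-1))
            return feat + g * prompt

    fused = GatedPromptFusion(256)(torch.randn(2, 64, 256), torch.randn(2, 64, 256))
    ```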

    Genomic monitoring of SARS-CoV-2 uncovers an Nsp1 deletion variant that modulates type I interferon response

    The SARS-CoV-2 virus, the causative agent of COVID-19, is undergoing constant mutation. Here, we used an integrative approach combining epidemiology, virus genome sequencing, clinical phenotyping, and experimental validation to locate mutations of clinical importance. We identified 35 recurrent variants, some of which are associated with clinical phenotypes related to severity. One variant, containing a deletion in the Nsp1-coding region (Δ500-532), was found in more than 20% of our sequenced samples and is associated with higher RT-PCR cycle thresholds and lower serum IFN-β levels in infected patients. Deletion variants in this locus were found in 37 countries worldwide, and viruses isolated from clinical samples or engineered by reverse genetics with related deletions in Nsp1 also induce lower IFN-β responses in infected Calu-3 cells. Taken together, our virologic surveillance characterizes recurrent genetic diversity and identifies mutations in Nsp1 of biological and clinical importance, which together may aid molecular diagnostics and drug design.

    Towards Understanding the Generalization of Graph Neural Networks

    Graph neural networks (GNNs) are the most widely adopted models for learning and representation on graph-structured data. Despite their extraordinary success in real-world applications, a theoretical understanding of their working mechanism is still at an early stage. In this paper, we move towards this goal from the perspective of generalization. Specifically, we first establish high-probability bounds on the generalization gap and on gradients in transductive learning, taking stochastic optimization into account. We then provide high-probability bounds on the generalization gap for popular GNNs. The theoretical results reveal the architecture-specific factors affecting the generalization gap. Experimental results on benchmark datasets show the consistency between the theoretical results and the empirical evidence. Our results provide new insights into the generalization of GNNs.
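
    For readers unfamiliar with the quantity being bounded, here is a standard statement of the transductive generalization gap; the notation is generic rather than copied from the paper.

    ```latex
    % With m labeled (training) nodes, u unlabeled (test) nodes on a fixed graph,
    % loss \ell, and GNN hypothesis h, the transductive generalization gap is
    \[
      \operatorname{gap}(h)
      \;=\;
      \underbrace{\frac{1}{u}\sum_{i \in \mathcal{V}_{\mathrm{test}}}
          \ell\bigl(h(x_i), y_i\bigr)}_{\text{test risk}}
      \;-\;
      \underbrace{\frac{1}{m}\sum_{i \in \mathcal{V}_{\mathrm{train}}}
          \ell\bigl(h(x_i), y_i\bigr)}_{\text{training risk}},
    \]
    % and a high-probability bound asserts that, with probability at least
    % 1 - \delta over the random train/test split and the stochasticity of the
    % optimizer, \operatorname{gap}(h) \le \epsilon(m, u, \delta).
    ```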