619 research outputs found

    Broadcasting in Mobile Ad Hoc Networks


    The Very Dark Side of Internal Capital Markets: Evidence from Diversified Business Groups in Korea

    This paper examines capital allocation within Korean chaebol firms from 1991 to 2000. We find strong evidence that, during the period before the Asian financial crisis in the early 1990s, poorly performing firms with fewer investment opportunities invested more than well-performing firms with better growth opportunities. We also find evidence of cross-subsidization among firms in the same chaebol group during the pre-crisis period. The "dark" side of internal capital markets appears to explain most of this striking phenomenon, in which "tunneling" was common before the crisis. However, the inefficient capital allocation seems to disappear after the crisis, as banks gain more power and the market disciplines inefficient chaebol firms.

    Families in the household registers of Seventeenth-century Korea: capital, urban and rural areas

    Other support: NRF-2019S1A5B5A07106883. Because of the Japanese (1592-1598) and Manchu (1627, 1636-1637) invasions, the seventeenth century was a turning point in the Neo-Confucian transformation of the Chosŏn dynasty. Changes and continuities in Korean society and families can be seen in household registers compiled in the seventeenth century. Occupational records and family structures from the top to the bottom of society show that social hierarchies and governmental systems were well preserved even after the invasions. This study also highlights the value of household registers as a primary historical source for the study of Korean social and family history.

    BioCAD: an information fusion platform for bio-network inference and analysis

    Background: As systems biology has begun to draw growing attention, bio-network inference and analysis have become increasingly important. Although there have been many efforts at bio-network inference, they are still far from practical application because of too many false inferences and a lack of comprehensible interpretation from a biological viewpoint. To be applicable to real problems, they should provide effective inference, reliable validation, rational elucidation, and sufficient extensibility to incorporate various relevant information sources. Results: We have been developing an information fusion software platform called BioCAD. It uses both local and global optimization for bio-network inference, text mining techniques for network validation and annotation, and Web services-based workflow techniques. In addition, it includes an effective technique to elucidate network edges by integrating various information sources. This paper presents the architecture of BioCAD and its essential modules for bio-network inference and analysis. Conclusion: BioCAD provides a convenient infrastructure for network inference and network analysis. It automates a series of user processes by providing data preprocessing tools for various data formats. It also helps infer more accurate and reliable bio-networks by providing network inference tools that utilize information from distinct sources. Finally, it can be used to analyze and validate the inferred bio-networks using information fusion tools.

    Breaking the Spurious Causality of Conditional Generation via Fairness Intervention with Corrective Sampling

    To capture the relationship between samples and labels, conditional generative models often inherit spurious correlations from the training dataset. This can result in label-conditional distributions that are imbalanced with respect to another latent attribute. To mitigate this issue, which we call spurious causality of conditional generation, we propose a general two-step strategy. (a) Fairness Intervention (FI): emphasize the minority samples that are hard to generate due to the spurious correlation in the training dataset. (b) Corrective Sampling (CS): explicitly filter the generated samples and ensure that they follow the desired latent attribute distribution. We have designed the fairness intervention to work for various degrees of supervision on the spurious attribute, including unsupervised, weakly-supervised, and semi-supervised scenarios. Our experimental results demonstrate that FICS can effectively resolve spurious causality of conditional generation across various datasets.
    Comment: TMLR 202
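    The two steps described in the abstract can be sketched in a minimal form. This is not the authors' implementation: as labeled assumptions, FI is reduced to inverse-frequency reweighting of (label, attribute) groups, and CS to rejection sampling toward a target attribute distribution.

```python
import numpy as np

def fairness_intervention_weights(labels, attrs):
    """FI step (sketch, an assumption of this example): upweight minority
    (label, attribute) groups so hard-to-generate samples are emphasized."""
    groups = list(zip(labels, attrs))
    counts = {g: groups.count(g) for g in set(groups)}
    w = np.array([1.0 / counts[g] for g in groups])
    return w / w.sum()  # normalized sampling weights

def corrective_sampling(samples, attrs, target_dist, rng=None):
    """CS step (sketch): filter generated samples by rejection sampling so
    the latent-attribute distribution approaches target_dist."""
    rng = rng or np.random.default_rng(0)
    attrs = np.asarray(attrs)
    empirical = {a: np.mean(attrs == a) for a in target_dist}
    keep = []
    for i, a in enumerate(attrs):
        # over-represented attributes are accepted with probability < 1
        accept = min(1.0, target_dist[int(a)] / empirical[int(a)])
        if rng.random() < accept:
            keep.append(i)
    return [samples[i] for i in keep]
```

    In practice the weights would drive a weighted sampler during generator training, and the filter would run on generated batches; both pieces here only illustrate the shape of the strategy.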

    S-CLIP: Semi-supervised Vision-Language Learning using Few Specialist Captions

    Vision-language models, such as contrastive language-image pre-training (CLIP), have demonstrated impressive results in natural image domains. However, these models often struggle when applied to specialized domains like remote sensing, and adapting to such domains is challenging due to the limited number of image-text pairs available for training. To address this, we propose S-CLIP, a semi-supervised learning method for training CLIP that utilizes additional unpaired images. S-CLIP employs two pseudo-labeling strategies specifically designed for contrastive learning and the language modality. The caption-level pseudo-label is given by a combination of captions of paired images, obtained by solving an optimal transport problem between unpaired and paired images. The keyword-level pseudo-label is given by a keyword in the caption of the nearest paired image, trained through partial label learning that assumes a candidate set of labels for supervision instead of the exact one. By combining these objectives, S-CLIP significantly enhances the training of CLIP using only a few image-text pairs, as demonstrated in various specialist domains, including remote sensing, fashion, scientific figures, and comics. For instance, S-CLIP improves CLIP by 10% for zero-shot classification and 4% for image-text retrieval on the remote sensing benchmark, matching the performance of supervised CLIP while using three times fewer image-text pairs.
    Comment: NeurIPS 202
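    The caption-level pseudo-label can be sketched as a soft assignment of each unpaired image to the captions of paired images. Note the simplification, which is an assumption of this sketch: the paper solves an optimal transport problem, whereas the code below uses a per-row softmax over cosine similarities, i.e. it drops the OT marginal constraints.

```python
import numpy as np

def caption_pseudo_labels(unpaired_emb, paired_emb, temperature=0.1):
    """Soft weights over paired-image captions for each unpaired image
    (softmax-over-similarity stand-in for the paper's optimal transport)."""
    u = unpaired_emb / np.linalg.norm(unpaired_emb, axis=1, keepdims=True)
    p = paired_emb / np.linalg.norm(paired_emb, axis=1, keepdims=True)
    sim = (u @ p.T) / temperature          # cosine similarity, sharpened
    sim -= sim.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(sim)
    return w / w.sum(axis=1, keepdims=True)  # each row sums to 1
```

    Each row of the output would then weight the paired captions when forming the contrastive target for the corresponding unpaired image.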

    MASKER: Masked Keyword Regularization for Reliable Text Classification

    Pre-trained language models have achieved state-of-the-art accuracies on various text classification tasks, e.g., sentiment analysis, natural language inference, and semantic textual similarity. However, the reliability of the fine-tuned text classifiers is an often overlooked performance criterion. For instance, one may desire a model that can detect out-of-distribution (OOD) samples (drawn far from the training distribution) or be robust against domain shifts. We claim that one central obstacle to reliability is the model's over-reliance on a limited number of keywords, instead of looking at the whole context. In particular, we find that (a) OOD samples often contain in-distribution keywords, while (b) cross-domain samples may not always contain keywords; over-relying on keywords can be problematic in both cases. In light of this observation, we propose a simple yet effective fine-tuning method, coined masked keyword regularization (MASKER), that facilitates context-based prediction. MASKER regularizes the model to reconstruct the keywords from the rest of the words and to make low-confidence predictions without enough context. When applied to various pre-trained language models (e.g., BERT, RoBERTa, and ALBERT), we demonstrate that MASKER improves OOD detection and cross-domain generalization without degrading classification accuracy. Code is available at https://github.com/alinlab/MASKER.
    Comment: AAAI 2021. First two authors contributed equally
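    The data-side preprocessing implied by the abstract can be sketched as two small helpers: select keyword candidates, then mask them so the model must reconstruct them from context. As an assumption of this sketch, keywords are chosen by corpus frequency (the paper also considers model-attention-based selection), and the regularization losses themselves are only described in comments.

```python
from collections import Counter

def select_keywords(tokenized_docs, top_k=3):
    """Keyword candidates = most frequent tokens across the corpus
    (frequency-based variant; an attention-based variant also exists)."""
    counts = Counter(tok for doc in tokenized_docs for tok in doc)
    return {tok for tok, _ in counts.most_common(top_k)}

def mask_keywords(doc, keywords, mask_token="[MASK]"):
    """Replace keywords with the mask token. Training would then add:
    - a reconstruction loss: predict the masked keywords from context;
    - an entropy regularizer: keyword-only input should yield
      low-confidence (near-uniform) class predictions."""
    return [mask_token if tok in keywords else tok for tok in doc]
```

    This keeps the classifier from keying on a handful of tokens, which is the failure mode the abstract identifies for OOD and cross-domain inputs.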