
    Contentious politics in protracted transition and the dynamics of actors: an analysis of South Korean movement history and party politics

    The twentieth century saw a significant number of social changes, taking the form of revolutions, revolts, and protests. Nevertheless, as the world stabilized with the end of the Cold War, contention also seemed to die down. Dominant theories concluded with the generalization that contention is an inevitable process of social change: it comes and goes. South Korea, however, remains an anomaly owing to the persisting influence of contentious actors in its society. In reality, contention does not exist in isolation from society but arises from its very soil. In South Korea, the actors, institutions, and parties that reflect a contentious identity attest to its protracted existence beyond the contentious episodes themselves. I argue that contentious politics is not an isolated event confined to the transitional period, but is capable of becoming a continuously interacting variable in society. Thus, in the case of South Korea and its protracted democratization, contention needs to be understood as an organic product of South Korean history, one that continues to drive actors with a contentious identity to fulfill their self-perceived historical duty of achieving a legitimate government.

    BayesDLL: Bayesian Deep Learning Library

    We release a new Bayesian neural network library for PyTorch for large-scale deep networks. Our library implements mainstream approximate Bayesian inference algorithms: variational inference, MC-dropout, stochastic-gradient MCMC, and the Laplace approximation. The main differences from other existing Bayesian neural network libraries are as follows: 1) Our library can deal with very large-scale deep networks, including Vision Transformers (ViTs). 2) Users need to make virtually zero code modifications (e.g., the backbone network definition code does not need to be modified at all). 3) Our library also allows the pre-trained model weights to serve as a prior mean, which is very useful for performing Bayesian inference with large-scale foundation models like ViTs that are hard to optimise from scratch with the downstream data alone. Our code is publicly available at: \url{https://github.com/SamsungLabs/BayesDLL}\footnote{A mirror repository is also available at: \url{https://github.com/minyoungkim21/BayesDLL}.}
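
    The idea of using pre-trained weights as a prior mean can be illustrated with a short PyTorch sketch. This is a hypothetical toy example and not the BayesDLL API: the map_loss helper and all names are ours, and it shows only the simplest form of the idea, a MAP objective with a Gaussian prior centred at the pre-trained weights.

        # Toy sketch: pre-trained weights as a Gaussian prior mean in a MAP
        # objective. Illustrative only; this is NOT the BayesDLL API.
        import copy
        import torch

        def map_loss(model, prior_model, nll, prior_scale=1.0):
            """Negative log-posterior: data NLL plus a Gaussian prior
            centred at the (frozen) pre-trained weights."""
            reg = 0.0
            for p, p0 in zip(model.parameters(), prior_model.parameters()):
                reg = reg + ((p - p0.detach()) ** 2).sum()
            return nll + reg / (2 * prior_scale ** 2)

        backbone = torch.nn.Linear(10, 2)   # stand-in for a large backbone
        prior = copy.deepcopy(backbone)     # snapshot of pre-trained weights
        x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
        nll = torch.nn.functional.cross_entropy(backbone(x), y)
        loss = map_loss(backbone, prior, nll, prior_scale=0.1)
        loss.backward()

    A fully Bayesian treatment (variational inference, SG-MCMC, or Laplace), as the library implements, would replace this point estimate with an approximate posterior centred around the same prior.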

    First record of the family Prodoxidae (Lepidoptera: Adeloidea), Lampronia flavimitrella (Hübner), reported from Korea

    The family Prodoxidae is recorded for the first time from Korea, with the report of Lampronia flavimitrella (Hübner), which was collected on Jeju-do Island. A redescription of the adult is given, with images of the adult and the male genitalia.

    BayesTune: Bayesian Sparse Deep Model Fine-tuning

    Deep learning practice is increasingly driven by powerful foundation models (FMs), pre-trained at scale and then fine-tuned for specific tasks of interest. A key property of this workflow is the efficacy of sparse or parameter-efficient fine-tuning: updating only a tiny fraction of all FM parameters on a downstream task can lead to surprisingly good performance, often even superior to a full model update. However, it is not clear how to select which parameters to update in an optimal and principled way. Although a growing number of sparse fine-tuning ideas have been proposed, most are unsatisfactory, relying on hand-crafted heuristics or heavy approximation. In this paper we propose a novel Bayesian sparse fine-tuning algorithm: we place a (sparse) Laplace prior on each parameter of the FM, with the mean equal to the initial value and the scale parameter governed by a hyper-prior that encourages small scales. Roughly speaking, the posterior mean of a scale parameter indicates how important it is to update the corresponding parameter away from its initial value when solving the downstream task. Given the sparse prior, most scale parameters are small a posteriori, and the few large-valued scale parameters identify those FM parameters that crucially need to be updated away from their initial values. Based on this, we can threshold the scale parameters to decide which parameters to update or freeze, leading to a principled sparse fine-tuning strategy. To efficiently infer the posterior distribution of the scale parameters, we adopt the Langevin MCMC sampler, which requires only about twice the computation of vanilla SGD. Tested on popular NLP benchmarks as well as the VTAB vision tasks, our approach shows significant improvement over the state of the art (e.g., one percentage point higher than the best prior method when fine-tuning RoBERTa on the GLUE and SuperGLUE benchmarks).
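
    The scale-inference step can be sketched in a few lines of PyTorch, under our own simplifying assumptions: a single SGLD chain on the log-scales, an exponential hyper-prior, and quantile thresholding. The function name and hyper-parameters below are illustrative, not the authors' code.

        # Hedged sketch of the BayesTune idea: each FM parameter gets a
        # Laplace prior centred at its pre-trained value with a per-parameter
        # scale; scales get a sparsity-encouraging hyper-prior and are
        # sampled with SGLD. Large posterior scales mark parameters to update.
        import torch

        def sgld_step_scales(theta, theta0, log_s, lr=1e-4, hyper_rate=10.0):
            """One SGLD step on the log-scales of the Laplace priors.
            The exponential hyper-prior (rate hyper_rate) is our choice."""
            log_s = log_s.detach().requires_grad_(True)
            s = log_s.exp()
            # negative log joint in s: Laplace term for the deviation
            # |theta - theta0| plus the exponential hyper-prior on s
            U = (torch.log(2 * s) + (theta - theta0).abs() / s
                 + hyper_rate * s).sum()
            U.backward()
            noise = torch.randn_like(log_s) * (2 * lr) ** 0.5
            return (log_s - lr * log_s.grad + noise).detach()

        theta0 = torch.randn(1000)                # pre-trained values
        theta = theta0 + 0.1 * torch.randn(1000)  # current values
        log_s = torch.zeros(1000)
        for _ in range(100):
            log_s = sgld_step_scales(theta, theta0, log_s)
        # keep only the top 1% of scales: these parameters stay trainable
        update_mask = log_s.exp() > log_s.exp().quantile(0.99)

    Because each step adds one gradient of U plus injected Gaussian noise on top of the usual SGD gradient, the roughly two-times-SGD cost claimed in the abstract is plausible in this form.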

    Artificial Neural Network estimation of soil erosion and nutrient concentrations in runoff from land application areas

    The transport of sediment and nutrients from land application areas is an environmental concern. New methods are needed for estimating the soil and nutrient concentrations of runoff from cropland areas on which manure is applied. Artificial Neural Networks (ANNs) trained with a backpropagation (BP) algorithm were used to estimate soil erosion and the dissolved P (DP) and NH4–N concentrations of runoff from a land application site near Lincoln, Nebraska, USA. Simulation results from the ANN-derived models showed that the amount of soil eroded is positively correlated with rainfall and runoff. In addition, concentrations of DP and NH4–N in overland flow were related to measurements of runoff, electrical conductivity (EC) and pH. Coefficient of determination (R2) values relating predicted versus measured estimates of soil erosion, DP, and NH4–N were 0.62, 0.72 and 0.92, respectively. The ANN models derived from measurements of runoff, EC and pH provided reliable estimates of DP and NH4–N concentrations in runoff.
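
    The modelling setup lends itself to a compact illustration: a small backpropagation-trained ANN regressing a nutrient concentration on runoff, EC and pH. The sketch below uses scikit-learn and entirely synthetic data; the network size, feature ranges and target relationship are our assumptions, not the study's.

        # Illustrative sketch of a BP-trained ANN mapping (runoff, EC, pH)
        # to a nutrient concentration. All data here are synthetic.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = np.column_stack([
            rng.uniform(0, 50, 200),     # runoff (mm), synthetic range
            rng.uniform(0.1, 2.0, 200),  # EC (dS/m), synthetic range
            rng.uniform(5.5, 8.5, 200),  # pH, synthetic range
        ])
        # assumed toy relationship for the DP concentration target
        y = (0.05 * X[:, 0] + 0.3 * X[:, 1] - 0.02 * X[:, 2]
             + rng.normal(0, 0.05, 200))

        Xs = StandardScaler().fit_transform(X)
        ann = MLPRegressor(hidden_layer_sizes=(8,), solver="adam",
                           max_iter=2000, random_state=0).fit(Xs, y)
        print("R^2 on training data:", ann.score(Xs, y))

    In practice the R2 values reported above would come from comparing predictions against held-out field measurements rather than a training-set score.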