
    Plug-in, Trainable Gate for Streamlining Arbitrary Neural Networks

    Architecture optimization, a technique for finding an efficient neural network that meets certain requirements, generally reduces to a set of multiple-choice selection problems among alternative sub-structures or parameters. The discrete nature of the selection problem, however, makes this optimization difficult. To tackle this problem, we introduce the novel concept of a trainable gate function. The trainable gate function, which confers a differentiable property on discrete-valued variables, allows us to directly optimize loss functions that include non-differentiable discrete values such as 0-1 selection. The proposed trainable gate can be applied to pruning: pruning is carried out simply by appending the proposed trainable gate functions to each intermediate output tensor and then fine-tuning the overall model with any gradient-based training method. The proposed method can therefore jointly optimize the selection of pruned channels and the weights of the pruned model. Our experimental results demonstrate that the proposed method efficiently optimizes arbitrary neural networks in various tasks such as image classification, style transfer, optical flow estimation, and neural machine translation. Comment: Accepted to AAAI 2020 (Poster)
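    The abstract does not specify the exact form of the gate function, so the following is only a minimal sketch of the general idea in PyTorch: a learnable per-channel gate appended to an intermediate output tensor, with a sigmoid relaxation plus a straight-through estimator standing in for the paper's trainable gate. All class and variable names here are hypothetical.

    import torch
    import torch.nn as nn

    class TrainableChannelGate(nn.Module):
        # Multiplies each channel of an NCHW tensor by a trainable 0-1 gate.
        def __init__(self, num_channels: int):
            super().__init__()
            # One learnable score per channel; channels whose gate closes can be pruned.
            self.scores = nn.Parameter(torch.zeros(num_channels))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            soft = torch.sigmoid(self.scores)      # differentiable relaxation in (0, 1)
            hard = (soft > 0.5).float()            # discrete 0-1 selection for the forward pass
            gate = hard + soft - soft.detach()     # straight-through: hard forward, soft gradient
            return x * gate.view(1, -1, 1, 1)

    # Appended after an intermediate output and fine-tuned jointly with the model
    # weights using any gradient-based optimizer, as described in the abstract.
    gate = TrainableChannelGate(num_channels=64)
    y = gate(torch.randn(8, 64, 32, 32))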

    Comparing Sample-wise Learnability Across Deep Neural Network Models

    Estimating the relative importance of each sample in a training set has important practical and theoretical value, for example in importance sampling or curriculum learning. This focus on individual samples invokes the concept of sample-wise learnability: how easy is it to correctly learn each sample (cf. PAC learnability)? In this paper, we approach the sample-wise learnability problem within a deep learning context. We propose a measure of the learnability of a sample with a given deep neural network (DNN) model. The basic idea is to train the given model on the training set and, for each sample, aggregate the hits and misses over all training epochs. Our experiments show that the sample-wise learnability measure collected this way is highly linearly correlated across different DNN models (ResNet-20, VGG-16, and MobileNet), suggesting that such a measure can provide general insight into the properties of the data. We expect our method to help develop better curricula for training and to help us better understand the data itself. Comment: Accepted to AAAI 2019 Student Abstract
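    As a concrete illustration of the measure described above, the sketch below counts, for each training sample, how many epochs it is classified correctly and normalizes by the number of epochs. This assumes the "aggregate the hits and misses" step is a simple per-epoch correctness count; the toy data and model are placeholders, not the ResNet-20/VGG-16/MobileNet setups from the paper.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Placeholder data and model standing in for an image dataset and a DNN.
    X = torch.randn(1000, 20)
    y = torch.randint(0, 2, (1000,))
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    loader = DataLoader(TensorDataset(torch.arange(1000), X, y), batch_size=64, shuffle=True)

    epochs = 20
    hits = torch.zeros(1000)  # per-sample count of correct predictions over epochs

    for _ in range(epochs):
        for idx, xb, yb in loader:
            logits = model(xb)
            loss = loss_fn(logits, yb)
            opt.zero_grad()
            loss.backward()
            opt.step()
            hits[idx] += (logits.argmax(dim=1) == yb).float()

    # In [0, 1]; comparing this vector across different models would give the
    # cross-model correlation studied in the paper.
    learnability = hits / epochs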

    The Ten Sonatas for Piano and Violin by Ludwig Van Beethoven

    Among the many sonatas for violin and piano, Beethoven's ten have been centerpieces of the genre, not only because of their intrinsic excellence, but also historically because they illustrate the stylistic transition from the Classical era to the Romantic. To perform all ten violin sonatas is a major milestone in the career of a violinist. Ludwig van Beethoven (1770-1827) composed his ten Sonatas for Piano and Violin between 1797 and 1812. His first three sonatas, Op.12, in D major, A major, and E-flat major, represent his Early Period (to about 1802), displaying the clarity, purity, and balance of the Classical style. Sonatas Op.23 in A minor and Op.24 ("Spring") in F major I consider transitional works, exhibiting characteristics of both Early and Middle (c. 1802-1815) Periods. In the three sonatas of Op.30, in A major, C minor, and G major, and in Op.47 ("Kreutzer") in A major, Beethoven shows Romantic traits and, especially in the C minor and Op.47 sonatas, the expansive structure characteristic of his Middle Period. Op.96 in G major is a late middle-period work which in many respects foreshadows Beethoven's late period. The ten sonatas were presented in three recitals in Gildenhorn Recital Hall at the University of Maryland's Clarice Smith Performing Arts Center. My teacher and musical coach for this project was Elisabeth Adkins. On March 19, 2004, pianist Eliza Ching and I played Sonatas Op.12 #1 in D major, Op.30 #1 in A major, and Op.96 in G major. Sonatas Op.12 #3 in E-flat major, Op.23 in A minor, Op.30 #2 in C minor, and Op.30 #3 in G major were performed on December 7, 2005 with Edward Newman at the piano. On March 9, 2006, Sonatas Op.12 #2 in A major, Op.24 in F major, and Op.47 in A major, with Edward Newman as pianist, finished the cycle. These three concerts, with accompanying programs, are documented in a digital audio format.

    Spectral Density Scaling of Fluctuating Interfaces

    The covariance matrix of heights measured relative to the average height of a growing self-affine surface in the steady state is investigated in the framework of random matrix theory. We show that the spectral density of the covariance matrix scales as $\rho(\lambda) \sim \lambda^{-\nu}$, deviating from the prediction of random matrix theory, and has the scaling form $\rho(\lambda, L) = \lambda^{-\nu} f(\lambda / L^{\phi})$ for lateral system size $L$, where the scaling function $f(x)$ approaches a constant for $x \ll 1$ and zero for $x \gg 1$. The exponents obtained by numerical simulations are $\nu \approx 1.73$ and $\phi \approx 1.40$ for the Edwards-Wilkinson class and $\nu \approx 1.64$ and $\phi \approx 1.79$ for the Kardar-Parisi-Zhang class, respectively. The distribution of the largest eigenvalue follows the scaling form $\rho(\lambda_{max}, L) = L^{-b} f_{max}((\lambda_{max} - L^{a})/L^{b})$, which differs from the Tracy-Widom distribution of random matrix theory, while the exponents $a$ and $b$ take the same values for the two classes.
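    As an illustration of how such a spectral density might be measured numerically, the sketch below makes assumptions not stated in the abstract (a 1D interface, simple Euler updates, periodic boundaries, and illustrative parameters): it evolves an Edwards-Wilkinson interface, collects steady-state height profiles relative to their mean, and computes the eigenvalue spectrum of the resulting covariance matrix.

    import numpy as np

    L = 128            # lateral system size
    dt = 0.01          # Euler time step
    nu = 1.0           # surface tension in the EW equation
    n_samples = 2000   # number of steady-state configurations

    rng = np.random.default_rng(0)
    h = np.zeros(L)

    def ew_step(h):
        # Discrete Edwards-Wilkinson update with periodic boundaries.
        lap = np.roll(h, 1) + np.roll(h, -1) - 2.0 * h
        return h + dt * nu * lap + np.sqrt(dt) * rng.normal(size=L)

    for _ in range(200_000):       # relax toward the steady state
        h = ew_step(h)

    samples = np.empty((n_samples, L))
    for s in range(n_samples):
        for _ in range(100):       # decorrelate successive samples
            h = ew_step(h)
        samples[s] = h - h.mean()  # heights relative to the average height

    C = samples.T @ samples / n_samples   # L x L covariance matrix of heights
    eigvals = np.linalg.eigvalsh(C)
    # A log-log histogram of eigvals estimates rho(lambda) ~ lambda^{-nu}; repeating the
    # measurement for several L probes the scaling form rho(lambda, L) = lambda^{-nu} f(lambda / L^phi).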

    Xenopus: An alternative model system for identifying muco-active agents

    The airway epithelium in humans plays a central role as the first line of defense against environmental contaminants. Most respiratory diseases, such as chronic obstructive pulmonary disease (COPD), asthma, and respiratory infections, disturb normal muco-ciliary function by stimulating the hypersecretion of mucus. Several muco-active agents have been used to treat hypersecretion symptoms in patients. Current muco-active reagents control mucus secretion by modulating either airway inflammation or cholinergic parasympathetic nerve activity, or by reducing viscosity through cleaving crosslinks in mucin and digesting DNA in mucus. However, none of the current medications regulates mucus secretion by directly targeting airway goblet cells. The major hurdle for screening potential muco-active agents that directly affect goblet cells is the unavailability of in vivo model systems suitable for high-throughput screening. In this study, we developed a high-throughput in vivo model system for identifying muco-active reagents using Xenopus laevis embryos. We tested mucus secretion under various conditions and developed a screening strategy to identify potential muco-regulators. Using this novel screening technique, we identified narasin as a potential muco-regulator. Narasin treatment of developing Xenopus embryos significantly reduced mucus secretion. Furthermore, the human lung epithelial cell line Calu-3 responded similarly to narasin treatment, validating our technique for discovering muco-active reagents.