230,967 research outputs found

    Characterization of Alaskan HMA Mixtures with the Simple Performance Tester

    INE/AUTC 12.2

    Characterization of Asphalt Treated Base Course Material

    INE/AUTC 11.0

    Verification of Job Mix Formula for Alaskan HMA

    INE/AUTC 14.1

    Holographic superconductivity from higher derivative theory

    We construct a 6-derivative holographic superconductor model in 4-dimensional bulk spacetime, in which the normal state describes a quantum critical (QC) phase. The phase diagram $(\gamma_1,\hat{T}_c)$ and the condensate as a function of temperature are worked out numerically. We observe that as the coupling parameter $\gamma_1$ decreases, the critical temperature $\hat{T}_c$ decreases and the formation of charged scalar hair becomes harder. We also calculate the optical conductivity. An appealing characteristic is a wider superconducting energy gap compared with that of the 4-derivative theory. We expect that this phenomenon can be observed in real high-temperature superconducting materials. We also explore Homes' law in the present models with 4- and 6-derivative corrections, and find that in a certain range of the parameters $\gamma$ and $\gamma_1$, the experimentally measured value of the universal constant $C$ in Homes' law can be reproduced.
    Comment: 16 pages, 5 figures
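    For context (the abstract itself does not state it), Homes' law is the empirical scaling relation
    $$\rho_s(T \to 0) \;\simeq\; C\,\sigma_{dc}(T_c)\,T_c,$$
    where $\rho_s$ is the zero-temperature superfluid density, $\sigma_{dc}(T_c)$ is the normal-state DC conductivity just above the transition, and $C$ is the dimensionless universal constant the paper compares against; experimentally $C \approx 4.4$ for in-plane cuprates and dirty-limit BCS superconductors, in the conventional units of Homes et al.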

    Energy-efficient Amortized Inference with Cascaded Deep Classifiers

    Deep neural networks have been remarkably successful in various AI tasks, but they often incur high computation and energy costs, which is prohibitive for energy-constrained applications such as mobile sensing. We address this problem by proposing a novel framework that optimizes prediction accuracy and energy cost simultaneously, thus enabling an effective cost-accuracy trade-off at test time. In our framework, each data instance is pushed into a cascade of deep neural networks of increasing size, and a selection module sequentially determines when a sufficiently accurate classifier can be used for that instance. The cascade of neural networks and the selection module are jointly trained end-to-end with the REINFORCE algorithm to optimize a trade-off between computational cost and predictive accuracy. Our method simultaneously improves accuracy and efficiency by learning to assign easy instances to fast yet sufficiently accurate classifiers, saving computation and energy, while assigning harder instances to deeper and more powerful classifiers to ensure satisfactory accuracy. Through extensive experiments on several image classification datasets using cascaded ResNet classifiers, we demonstrate that our method outperforms standard well-trained ResNets in accuracy while requiring less than 20%, 50%, and 66% of their FLOPs on the CIFAR-10, CIFAR-100, and ImageNet datasets, respectively.
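    The following is a minimal sketch of the cascade at inference time, assuming PyTorch. The `SmallClassifier` module, the classifier widths, and the confidence-threshold stopping rule are illustrative placeholders: the paper instead trains the selection module jointly with the cascade via REINFORCE, for which a fixed threshold is only a rough proxy.

    ```python
    # Sketch of cascaded early-exit inference: each instance passes through
    # increasingly large classifiers until a selection rule decides to stop.
    import torch
    import torch.nn as nn

    class SmallClassifier(nn.Module):
        """Stand-in for one member of the cascade (a ResNet in the paper)."""
        def __init__(self, width, num_classes=10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(width, num_classes),
            )
        def forward(self, x):
            return self.net(x)

    def cascaded_predict(x, classifiers, selector):
        """Run classifiers in order of size; stop at the first one the
        selection rule accepts, falling back to the largest classifier."""
        for i, clf in enumerate(classifiers):
            logits = clf(x)
            if i == len(classifiers) - 1 or selector(logits):
                return logits.argmax(dim=-1), i  # prediction and exit stage

    def confidence_selector(logits, threshold=0.9):
        """Illustrative stop rule: accept once softmax confidence is high.
        The paper learns this decision with REINFORCE instead."""
        return logits.softmax(dim=-1).max().item() >= threshold

    classifiers = [SmallClassifier(w) for w in (16, 64, 256)]  # increasing size
    x = torch.randn(1, 3, 32, 32)  # one CIFAR-sized instance
    pred, stage = cascaded_predict(x, classifiers, confidence_selector)
    print(f"predicted class {pred.item()} using classifier {stage}")
    ```

    Making the stop/continue decision a learned policy rather than a fixed threshold is what lets the cascade optimize the accuracy-FLOPs trade-off end-to-end.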