Holographic superconductivity from higher derivative theory
We construct a higher-derivative holographic superconductor model in
-dimensional bulk spacetimes, in which the normal state describes a quantum
critical (QC) phase. The phase diagram and the condensation as a function of
temperature are worked out numerically. We observe that as the coupling
parameter decreases, the critical temperature decreases and the formation of
charged scalar hair becomes harder. We also calculate the optical
conductivity. An appealing characteristic is a wider extension of the
superconducting energy gap compared with that of the derivative theory. It is
expected that this phenomenon can be observed in real high-temperature
superconductor materials. We also explore Homes' law in the present models
with and derivative corrections. We find that in a certain range of the
parameters and , the experimentally measured value of the universal constant
in Homes' law can be obtained.
Comment: 16 pages, 5 figures
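For context, Homes' law is an empirical scaling relation for superconductors; a common way to state it (symbols here are standard notation, not taken from the abstract) is

```latex
\rho_s \;\simeq\; C\,\sigma_{\mathrm{DC}}(T_c)\,T_c ,
```

where $\rho_s$ is the superfluid density at low temperature, $\sigma_{\mathrm{DC}}(T_c)$ is the DC conductivity just above the critical temperature $T_c$, and $C$ is the universal constant whose experimentally measured value the abstract refers to.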
Energy-efficient Amortized Inference with Cascaded Deep Classifiers
Deep neural networks have been remarkably successful in various AI tasks but
often incur high computation and energy costs, which is problematic for
energy-constrained applications such as mobile sensing. We address this
problem by proposing a novel framework that optimizes prediction accuracy and
energy cost simultaneously, thus enabling an effective cost-accuracy trade-off
at test time. In our framework, each data instance is pushed through a cascade
of deep neural networks of increasing size, and a selection module
sequentially determines when a sufficiently accurate classifier can be used
for this data instance. The cascade of neural networks and the selection
module are jointly trained end-to-end with the REINFORCE algorithm to
optimize a trade-off between computational cost and predictive accuracy. Our
method improves accuracy and efficiency simultaneously by learning to assign
easy instances to fast yet sufficiently accurate classifiers, saving
computation and energy, while assigning harder instances to deeper and more
powerful classifiers to ensure satisfactory accuracy. With extensive
experiments on several image classification datasets using cascaded ResNet
classifiers, we demonstrate that our method outperforms standard well-trained
ResNets in accuracy while requiring less than 20%, 50%, and 66% of the FLOPs
on the CIFAR-10, CIFAR-100, and ImageNet datasets, respectively.
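The cascaded-inference idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the classifiers below are hypothetical stand-ins for cascaded ResNets, and a fixed confidence threshold replaces the learned selection module that the paper trains with REINFORCE.

```python
def cascaded_predict(x, classifiers, costs, threshold=0.9):
    """Run classifiers from cheapest to most expensive; exit at the
    first one whose confidence clears the threshold (a stand-in for
    the paper's learned selection module)."""
    total_cost = 0.0
    for clf, cost in zip(classifiers, costs):
        label, conf = clf(x)
        total_cost += cost          # pay for every model actually run
        if conf >= threshold:
            return label, total_cost
    # No early exit: fall back to the largest model's prediction.
    return label, total_cost

# Toy stand-ins: a cheap model confident only on "easy" inputs (x < 10),
# and a deep model that is always confident.
cheap = lambda x: (x % 2, 0.95 if x < 10 else 0.5)
deep  = lambda x: (x % 2, 0.99)

label, cost = cascaded_predict(3, [cheap, deep], costs=[1.0, 10.0])
# easy input exits at the cheap model: cost == 1.0
label2, cost2 = cascaded_predict(42, [cheap, deep], costs=[1.0, 10.0])
# hard input falls through to the deep model: cost == 11.0
```

The trade-off knob is the exit criterion: a stricter threshold routes more instances to the deeper models (higher accuracy, higher cost), which is the decision the paper learns jointly with the cascade instead of hand-tuning.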
