
    Intervalley-Scattering Induced Electron-Phonon Energy Relaxation in Many-Valley Semiconductors at Low Temperatures

    We report on the effect of elastic intervalley scattering on the energy transport between electrons and phonons in many-valley semiconductors. We derive a general expression for the electron-phonon energy flow rate in the limit where elastic intervalley scattering dominates over diffusion. Electron heating experiments on heavily doped n-type Si samples with electron concentration in the range $3.5-16.0\times 10^{25}$ m$^{-3}$ are performed at sub-1 K temperatures. We find good agreement between theory and experiment. Comment: v2: Notations changed: $\Delta_i$ --> $\delta v_i$; $\tau_{eff}$ removed. Eq. (1) changed, Eq. (2) added and complete derivation of Eq. (3) included. Some further discussion about single vs. many valley added [3rd paragraph after Eq. (7)]. End notes removed and new reference added [Kragler and Thomas]. Typos in references corrected.
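
    For orientation, electron-heating experiments of this kind are commonly analysed by fitting the net electron-phonon heat flow to a power law of the form

        $P = \Sigma \Omega \left( T_e^{n} - T_{ph}^{n} \right)$,

    where $\Omega$ is the sample volume, $\Sigma$ a material- and disorder-dependent coupling constant, $T_e$ and $T_{ph}$ the electron and phonon temperatures, and the exponent $n$ (typically between 4 and 6) depends on dimensionality and on the scattering regime. This is only the generic parameterization used in such experiments, not the specific expression derived in the paper for the intervalley-scattering-dominated limit.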

    The perceptron algorithm versus winnow: linear versus logarithmic mistake bounds when few input variables are relevant

    We give an adversary strategy that forces the Perceptron algorithm to make Ω(kN) mistakes in learning monotone disjunctions over N variables with at most k literals. In contrast, Littlestone's algorithm Winnow makes at most O(k log N) mistakes for the same problem. Both algorithms use thresholded linear functions as their hypotheses. However, Winnow does multiplicative updates to its weight vector instead of the additive updates of the Perceptron algorithm. In general, we call an algorithm additive if its weight vector is always a sum of a fixed initial weight vector and some linear combination of already seen instances. Thus, the Perceptron algorithm is an example of an additive algorithm. We show that an adversary can force any additive algorithm to make (N + k − 1)/2 mistakes in learning a monotone disjunction of at most k literals. Simple experiments show that for k ≪ N, Winnow clearly outperforms the Perceptron algorithm also on nonadversarial random data.
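
    As a rough illustration of the two update rules contrasted above, the following Python sketch implements the additive Perceptron step and the multiplicative Winnow step for learning a monotone disjunction; the thresholds, update factors and toy data are illustrative choices, not the exact parameterization analysed in the paper.

```python
import numpy as np

def perceptron_update(w, x, y_true, theta=0.5):
    """Additive update: on a mistake, add or subtract the instance vector."""
    y_pred = int(w @ x >= theta)
    if y_pred != y_true:
        w = w + (1.0 if y_true == 1 else -1.0) * x      # additive step
    return w, y_pred != y_true

def winnow_update(w, x, y_true, theta, alpha=2.0):
    """Multiplicative update: on a mistake, rescale weights of active attributes."""
    y_pred = int(w @ x >= theta)
    if y_pred != y_true:
        factor = alpha if y_true == 1 else 1.0 / alpha
        w = np.where(x > 0, w * factor, w)              # multiplicative step
    return w, y_pred != y_true

# Toy run: the target is the monotone disjunction x_0 OR x_1 over N variables.
rng = np.random.default_rng(0)
N, T = 100, 2000
w_p, w_w = np.zeros(N), np.ones(N)
mistakes_p = mistakes_w = 0
for _ in range(T):
    x = rng.integers(0, 2, size=N).astype(float)
    y = int(x[0] > 0 or x[1] > 0)                       # k = 2 relevant variables
    w_p, m = perceptron_update(w_p, x, y); mistakes_p += m
    w_w, m = winnow_update(w_w, x, y, theta=N / 2); mistakes_w += m
print("Perceptron mistakes:", mistakes_p, "Winnow mistakes:", mistakes_w)
```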

    Competing with stationary prediction strategies

    In this paper we introduce the class of stationary prediction strategies and construct a prediction algorithm that asymptotically performs as well as the best continuous stationary strategy. We make mild compactness assumptions but no stochastic assumptions about the environment. In particular, no assumption of stationarity is made about the environment, and the stationarity of the considered strategies only means that they do not depend explicitly on time; we argue that it is natural to consider only stationary strategies even for highly non-stationary environments. Comment: 20 pages.
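
    To make the terminology concrete, here is a minimal Python sketch of the on-line protocol, with a stationary strategy modelled (as an illustrative simplification) as a fixed function applied identically at every round, never depending explicitly on the round number; the quadratic loss and the toy data are assumptions made only for the example.

```python
from typing import Callable, Sequence

def quadratic_loss(prediction: float, outcome: float) -> float:
    return (prediction - outcome) ** 2

def total_loss(signals: Sequence[float],
               outcomes: Sequence[float],
               strategy: Callable[[float], float]) -> float:
    """Play the on-line protocol: at each round the strategy predicts from the
    current signal, the outcome is revealed, and the loss is accumulated. A
    stationary strategy has no explicit dependence on the round number."""
    loss = 0.0
    for x_t, y_t in zip(signals, outcomes):
        gamma_t = strategy(x_t)          # prediction made before seeing y_t
        loss += quadratic_loss(gamma_t, y_t)
    return loss

# Two stationary strategies compared on the same (toy) data.
signals  = [0.1, 0.4, 0.8, 0.3]
outcomes = [0.2, 0.5, 0.7, 0.4]
print(total_loss(signals, outcomes, strategy=lambda x: x))     # "copy the signal"
print(total_loss(signals, outcomes, strategy=lambda x: 0.5))   # constant prediction
```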

    Improved algorithms for online load balancing

    We consider an online load balancing problem and its extensions in the framework of repeated games. On each round, the player chooses a distribution (task allocation) over $K$ servers, and then the environment reveals the load of each server, which determines the computation time of each server for processing the task assigned. After all rounds, the cost of the player is measured by some norm of the cumulative computation-time vector. The cost is the makespan if the norm is the $L_\infty$-norm. The goal is to minimize the regret, i.e., to minimize the player's cost relative to the cost of the best fixed distribution in hindsight. We propose algorithms for general norms and prove their regret bounds. In particular, for the $L_\infty$-norm, our regret bound matches the best known bound, and the proposed algorithm runs in polynomial time per trial, involving linear programming and second-order cone programming, whereas no polynomial-time algorithm was previously known to achieve the bound. Comment: 16 pages; typos corrected.
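
    The repeated game described above can be sketched in a few lines of Python. The allocation policies below are naive illustrative heuristics, not the algorithms proposed in the paper; for the makespan cost the best fixed distribution in hindsight happens to have a closed form, which the sketch uses to report regret.

```python
import numpy as np

def play(loads, allocate):
    """Repeated game: each round the player picks a distribution over the K
    servers, then the round's loads are revealed; the final cost is the
    L_infty norm (makespan) of the cumulative computation-time vector."""
    cumulative = np.zeros(loads.shape[1])
    for l_t in loads:
        alpha_t = allocate(cumulative)       # chosen before l_t is revealed
        cumulative += alpha_t * l_t          # computation time added per server
    return cumulative.max()

def best_fixed_makespan(loads):
    """Best fixed distribution in hindsight for the makespan cost: with total
    per-server load L_i, a fixed alpha costs max_i alpha_i * L_i, which is
    minimized by alpha_i proportional to 1/L_i, giving 1 / sum_i (1/L_i)."""
    L = loads.sum(axis=0)
    return 1.0 / np.sum(1.0 / L)

rng = np.random.default_rng(1)
T, K = 500, 4
loads = rng.uniform(0.5, 1.5, size=(T, K)) * np.array([1.0, 1.2, 0.8, 1.5])

def uniform(cumulative):
    return np.ones(K) / K

def lighter_first(cumulative):
    # Naive heuristic: favour servers with smaller cumulative time so far.
    w = 1.0 / (1.0 + cumulative)
    return w / w.sum()

for name, policy in [("uniform", uniform), ("lighter-first", lighter_first)]:
    print(name, "regret:", play(loads, policy) - best_fixed_makespan(loads))
```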

    Inhibition in multiclass classification

    The role of inhibition is investigated in a multiclass support vector machine formalism inspired by the brain structure of insects. The so-called mushroom bodies have a set of output neurons, or classification functions, that compete with each other to encode a particular input. Strongly active output neurons depress or inhibit the remaining outputs without knowing which is correct or incorrect. Accordingly, we propose to use a classification function that embodies unselective inhibition and train it in the large margin classifier framework. Inhibition leads to more robust classifiers in the sense that they perform better on larger areas of appropriate hyperparameters when assessed with leave-one-out strategies. We also show that the classifier with inhibition is a tight bound to probabilistic exponential models and is Bayes consistent for 3-class problems. These properties make this approach useful for data sets with a limited number of labeled examples. For larger data sets, there is no significant comparative advantage to other multiclass SVM approaches.
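
    The inhibition idea can be made concrete with a small illustrative sketch (it is not the paper's large-margin formulation): each output neuron is repeatedly depressed by the activity of all the others, without any information about which class is correct, so that strongly active outputs survive while weaker ones are driven to zero. The inhibition strength nu and the iterative dynamics are assumptions made only for this illustration.

```python
import numpy as np

def mutual_inhibition(raw_scores, nu=0.4, steps=25):
    """Unselective inhibition among output neurons (illustrative dynamics only):
    at every step each output is depressed by the current activity of all the
    other outputs, with no knowledge of which class is correct."""
    activity = np.maximum(raw_scores, 0.0)
    for _ in range(steps):
        others = activity.sum(axis=-1, keepdims=True) - activity
        activity = np.maximum(raw_scores - nu * others, 0.0)
    return activity

# Toy usage: three output neurons responding to one input; the strongest
# output suppresses the weaker ones.
raw = np.array([2.0, 1.2, 0.4])
print(mutual_inhibition(raw))    # weakest output driven to zero
```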

    Leading strategies in competitive on-line prediction

    We start from a simple asymptotic result for the problem of on-line regression with the quadratic loss function: the class of continuous limited-memory prediction strategies admits a "leading prediction strategy", which not only asymptotically performs at least as well as any continuous limited-memory strategy but also satisfies the property that the excess loss of any continuous limited-memory strategy is determined by how closely it imitates the leading strategy. More specifically, for any class of prediction strategies constituting a reproducing kernel Hilbert space we construct a leading strategy, in the sense that the loss of any prediction strategy whose norm is not too large is determined by how closely it imitates the leading strategy. This result is extended to the loss functions given by Bregman divergences and by strictly proper scoring rules. Comment: 20 pages; a conference version is to appear in the ALT'2006 proceedings.
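
    One hedged way to read the statement that the excess loss is "determined by how closely it imitates the leading strategy" in the quadratic-loss case is an approximate identity of the form

        $\mathrm{Loss}_N(S) - \mathrm{Loss}_N(L) \approx \sum_{n=1}^{N} (s_n - \ell_n)^2$,

    where $s_n$ and $\ell_n$ are the predictions of the strategy $S$ and of the leading strategy $L$ on round $n$. This is an illustrative paraphrase only; the precise statement, including the error terms and the norm condition on $S$, is in the paper.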

    BDUOL: Double Updating Online Learning on a Fixed Budget

    Ministry of Education, Singapore under its Academic Research Funding Tier 1; Microsoft Research grant.

    Bayesian Generalized Probability Calculus for Density Matrices

    One of the main concepts in quantum physics is a density matrix, which is a symmetric positive definite matrix of trace one. Finite probability distributions can be seen as a special case when the density matrix is restricted to be diagonal. We develop a probability calculus based on these more general distributions that includes definitions of joints, conditionals and formulas that relate these, including analogs of the Theorem of Total Probability and various Bayes rules for the calculation of posterior density matrices. The resulting calculus parallels the familiar "conventional" probability calculus and always retains the latter as a special case when all matrices are diagonal. We motivate both the conventional and the generalized Bayes rule with a minimum relative entropy principle, where the Kullback-Leibler version gives the conventional Bayes rule and Umegaki's quantum relative entropy the new Bayes rule for density matrices. Whereas the conventional Bayesian methods maintain uncertainty about which model has the highest data likelihood, the generalization maintains uncertainty about which unit direction has the largest variance. Surprisingly, the bounds also generalize: as in the conventional setting we upper bound the negative log likelihood of the data by the negative log likelihood of the MAP estimator.
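
    The diagonal special case mentioned above can be checked directly. The NumPy sketch below replaces "multiply the prior by the likelihood and renormalize" with exp(log prior + log likelihood) normalized to trace one, which is one natural non-commutative analogue suggested by the relative-entropy motivation; it is intended only to show how the conventional Bayes rule drops out when all matrices are diagonal, and does not reproduce the paper's full calculus of joints and conditionals.

```python
import numpy as np

def _logm_spd(A):
    """Matrix logarithm of a symmetric positive definite matrix."""
    vals, vecs = np.linalg.eigh(A)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def _expm_sym(A):
    """Matrix exponential of a symmetric matrix."""
    vals, vecs = np.linalg.eigh(A)
    return vecs @ np.diag(np.exp(vals)) @ vecs.T

def matrix_bayes(prior, likelihood):
    """Illustrative matrix analogue of Bayes' rule: exponentiate the sum of the
    matrix logarithms and normalize to trace one. For diagonal (commuting)
    matrices this is exactly the conventional element-wise Bayes rule."""
    posterior = _expm_sym(_logm_spd(prior) + _logm_spd(likelihood))
    return posterior / np.trace(posterior)

# Diagonal special case: a prior and a data likelihood over three models.
prior      = np.diag([0.5, 0.3, 0.2])
likelihood = np.diag([0.1, 0.6, 0.3])
p, l = prior.diagonal(), likelihood.diagonal()
print(np.round(matrix_bayes(prior, likelihood).diagonal(), 6))
print(np.round(p * l / (p * l).sum(), 6))        # identical posterior weights
```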