
    The perceptron algorithm versus winnow: linear versus logarithmic mistake bounds when few input variables are relevant

    We give an adversary strategy that forces the Perceptron algorithm to make Ω(kN) mistakes in learning monotone disjunctions over N variables with at most k literals. In contrast, Littlestone's algorithm Winnow makes at most O(k log N) mistakes for the same problem. Both algorithms use thresholded linear functions as their hypotheses. However, Winnow performs multiplicative updates to its weight vector instead of the additive updates of the Perceptron algorithm. In general, we call an algorithm additive if its weight vector is always the sum of a fixed initial weight vector and some linear combination of already seen instances. Thus, the Perceptron algorithm is an example of an additive algorithm. We show that an adversary can force any additive algorithm to make (N + k − 1)/2 mistakes in learning a monotone disjunction of at most k literals. Simple experiments show that for k ≪ N, Winnow clearly outperforms the Perceptron algorithm also on nonadversarial random data.
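    To make the additive-versus-multiplicative distinction concrete, below is a minimal sketch, not the paper's experimental setup: the label encoding, thresholds, promotion factor, and data distribution are illustrative choices. It compares Perceptron and Winnow updates on a monotone disjunction with k ≪ N relevant variables.

```python
import numpy as np

def perceptron_step(w, x, y, threshold=0.5):
    # Additive update: on a mistake, add (or subtract) the instance itself.
    pred = 1 if w @ x >= threshold else -1
    if pred != y:
        w = w + y * x
    return w, pred != y

def winnow_step(w, x, y, threshold, alpha=2.0):
    # Multiplicative update: on a missed positive, multiply the weights of the
    # active variables by alpha; on a false positive, divide them by alpha.
    pred = 1 if w @ x >= threshold else -1
    if pred != y:
        factor = alpha if y == 1 else 1.0 / alpha
        w = np.where(x > 0, w * factor, w)
    return w, pred != y

# Toy target: the disjunction x_0 OR x_1 (k = 2) over N = 100 Boolean variables.
rng = np.random.default_rng(0)
N, k, T = 100, 2, 2000
w_p, w_w = np.zeros(N), np.ones(N)
mistakes_p = mistakes_w = 0
for _ in range(T):
    x = (rng.random(N) < 0.05).astype(float)
    y = 1 if x[:k].any() else -1
    w_p, m_p = perceptron_step(w_p, x, y)
    w_w, m_w = winnow_step(w_w, x, y, threshold=N)
    mistakes_p += m_p
    mistakes_w += m_w
print(mistakes_p, mistakes_w)  # Winnow typically makes far fewer mistakes when k << N
```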

    Intervalley-Scattering Induced Electron-Phonon Energy Relaxation in Many-Valley Semiconductors at Low Temperatures

    We report on the effect of elastic intervalley scattering on the energy transport between electrons and phonons in many-valley semiconductors. We derive a general expression for the electron-phonon energy flow rate in the limit where elastic intervalley scattering dominates over diffusion. Electron heating experiments on heavily doped n-type Si samples with electron concentrations in the range $3.5$–$16.0\times 10^{25}$ m$^{-3}$ are performed at sub-1 K temperatures. We find good agreement between theory and experiment. Comment: v2: Notations changed: $\Delta_i \to \delta v_i$, $\tau_{\mathrm{eff}}$ removed. Eq. (1) changed, Eq. (2) added, and complete derivation of Eq. (3) included. Some further discussion about single vs. many valleys added [3rd paragraph after Eq. (7)]. End notes removed and new reference added [Kragler and Thomas]. Typos in references corrected.

    Competing with stationary prediction strategies

    In this paper we introduce the class of stationary prediction strategies and construct a prediction algorithm that asymptotically performs as well as the best continuous stationary strategy. We make mild compactness assumptions but no stochastic assumptions about the environment. In particular, no assumption of stationarity is made about the environment, and the stationarity of the considered strategies only means that they do not depend explicitly on time; we argue that it is natural to consider only stationary strategies even for highly non-stationary environments. Comment: 20 pages.
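    As a minimal illustration of the stationarity notion used here (the concrete prediction rules are invented examples, not taken from the paper): a stationary strategy is a fixed map from the observed signal to a prediction, while a non-stationary one may change its behavior depending on the round index itself.

```python
def stationary_strategy(signal):
    # depends only on what is observed, never on when it is observed
    return 0.5 * signal

def nonstationary_strategy(signal, t):
    # explicit dependence on the round index t rules out stationarity
    return (0.5 if t < 100 else 0.9) * signal
```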

    On-line PCA with Optimal Regrets

    We carefully investigate the on-line version of PCA, where in each trial a learning algorithm plays a k-dimensional subspace and suffers the compression loss on the next instance when it is projected into the chosen subspace. In this setting, we analyze two popular on-line algorithms, Gradient Descent (GD) and Exponentiated Gradient (EG). We show that both algorithms are essentially optimal in the worst case. This comes as a surprise, since EG is known to perform sub-optimally when the instances are sparse. This different behavior of EG for PCA is mainly related to the non-negativity of the loss in this case, which makes the PCA setting qualitatively different from other settings studied in the literature. Furthermore, we show that when considering regret bounds as a function of a loss budget, EG remains optimal and strictly outperforms GD. Next, we study an extension of the PCA setting in which Nature is allowed to play dense instances, which are positive matrices with bounded largest eigenvalue. Again we show that EG is optimal and strictly better than GD in this setting.
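    A minimal sketch of the two update styles compared above, written for a density-matrix parameterization of the learned subspace; this is an assumption-laden illustration, not the paper's algorithms: the gradient form x xᵀ, the step sizes, and the crude projection used for GD (the actual algorithms project onto a capped eigenvalue set) are all illustrative choices.

```python
import numpy as np

def _sym_apply(W, fn):
    # Apply a scalar function to the eigenvalues of a symmetric matrix.
    vals, vecs = np.linalg.eigh(W)
    return (vecs * fn(vals)) @ vecs.T

def gd_step(W, x, eta=0.05):
    # Gradient Descent: additive update with the instantaneous gradient x x^T,
    # followed by a crude projection back to a unit-trace PSD matrix.
    W = W - eta * np.outer(x, x)
    vals, vecs = np.linalg.eigh(W)
    vals = np.clip(vals, 0.0, None)
    return (vecs * (vals / vals.sum())) @ vecs.T

def eg_step(W, x, eta=0.05):
    # Exponentiated Gradient: multiplicative (matrix-exponentiated) update,
    # renormalized to unit trace; W stays strictly positive definite.
    W = _sym_apply(_sym_apply(W, np.log) - eta * np.outer(x, x), np.exp)
    return W / np.trace(W)

d = 5
W_gd = W_eg = np.eye(d) / d        # start from the maximally mixed matrix
x = np.random.default_rng(0).standard_normal(d)
W_gd, W_eg = gd_step(W_gd, x), eg_step(W_eg, x)
```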

    Inhibition in multiclass classification

    The role of inhibition is investigated in a multiclass support vector machine formalism inspired by the brain structure of insects. The so-called mushroom bodies have a set of output neurons, or classification functions, that compete with each other to encode a particular input. Strongly active output neurons depress or inhibit the remaining outputs without knowing which is correct or incorrect. Accordingly, we propose to use a classification function that embodies unselective inhibition and to train it in the large-margin classifier framework. Inhibition leads to more robust classifiers in the sense that they perform well over a wider range of hyperparameter values when assessed with leave-one-out strategies. We also show that the classifier with inhibition is a tight bound to probabilistic exponential models and is Bayes consistent for 3-class problems. These properties make this approach useful for data sets with a limited number of labeled examples. For larger data sets, there is no significant advantage over other multiclass SVM approaches.

    Improved algorithms for online load balancing

    We consider an online load balancing problem and its extensions in the framework of repeated games. On each round, the player chooses a distribution (task allocation) over $K$ servers, and then the environment reveals the load of each server, which determines the computation time of each server for processing the task assigned. After all rounds, the cost of the player is measured by some norm of the cumulative computation-time vector; the cost is the makespan if the norm is the $L_\infty$-norm. The goal is to minimize the regret, i.e., the player's cost relative to the cost of the best fixed distribution in hindsight. We propose algorithms for general norms and prove their regret bounds. In particular, for the $L_\infty$-norm, our regret bound matches the best known bound, and the proposed algorithm runs in polynomial time per trial, involving linear programming and second order programming, whereas no polynomial-time algorithm was previously known to achieve the bound. Comment: 16 pages; typos corrected.
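    As a toy illustration of the setting only (not the proposed algorithm, which involves linear programming and second order programming), the sketch below allocates each task with a softmin over the servers' cumulative computation times and then reads off the makespan as the $L_\infty$-norm of the cumulative vector; the learning rate and the load model are assumptions.

```python
import numpy as np

def play_rounds(loads, eta=1.0):
    """loads: (T, K) array; loads[t, i] is the load of server i on round t."""
    T, K = loads.shape
    cum_time = np.zeros(K)            # cumulative computation-time vector
    for t in range(T):
        # Heuristic allocation: put more probability mass on servers whose
        # cumulative computation time is currently small (softmin weighting).
        p = np.exp(-eta * (cum_time - cum_time.min()))
        p /= p.sum()
        cum_time += p * loads[t]      # server i spends p_i * load_i on this task
    return cum_time

rng = np.random.default_rng(1)
cum = play_rounds(rng.random((500, 4)))
print(cum.max())  # makespan = L_infinity norm of the cumulative computation-time vector
```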