43,185 research outputs found

    The Power of Localization for Efficiently Learning Linear Separators with Noise

    Full text link
    We introduce a new approach for designing computationally efficient learning algorithms that are tolerant to noise, and demonstrate its effectiveness by designing algorithms with improved noise tolerance guarantees for learning linear separators. We consider both the malicious noise model and the adversarial label noise model. For malicious noise, where the adversary can corrupt both the label and the features, we provide a polynomial-time algorithm for learning linear separators in $\Re^d$ under isotropic log-concave distributions that can tolerate a nearly information-theoretically optimal noise rate of $\eta = \Omega(\epsilon)$. For the adversarial label noise model, where the distribution over the feature vectors is unchanged and the overall probability of a noisy label is constrained to be at most $\eta$, we also give a polynomial-time algorithm for learning linear separators in $\Re^d$ under isotropic log-concave distributions that can handle a noise rate of $\eta = \Omega(\epsilon)$. We show that, in the active learning model, our algorithms achieve a label complexity whose dependence on the error parameter $\epsilon$ is polylogarithmic. This provides the first polynomial-time active learning algorithm for learning linear separators in the presence of malicious noise or adversarial label noise.
    Comment: Contains improved label complexity analysis communicated to us by Steve Hanneke.
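    At a very high level, the localization idea is to repeatedly refit the separator using only the examples that fall inside a shrinking band around the current hypothesis, so that noise far from the boundary cannot mislead later rounds. The sketch below is an illustrative Python toy, not the paper's algorithm: the data (X, y), the least-squares refitting step, and the parameters (rounds, initial_band, shrink) are all assumptions standing in for the constrained surrogate-loss optimization analyzed in the paper.

```python
# Minimal sketch of margin-based localization (illustrative only).
import numpy as np

def localized_learning(X, y, rounds=10, initial_band=1.0, shrink=0.7):
    """Iteratively refit a linear separator using only examples that fall
    inside a shrinking margin band around the current hypothesis.
    X: (n, d) feature matrix, y: (n,) labels in {-1, +1}."""
    n, d = X.shape
    w = np.mean(y[:, None] * X, axis=0)            # crude initial separator (averaging)
    w /= np.linalg.norm(w) + 1e-12
    band = initial_band
    for _ in range(rounds):
        margins = X @ w                             # signed margin w.r.t. current hypothesis
        inside = np.abs(margins) <= band            # localize to the band
        if inside.sum() < d:                        # too few points to refit reliably
            break
        Xb, yb = X[inside], y[inside]
        # refit on the band via least squares; stands in for the hinge-loss step
        w_new, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
        w = w_new / (np.linalg.norm(w_new) + 1e-12)
        band *= shrink                              # shrink the localization band
    return w
```

    In an active learning setting, only the examples inside the band would need their labels queried, which is the mechanism behind the polylogarithmic label complexity claimed above.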

    Efficient Learning of Linear Separators under Bounded Noise

    Full text link
    We study the learnability of linear separators in $\Re^d$ in the presence of bounded (a.k.a. Massart) noise. This is a realistic generalization of the random classification noise model, where the adversary can flip each example $x$ with probability $\eta(x) \leq \eta$. We provide the first polynomial-time algorithm that can learn linear separators to arbitrarily small excess error in this noise model under the uniform distribution over the unit ball in $\Re^d$, for some constant value of $\eta$. While widely studied in the statistical learning theory community in the context of getting faster convergence rates, computationally efficient algorithms in this model had remained elusive. Our work provides the first evidence that one can indeed design algorithms achieving arbitrarily small excess error in polynomial time under this realistic noise model, and thus opens up a new and exciting line of research. We additionally provide lower bounds showing that popular algorithms such as hinge loss minimization and averaging cannot lead to arbitrarily small excess error under Massart noise, even under the uniform distribution. Our work instead makes use of a margin-based technique developed in the context of active learning. As a result, our algorithm is also an active learning algorithm with label complexity that is only logarithmic in the desired excess error $\epsilon$.
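    To make the noise model concrete, the toy Python below draws points, labels them with a ground-truth halfspace, and flips each label independently with an example-dependent probability capped at $\eta$; it then runs the simple "averaging" estimator mentioned in the abstract. The sampling scheme, the choice of flip probabilities, and all parameters are assumptions for illustration, not the authors' construction.

```python
# Minimal sketch of the bounded (Massart) noise model and the averaging baseline.
import numpy as np

rng = np.random.default_rng(0)

def sample_massart(n, d, w_star, eta=0.4):
    """Draw x on the unit sphere (stand-in for the unit ball), label by
    sign(<w*, x>), then flip each label with probability eta(x) <= eta."""
    X = rng.normal(size=(n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    y = np.sign(X @ w_star)
    # the adversary may pick any flip rate up to eta per example; here we make it
    # largest near the decision boundary, a commonly cited hard case (assumption)
    flip_prob = eta * np.exp(-5.0 * np.abs(X @ w_star))
    flips = rng.random(n) < flip_prob
    y[flips] *= -1
    return X, y

d = 20
w_star = np.zeros(d); w_star[0] = 1.0
X, y = sample_massart(5000, d, w_star)

# averaging estimator: mean of y_i * x_i; per the abstract's lower bound, this
# simple rule plateaus at a non-vanishing excess error under Massart noise
w_avg = np.mean(y[:, None] * X, axis=0)
w_avg /= np.linalg.norm(w_avg)
print("angle to w* (radians):", np.arccos(np.clip(w_avg @ w_star, -1.0, 1.0)))
```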

    The nature of the animacy organization in human ventral temporal cortex

    Full text link
    The principles underlying the animacy organization of the ventral temporal cortex (VTC) remain hotly debated, with recent evidence pointing to an animacy continuum rather than a dichotomy. What drives this continuum? According to the visual categorization hypothesis, the continuum reflects the degree to which animals contain animal-diagnostic features. By contrast, the agency hypothesis posits that the continuum reflects the degree to which animals are perceived as (social) agents. Here, we tested both hypotheses with a stimulus set in which visual categorizability and agency were dissociated based on representations in convolutional neural networks and behavioral experiments. Using fMRI, we found that visual categorizability and agency explained independent components of the animacy continuum in VTC. Modeled together, they fully explained the animacy continuum. Finally, clusters explained by visual categorizability were localized posterior to clusters explained by agency. These results show that multiple organizing principles, including agency, underlie the animacy continuum in VTC.
    Comment: 16 pages, 5 figures, code and data at https://doi.org/10.17605/OSF.IO/VXWG9. Update: added supplementary results and edited abstract.
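    The claim that the two predictors explain independent components of the continuum is the kind of result typically obtained with variance partitioning over regression models. The Python sketch below shows that general technique on synthetic data; the variable names, the synthetic predictors, and the least-squares models are illustrative assumptions, not the authors' data or analysis code (which is available at the DOI above).

```python
# Minimal sketch of variance partitioning for two predictors (illustrative only).
import numpy as np
from numpy.linalg import lstsq

def r_squared(X, y):
    """Ordinary least-squares R^2 of y regressed on the columns of X plus an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def partition_variance(categorizability, agency, response):
    """Split explained variance into unique and shared components."""
    r2_cat = r_squared(categorizability[:, None], response)
    r2_agency = r_squared(agency[:, None], response)
    r2_both = r_squared(np.column_stack([categorizability, agency]), response)
    unique_cat = r2_both - r2_agency        # explained only by categorizability
    unique_agency = r2_both - r2_cat        # explained only by agency
    shared = r2_cat + r2_agency - r2_both   # overlap between the two predictors
    return unique_cat, unique_agency, shared, r2_both

# toy usage with synthetic, partially correlated predictors
rng = np.random.default_rng(1)
cat = rng.normal(size=200)
agency = 0.5 * cat + rng.normal(size=200)                   # correlated with categorizability
resp = 0.6 * cat + 0.4 * agency + 0.5 * rng.normal(size=200)
print(partition_variance(cat, agency, resp))
```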