8 research outputs found

    A Weighted Version of Erdős-Kac Theorem

    Let $\omega(n)$ denote the number of distinct prime factors of a natural number $n$. A celebrated result of Erdős and Kac states that $\omega(n)$, suitably normalized, has a Gaussian limiting distribution. In this thesis, we establish a weighted version of the Erdős-Kac theorem. Specifically, we show that the Gaussian limiting distribution is preserved, but shifted, when $\omega(n)$ is weighted by the $k$-fold divisor function $\tau_k(n)$. We establish this result by computing all positive integral moments of $\omega(n)$ weighted by $\tau_k(n)$. We also provide a proof of the classical identity for $\zeta(2n)$, $n \in \mathbb{N}$, using Dirichlet's kernel.
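
    For orientation, and not part of the thesis abstract above, the two classical facts referred to can be stated as follows. The Erdős-Kac theorem asserts that

        \[ \lim_{x \to \infty} \frac{1}{x}\,\#\left\{ n \le x : \frac{\omega(n) - \log\log n}{\sqrt{\log\log n}} \le t \right\} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-u^2/2}\, du \quad \text{for every } t \in \mathbb{R}, \]

    and the classical identity for the zeta function at even integers reads

        \[ \zeta(2n) = \frac{(-1)^{n+1} (2\pi)^{2n} B_{2n}}{2\,(2n)!}, \qquad n \in \mathbb{N}, \]

    where $B_{2n}$ denotes the Bernoulli numbers.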

    A Characterization of Online Multiclass Learnability

    We consider the problem of online multiclass learning when the number of labels is unbounded. We show that the Multiclass Littlestone dimension, first introduced in \cite{DanielyERMprinciple}, continues to characterize online learnability in this setting. Our result complements the recent work by \cite{Brukhimetal2022}, who give a characterization of batch multiclass learnability when the label space is unbounded.
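
    For orientation, a minimal sketch of the full-information online multiclass protocol (not code from the paper; the names learner, rounds, predict, and update are hypothetical placeholders):

        # Minimal sketch of full-information online multiclass classification:
        # each round the learner predicts a label, then the true label is revealed.
        def online_multiclass(learner, rounds):
            mistakes = 0
            for x_t, y_t in rounds:               # rounds yields (instance, true label) pairs
                y_hat = learner.predict(x_t)      # learner commits to a prediction
                mistakes += int(y_hat != y_t)     # 0-1 loss for this round
                learner.update(x_t, y_t)          # full information: the true label is revealed
            return mistakes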

    Online Infinite-Dimensional Regression: Learning Linear Operators

    We consider the problem of learning linear operators under squared loss between two infinite-dimensional Hilbert spaces in the online setting. We show that the class of linear operators with uniformly bounded $p$-Schatten norm is online learnable for any $p \in [1, \infty)$. On the other hand, we prove an impossibility result by showing that the class of uniformly bounded linear operators with respect to the operator norm is \textit{not} online learnable. Moreover, we show a separation between online uniform convergence and online learnability by identifying a class of bounded linear operators that is online learnable but for which uniform convergence does not hold. Finally, we prove that the impossibility result and the separation between uniform convergence and learnability also hold in the agnostic PAC setting.
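
    For orientation (standard definitions, not taken from the paper): if $T$ is a compact linear operator between Hilbert spaces with singular values $s_1(T) \ge s_2(T) \ge \cdots$, the two norms contrasted above are

        \[ \|T\|_{S_p} = \Big( \sum_{j \ge 1} s_j(T)^p \Big)^{1/p}, \qquad \|T\|_{\mathrm{op}} = \sup_{\|x\| \le 1} \|Tx\| = s_1(T), \]

    so a uniform bound on the $p$-Schatten norm is a stronger constraint than a uniform bound on the operator norm, since $\|T\|_{\mathrm{op}} \le \|T\|_{S_p}$.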

    A Characterization of Multioutput Learnability

    We consider the problem of learning multioutput function classes in batch and online settings. In both settings, we show that a multioutput function class is learnable if and only if each single-output restriction of the function class is learnable. This provides a complete characterization of the learnability of multilabel classification and multioutput regression in both batch and online settings. As an extension, we also consider multilabel learnability in the bandit feedback setting and show a characterization analogous to the one in the full-feedback setting.
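
    Reading "single-output restriction" in the usual coordinate-projection sense (an interpretation for illustration, not a quotation from the paper): for a class $\mathcal{F}$ of functions $f : \mathcal{X} \to \mathcal{Y}_1 \times \cdots \times \mathcal{Y}_K$, the $k$-th restriction is

        \[ \mathcal{F}_k = \{\, x \mapsto f(x)_k : f \in \mathcal{F} \,\}, \qquad k = 1, \dots, K, \]

    and the characterization says that $\mathcal{F}$ is learnable if and only if every $\mathcal{F}_k$ is.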

    Revisiting the Learnability of Apple Tasting

    In online binary classification under \textit{apple tasting} feedback, the learner only observes the true label if it predicts "1". This classical partial-feedback setting was first studied by \cite{helmbold2000apple}; we revisit it and study online learnability from a combinatorial perspective. We show that the Littlestone dimension continues to provide a tight quantitative characterization of apple tasting in the agnostic setting, closing an open question posed by \cite{helmbold2000apple}. In addition, we give a new combinatorial parameter, called the Effective width, that tightly quantifies the minimax expected number of mistakes in the realizable setting. As a corollary, we use the Effective width to establish a \textit{trichotomy} of the minimax expected number of mistakes in the realizable setting. In particular, we show that in the realizable setting, the expected number of mistakes of any learner under apple tasting feedback can only be $\Theta(1)$, $\Theta(\sqrt{T})$, or $\Theta(T)$.
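
    A minimal sketch (not code from the paper) of how the apple tasting feedback rule differs from the full-information loop above: the true label is revealed only on rounds where the learner predicts 1.

        # Sketch of apple tasting feedback: labels are in {0, 1} and the learner
        # sees the true label y_t only on rounds where it predicts 1.
        def apple_tasting(learner, rounds):
            mistakes = 0
            for x_t, y_t in rounds:
                y_hat = learner.predict(x_t)
                mistakes += int(y_hat != y_t)   # mistakes are counted even when unobserved
                if y_hat == 1:                  # feedback only when the learner "tastes the apple"
                    learner.update(x_t, y_t)
            return mistakes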

    Multiclass Online Learnability under Bandit Feedback

    We study online multiclass classification under bandit feedback. We extend the results of Daniely and Helbertal [2013] by showing that the finiteness of the Bandit Littlestone dimension is necessary and sufficient for bandit online multiclass learnability even when the label space is unbounded. Moreover, we show that, unlike the full-information setting, sequential uniform convergence is necessary but not sufficient for bandit online learnability. Our result complements the recent work by Hanneke, Moran, Raman, Subedi, and Tewari [2023] who show that the Littlestone dimension characterizes online multiclass learnability in the full-information setting even when the label space is unbounded.
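
    Again as a rough sketch (not code from the paper), the bandit feedback rule reveals only whether the prediction was correct, never the true label itself:

        # Sketch of bandit feedback for online multiclass classification:
        # the learner observes only the indicator of correctness, not y_t.
        def bandit_multiclass(learner, rounds):
            mistakes = 0
            for x_t, y_t in rounds:
                y_hat = learner.predict(x_t)
                correct = (y_hat == y_t)
                mistakes += int(not correct)
                learner.update(x_t, y_hat, correct)   # feedback is the correctness bit only
            return mistakes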

    Online Learning with Set-Valued Feedback

    We study a variant of online multiclass classification where the learner predicts a single label but receives a \textit{set of labels} as feedback. In this model, the learner is penalized for not outputting a label contained in the revealed set. We show that, unlike online multiclass learning with single-label feedback, deterministic and randomized online learnability are \textit{not equivalent} in the realizable setting under set-valued feedback. In addition, we show that deterministic and randomized realizable learnability are equivalent if the Helly number of the collection of sets that can be revealed as feedback is finite. In light of this separation, we give two new combinatorial dimensions, named the Set Littlestone dimension and the Measure Shattering dimension, whose finiteness characterizes deterministic and randomized realizable learnability respectively. These dimensions also lower- and upper-bound the deterministic and randomized minimax regret in the realizable setting. Going beyond the realizable setting, we prove that the Measure Shattering dimension continues to characterize learnability and quantify minimax regret in the agnostic setting. Finally, we use our results to establish bounds on the minimax regret in three practical learning settings: online multilabel ranking, online multilabel classification, and real-valued prediction with interval-valued response.
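
    A minimal sketch (again not code from the paper) of the set-valued feedback model: the learner predicts one label, the environment reveals a set, and loss is incurred when the prediction falls outside the set.

        # Sketch of online learning with set-valued feedback: the learner predicts a
        # single label y_hat, then observes a set S_t and pays 1 if y_hat is not in S_t.
        def set_valued_feedback(learner, rounds):
            total_loss = 0
            for x_t, S_t in rounds:                 # S_t is the revealed feedback set
                y_hat = learner.predict(x_t)
                total_loss += int(y_hat not in S_t)
                learner.update(x_t, S_t)            # the whole set is available as feedback
            return total_loss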

    Sums of random multiplicative functions over function fields with few irreducible factors

    We establish a normal approximation for the limiting distribution of partial sums of random Rademacher multiplicative functions over function fields, provided the number of irreducible factors of the polynomials is small enough. This parallels work of Harper for random Rademacher multiplicative functions over the integers.
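
    For orientation, a standard definition stated in the integer setting that the abstract compares against (the function-field setting replaces primes with monic irreducible polynomials): a Rademacher random multiplicative function $f$ is determined by independent uniform signs on the primes,

        \[ f(p) \in \{\pm 1\} \ \text{i.i.d.}, \qquad f(n) = \prod_{p \mid n} f(p) \ \text{for squarefree } n, \qquad f(n) = 0 \ \text{otherwise}, \]

    and the object of study is the distribution of partial sums of $f$, here restricted to polynomials with few irreducible factors.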