8 research outputs found

    A composition theorem for the Fourier Entropy-Influence conjecture

    The Fourier Entropy-Influence (FEI) conjecture of Friedgut and Kalai [FK96] seeks to relate two fundamental measures of Boolean function complexity: it states that H[f] ≤ C·Inf[f] holds for every Boolean function f, where H[f] denotes the spectral entropy of f, Inf[f] is its total influence, and C > 0 is a universal constant. Despite significant interest in the conjecture, it has only been shown to hold for a few classes of Boolean functions. Our main result is a composition theorem for the FEI conjecture. We show that if g_1, ..., g_k are functions over disjoint sets of variables satisfying the conjecture, and if the Fourier transform of F taken with respect to the product distribution with biases E[g_1], ..., E[g_k] satisfies the conjecture, then their composition F(g_1(x^1), ..., g_k(x^k)) satisfies the conjecture. As an application we show that the FEI conjecture holds for read-once formulas over arbitrary gates of bounded arity, extending a recent result [OWZ11] which proved it for read-once decision trees. Our techniques also yield an explicit function with the largest known ratio of C ≥ 6.278 between H[f] and Inf[f], improving on the previous lower bound of 4.615.
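
    Here, for f: {-1,1}^n -> {-1,1} with Fourier coefficients fhat(S), the standard definitions are H[f] = sum_S fhat(S)^2 * log_2(1/fhat(S)^2) and Inf[f] = sum_S |S| * fhat(S)^2. Below is a minimal brute-force sketch computing both quantities for small n; the helper names are ours for illustration, not from the paper.

        import math
        from itertools import combinations, product

        def fourier_weight(f, S, n):
            # fhat(S)^2, where fhat(S) = E_x[f(x) * prod_{i in S} x_i]
            # and x is uniform over {-1,1}^n
            total = sum(f(x) * math.prod(x[i] for i in S)
                        for x in product((-1, 1), repeat=n))
            return (total / 2 ** n) ** 2

        def entropy_and_influence(f, n):
            H = Inf = 0.0
            for k in range(n + 1):
                for S in combinations(range(n), k):
                    w = fourier_weight(f, S, n)
                    if w > 0:
                        H += w * math.log2(1 / w)  # spectral entropy term
                    Inf += len(S) * w              # total influence term
            return H, Inf

        # 3-bit majority: H[f] = 2.0 and Inf[f] = 1.5, so the ratio here is 4/3
        maj3 = lambda x: 1 if sum(x) > 0 else -1
        print(entropy_and_influence(maj3, 3))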

    Decision Tree Heuristics Can Fail, Even in the Smoothed Setting

    Greedy decision tree learning heuristics are mainstays of machine learning practice, but theoretical justification for their empirical success remains elusive. In fact, it has long been known that there are simple target functions for which they fail badly (Kearns and Mansour, STOC 1996). Recent work of Brutzkus, Daniely, and Malach (COLT 2020) considered the smoothed analysis model as a possible avenue towards resolving this disconnect. Within the smoothed setting and for targets f that are k-juntas, they showed that these heuristics successfully learn f with depth-k decision tree hypotheses. They conjectured that the same guarantee holds more generally for targets that are depth-k decision trees. We provide a counterexample to this conjecture: we construct targets that are depth-k decision trees and show that even in the smoothed setting, these heuristics build trees of depth 2^{Ω̃(k)} before achieving high accuracy. We also show that the guarantees of Brutzkus et al. cannot extend to the agnostic setting: there are targets that are very close to k-juntas, for which these heuristics build trees of depth 2^{Ω̃(k)} before achieving high accuracy.
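
    The heuristics in question are top-down greedy learners in the ID3/CART mold: each node splits on the single feature that most reduces an impurity measure of the labels. A minimal sketch of that greedy rule for binary features, using Gini impurity (all names here are illustrative, not from either paper):

        def gini(ys):
            # Gini impurity of a list of 0/1 labels
            if not ys:
                return 0.0
            p = sum(ys) / len(ys)
            return 2 * p * (1 - p)

        def best_split(X, ys):
            # Feature whose split most reduces the weighted impurity, if any
            best, best_score = None, gini(ys)
            for i in range(len(X[0])):
                left = [y for x, y in zip(X, ys) if x[i] == 0]
                right = [y for x, y in zip(X, ys) if x[i] == 1]
                score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
                if score < best_score:
                    best, best_score = i, score
            return best

        def grow_tree(X, ys, budget):
            # Split greedily until the depth budget runs out or no split helps
            i = best_split(X, ys) if budget > 0 else None
            if i is None:
                return ('leaf', round(sum(ys) / len(ys)))  # majority label
            split = lambda b: ([x for x in X if x[i] == b],
                               [y for x, y in zip(X, ys) if x[i] == b])
            return ('node', i, grow_tree(*split(0), budget - 1),
                               grow_tree(*split(1), budget - 1))

        # Tiny usage example: learn x0 AND x1 from its full truth table
        X = [(0, 0), (0, 1), (1, 0), (1, 1)]
        ys = [0, 0, 0, 1]
        print(grow_tree(X, ys, budget=2))

    The sketch is only meant to pin down what "these heuristics" refers to; the paper's counterexamples are constructed so that this kind of local, one-feature-at-a-time criterion is misled even after smoothing.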

    Reasoning in Many Dimensions: Uncertainty and Products of Modal Logics

    Probabilistic Description Logics (ProbDLs) are an extension of Description Logics designed to capture uncertainty. We study problems related to these logics. First, we investigate the monodic fragment of probabilistic first-order logic, show that it has many nice properties, and are thereby able to explain the complexity results obtained for ProbDLs. Second, in order to identify well-behaved, in the best case tractable, ProbDLs, we study the complexity landscape for different fragments of ProbEL; amongst others, we are able to identify a tractable fragment. We then study the reasoning problem of ontological query answering, but apply it to probabilistic data. To this end, we define the framework of ontology-based access to probabilistic data and study its computational complexity. In the final part of the thesis, we study the complexity of the satisfiability problem in the two-dimensional modal logic K×K. We are able to close a gap that has been open for more than ten years.
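
    To make "probabilistic data" concrete: a common model (used in probabilistic databases; an illustration only, not necessarily the thesis's exact formalism) attaches independent probabilities to facts, and query answering asks for the probability, over possible worlds, that a query holds; ontology-based access additionally closes each world under the ontology's axioms. A minimal possible-worlds sketch, with all names hypothetical:

        from itertools import product

        # Tuple-independent probabilistic data: each fact holds independently
        # with the given probability
        facts = {('Teaches', 'alice', 'logic'): 0.7,
                 ('Professor', 'alice'): 0.6}

        def query_prob(query, facts):
            # Probability that the query holds, summed over all possible worlds
            items = list(facts.items())
            total = 0.0
            for world in product([False, True], repeat=len(items)):
                p, present = 1.0, set()
                for (fact, prob), keep in zip(items, world):
                    p *= prob if keep else 1 - prob
                    if keep:
                        present.add(fact)
                if query(present):
                    total += p
            return total

        # Query: is alice a professor who teaches something?
        q = lambda w: ('Professor', 'alice') in w and any(
            f[0] == 'Teaches' and f[1] == 'alice' for f in w)
        print(query_prob(q, facts))  # 0.6 * 0.7 = 0.42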

    LIPIcs, Volume 258, SoCG 2023, Complete Volume

    LIPIcs, Volume 258, SoCG 2023, Complete Volume