
    Nearly optimal solutions for the Chow Parameters Problem and low-weight approximation of halfspaces

    The \emph{Chow parameters} of a Boolean function f: \{-1,1\}^n \to \{-1,1\} are its n+1 degree-0 and degree-1 Fourier coefficients. It has been known since 1961 (Chow, Tannenbaum) that the exact values of the Chow parameters of any linear threshold function f uniquely specify f within the space of all Boolean functions, but until recently (O'Donnell and Servedio) nothing was known about efficient algorithms for \emph{reconstructing} f (exactly or approximately) from exact or approximate values of its Chow parameters. We refer to this reconstruction problem as the \emph{Chow Parameters Problem}. Our main result is a new algorithm for the Chow Parameters Problem which, given sufficiently accurate approximations to the Chow parameters of any linear threshold function f, runs in time \tilde{O}(n^2) \cdot (1/\epsilon)^{O(\log^2(1/\epsilon))} and with high probability outputs a representation of an LTF f' that is \epsilon-close to f. The only previous algorithm (O'Donnell and Servedio) had running time \poly(n) \cdot 2^{2^{\tilde{O}(1/\epsilon^2)}}. As a byproduct of our approach, we show that for any linear threshold function f over \{-1,1\}^n, there is a linear threshold function f' which is \epsilon-close to f and whose weights are all integers of magnitude at most \sqrt{n} \cdot (1/\epsilon)^{O(\log^2(1/\epsilon))}. This significantly improves the best previous result of Diakonikolas and Servedio, which gave a \poly(n) \cdot 2^{\tilde{O}(1/\epsilon^{2/3})} weight bound, and is close to the known lower bound of \max\{\sqrt{n}, (1/\epsilon)^{\Omega(\log \log (1/\epsilon))}\} (Goldberg, Servedio). Our techniques also yield improved algorithms for related problems in learning theory.
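    To make the definition concrete, here is a brute-force sketch (exponential in n, for illustration only; the function names are my own) that computes the n+1 Chow parameters of a linear threshold function by enumerating the hypercube:

    ```python
    from itertools import product

    def ltf(w, theta, x):
        """Linear threshold function sign(<w, x> - theta) over {-1,1}^n."""
        s = sum(wi * xi for wi, xi in zip(w, x))
        return 1 if s - theta >= 0 else -1

    def chow_parameters(w, theta, n):
        """The n+1 Chow parameters: E[f(x)] and E[f(x) * x_i], x uniform on {-1,1}^n."""
        total = 2 ** n
        chow = [0.0] * (n + 1)
        for x in product([-1, 1], repeat=n):
            fx = ltf(w, theta, x)
            chow[0] += fx                      # degree-0 Fourier coefficient
            for i in range(n):
                chow[i + 1] += fx * x[i]       # degree-1 Fourier coefficients
        return [c / total for c in chow]

    # Majority on 3 bits: weights (1,1,1), threshold 0.
    print(chow_parameters([1, 1, 1], 0, 3))    # -> [0.0, 0.5, 0.5, 0.5]
    ```

    For MAJ on 3 bits the degree-0 coefficient is 0 (the function is balanced) and each degree-1 coefficient is 1/2, matching the well-known Fourier expansion of majority.
    
    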

    The Chow parameters problem

    In the 2nd Annual FOCS (1961), C. K. Chow proved that every Boolean threshold function is uniquely determined by its degree-0 and degree-1 Fourier coefficients. These numbers became known as the Chow Parameters. Providing an algorithmic version of Chow's theorem, i.e., efficiently constructing a representation of a threshold function given its Chow Parameters, has remained open ever since. This problem has received significant study in the fields of circuit complexity [Elg60, Cho61, Der65, Win71], game theory and the design of voting systems [DS79, Lee03, TT06, APL07], and learning theory [BDJ+98, Gol06]. In this paper we effectively solve the problem, giving a randomized PTAS with the following behavior: Theorem: Given the Chow Parameters of a Boolean threshold function f over n bits and any constant ε > 0, the algorithm runs in time O(n^2 log^2 n) and with high probability outputs a representation of a threshold function f' which is ε-close to f. Along the way we prove several new results of independent interest about Boolean threshold functions. In addition to various structural results, these include new algorithmic results.
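    For intuition about why reconstruction is nontrivial, here is a toy sketch of the naive "use the Chow parameters themselves as weights" heuristic (not the paper's PTAS; all names are my own). The degree-0 and degree-1 coefficients give the best linear approximation to f, so thresholding that approximation often agrees with f on most inputs, but not all:

    ```python
    from itertools import product

    def sign(v):
        return 1 if v >= 0 else -1

    def chow(f, n):
        """Degree-0 and degree-1 Fourier coefficients under the uniform distribution."""
        pts = list(product([-1, 1], repeat=n))
        c0 = sum(f(x) for x in pts) / len(pts)
        ci = [sum(f(x) * x[i] for x in pts) / len(pts) for i in range(n)]
        return c0, ci

    def chow_heuristic(f, n):
        """Rebuild a threshold function using the Chow parameters as weights."""
        c0, ci = chow(f, n)
        return lambda x: sign(c0 + sum(w * xi for w, xi in zip(ci, x)))

    # An LTF with uneven weights; compare it to its Chow-heuristic reconstruction.
    f = lambda x: sign(2 * x[0] + x[1] + x[2] - 1)
    g = chow_heuristic(f, 3)
    pts = list(product([-1, 1], repeat=3))
    agreement = sum(f(x) == g(x) for x in pts) / len(pts)
    print(agreement)   # -> 0.875: close to f, but not exactly f
    ```

    The heuristic recovers a threshold function that is close to f but disagrees on one of the eight points here, illustrating why obtaining ε-closeness efficiently required more machinery.
    
    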

    Pose-graph SLAM sparsification using factor descent

    Since state-of-the-art simultaneous localization and mapping (SLAM) algorithms are not constant time, it is often necessary to reduce the problem size while keeping as much of the original graph's information content as possible. In graph SLAM, the problem is reduced by removing nodes and rearranging factors. This is normally addressed locally: after selecting a node to be removed, its Markov blanket sub-graph is isolated, the node is marginalized, and the dense result is sparsified. The aim of sparsification is to approximate the dense, non-relinearizable result of node marginalization with a new set of factors. Sparsification consists of two processes: building the topology of the new factors, and finding the optimal parameters that best approximate the original dense distribution. This best approximation can be obtained by minimizing the Kullback-Leibler divergence between the two distributions. For simple topologies such as Chow-Liu trees, the optimal solution has a closed form. However, a tree is oftentimes too sparse and produces poor distribution approximations. More populated topologies, in contrast, require nonlinear iterative optimization. In the present paper, the particularities of pose-graph SLAM are exploited to design new informative topologies and to apply the novel factor descent iterative optimization method for sparsification. Several experiments compare the proposed topology methods and factor descent optimization with state-of-the-art methods on synthetic and real datasets, with regard to approximation accuracy and computational cost.
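    For intuition about the objective being minimized, the Kullback-Leibler divergence between two Gaussians has the standard closed form used in such sparsification problems. A generic NumPy sketch (not the paper's factor-descent code; the example covariances are made up) evaluating it for a "dense" marginal and a sparser diagonal approximation:

    ```python
    import numpy as np

    def kl_gaussian(mu0, S0, mu1, S1):
        """KL( N(mu0, S0) || N(mu1, S1) ) for d-dimensional Gaussians,
        using the standard closed form for multivariate normals."""
        d = len(mu0)
        S1_inv = np.linalg.inv(S1)
        diff = mu1 - mu0
        return 0.5 * (np.trace(S1_inv @ S0)
                      + diff @ S1_inv @ diff
                      - d
                      + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

    mu = np.zeros(2)
    S_dense = np.array([[1.0, 0.6], [0.6, 1.0]])   # correlated "dense" marginal
    S_sparse = np.eye(2)                           # sparser diagonal approximation
    print(kl_gaussian(mu, S_dense, mu, S_sparse))  # > 0: correlation information lost
    ```

    Dropping the off-diagonal correlation gives a strictly positive divergence; a richer topology that preserves some of that coupling would drive the KL term lower, which is the trade-off between sparsity and approximation accuracy described above.
    
    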

    Low-level laser therapy (LLLT) combined with swimming training improved the lipid profile in rats fed with high-fat diet

    Obesity and the associated dyslipidemia are among the fastest growing health problems throughout the world. The combination of exercise and low-level laser therapy (LLLT) could be a new approach to the treatment of obesity and associated disease. In this work, the effects of LLLT combined with exercise on lipid metabolism were assessed in rats fed regular or high-fat diets. We used 64 rats divided into eight groups of eight rats each, designated: SC, sedentary chow diet; SCL, sedentary chow diet laser; TC, trained chow diet; TCL, trained chow diet laser; SH, sedentary high-fat diet; SHL, sedentary high-fat diet laser; TH, trained high-fat diet; and THL, trained high-fat diet laser. The exercise used was swimming, 90 min daily for 8 weeks, and LLLT (Ga-Al-As, 830 nm) was applied to both gastrocnemius muscles after exercise at a dose of 4.7 J/point and a total energy of 9.4 J per animal. We analyzed biochemical parameters, percentage of fat, hepatic and muscular glycogen, relative tissue mass, and percentage weight gain. The statistical test used was ANOVA with post hoc Tukey-Kramer for multiple comparisons between groups, at significance levels of p < 0.001, p < 0.01, and p < 0.05. LLLT decreased total cholesterol (p < 0.05), triglycerides (p < 0.01), low-density lipoprotein cholesterol (p < 0.05), and the relative mass of fat tissue (p < 0.05), suggesting increased metabolic activity and altered lipid pathways. The combination of exercise and LLLT increased the benefits of exercise alone. However, LLLT without exercise tended to increase body weight and fat content. LLLT may be a valuable addition to a regimen of diet and exercise for weight reduction and dyslipidemic control.
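    The analysis pipeline described (one-way ANOVA across groups, followed by a post hoc multiple-comparison test) can be sketched with SciPy on entirely synthetic data. The group names and measurement values below are invented stand-ins, not the study's data:

    ```python
    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(1)
    # Hypothetical cholesterol-like measurements, n = 8 per group as in the study.
    sc  = rng.normal(100, 10, size=8)   # sedentary chow (SC)
    sh  = rng.normal(140, 10, size=8)   # sedentary high-fat (SH)
    shl = rng.normal(120, 10, size=8)   # sedentary high-fat + laser (SHL)

    # One-way ANOVA: do the group means differ?
    stat, p = f_oneway(sc, sh, shl)
    print(p < 0.001)   # significant at the strictest level reported above
    ```

    A significant ANOVA would then be followed by a pairwise post hoc procedure (Tukey-Kramer in the study) to identify which specific groups differ.
    
    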

    Public projects, Boolean functions and the borders of Border's theorem

    Border's theorem gives an intuitive linear characterization of the feasible interim allocation rules of a Bayesian single-item environment, and it has several applications in economic and algorithmic mechanism design. All known generalizations of Border's theorem either restrict attention to relatively simple settings, or resort to approximation. This paper identifies a complexity-theoretic barrier indicating that, assuming standard complexity class separations, Border's theorem cannot be extended significantly beyond the state of the art. We also identify a surprisingly tight connection between Myerson's optimal auction theory, when applied to public project settings, and some fundamental results in the analysis of Boolean functions.
    Comment: Accepted to ACM EC 201

    Learning Geometric Concepts with Nasty Noise

    We study the efficient learnability of geometric concept classes, specifically low-degree polynomial threshold functions (PTFs) and intersections of halfspaces, when a fraction of the data is adversarially corrupted. We give the first polynomial-time PAC learning algorithms for these concept classes with dimension-independent error guarantees in the presence of nasty noise under the Gaussian distribution. In the nasty noise model, an omniscient adversary can arbitrarily corrupt a small fraction of both the unlabeled data points and their labels. This model generalizes well-studied noise models, including the malicious noise model and the agnostic (adversarial label noise) model. Prior to our work, the only concept class for which efficient malicious learning algorithms were known was the class of origin-centered halfspaces. Specifically, our robust learning algorithm for low-degree PTFs succeeds under a number of tame distributions, including the Gaussian distribution and, more generally, any log-concave distribution with (approximately) known low-degree moments. For LTFs under the Gaussian distribution, we give a polynomial-time algorithm that achieves error O(ε), where ε is the noise rate. At the core of our PAC learning results is an efficient algorithm to approximate the low-degree Chow parameters of any bounded function in the presence of nasty noise. To achieve this, we employ an iterative spectral method for outlier detection and removal, inspired by recent work in robust unsupervised learning. This algorithm succeeds for a range of distributions satisfying mild concentration bounds and moment assumptions. The correctness of our robust learning algorithm for intersections of halfspaces makes essential use of a novel robust inverse independence lemma that may be of broader interest.
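    The iterative spectral filtering idea can be caricatured in a few lines (a generic sketch under assumed parameters, not the paper's algorithm): while the sample covariance has a suspiciously large top eigenvalue, discard the points that deviate most along the corresponding eigenvector.

    ```python
    import numpy as np

    def spectral_filter(X, var_threshold=1.5, frac=0.05, max_iter=20):
        """Crude spectral outlier removal: while the top eigenvalue of the
        sample covariance exceeds var_threshold, drop the frac of points
        with the largest projection onto the top eigenvector."""
        X = np.array(X, dtype=float)
        for _ in range(max_iter):
            cov = np.cov(X, rowvar=False)
            eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
            if eigvals[-1] <= var_threshold:
                break                                # variance looks clean: stop
            v = eigvecs[:, -1]                       # direction of largest variance
            scores = np.abs((X - X.mean(axis=0)) @ v)
            keep = scores.argsort()[: int(len(X) * (1 - frac))]
            X = X[keep]
        return X

    rng = np.random.default_rng(0)
    inliers = rng.normal(size=(950, 5))              # N(0, I) inliers
    outliers = rng.normal(size=(50, 5)) + 8.0        # a 5% corrupted fraction
    X = np.vstack([inliers, outliers])
    filtered = spectral_filter(X)
    # After filtering, the top eigenvalue is back near 1 and the shifted
    # points are gone, so first and second moments are close to the clean ones.
    print(len(filtered))
    ```

    The corrupted points inflate the variance along the shift direction far above that of N(0, I), so they dominate the top eigenvector's projections and are removed first; this is the intuition behind using the spectrum to certify that the surviving sample's moments are close to the clean distribution's.
    
    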