
    Logistic Regression: Tight Bounds for Stochastic and Online Optimization

    The logistic loss function is often advocated in machine learning and statistics as a smooth and strictly convex surrogate for the 0-1 loss. In this paper we investigate the question of whether these smoothness and convexity properties make the logistic loss preferable to other widely considered options such as the hinge loss. We show that, in contrast to known asymptotic bounds, as long as the number of prediction/optimization iterations is sub-exponential, the logistic loss provides no improvement over a generic non-smooth loss function such as the hinge loss. In particular, we show that the convergence rate of stochastic logistic optimization is bounded from below by a polynomial in the diameter of the decision set and the number of prediction iterations, and we provide a matching tight upper bound. This resolves the COLT open problem of McMahan and Streeter (2012).
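    The setting the bounds concern can be sketched as projected stochastic gradient descent on the logistic loss over a bounded decision set. This is a minimal illustration with standard step sizes and iterate averaging, not the paper's lower-bound construction; all names and data here are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def logistic_loss(w, x, y):
        # log(1 + exp(-y <w, x>)), the loss whose rates the paper studies
        return np.log1p(np.exp(-y * np.dot(w, x)))

    def sgd_logistic(samples, diameter, steps, lr=0.5):
        """Projected SGD on the logistic loss over a ball of the given
        diameter. Step size and averaging are generic textbook choices."""
        w = np.zeros(2)
        avg = np.zeros(2)
        for t in range(1, steps + 1):
            x, y = samples[rng.integers(len(samples))]
            # gradient of the logistic loss at the sampled (x, y)
            g = -y * x / (1.0 + np.exp(y * np.dot(w, x)))
            w = w - (lr / np.sqrt(t)) * g
            # project back onto the ball of radius diameter / 2
            radius = diameter / 2
            norm = np.linalg.norm(w)
            if norm > radius:
                w = w * (radius / norm)
            avg += (w - avg) / t  # running average of the iterates
        return avg

    # toy linearly separable data
    samples = [(np.array([1.0, 0.2]), 1), (np.array([-1.0, -0.3]), -1)]
    w_hat = sgd_logistic(samples, diameter=10.0, steps=2000)
    ```

    The lower bound says the number of such iterations needed to reach a given accuracy can grow polynomially with the diameter, so smoothness buys nothing over the hinge loss in this regime.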

    Formal Language Recognition with the Java Type Checker

    This paper is a theoretical study of a practical problem: the automatic generation of Java fluent APIs from their specification. We explain why the problem's core lies with the expressive power of Java generics. Our main result is that automatic generation is possible whenever the specification is an instance of the set of deterministic context-free languages, a set which contains most "practical" languages. Other contributions include a collection of techniques and idioms of the limited meta-programming possible with Java generics, and an empirical measurement demonstrating that the runtime of the "javac" compiler of Java may be exponential in the program's length, even for programs composed of a handful of lines which do not rely on overly complex use of generics.
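    The fluent-API idea can be illustrated with a dynamic analogue: each state of an automaton becomes a class exposing only the legal next calls, so an illegal method chain fails immediately. In the paper this checking happens statically, in the Java type checker, via generics; the Python sketch below (with hypothetical names `Begin`, `AfterA`, `End`) only mimics it at run time, for the toy language "one or more a's followed by b".

    ```python
    class End:
        """Accepting state: the only legal continuation is end()."""
        def end(self):
            return "accepted"

    class AfterA:
        """State after reading at least one 'a'."""
        def a(self):
            return AfterA()
        def b(self):
            return End()

    class Begin:
        """Start state of the automaton for the language a+ b."""
        def a(self):
            return AfterA()

    # legal chain: "aab" is in the language
    result = Begin().a().a().b().end()
    ```

    An illegal chain such as `Begin().b()` fails with an `AttributeError` here; in the generics encoding the analogous chain simply does not type-check.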

    K-theoretic counterexamples to Ravenel's telescope conjecture

    At each prime $p$ and height $n+1 \ge 2$, we prove that the telescopic and chromatic localizations of spectra differ. Specifically, for $\mathbb{Z}$ acting by Adams operations on $\mathrm{BP}\langle n \rangle$, we prove that the $T(n+1)$-localized algebraic $K$-theory of $\mathrm{BP}\langle n \rangle^{h\mathbb{Z}}$ is not $K(n+1)$-local. We also show that Galois hyperdescent, $\mathbb{A}^1$-invariance, and nil-invariance fail for the $K(n+1)$-localized algebraic $K$-theory of $K(n)$-local $\mathbb{E}_{\infty}$-rings. In the case $n=1$ and $p \ge 7$ we make complete computations of $T(2)_*\mathrm{K}(R)$ for $R$ certain finite Galois extensions of the $K(1)$-local sphere. We show for $p \ge 5$ that the algebraic $K$-theory of the $K(1)$-local sphere is asymptotically $L_2^{f}$-local.

    Dual array EEG-fMRI: An approach for motion artifact suppression in EEG recorded simultaneously with fMRI

    Objective: Although simultaneous recording of EEG and MRI has gained increasing popularity in recent years, the extent of its clinical use remains limited by various technical challenges. Motion interference is one of the major challenges in EEG-fMRI. Here we present an approach which reduces its impact with the aid of an MR-compatible dual-array EEG (daEEG), in which the EEG itself is used both as a brain signal recorder and as a motion sensor. Methods: We implemented two arrays of EEG electrodes organized into two sets of nearly orthogonally intersecting wire bundles. The EEG was recorded using referential amplifiers inside a 3 T MR scanner. Virtual bipolar measurements were taken both along bundles (creating a small wire loop and therefore minimizing artifact) and across bundles (creating a large wire loop and therefore maximizing artifact). Independent component analysis (ICA) was applied. The resulting ICA components were classified into brain signal and noise using three criteria: 1) degree of two-dimensional spatial correlation between ICA coefficients along bundles and across bundles; 2) amplitude along bundles vs. across bundles; 3) correlation with ECG. The components which passed the criteria set were transformed back to the channel space. Motion artifact suppression and the ability to detect interictal epileptic spikes following daEEG and Optimal Basis Set (OBS) procedures were compared in 10 patients with epilepsy. Results: The SNR achieved by daEEG was 11.05 ± 3.10 and by OBS was 8.25 ± 1.01 (p < 0.00001). In 9 of 10 patients, more spikes were detected after daEEG than after OBS (p < 0.05). Significance: daEEG improves signal quality in EEG-fMRI recordings, expanding its clinical and research potential.
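    The three-criterion classification step can be sketched as follows. This is an illustrative reconstruction only: the thresholds, the function name `classify_component`, and the input layout are assumptions, not the paper's actual pipeline.

    ```python
    import numpy as np

    def classify_component(coef_along, coef_across, timecourse, ecg,
                           corr_thresh=0.8, amp_ratio_thresh=1.0, ecg_thresh=0.5):
        """Label one ICA component as 'brain' or 'noise' using the three
        daEEG criteria described above. Thresholds are illustrative."""
        # 1) spatial correlation between along- and across-bundle coefficient maps
        spatial_corr = np.corrcoef(coef_along, coef_across)[0, 1]
        # 2) amplitude along bundles vs. across bundles
        amp_ratio = np.linalg.norm(coef_along) / np.linalg.norm(coef_across)
        # 3) correlation of the component time course with the ECG channel
        ecg_corr = abs(np.corrcoef(timecourse, ecg)[0, 1])
        is_noise = (spatial_corr > corr_thresh      # same map on both loop sizes
                    or amp_ratio < amp_ratio_thresh  # stronger across bundles
                    or ecg_corr > ecg_thresh)        # tracks the cardiac cycle
        return "noise" if is_noise else "brain"

    # a component that looks like brain signal under these criteria:
    # strong along-bundle map, weak uncorrelated across-bundle map,
    # time course unrelated to the ECG
    brain_like = classify_component(
        np.array([1.0, 2.0, 3.0, 4.0]),
        np.array([0.1, -0.1, 0.05, -0.05]),
        np.array([0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]),
        np.array([1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0, 0.0]))
    ```

    Components labeled noise are discarded before transforming the remainder back to channel space.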

    Completeness and Ambiguity of Schema Cover

    Given a schema and a set of concepts, representative of entities in the domain of discourse, schema cover defines correspondences between concepts and parts of the schema. Schema cover aims at interpreting the schema in terms of concepts, thus vastly simplifying the task of schema integration. In this work we investigate two properties of schema cover, namely completeness and ambiguity. The former measures the part of a schema that can be covered by a set of concepts, and the latter examines the amount of overlap between concepts in a cover. To study the tradeoffs between completeness and ambiguity we define a cover model of which previous frameworks are special cases. We analyze the theoretical complexity of variations of the cover problem, some aiming to maximize completeness and others to minimize ambiguity. We show that variants of the schema cover problem are hard problems in general and formulate an exhaustive search solution using integer linear programming. We then provide a thorough empirical analysis, using both real-world and simulated data sets, showing empirically that the integer linear programming solution scales well for large schemata. We also show that some instantiations of the general schema cover problem are more effective than others.
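    The completeness/ambiguity tradeoff can be made concrete with a tiny brute-force analogue of the exhaustive search (the paper uses integer linear programming; the pure-Python enumeration below, with invented schema and concept names, is only a sketch of the same objective).

    ```python
    from itertools import combinations

    def best_cover(schema_attrs, concepts):
        """Pick the subset of concepts that maximizes completeness (covered
        fraction of the schema) and, among those, minimizes ambiguity
        (schema attributes claimed by more than one chosen concept)."""
        best = None
        names = list(concepts)
        for r in range(len(names) + 1):
            for combo in combinations(names, r):
                covered = set().union(*(concepts[c] for c in combo)) if combo else set()
                completeness = len(covered & schema_attrs) / len(schema_attrs)
                ambiguity = sum(
                    1 for a in schema_attrs
                    if sum(a in concepts[c] for c in combo) > 1)
                key = (-completeness, ambiguity, len(combo))
                if best is None or key < best[0]:
                    best = (key, combo)
        return best[1], -best[0][0], best[0][1]

    # hypothetical schema and concepts
    schema = {"name", "street", "city", "zip", "phone"}
    concepts = {
        "Person":  {"name", "phone"},
        "Address": {"street", "city", "zip"},
        "Contact": {"name", "phone", "street"},
    }
    cover, completeness, ambiguity = best_cover(schema, concepts)
    ```

    Here {Person, Address} and {Contact, Address} both reach full completeness, but the former wins because Contact and Address overlap on "street", i.e. it is the unambiguous cover. An ILP expresses the same choice with 0/1 variables per concept and linear constraints per attribute, which is what lets it scale to large schemata.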

    Sensitivity and Bias in Decision-Making under Risk: Evaluating the Perception of Reward, Its Probability and Value

    BACKGROUND: There are few clinical tools that assess decision-making under risk. Tests that characterize sensitivity and bias in decisions between prospects varying in magnitude and probability of gain may provide insights in conditions with anomalous reward-related behaviour. OBJECTIVE: We designed a simple test of how subjects integrate information about the magnitude and the probability of reward, which can determine discriminative thresholds and choice bias in decisions under risk. DESIGN/METHODS: Twenty subjects were required to choose between two explicitly described prospects, one with higher probability but lower magnitude of reward than the other, with the difference in expected value between the two prospects varying from 3 to 23%. RESULTS: Subjects showed a mean threshold sensitivity of 43% difference in expected value. Regarding choice bias, there was a 'risk premium' of 38%, indicating a tendency to choose higher probability over higher reward. An analysis using prospect theory showed that this risk premium is the predicted outcome of hypothesized non-linearities in the subjective perception of reward value and probability. CONCLUSIONS: This simple test provides a robust measure of discriminative value thresholds and biases in decisions under risk. Prospect theory can also make predictions about decisions when subjective perception of reward or probability is anomalous, as may occur in populations with dopaminergic or striatal dysfunction, such as Parkinson's disease and schizophrenia.
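    How a risk premium can fall out of prospect theory's non-linearities can be shown with a worked example. The sketch below uses the standard Tversky-Kahneman (1992) functional forms with their published median parameter estimates, not this study's fitted model; the magnitudes and probabilities are invented.

    ```python
    def pt_value(x, p, alpha=0.88, gamma=0.61):
        """Subjective value of a gain x received with probability p under
        prospect theory: concave value function times a probability weight.
        Parameters are the Tversky-Kahneman median estimates."""
        v = x ** alpha  # diminishing sensitivity to reward magnitude
        w = p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
        return w * v

    # high-probability, low-magnitude prospect vs. a riskier one whose
    # expected value is HIGHER (0.5 * 20 = 10 > 0.95 * 10 = 9.5)
    safe = pt_value(10.0, 0.95)
    risky = pt_value(20.0, 0.5)
    ```

    With these parameters the safe prospect has the larger subjective value despite its lower expected value, which is exactly the tendency to choose higher probability over higher reward that the 38% premium quantifies.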