
    On the critical curves of the Pinning and Copolymer models in Correlated Gaussian environment

    We investigate the disordered copolymer and pinning models, in the case of a correlated Gaussian environment with summable correlations, and when the return distribution of the underlying renewal process has a polynomial tail. As far as the copolymer model is concerned, we prove disorder relevance both in terms of critical points and critical exponents, in the case of non-negative correlations. When some of the correlations are negative, even the annealed model becomes non-trivial. Moreover, when the return distribution has a finite mean, we are able to compute the weak coupling limit of the critical curves for both models, with no restriction on the correlations other than summability. This generalizes the result of Berger, Caravenna, Poisat, Sun and Zygouras \cite{cf:BCPSZ} to the correlated case. Interestingly, in the copolymer model, the weak coupling limit of the critical curve turns out to be the maximum of two quantities: one generalizing the limit found in the IID case \cite{cf:BCPSZ}, the other one generalizing the so-called Monthus bound.
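
    For orientation, here is one common way to write the two models (conventions and normalizations vary across the literature, so this is an illustrative sketch rather than the paper's exact definitions). Given a renewal process $\tau$, an environment $\omega$, and coupling parameters $\beta, \lambda, h$, the pinning and copolymer partition functions read
    \[
    Z^{\mathrm{pin}}_{N,\omega} = \mathbf{E}\Big[\exp\Big(\sum_{n=1}^{N} (\beta\omega_n + h)\,\mathbf{1}_{\{n\in\tau\}}\Big)\Big],
    \qquad
    Z^{\mathrm{cop}}_{N,\omega} = \mathbf{E}\Big[\exp\Big(-2\lambda\sum_{n=1}^{N} (\omega_n + h)\,\Delta_n\Big)\Big],
    \]
    where $\Delta_n \in \{0,1\}$ marks the excursions lying below the interface. The critical curve $h_c(\cdot)$ separates the localized from the delocalized phase, and the weak coupling limit mentioned above is the slope $\lim_{\lambda \downarrow 0} h_c(\lambda)/\lambda$.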

    A Bounded Domain Property for an Expressive Fragment of First-Order Linear Temporal Logic

    First-Order Linear Temporal Logic (FOLTL) is well-suited to specifying infinite-state systems. However, FOLTL satisfiability is not even semi-decidable, thus preventing automated verification. To address this, a possible track is to constrain specifications to a decidable fragment of FOLTL, but known fragments are too restricted to be usable in practice. In this paper, we exhibit various fragments of increasing scope that provide a pertinent basis for the abstract specification of infinite-state systems. We show that these fragments enjoy the Bounded Domain Property (any satisfiable formula of the fragment has a model with a finite, bounded FO domain), which provides a basis for complete, automated verification by reduction to LTL satisfiability. Finally, we present a simple case study illustrating the applicability and limitations of our results.
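
    To make the reduction concrete: once a bound $B$ on the domain size is known, quantifiers can be grounded over fresh constants $c_1, \dots, c_B$, turning a first-order temporal formula into a propositional LTL one (a standard instantiation scheme, sketched here in our own notation rather than the paper's):
    \[
    \forall x\,\varphi(x) \;\rightsquigarrow\; \bigwedge_{i=1}^{B} \varphi(c_i),
    \qquad
    \exists x\,\varphi(x) \;\rightsquigarrow\; \bigvee_{i=1}^{B} \varphi(c_i).
    \]
    By the Bounded Domain Property, satisfiability of the original formula then reduces (modulo standard bookkeeping for models with fewer than $B$ elements) to satisfiability of the grounded LTL formula, so an off-the-shelf LTL satisfiability checker can be used.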

    A framework for the comparison of different EEG acquisition solutions

    The purpose of this work is to propose a framework for benchmarking EEG amplifiers, headsets, and electrodes, providing objective recommendations for a given application. The framework covers the data collection paradigm, the data analysis, and the statistical framework. To illustrate, data was collected from 12 different devices, with up to 6 subjects per device. Two data acquisition protocols were implemented: a resting-state protocol with eyes open (EO) and eyes closed (EC), and an Auditory Evoked Potential (AEP) protocol. The signal-to-noise ratio (SNR) in the alpha band (EO/EC) and the Event Related Potential (ERP) were extracted as objective quantifications of physiologically meaningful information. Visual representations, univariate statistical analyses, and a multivariate model were then used to improve the interpretability of the results. The objective criteria show that the spectral SNR in alpha does not discriminate much between systems, suggesting that acquisition quality might not be of primary importance for spectral and, specifically, alpha-based applications. On the contrary, the AEP SNR proved much more variable, stressing the importance of the acquisition setting for ERP experiments. The multivariate analysis identified some individuals and some systems as independent, statistically significant contributors to the SNR. This highlights the importance of inter-individual differences in neurophysiological experiments (sample size) and suggests that some devices might objectively be superior to others when it comes to ERP recordings. However, this illustration of the proposed benchmarking framework suffers from severe limitations, including the small sample size and sound-card jitter in the auditory stimulations. While these limitations prevent a definitive ranking of the evaluated hardware, we believe the proposed benchmarking framework to be a modest yet valuable contribution to the field.
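
    As an illustration of the kind of objective criterion involved, the sketch below computes an alpha-band SNR from eyes-closed vs. eyes-open recordings. The exact SNR definition used in the paper may differ; the function name and band limits here are our own assumptions.

        import numpy as np
        from scipy.signal import welch

        def alpha_snr_db(eeg_ec, eeg_eo, fs):
            """Hypothetical alpha-band SNR: eyes-closed alpha power over
            eyes-open alpha power, in dB (one plausible EO/EC definition)."""
            def alpha_power(x):
                f, psd = welch(x, fs=fs, nperseg=int(2 * fs))
                return psd[(f >= 8) & (f <= 12)].mean()  # mean PSD, 8-12 Hz
            return 10 * np.log10(alpha_power(eeg_ec) / alpha_power(eeg_eo))

    Computed per channel and per device, such a scalar feeds directly into the univariate and multivariate comparisons described above.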

    On the Uniqueness of Inverse Problems with Fourier-domain Measurements and Generalized TV Regularization

    We study the super-resolution problem of recovering a periodic continuous-domain function from its low-frequency information. This means that we only have access to possibly corrupted versions of its Fourier samples up to a maximum cut-off frequency. The reconstruction task is specified as an optimization problem with generalized total-variation regularization involving a pseudo-differential operator. Our special emphasis is on the uniqueness of solutions. We show that, for elliptic regularization operators (e.g., derivatives of any order), uniqueness is always guaranteed. To achieve this goal, we provide a new analysis of constrained optimization problems over Radon measures. We demonstrate that either the solutions are all made of Radon measures of constant sign, or the solution is unique. In doing so, we identify a general sufficient condition, expressed in terms of the Fourier samples, for the uniqueness of the solution of a constrained optimization problem with TV regularization.
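
    In our notation (the symbols below are ours, chosen to match the description above), the reconstruction problem has the form
    \[
    \min_{f} \; \|\mathrm{L} f\|_{\mathcal{M}}
    \quad \text{subject to} \quad
    \widehat{f}[k] = y_k \ \text{for all } |k| \le K_c,
    \]
    where $\mathrm{L}$ is the pseudo-differential regularization operator, $\|\cdot\|_{\mathcal{M}}$ is the total-variation norm on Radon measures, and $y_k$ are the Fourier samples up to the cut-off frequency $K_c$; when the samples are corrupted, the equality constraint is relaxed to a data-fidelity ball around the measurements.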

    Nell2RDF: Read the Web, and turn it into RDF

    This paper describes the Nell2RDF platform, which provides Linked Data of general knowledge based on data automatically constructed by NELL, a continuously running machine-learning process that reads the Web. As opposed to DBpedia, every fact recorded by NELL can be traced back to its provenance and carries a degree of confidence. With our platform, we aim at capturing all the data generated by NELL and transforming them into state-of-the-art Linked Data, following best practices. We discuss the benefits of the platform in opening new lines of research, while the work is still in progress.
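
    As a minimal sketch of attaching provenance and confidence to a NELL fact in RDF (the namespaces, identifiers, and reification-based modeling below are our own illustrative assumptions, not necessarily Nell2RDF's actual vocabulary):

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, XSD

        EX = Namespace("http://example.org/nell/")      # hypothetical namespace
        PROV = Namespace("http://www.w3.org/ns/prov#")

        g = Graph()
        fact = EX["fact_001"]                           # one reified NELL belief
        g.add((fact, RDF.type, RDF.Statement))
        g.add((fact, RDF.subject, EX["paris"]))
        g.add((fact, RDF.predicate, EX["cityLocatedInCountry"]))
        g.add((fact, RDF.object, EX["france"]))
        g.add((fact, EX["confidence"], Literal(0.93, datatype=XSD.double)))
        g.add((fact, PROV.wasDerivedFrom, Literal("NELL iteration 850")))

        print(g.serialize(format="turtle"))

    Reification keeps the confidence score and provenance queryable alongside the fact itself, which is the traceability property contrasted with DBpedia above.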

    Learning to Recognize Touch Gestures: Recurrent vs. Convolutional Features and Dynamic Sampling

    We propose a fully automatic method for learning gestures on large touch devices in a potentially multi-user context. The goal is to learn general models capable of adapting to different gestures, user styles, and hardware variations (e.g., device sizes, sampling frequencies and regularities). Based on deep neural networks, our method features a novel dynamic sampling and temporal normalization component that transforms variable-length gestures into fixed-length representations while preserving finger/surface contact transitions, that is, the topology of the signal. This sequential representation is then processed with a convolutional model capable, unlike recurrent networks, of learning hierarchical representations with different levels of abstraction. To demonstrate the interest of the proposed method, we introduce a new touch-gesture dataset with 6591 gestures performed by 27 people, which is, to our knowledge, the first of its kind: a publicly available multi-touch gesture dataset for interaction. We also tested our method on a standard dataset for symbolic touch gesture recognition, the MMG dataset, outperforming the state of the art and reporting close-to-perfect performance.
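
    A rough sketch of the dynamic sampling and temporal normalization idea, as we understand it from the description above (the array layout, names, and index-selection strategy are our assumptions, not the paper's actual component):

        import numpy as np

        def normalize_gesture(points, target_len=64):
            """Resample a variable-length gesture (rows of x, y, touch-state)
            to a fixed length while keeping contact-transition frames."""
            pts = np.asarray(points, dtype=float)                 # shape (T, 3)
            trans = np.flatnonzero(np.diff(pts[:, 2]) != 0) + 1   # contact changes
            # spread the remaining frame budget uniformly over time
            n_uniform = max(target_len - len(trans), 2)
            uniform = np.round(np.linspace(0, len(pts) - 1, n_uniform)).astype(int)
            idx = np.unique(np.concatenate([uniform, trans]))
            # pick exactly target_len support frames (repeats if the gesture is short)
            pick = idx[np.round(np.linspace(0, len(idx) - 1, target_len)).astype(int)]
            return pts[pick]                                      # shape (target_len, 3)

    The point of keeping the transition frames in the support set is exactly the topology-preservation property claimed above: a fixed-length output that never smooths away a finger touching or leaving the surface.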

    Not Just Pointing: Shannon's Information Theory as a General Tool for Performance Evaluation of Input Techniques

    This article was submitted to the ACM CHI conference in September 2017 and rejected in December 2017; it is currently under revision.

    Since input techniques serve, quite literally, to allow users to send information to the computer, the information-theoretic approach seems tailor-made for their quantitative evaluation. Shannon's framework makes it straightforward to measure the performance of any technique as an effective information transmission rate, in bits/s. Apart from pointing, however, evaluators of input techniques have generally ignored Shannon, contenting themselves with less rigorous methods of speed and accuracy measurement borrowed from psychology. We plead for serious consideration in HCI of Shannon's information theory as a tool for the evaluation of all sorts of input techniques. We start with a primer on Shannon's basic quantities and the theoretical entities of his communication model. We then discuss how these concepts should be applied to the problem of evaluating input techniques. Finally, we outline two concrete methodologies, one focused on the discrete timing and the other on the continuous time course of information gain by the computer.
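
    To illustrate the kind of measure advocated here, the sketch below estimates an effective transmission rate from a confusion matrix of intended vs. recognized inputs. This is a textbook mutual-information computation; the function and its inputs are our own illustrative framing, not the article's methodology.

        import numpy as np

        def effective_bit_rate(confusion, mean_trial_time_s):
            """Mutual information I(X;Y) between intended (rows) and produced
            (columns) inputs, divided by mean trial duration: bits per second."""
            joint = confusion / confusion.sum()            # empirical P(x, y)
            px = joint.sum(axis=1, keepdims=True)          # marginal P(x)
            py = joint.sum(axis=0, keepdims=True)          # marginal P(y)
            with np.errstate(divide="ignore", invalid="ignore"):
                terms = joint * np.log2(joint / (px * py))
            return float(np.nansum(terms)) / mean_trial_time_s

        # e.g. two commands, 100 trials, 1.2 s per trial on average:
        print(effective_bit_rate(np.array([[48.0, 2.0], [5.0, 45.0]]), 1.2))

    Errors reduce the mutual information, and slow trials stretch the denominator, so the single bits/s figure captures the speed-accuracy trade-off that separate speed and error measurements leave implicit.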