
    From average case complexity to improper learning complexity

    The basic problem in the PAC model of computational learning theory is to determine which hypothesis classes are efficiently learnable. There is presently a dearth of results showing hardness of learning problems. Moreover, the existing lower bounds fall short of the best known algorithms. The biggest challenge in proving complexity results is to establish hardness of improper learning (a.k.a. representation-independent learning). The difficulty in proving lower bounds for improper learning is that the standard reductions from $\mathbf{NP}$-hard problems do not seem to apply in this context. There is essentially only one known approach to proving lower bounds on improper learning. It was initiated in (Kearns and Valiant 89) and relies on cryptographic assumptions. We introduce a new technique for proving hardness of improper learning, based on reductions from problems that are hard on average. We put forward a (fairly strong) generalization of Feige's assumption (Feige 02) about the complexity of refuting random constraint satisfaction problems. Combining this assumption with our new technique yields far-reaching implications. In particular: 1. Learning $\mathrm{DNF}$s is hard. 2. Agnostically learning halfspaces with a constant approximation ratio is hard. 3. Learning an intersection of $\omega(1)$ halfspaces is hard. Comment: 34 pages.
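
    For context, the standard (textbook) agnostic PAC guarantee, not taken from the paper itself, reads as follows: a learner $A$ is proper if its output hypothesis must belong to the class $\mathcal{H}$, and improper if it may output any efficiently evaluable hypothesis, but in both cases it must satisfy

        \Pr_{S \sim D^m}\Big[\mathrm{err}_D\big(A(S)\big) \le \min_{h \in \mathcal{H}} \mathrm{err}_D(h) + \varepsilon\Big] \ge 1 - \delta,
        \qquad \mathrm{err}_D(f) = \Pr_{(x,y) \sim D}\big[f(x) \neq y\big].

    Standard $\mathbf{NP}$-hardness reductions exploit the syntactic form of the hypothesis the learner must output, which is precisely what improper learning leaves unconstrained; this is why the authors turn to average-case assumptions instead.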

    Fake View Analytics in Online Video Services

    Online video-on-demand (VoD) services invariably maintain a view count for each video they serve, and it has become an important currency for various stakeholders, from viewers to content owners, advertisers, and the online service providers themselves. There is often significant financial incentive to use a robot (or a botnet) to artificially create fake views. How can we detect the fake views? Can we detect them (and stop them) using online algorithms as they occur? What is the extent of fake views with current VoD service providers? These are the questions we study in this paper. We develop algorithms and show that they are quite effective for this problem. Comment: 25 pages, 15 figures.
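
    The abstract does not spell out the detection algorithms, so the following is only a plausible illustration (all names are hypothetical, and this is not the paper's method): bucket view timestamps into fixed windows and flag windows whose count sits far above a running mean, with Welford's online update so the detector can run as views arrive.

        import numpy as np

        def flag_view_bursts(timestamps, window=300.0, z_thresh=4.0):
            # Toy online heuristic: flag time windows whose view count is
            # anomalously high relative to the running mean/std seen so far.
            ts = np.sort(np.asarray(timestamps, dtype=float))
            edges = np.arange(ts[0], ts[-1] + window, window)
            counts = np.histogram(ts, bins=edges)[0]
            flagged, mean, m2, n = [], 0.0, 0.0, 0
            for i, c in enumerate(counts):
                if n > 1:
                    std = (m2 / (n - 1)) ** 0.5
                    if c > mean + z_thresh * std:
                        flagged.append(i)  # suspicious burst in window i
                # Welford's online update of the running mean and variance.
                n += 1
                delta = c - mean
                mean += delta / n
                m2 += delta * (c - mean)
            return flagged

    A single rate threshold like this is easy for a botnet to evade by pacing its requests, which is part of what makes the detection problem studied in the paper nontrivial.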

    Kernel method for nonlinear Granger causality

    Important information on the structure of complex systems, consisting of more than one component, can be obtained by measuring the extent to which the individual components exchange information among each other. Such knowledge is needed to reach a deeper comprehension of phenomena ranging from turbulent fluids to neural networks, as well as complex physiological signals. The linear Granger approach to detecting cause-effect relationships between time series has emerged in recent years as a leading statistical technique for this task. Here we generalize Granger causality to the nonlinear case using the theory of reproducing kernel Hilbert spaces. Our method performs linear Granger causality in the feature space of suitable kernel functions, allowing an arbitrary degree of nonlinearity. We develop a new strategy to cope with the problem of overfitting, based on the geometry of reproducing kernel Hilbert spaces. Applications to coupled chaotic maps and physiological data sets are presented. Comment: Revised version, accepted for publication in Physical Review Letters.
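
    As a rough illustration of the general recipe (kernel ridge regression standing in for the authors' RKHS construction, which additionally exploits the geometry of the space to control overfitting), one can compare the held-out prediction error of a kernel autoregression of x on its own past against one that also sees the past of y:

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        def granger_index(x, y, p=5, alpha=1e-2):
            # Does the past of y improve kernel prediction of x (y -> x)?
            # A minimal sketch, not the authors' exact filtered method.
            x = np.asarray(x, dtype=float)
            y = np.asarray(y, dtype=float)
            n = len(x) - p
            X = np.column_stack([x[i:i + n] for i in range(p)])  # past of x
            Y = np.column_stack([y[i:i + n] for i in range(p)])  # past of y
            t = x[p:p + n]                                       # target
            split = n // 2  # held-out tail: extra regressors must earn their keep

            def mse(feats):
                model = KernelRidge(kernel="rbf", alpha=alpha)
                model.fit(feats[:split], t[:split])
                return np.mean((t[split:] - model.predict(feats[split:])) ** 2)

            e_x = mse(X)                   # restricted model: x's past only
            e_xy = mse(np.hstack([X, Y]))  # full model: plus y's past
            return np.log(e_x / e_xy)      # > 0 suggests y Granger-causes x

    On training data the full model's error can only decrease as regressors are added, which is exactly the overfitting trap the paper's RKHS-geometry strategy is designed to avoid; the held-out split above is a crude substitute.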

    Improving classification for brain computer interfaces using transitions and a moving window

    Proceedings of: Biosignals 2009, International Conference on Bio-inspired Systems and Signal Processing, BIOSTEC 2009, Porto (Portugal), 14-17 January 2009. The context of this paper is the brain-computer interface (BCI), and in particular the classification of signals with machine learning methods. In this paper we intend to improve classification accuracy by taking advantage of a feature of BCIs: instances run in sequences belonging to the same class. In that case, the classification problem can be reformulated into two subproblems: detecting class transitions and determining the class for sequences of instances between transitions. We detect a transition when the Euclidean distance between the power spectra at two different times is larger than a threshold. To tackle the second problem, instances are classified by taking into account not just the prediction for that instance, but a moving window of predictions for previous instances. Experimental results show that our transition detection method improves results for datasets of two out of three subjects of the BCI III competition. If the moving window is used, classification accuracy is further improved, depending on the window size.
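
    A minimal sketch of the two ingredients as described (names and defaults are ours; the threshold and window size would need tuning per subject):

        import numpy as np
        from collections import deque

        def transition_points(spectra, threshold):
            # Flag a class transition whenever consecutive power spectra are
            # further apart, in Euclidean distance, than the threshold.
            # spectra: array of shape (T, F), one spectrum per instance.
            dists = np.linalg.norm(np.diff(spectra, axis=0), axis=1)
            return np.where(dists > threshold)[0] + 1

        def smoothed_predictions(raw_preds, transitions, window=10):
            # Majority vote over a moving window of per-instance predictions;
            # resetting the window at each detected transition is one natural
            # way to combine the two subproblems.
            transitions = set(transitions)
            buf, out = deque(maxlen=window), []
            for t, p in enumerate(raw_preds):
                if t in transitions:
                    buf.clear()  # new segment: forget votes from the old class
                buf.append(p)
                votes = list(buf)
                out.append(max(set(votes), key=votes.count))
            return out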

    Geometrical complexity of data approximators

    There are many methods developed to approximate a cloud of vectors embedded in high-dimensional space by simpler objects, ranging from principal points and linear manifolds to self-organizing maps, neural gas, elastic maps, and various types of principal curves and principal trees. For each type of approximator, a corresponding measure of complexity has been developed as well. These measures are necessary to find the balance between accuracy and complexity and to define the optimal approximations of a given type. We propose a measure of complexity (geometrical complexity) which is applicable to approximators of several types and which allows comparing data approximations of different types. Comment: 10 pages, 3 figures, minor correction and extension.

    Subsampling in Smoothed Range Spaces

    We consider smoothed versions of geometric range spaces, in which an element of the ground set (e.g. a point) can be contained in a range with a non-binary value in $[0,1]$. Similar notions have been considered for kernels; we extend them to more general types of ranges. We then consider approximations of these range spaces through $\varepsilon$-nets and $\varepsilon$-samples (a.k.a. $\varepsilon$-approximations). We characterize when size bounds for $\varepsilon$-samples on kernels can be extended to these more general smoothed range spaces. We also describe new generalizations of $\varepsilon$-nets to these range spaces and show when results from binary range spaces can carry over to these smoothed ones. Comment: This is the full version of the paper which appeared in ALT 2015. 16 pages, 3 figures. In Algorithmic Learning Theory, pp. 224-238. Springer International Publishing, 2015.
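
    For orientation, a natural way to state the smoothed analogue of an $\varepsilon$-sample (our paraphrase of the standard binary definition, with the indicator replaced by the $[0,1]$-valued containment $\nu$ the abstract describes): $S \subseteq P$ is an $\varepsilon$-sample if, for every range $R$,

        \left| \frac{1}{|P|} \sum_{p \in P} \nu(p, R) \;-\; \frac{1}{|S|} \sum_{s \in S} \nu(s, R) \right| \;\le\; \varepsilon.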

    Optimal estimation for Large-Eddy Simulation of turbulence and application to the analysis of subgrid models

    The tools of optimal estimation are applied to the study of subgrid models for Large-Eddy Simulation of turbulence. The concept of optimal estimator is introduced and its properties are analyzed in the context of applications to a priori tests of subgrid models. Attention is focused on the Cook and Riley model in the case of a scalar field in isotropic turbulence. Using DNS data, the relevance of the beta assumption is estimated by computing (i) generalized optimal estimators and (ii) the error brought by this assumption alone. Optimal estimators are computed for the subgrid variance using various sets of variables and various techniques (histograms and neural networks). It is shown that optimal estimators allow a thorough exploration of models. Neural networks prove to be relevant and very efficient in this framework, and further uses are suggested.
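
    The central fact is that the optimal estimator, in the mean-square sense, of a subgrid quantity given a set of resolved variables is the conditional expectation, which can be estimated from DNS data by histogram binning (one of the two techniques the paper uses). A minimal one-variable sketch, with hypothetical names:

        import numpy as np

        def optimal_estimator(xi, q, n_bins=64):
            # Histogram estimate of E[q | xi]: the best mean-square predictor
            # of the subgrid quantity q that uses only the variable xi.
            # xi, q: 1-D sample arrays extracted from filtered DNS fields.
            edges = np.linspace(xi.min(), xi.max(), n_bins + 1)
            idx = np.clip(np.digitize(xi, edges) - 1, 0, n_bins - 1)
            cond_mean = np.full(n_bins, np.nan)
            for b in range(n_bins):
                mask = idx == b
                if mask.any():
                    cond_mean[b] = q[mask].mean()
            # Irreducible error: no model built on xi alone can beat this.
            err = np.mean((q - cond_mean[idx]) ** 2)
            return cond_mean, err

    Comparing a subgrid model's a priori error against this bound separates the error due to the choice of variables from the error due to the model's functional form.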

    A preliminary approach to the multilabel classification problem of Portuguese juridical documents

    Portuguese juridical documents from Supreme Courts and the Attorney General’s Office are manually classified by juridical experts into a set of classes belonging to a taxonomy of concepts. In this paper, a preliminary approach to developing techniques for automatically classifying these juridical documents is proposed. The basic strategy is to integrate natural language processing techniques with machine learning ones. Support Vector Machines (SVM) are used as the learning algorithm, and the results obtained are presented and compared with other approaches, such as C4.5 and Naive Bayes.
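
    A minimal sketch of the learning stage in scikit-learn (the corpus and label names are invented, and the paper's actual pipeline applies linguistic preprocessing before the learner):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import MultiLabelBinarizer
        from sklearn.svm import LinearSVC

        # Toy corpus standing in for the juridical documents.
        docs = ["acordao sobre contrato de trabalho",
                "recurso penal por difamacao"]
        labels = [["labour-law", "contracts"], ["criminal-law"]]

        # One-vs-rest turns the multilabel task into one binary SVM per class.
        mlb = MultiLabelBinarizer()
        Y = mlb.fit_transform(labels)
        clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LinearSVC()))
        clf.fit(docs, Y)
        print(mlb.inverse_transform(clf.predict(["acordao sobre contrato"])))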