    Parameterized Uniform Complexity in Numerics: from Smooth to Analytic, from NP-hard to Polytime

    The synthesis of classical Computational Complexity Theory with Recursive Analysis provides a quantitative foundation for reliable numerics. Here the operators of maximization, integration, and solving ordinary differential equations are known to map (even high-order differentiable) polynomial-time computable functions to instances that are `hard' for the classical complexity classes NP, #P, and CH; but, restricted to analytic functions, they map polynomial-time computable ones to polynomial-time computable ones -- non-uniformly! We investigate the uniform parameterized complexity of the above operators in the setting of Weihrauch's TTE and its second-order extension due to Kawamura & Cook (2010). That is, we explore which (both continuous and discrete, first- and second-order) information and parameters on some given f are sufficient to obtain similar data on Max(f) and int(f), and within what running time, in terms of these parameters and the guaranteed output precision 2^(-n). It turns out that Gevrey's hierarchy of functions, climbing from analytic to smooth, corresponds to the computational complexity of maximization growing from polytime to NP-hard. Proof techniques involve mainly the Theory of (discrete) Computation, Hard Analysis, and Information-Based Complexity.
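
    For orientation, the Gevrey scale referred to above is commonly defined as follows; this is the standard formulation of Gevrey classes, not notation taken from the paper itself. A smooth $f:[0,1]\to\mathbb{R}$ belongs to the Gevrey class of level $\ell \ge 1$ if there are constants $C, R > 0$ such that

        \[ \sup_{x \in [0,1]} \bigl| f^{(k)}(x) \bigr| \;\le\; C \, R^{k} \, (k!)^{\ell} \qquad \text{for all } k \in \mathbb{N}, \]

    so that $\ell = 1$ recovers exactly the analytic functions, larger $\ell$ permits faster derivative growth, and the union over all levels is still a proper subclass of the smooth functions. The maximization operator discussed above is typically taken to be $\mathrm{Max}(f)(x) = \max_{0 \le t \le x} f(t)$.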

    On the Implicit Graph Conjecture

    The implicit graph conjecture states that every sufficiently small, hereditary graph class has a labeling scheme with a polynomial-time computable label decoder. We approach this conjecture by investigating classes of label decoders defined in terms of complexity classes such as P and EXP. For instance, GP denotes the class of graph classes that have a labeling scheme with a polynomial-time computable label decoder. Until now it was not even known whether GP is a strict subset of GR. We show that this is indeed the case and reveal a strict hierarchy akin to classical complexity. We also show that classes such as GP can be characterized in terms of graph parameters. This could mean that certain algorithmic problems are feasible on every graph class in GP. Lastly, we define a more restrictive class of label decoders using first-order logic that already contains many natural graph classes such as forests and interval graphs. We give an alternative characterization of this class in terms of directed acyclic graphs. By showing that some small, hereditary graph class cannot be expressed with such label decoders, a weaker form of the implicit graph conjecture could be disproven.
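
    To make the notion of a labeling scheme concrete, the sketch below spells out the classic implicit representation of interval graphs: every vertex is labeled with its interval's endpoints, and the label decoder tests adjacency by interval overlap. The function names and data are illustrative, not taken from the paper; with endpoints normalized to integers at most 2n, each label needs only O(log n) bits.

        # Labeling scheme for interval graphs: labels are interval endpoints,
        # and the decoder answers adjacency queries from the labels alone.

        def encode(intervals):
            """Assign each vertex a label consisting of its interval endpoints."""
            return {v: (lo, hi) for v, (lo, hi) in intervals.items()}

        def decode(label_u, label_v):
            """Two intervals overlap iff neither ends before the other starts."""
            (lo_u, hi_u), (lo_v, hi_v) = label_u, label_v
            return lo_u <= hi_v and lo_v <= hi_u

        labels = encode({"a": (0, 2), "b": (1, 4), "c": (5, 6)})
        assert decode(labels["a"], labels["b"])      # [0,2] and [1,4] overlap
        assert not decode(labels["a"], labels["c"])  # [0,2] and [5,6] do not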

    Parameterized Complexity of Binary CSP: Vertex Cover, Treedepth, and Related Parameters

    We investigate the parameterized complexity of Binary CSP parameterized by the vertex cover number and the treedepth of the constraint graph, as well as by a selection of related modulator-based parameters. The main findings are as follows:
    - Binary CSP parameterized by the vertex cover number is W[3]-complete. More generally, for every positive integer d, Binary CSP parameterized by the size of a modulator to a treedepth-d graph is W[2d+1]-complete. This provides a new family of natural problems that are complete for odd levels of the W-hierarchy.
    - We introduce a new complexity class XSLP, defined so that Binary CSP parameterized by treedepth is complete for this class. We provide two equivalent characterizations of XSLP: the first relates XSLP to a model of an alternating Turing machine with certain restrictions on co-nondeterminism and space complexity, while the second links XSLP to the problem of model-checking first-order logic with suitably restricted universal quantification. Interestingly, the proof of the machine characterization of XSLP uses the concept of universal trees, which are prominently featured in recent work on parity games.
    - We describe a new complexity hierarchy sandwiched between the W-hierarchy and the A-hierarchy: for every odd t, we introduce a parameterized complexity class S[t] with W[t] ⊆ S[t] ⊆ A[t], defined using a parameter that interpolates between the vertex cover number and the treedepth.
    We expect that many of the studied classes will be useful in the future for pinpointing the complexity of various structural parameterizations of graph problems.
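
    For readers unfamiliar with the problem, the following brute-force sketch fixes the definition of Binary CSP used above: variables with finite domains and constraints on pairs of variables, the pairs forming the edges of the constraint graph. It is purely illustrative (exponential time) and bears no relation to the paper's parameterized algorithms; all names are hypothetical.

        from itertools import product

        def binary_csp_solve(domains, constraints):
            """domains: {var: list of values};
            constraints: {(u, v): set of allowed (value_u, value_v) pairs}."""
            variables = list(domains)
            for values in product(*(domains[v] for v in variables)):
                assignment = dict(zip(variables, values))
                if all((assignment[u], assignment[v]) in allowed
                       for (u, v), allowed in constraints.items()):
                    return assignment
            return None  # no satisfying assignment

        print(binary_csp_solve({"x": [0, 1], "y": [0, 1]},
                               {("x", "y"): {(0, 1), (1, 0)}}))  # {'x': 0, 'y': 1}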

    Classification of Broadcast News Audio Data Employing Binary Decision Architecture

    A novel binary decision architecture (BDA) for the broadcast news audio classification task is presented in this paper. The idea of developing such an architecture came from the fact that an appropriate combination of multiple binary classifiers for two-class discrimination problems can reduce the misclassification error without a rapid increase in computational complexity. The core element of the classification architecture is a binary decision (BD) algorithm that performs discrimination between each pair of acoustic classes, utilizing two types of decision functions. The first is a simple rule-based approach in which the final decision is made according to the value of a selected discrimination parameter. The main advantage of this solution is the relatively low processing time needed for classification of all acoustic classes; the cost is lower classification accuracy. The second employs a support vector machine (SVM) classifier. In this case, the overall classification accuracy is conditioned by finding the optimal parameters for the decision function, resulting in higher computational complexity and better classification performance. The final form of the proposed BDA is created by combining four BD discriminators supplemented by a decision table. The effectiveness of the proposed BDA, utilizing the rule-based approach and the SVM classifier, is compared with the two most popular strategies for multiclass classification, namely binary decision trees (BDT) and One-Against-One SVM (OAOSVM). Experimental results show that the proposed classification architecture can decrease the overall classification error in comparison with the BDT architecture. However, an optimization technique for selecting the optimal set of training data is needed in order to outperform the OAOSVM.
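
    The following sketch illustrates the underlying idea of combining pairwise binary discriminators, here with simple rule-based thresholds and a majority vote; the paper's BDA instead uses a specific cascade of four discriminators combined through a decision table, so this is a generic stand-in with hypothetical feature indices, thresholds, and class names.

        from collections import Counter

        def make_rule_discriminator(feature_index, threshold, below_class, above_class):
            """Rule-based binary decision: compare one feature against a threshold."""
            def decide(features):
                return below_class if features[feature_index] < threshold else above_class
            return decide

        def classify(features, discriminators):
            """Each pairwise discriminator casts one vote; the majority class wins."""
            votes = Counter(d(features) for d in discriminators)
            return votes.most_common(1)[0][0]

        discriminators = [
            make_rule_discriminator(0, 0.5, "speech", "music"),
            make_rule_discriminator(1, 0.3, "speech", "noise"),
            make_rule_discriminator(0, 0.7, "music", "noise"),
        ]
        print(classify([0.9, 0.5], discriminators))  # 'noise' (two of three votes)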

    The Stochastic complexity of spin models: Are pairwise models really simple?

    Models can be simple for different reasons: because they yield a simple and computationally efficient interpretation of a generic dataset (e.g. in terms of pairwise dependences), as in statistical learning, or because they capture the essential ingredients of a specific phenomenon, as e.g. in physics, leading to non-trivial falsifiable predictions. In information theory and Bayesian inference, the simplicity of a model is precisely quantified by the stochastic complexity, which measures the number of bits needed to encode its parameters. In order to understand what simple models look like, we study the stochastic complexity of spin models with interactions of arbitrary order. We highlight the existence of invariances with respect to bijections within the space of operators, which allow us to partition the space of all models into equivalence classes in which models share the same complexity. We thus find that the complexity (or simplicity) of a model is not determined by the order of the interactions, but rather by their mutual arrangement. Models where statistical dependencies are localized on non-overlapping groups of few variables (and that afford predictions on independencies that are easy to falsify) are simple. On the contrary, fully connected pairwise models, which are often used in statistical learning, appear to be highly complex, because of their extended set of interactions.
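
    As background for the terminology above, a spin model with interactions of arbitrary order and the standard asymptotic form of its stochastic complexity (Rissanen's expansion) can be written as follows; this is generic notation, not the paper's own.

        \[ P(\mathbf{s} \mid \mathbf{g}) \;=\; \frac{1}{Z(\mathbf{g})}\,
           \exp\!\Big( \sum_{\mu \in \mathcal{M}} g_\mu\, \phi^{\mu}(\mathbf{s}) \Big),
           \qquad \phi^{\mu}(\mathbf{s}) = \prod_{i \in \mu} s_i, \quad s_i = \pm 1, \]

    where each operator $\phi^{\mu}$ is a product of spins over a subset $\mu$ of variables (pairwise models keep only $|\mu| \le 2$). For $N$ samples, the stochastic complexity of a model $\mathcal{M}$ expands asymptotically as

        \[ -\log P(\text{data} \mid \hat{\mathbf{g}})
           \;+\; \frac{|\mathcal{M}|}{2}\,\log\frac{N}{2\pi}
           \;+\; \log \int \!\sqrt{\det J(\mathbf{g})}\;\mathrm{d}\mathbf{g} \;+\; o(1), \]

    with $\hat{\mathbf{g}}$ the maximum-likelihood parameters and $J$ the Fisher information; the last, sample-independent term is what lets two models with the same number of interactions differ in complexity.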

    Constant-space reasoning in dynamic Bayesian networks

    Dynamic Bayesian networks (DBNs) have been receiving increased attention as a tool for modeling complex stochastic processes, especially since they generalize the popular Hidden Markov Models (HMMs) and Kalman filters. Since DBNs are a subclass of standard Bayesian networks, the structure-based algorithms developed for Bayesian networks can be immediately applied to reasoning with DBNs. Such structure-based algorithms, which are variations on elimination algorithms, take O(N exp(w)) time and space to compute the likelihood of an event, where N is the number of nodes in the network and w is the width of a corresponding elimination order. DBNs, however, pose two specific computational challenges that require DBN-specific solutions. First, DBNs are typically heavily connected and therefore admit only elimination orders of high width. Second, even if one can find an elimination order of reasonable width, one cannot afford the space complexity of O(N exp(w)), since N = nT in this case, where n is the number of variables per time slice and T is the number of time slices in the DBN. For many applications, T is very large, making a space complexity of O(nT exp(w)) unrealistic. Therefore, one of the key challenges of DBNs is to develop efficient algorithms whose space complexity is independent of the time span T, leading to what are known as constant-space algorithms. We study one of the main algorithms for achieving this constant-space complexity in this paper, which is based on “slice-by-slice” elimination orders, and then suggest improvements to it based on new classes of elimination orders. We identify two topological parameters for DBNs and use them to prove a number of tight bounds on the time complexity of the algorithms that we study. We also observe (experimentally) that the newly identified elimination orders tend to be better than ones based on general-purpose elimination heuristics, such as min-fill. This suggests that constant-space algorithms, such as the ones we study here, should be used on DBNs even when space is not a concern.
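
    The simplest instance of “slice-by-slice” elimination is the forward pass of an HMM, the most basic DBN: only the message over the interface between consecutive slices is kept, so memory stays constant in the number of time slices T. The sketch below is only meant to illustrate that constant-space idea, not the paper's general algorithm or its elimination orders.

        def hmm_likelihood(prior, transition, emission, observations):
            """prior[i], transition[i][j], emission[i][o] are nested lists of probabilities."""
            n = len(prior)
            # message over the interface variables of the current slice
            message = [prior[i] * emission[i][observations[0]] for i in range(n)]
            for obs in observations[1:]:
                message = [sum(message[i] * transition[i][j] for i in range(n))
                           * emission[j][obs]
                           for j in range(n)]  # size n, independent of T
            return sum(message)

        prior = [0.6, 0.4]
        transition = [[0.7, 0.3], [0.4, 0.6]]
        emission = [[0.9, 0.1], [0.2, 0.8]]
        print(hmm_likelihood(prior, transition, emission, [0, 1, 0]))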

    Out-of-sample generalizations for supervised manifold learning for classification

    Supervised manifold learning methods for data classification map data samples residing in a high-dimensional ambient space to a lower-dimensional domain in a structure-preserving way, while enhancing the separation between different classes in the learned embedding. Most nonlinear supervised manifold learning methods compute the embedding of the manifolds only at the initially available training points, while the generalization of the embedding to novel points, known as the out-of-sample extension problem in manifold learning, becomes especially important in classification applications. In this work, we propose a semi-supervised method for building an interpolation function that provides an out-of-sample extension for general supervised manifold learning algorithms studied in the context of classification. The proposed algorithm computes a radial basis function (RBF) interpolator that minimizes an objective function consisting of the total embedding error of unlabeled test samples, defined as their distance to the embeddings of the manifolds of their own class, as well as a regularization term that controls the smoothness of the interpolation function in a direction-dependent way. The class labels of test data and the interpolation function parameters are estimated jointly with a progressive procedure. Experimental results on face and object images demonstrate the potential of the proposed out-of-sample extension algorithm for the classification of manifold-modeled data sets.
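
    A stripped-down version of the out-of-sample mechanism is sketched below: a plain Gaussian RBF interpolator is fitted to map training samples to their learned embedding and is then evaluated at unseen samples. It deliberately omits the paper's semi-supervised objective, joint label estimation, and direction-dependent regularization; all names and parameters are illustrative.

        import numpy as np

        def fit_rbf(X_train, Y_embed, sigma=1.0, ridge=1e-6):
            """Solve for coefficients C with K(X_train, X_train) @ C ~= Y_embed."""
            d2 = np.sum((X_train[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
            K = np.exp(-d2 / (2 * sigma ** 2))
            return np.linalg.solve(K + ridge * np.eye(len(X_train)), Y_embed)

        def rbf_extend(X_new, X_train, coeffs, sigma=1.0):
            """Embed unseen samples by evaluating the fitted interpolator."""
            d2 = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
            return np.exp(-d2 / (2 * sigma ** 2)) @ coeffs

        X_train = np.random.rand(20, 5)   # 20 training samples, 5-D ambient space
        Y_embed = np.random.rand(20, 2)   # their 2-D supervised embedding
        coeffs = fit_rbf(X_train, Y_embed)
        print(rbf_extend(np.random.rand(3, 5), X_train, coeffs).shape)  # (3, 2)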