
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth

    Full text link
    Dynamic programming on various graph decompositions is one of the most fundamental techniques used in parameterized complexity. Unfortunately, even for concepts as simple as path or tree decompositions, such dynamic programming uses space that is exponential in the decomposition's width, and there are good reasons to believe that this is necessary. However, it has been shown that in graphs of low treedepth it is possible to design algorithms which achieve polynomial space complexity without requiring worse time complexity than their counterparts working on tree decompositions of bounded width. Here, treedepth is a graph parameter that, intuitively speaking, takes into account both the depth and the width of a tree decomposition of the graph, rather than the width alone. Motivated by the above, we consider graphs that admit clique expressions with bounded depth and label count, or equivalently, graphs of low shrubdepth. Here, shrubdepth is a bounded-depth analogue of cliquewidth, in the same way as treedepth is a bounded-depth analogue of treewidth. We show that in this setting as well, bounding the depth of the decomposition is the deciding factor for improving the space complexity. Precisely, we prove that on $n$-vertex graphs equipped with a tree-model (a decomposition notion underlying shrubdepth) of depth $d$ using $k$ labels, we can solve:
    - Independent Set in time $2^{O(dk)} \cdot n^{O(1)}$ using $O(dk^2 \log n)$ space;
    - Max Cut in time $n^{O(dk)}$ using $O(dk \log n)$ space; and
    - Dominating Set in time $2^{O(dk)} \cdot n^{O(1)}$ using $n^{O(1)}$ space, via a randomized algorithm.
    We also establish a lower bound, conditional on a certain assumption about the complexity of Longest Common Subsequence, which shows that at least in the case of Independent Set the exponent of the parametric factor in the time complexity has to grow with $d$ if one wishes to keep the space complexity polynomial. Comment: Conference version to appear at the European Symposium on Algorithms (ESA 2023).
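    To make the decomposition concrete, here is a minimal Python sketch of a tree-model under our own assumptions about its encoding (the function name and the path-based tree encoding are illustrative, not taken from the paper). It follows the standard definition in which two leaves are adjacent exactly when the pair of their labels, together with the depth of their least common ancestor, appears in a fixed rule set.

```python
# Illustrative sketch of a tree-model, the decomposition notion behind
# shrubdepth (our own toy encoding, not code from the paper).
# Assumption: u and v are adjacent iff (label_u, label_v, depth of LCA)
# belongs to a fixed symmetric rule set.
from itertools import combinations

def edges_of_tree_model(leaves, rule):
    """leaves: vertex -> (label, path), where path is the tuple of child
    indices from the root; the LCA depth is the length of the common prefix.
    rule: set of (label_u, label_v, lca_depth) triples."""
    out = []
    for (u, (lu, pu)), (v, (lv, pv)) in combinations(leaves.items(), 2):
        lca = next((i for i, (a, b) in enumerate(zip(pu, pv)) if a != b),
                   min(len(pu), len(pv)))
        if (lu, lv, lca) in rule or (lv, lu, lca) in rule:
            out.append((u, v))
    return out

# Depth-2 model with 2 labels: leaves sharing a depth-1 subtree are adjacent
# exactly when both carry label 1.
leaves = {"a": (1, (0, 0)), "b": (1, (0, 1)), "c": (1, (1, 0)), "d": (2, (1, 1))}
print(edges_of_tree_model(leaves, {(1, 1, 1)}))  # -> [('a', 'b')]
```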

    Identification of Novel Properties of Metabolic Systems Through Null-Space Analysis

    Get PDF
    Metabolic models provide a mathematical description of the complex network of biochemical reactions that sustain life. Among these, genome-scale models capture the entire metabolism of an organism by encompassing all known biochemical reactions encoded by its genome. They are invaluable tools for exploring the metabolic potential of an organism, such as by predicting its response to different stimuli and identifying which reactions are essential for its survival. However, as the understanding of metabolism continues to grow, so too have the size and complexity of metabolic models, making novel techniques that can simplify networks and extract specific features from them ever more important. This thesis addresses this challenge by leveraging the underlying structure of the network embodied by these models. Three different approaches are presented. Firstly, an algorithm that uses convex analysis techniques to decompose flux measurements into a set of fundamental flux pathways is developed and applied to a genome-scale model of Campylobacter jejuni in order to investigate its absolute requirement for environmental oxygen. This approach aims to overcome the computational limitations associated with the traditional technique of elementary mode analysis. Secondly, a method that can reduce the size of models by removing redundancies is introduced. This method identifies alternative pathways that lead from the same starting product to the same end product, and it is useful for identifying systematic errors that arise from model construction and for revealing information about the network’s flexibility. Finally, a novel technique for relating metabolites based on relationships between their concentration changes, or alternatively their chemical similarity, is developed based on the invariant properties of the left null-space of the stoichiometry matrix. Although various methods for relating the composition of metabolites exist, this technique has the advantage of not requiring any information apart from the model’s structure, and it allowed for the development of an algorithm that can simplify models and their analysis by extracting pathways containing metabolites that have similar composition. Furthermore, a method that uses the left null-space to facilitate the identification of unbalanced reactions in models is also presented.
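    As a small, self-contained illustration of the left null-space idea (our own toy example, not taken from the thesis): since concentrations evolve as dc/dt = S v, any vector l with l^T S = 0 yields a conserved quantity l^T c, independent of the fluxes v. The sketch below computes such vectors for a three-metabolite system.

```python
# Toy example (ours): conservation relations from the left null-space of a
# stoichiometry matrix S (rows = metabolites, columns = reactions).
# Since dc/dt = S v, any l with l^T S = 0 makes l^T c invariant in time.
import numpy as np
from scipy.linalg import null_space

S = np.array([
    [-1.0,  0.0],   # ATP: consumed by reaction 1
    [ 1.0,  0.0],   # ADP: produced by reaction 1
    [ 1.0, -1.0],   # G6P: produced by reaction 1, drained by reaction 2
])

L = null_space(S.T)   # basis of the left null-space of S
print(L.ravel())      # ~ (1, 1, 0)/sqrt(2): the ATP + ADP pool is conserved
assert np.allclose(L.T @ S, 0)
```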

    FPT Approximations for Capacitated/Fair Clustering with Outliers

    Full text link
    Clustering problems such as $k$-Median and $k$-Means are motivated by applications such as location planning and unsupervised learning, among others. In such applications, it is important to find a clustering of the points that is not "skewed" in terms of the number of points, i.e., no cluster should contain too many points. This is modeled by capacity constraints on the sizes of clusters. In an orthogonal direction, another important consideration in clustering is how to handle the presence of outliers in the data. Indeed, these clustering problems have been generalized in the literature to separately handle capacity constraints and outliers. To the best of our knowledge, there has been very little work on the approximability of clustering problems that can simultaneously handle both capacities and outliers. We initiate the study of the Capacitated $k$-Median with Outliers (C$k$MO) problem. Here, we want to cluster all except $m$ outlier points into at most $k$ clusters, such that (i) the clusters respect the capacity constraints, and (ii) the cost of clustering, defined as the sum of distances of each non-outlier point to its assigned cluster center, is minimized. We design the first constant-factor approximation algorithms for C$k$MO. In particular, our algorithm returns a $(3+\epsilon)$-approximation for C$k$MO in general metric spaces, and a $(1+\epsilon)$-approximation in Euclidean spaces of constant dimension, running in time $f(k, m, \epsilon) \cdot |I_m|^{O(1)}$, where $|I_m|$ denotes the input size. We can also extend these results to a broader class of problems, including Capacitated $k$-Means/$k$-Facility Location with Outliers and Size-Balanced Fair Clustering problems with Outliers. For each of these problems, we obtain an approximation ratio that matches the best known guarantee of the corresponding outlier-free problem. Comment: Abstract shortened to meet arXiv requirements.
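    One building block can be made concrete (a sketch under our own assumptions, not the paper's approximation algorithm): once a set of candidate centers is fixed, the optimal capacity-respecting assignment that may discard up to m outliers reduces to a rectangular assignment problem, by expanding each center into as many "slots" as its capacity and adding m zero-cost outlier slots.

```python
# Sketch (ours, not the paper's algorithm): optimal assignment to FIXED
# centers under capacities, allowing up to m points to be dropped as outliers.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_with_capacities_and_outliers(points, centers, caps, m):
    dist = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    slot_center, slot_cost = [], []
    for j, cap in enumerate(caps):          # cap_j slots per center j
        slot_center += [j] * cap
        slot_cost += [dist[:, j]] * cap
    slot_center += [-1] * m                 # m zero-cost outlier slots
    slot_cost += [np.zeros(len(points))] * m
    cost = np.stack(slot_cost, axis=1)      # points x slots
    rows, slots = linear_sum_assignment(cost)
    labels = np.array([slot_center[s] for s in slots])  # -1 marks an outlier
    return labels, cost[rows, slots].sum()

pts = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.], [100., 100.]])
ctrs = np.array([[0., .5], [5., 5.5]])
print(assign_with_capacities_and_outliers(pts, ctrs, caps=[2, 2], m=1))
# -> labels [0 0 1 1 -1], cost 2.0: the far-away point is the one outlier
```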

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Gaussian and Bootstrap Approximations for Suprema of Empirical Processes

    Full text link
    In this paper we develop non-asymptotic Gaussian approximation results for the sampling distribution of suprema of empirical processes when the indexing function class $\mathcal{F}_n$ varies with the sample size $n$ and may not be Donsker. Prior approximations of this type required upper bounds on the metric entropy of $\mathcal{F}_n$ and uniform lower bounds on the variance of $f \in \mathcal{F}_n$, both of which limited their applicability to high-dimensional inference problems. In contrast, the results in this paper hold under simpler conditions on boundedness, continuity, and the strong variance of the approximating Gaussian process. The results are broadly applicable and yield a novel procedure for bootstrapping the distribution of empirical process suprema based on the truncated Karhunen-Loève decomposition of the approximating Gaussian process. We demonstrate the flexibility of this new bootstrap procedure by applying it to three fundamental problems in high-dimensional statistics: simultaneous inference on parameter vectors, inference on the spectral norm of covariance matrices, and construction of simultaneous confidence bands for functions in reproducing kernel Hilbert spaces. Comment: 95 pages.
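    The bootstrap's core step can be sketched on a finite grid (our own toy rendering of a truncated Karhunen-Loève simulation; the paper's procedure and its validity conditions are more general): eigendecompose the covariance of the approximating Gaussian process, simulate paths from the leading terms of the expansion, and read off the supremum.

```python
# Toy sketch (ours): bootstrap draws of sup_t G(t) via a truncated
# Karhunen-Loeve expansion, G(t) = sum_j sqrt(lambda_j) xi_j phi_j(t)
# with xi_j ~ N(0, 1), on a finite grid.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
K = np.minimum.outer(t, t)                 # example covariance: min(s, t)

lam, phi = np.linalg.eigh(K)               # eigenpairs, ascending order
lam, phi = lam[::-1], phi[:, ::-1]         # leading terms first

J, B = 20, 5000                            # truncation level, replications
xi = rng.standard_normal((B, J))
paths = xi @ (np.sqrt(np.clip(lam[:J], 0, None))[:, None] * phi[:, :J].T)

sup_draws = paths.max(axis=1)
print(np.quantile(sup_draws, 0.95))        # e.g., a simultaneous critical value
```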

    An experimental and modelling approach to study the performance and degradation of low temperature electrolyzers

    Get PDF
    The abstract is in the attachment.

    Deep Kalman Filters Can Filter

    Full text link
    Deep Kalman filters (DKFs) are a class of neural network models that generate Gaussian probability measures from sequential data. Though DKFs are inspired by the Kalman filter, they lack concrete theoretical ties to the stochastic filtering problem, thus limiting their applicability to areas where traditional model-based filters have been used, e.g., model calibration for bond and option prices in mathematical finance. We address this issue in the mathematical foundations of deep learning by exhibiting a class of continuous-time DKFs which can approximately implement the conditional law of a broad class of non-Markovian and conditionally Gaussian signal processes given noisy continuous-time measurements. Our approximation results hold uniformly over sufficiently regular compact subsets of paths, where the approximation error is quantified by the worst-case 2-Wasserstein distance computed uniformly over the given compact set of paths.
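    To fix ideas, here is a minimal DKF-style stub (our own illustrative PyTorch sketch, not the architecture studied in the paper): a recurrent network causally maps the observation history to the mean and diagonal covariance of a Gaussian law for the signal at each time step.

```python
# Minimal DKF-style stub (ours): an RNN emitting, at every step, a Gaussian
# over the signal state given the observations seen so far.
import torch
import torch.nn as nn

class DeepKalmanFilter(nn.Module):
    def __init__(self, obs_dim, state_dim, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.mean_head = nn.Linear(hidden_dim, state_dim)
        self.logvar_head = nn.Linear(hidden_dim, state_dim)

    def forward(self, obs):                   # obs: (batch, time, obs_dim)
        h, _ = self.rnn(obs)                  # causal summary of the history
        mean = self.mean_head(h)              # Gaussian mean per time step
        cov_diag = self.logvar_head(h).exp()  # positive diagonal covariance
        return mean, cov_diag

obs = torch.randn(8, 50, 3)                   # 8 paths, 50 steps, 3-dim noisy obs
mean, cov = DeepKalmanFilter(obs_dim=3, state_dim=2)(obs)
print(mean.shape, cov.shape)                  # torch.Size([8, 50, 2]) twice
```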