88 research outputs found

    A Linear-Time n^{0.4}-Approximation for Longest Common Subsequence

    We consider the classic problem of computing the Longest Common Subsequence (LCS) of two strings of length n. While a simple quadratic algorithm has been known for the problem for more than 40 years, no faster algorithm has been found despite an extensive effort. The lack of progress on the problem has recently been explained by Abboud, Backurs, and Vassilevska Williams [FOCS'15] and Bringmann and KĂŒnnemann [FOCS'15], who proved that there is no subquadratic algorithm unless the Strong Exponential Time Hypothesis fails. This has led the community to look for subquadratic approximation algorithms for the problem. Yet, unlike the edit distance problem, for which a constant-factor approximation in almost-linear time is known, very little progress has been made on LCS, making it a notoriously difficult problem also in the realm of approximation. For the general setting, only a naive O(n^{Δ/2})-approximation algorithm with running time Õ(n^{2−Δ}) has been known, for any constant 0 < Δ ≀ 1. Recently, a breakthrough result by Hajiaghayi, Seddighin, Seddighin, and Sun [SODA'19] provided a linear-time algorithm that yields an O(n^{0.497956})-approximation in expectation, improving upon the naive O(√n)-approximation for the first time.
In this paper, we provide an algorithm that in time O(n^{2−Δ}) computes an Õ(n^{2Δ/5})-approximation with high probability, for any 0 < Δ ≀ 1. Our result (1) gives an Õ(n^{0.4})-approximation in linear time, improving upon the bound of Hajiaghayi, Seddighin, Seddighin, and Sun, (2) provides an algorithm whose approximation scales with any subquadratic running time O(n^{2−Δ}), improving upon the naive bound of O(n^{Δ/2}) for any Δ, and (3) succeeds with high probability instead of only in expectation.
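The "simple quadratic algorithm" the abstract refers to is the textbook dynamic program for LCS; a minimal sketch for reference:

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(n^2)-time, O(n^2)-space dynamic program for the
    length of the Longest Common Subsequence of strings a and b."""
    n, m = len(a), len(b)
    # dp[i][j] = LCS length of the prefixes a[:i] and b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                # matching characters extend the best common subsequence
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # otherwise drop the last character of one of the strings
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]
```

The hardness results above say that, under SETH, no algorithm can beat this quadratic running time by a polynomial factor, which is what motivates the approximation algorithms discussed in the abstract.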

    Near-linear time approximation schemes for clustering in doubling metrics

    We consider the classic Facility Location, k-Median, and k-Means problems in metric spaces of doubling dimension d. We give nearly linear-time approximation schemes for each problem. The complexity of our algorithms is Õ(2^{(1/Δ)^{O(dÂČ)}} n), making a significant improvement over the state-of-the-art algorithms that run in time n^{(d/Δ)^{O(d)}}. Moreover, we show how to extend the techniques used to get the first efficient approximation schemes for the problems of prize-collecting k-Median and k-Means, and efficient bicriteria approximation schemes for k-Median with outliers, k-Means with outliers, and k-Center.
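For concreteness, the k-Means objective that this and the later online-clustering abstract optimize can be stated in a few lines; this is a generic sketch of the cost function, not code from the paper:

```python
def kmeans_cost(points, centers):
    """k-means objective: sum over all points of the squared
    Euclidean distance to the nearest candidate center."""
    total = 0.0
    for p in points:
        # squared distance from p to its closest center
        total += min(
            sum((pi - ci) ** 2 for pi, ci in zip(p, c))
            for c in centers
        )
    return total
```

An approximation scheme then returns, for any Δ > 0, a set of k centers whose cost is at most (1 + Δ) times the optimum.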

    Tensorial Constitutive Models for Disordered Foams, Dense Emulsions, and other Soft Nonergodic Materials

    Full text link
    In recent years, the paradigm of 'soft glassy matter' has been used to describe diverse nonergodic materials exhibiting strong local disorder and slow mesoscopic rearrangement. As so far formulated, however, the resulting 'soft glassy rheology' (SGR) model treats the shear stress in isolation, effectively 'scalarizing' the stress and strain rate tensors. Here we offer generalizations of the SGR model that combine its nontrivial aging and yield properties with a tensorial structure that can be specifically adapted, for example, to the description of fluid film assemblies or disordered foams.

    Almost tight lower bounds for hard cutting problems in embedded graphs


    Bulk and surface rheology of Aculynℱ 22 and Aculynℱ 33 polymeric solutions and kinetics of foam drainage

    This paper was accepted for publication in the journal Colloids and Surfaces A: Physicochemical and Engineering Aspects, and the definitive published version is available at http://dx.doi.org/10.1016/j.colsurfa.2013.05.072
    Experimental investigations of both bulk and surface rheology of solutions of the commercially available polymers Aculynℱ 22 and Aculynℱ 33 in the presence of sodium chloride are performed over a wide range of polymer and salt concentrations. It is shown that the bulk viscosity and the surface viscoelastic modulus of solutions of both polymers increase with increasing polymer concentration and decreasing salt concentration. Solutions of both polymers demonstrate very good foamability and form stable foams. Foam drainage is governed mainly by the bulk viscosity when the latter is in the range of 100-500 mPa·s.

    Balanced centroidal power diagrams for redistricting


    Online optimization of smoothed piecewise constant functions

    We study online optimization of smoothed piecewise constant functions over the domain [0, 1). This is motivated by the problem of adaptively picking parameters of learning algorithms, as in the recently introduced framework of Gupta and Roughgarden (2016). The majority of the machine learning literature has focused on Lipschitz-continuous functions or functions with bounded gradients. This is with good reason: any learning algorithm suffers linear regret even against piecewise constant functions that are chosen adversarially, arguably the simplest of non-Lipschitz-continuous functions. The smoothed setting we consider is inspired by the seminal work of Spielman and Teng (2004) and the recent work of Gupta and Roughgarden (2016); in this setting, the sequence of functions may be chosen by an adversary, however with some uncertainty in the location of discontinuities. We give algorithms that achieve sublinear regret in the full information and bandit settings.
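A standard full-information baseline for this kind of problem is multiplicative weights (Hedge) over a discretization of [0, 1). The sketch below is illustrative only; it is the generic baseline, not the paper's smoothed-analysis algorithm, and the grid size and learning rate are arbitrary choices:

```python
import math
import random

def hedge(loss_fns, grid_size=100, eta=0.5):
    """Hedge over a uniform grid of [0, 1): maintain one weight per
    grid point, play a point drawn from the normalized weights, then
    exponentially downweight every point by its observed loss."""
    experts = [i / grid_size for i in range(grid_size)]
    weights = [1.0] * grid_size
    total_loss = 0.0
    for f in loss_fns:
        z = sum(weights)
        probs = [w / z for w in weights]
        x = random.choices(experts, weights=probs)[0]
        total_loss += f(x)
        # full information: the loss of every grid point is observable
        weights = [w * math.exp(-eta * f(e))
                   for w, e in zip(weights, experts)]
    return total_loss
```

Against adversarial piecewise constant losses this baseline can still suffer linear regret, which is exactly why the abstract moves to the smoothed setting.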

    Online k-means clustering

    We study the problem of learning a clustering of an online set of points. The specific formulation we use is the k-means objective: at each time step the algorithm has to maintain a set of k candidate centers, and the loss incurred by the algorithm is the squared distance between the new point and the closest center. The goal is to minimize regret with respect to the best solution to the k-means objective in hindsight. We show that, provided the data lies in a bounded region, learning is possible; namely, an implementation of the Multiplicative Weights Update Algorithm (MWUA) using a discretized grid achieves a regret bound of Õ(√T) in expectation. We also present an online-to-offline reduction showing that an efficient no-regret online algorithm (despite being allowed to choose a different set of candidate centres at each round) implies an efficient offline algorithm for the k-means problem, which is known to be NP-hard. In light of this hardness, we consider the slightly weaker requirement of comparing regret with respect to (1 + Δ)OPT and present a no-regret algorithm with runtime O(T · poly(log(T), k, d, 1/Δ)^{O(kd)}). Our algorithm is based on maintaining a set of points of bounded size which is a coreset that helps identify the relevant regions of the space for running an adaptive, more efficient, variant of the MWUA. We show that simpler online algorithms, such as Follow The Leader (FTL), fail to produce sublinear regret in the worst case. We also report preliminary experiments with synthetic and real-world data. Our theoretical results answer an open question of Dasgupta (2008).
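The online protocol described above (commit to centers, then pay the squared distance of the arriving point to its nearest center) can be sketched directly; the `last_k` strategy below is a hypothetical toy baseline for illustration, not one of the paper's algorithms:

```python
def online_kmeans_loss(points, choose_centers):
    """Online k-means protocol: before each point arrives, the learner
    commits to a set of centers via choose_centers(history); the loss
    for that round is the squared distance from the new point to its
    closest committed center."""
    seen, total = [], 0.0
    for p in points:
        centers = choose_centers(seen)
        total += min(
            sum((a - b) ** 2 for a, b in zip(p, c))
            for c in centers
        )
        seen.append(p)
    return total

def last_k(seen, k=2):
    """Toy strategy: use the k most recent points as centers
    (the origin before any point has arrived)."""
    return seen[-k:] if seen else [(0.0, 0.0)]
```

Regret then compares this cumulative loss against the k-means cost of the best fixed set of k centers in hindsight; the abstract's negative result is that naive hindsight-greedy strategies like FTL can still incur linear regret.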
    • 
