
    Why is it hard to beat O(n^2) for Longest Common Weakly Increasing Subsequence?

    The Longest Common Weakly Increasing Subsequence problem (LCWIS) is a variant of the classic Longest Common Subsequence problem (LCS). Both problems can be solved with simple quadratic time algorithms. A recent line of research led to a number of matching conditional lower bounds for LCS and other related problems. However, the status of LCWIS remained open. In this paper we show that LCWIS cannot be solved in strongly subquadratic time unless the Strong Exponential Time Hypothesis (SETH) is false. The ideas we developed can also be used to obtain a lower bound based on the safer assumption of NC-SETH, i.e., a version of SETH that concerns NC circuits instead of less expressive CNF formulas.
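    As context for the "simple quadratic time algorithms" mentioned above, here is a minimal sketch of the standard O(n^2) dynamic program for LCWIS (my own illustration, not code from the paper):

```python
def lcwis(A, B):
    """Simple quadratic DP for Longest Common Weakly Increasing Subsequence.

    f[j] holds the length of an LCWIS of the processed prefix of A and B
    that ends exactly at B[j].  Runs in O(len(A) * len(B)) time.
    """
    f = [0] * len(B)
    for a in A:
        best = 0  # max f[j'] over already-scanned j' with B[j'] <= a
        for j, b in enumerate(B):
            old = f[j]
            if a == b and best + 1 > f[j]:
                f[j] = best + 1   # extend the best weakly smaller subsequence
            if b <= a and old > best:
                best = old        # use the value from before this row's update
    return max(f, default=0)

# The strict LCIS variant differs only in using "<" instead of "<=" above.
assert lcwis([1, 4, 4, 2], [1, 4, 4]) == 3  # 1, 4, 4
```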

    Counting Triangles in Large Graphs on GPU

    The clustering coefficient and the transitivity ratio are concepts often used in network analysis, which creates a need for fast practical algorithms for counting triangles in large graphs. Previous research in this area focused on sequential algorithms, MapReduce parallelization, and fast approximations. In this paper we propose a parallel triangle counting algorithm for CUDA GPU. We describe the implementation details necessary to achieve high performance and present the experimental evaluation of our approach. Our algorithm achieves an 8 to 15 times speedup over the CPU implementation and is capable of finding 3.8 billion triangles in a graph with 89 million edges in less than 10 seconds on the Nvidia Tesla C2050 GPU. (2016 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW)
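    For reference, here is a compact sequential counter in the spirit of the intersection-based "forward" algorithm that GPU implementations typically parallelize over edges (a sketch for illustration, not the authors' CUDA code):

```python
from collections import defaultdict

def count_triangles(edges):
    """Orient each edge toward its higher-ranked endpoint (by degree), then
    intersect out-neighbor sets; every triangle is counted exactly once."""
    adj = defaultdict(set)
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    rank = {v: (len(nb), v) for v, nb in adj.items()}  # total order on vertices
    out = {v: {w for w in nb if rank[w] > rank[v]} for v, nb in adj.items()}
    return sum(len(out[v] & out[w]) for v in out for w in out[v])

# Sanity check: the complete graph K4 contains 4 triangles.
k4 = [(a, b) for a in range(4) for b in range(a + 1, 4)]
assert count_triangles(k4) == 4
```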

    Tight Conditional Lower Bounds for Longest Common Increasing Subsequence

    We consider the canonical generalization of the well-studied Longest Increasing Subsequence problem to multiple sequences, called k-LCIS: Given k integer sequences X_1,...,X_k of length at most n, the task is to determine the length of the longest common subsequence of X_1,...,X_k that is also strictly increasing. Especially for the case of k=2 (called LCIS for short), several algorithms have been proposed that require quadratic time in the worst case. Assuming the Strong Exponential Time Hypothesis (SETH), we prove a tight lower bound, specifically, that no algorithm solves LCIS in (strongly) subquadratic time. Interestingly, the proof makes no use of normalization tricks common to hardness proofs for similar problems such as LCS. We further strengthen this lower bound to rule out O((nL)^{1-epsilon}) time algorithms for LCIS, where L denotes the solution size, and to rule out O(n^{k-epsilon}) time algorithms for k-LCIS. We obtain the same conditional lower bounds for the related Longest Common Weakly Increasing Subsequence problem.
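    For contrast with the quadratic barrier for two or more sequences, the single-sequence Longest Increasing Subsequence problem mentioned above admits the classic O(n log n) patience-sorting algorithm; a minimal sketch:

```python
import bisect

def lis_length(seq):
    """Classic O(n log n) patience sorting for a single sequence: tails[k] is
    the smallest possible last element of a strictly increasing subsequence
    of length k + 1 found so far."""
    tails = []
    for x in seq:
        k = bisect.bisect_left(tails, x)  # first tail >= x (enforces strictness)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(tails)

assert lis_length([3, 1, 4, 1, 5, 9, 2, 6]) == 4  # e.g. 1, 4, 5, 9
```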

    Bellman-Ford is optimal for shortest hop-bounded paths

    This paper is about the problem of finding a shortest s-t path using at most h edges in edge-weighted graphs. The Bellman-Ford algorithm solves this problem in O(hm) time, where m is the number of edges. We show that this running time is optimal, up to subpolynomial factors, under popular fine-grained complexity assumptions. More specifically, we show that under the APSP Hypothesis the problem cannot be solved faster even in undirected graphs with non-negative edge weights. This lower bound holds even restricted to graphs of arbitrary density and for arbitrary h ∈ O(√m). Moreover, under a stronger assumption, namely the Min-Plus Convolution Hypothesis, we can eliminate the restriction h ∈ O(√m). In other words, the O(hm) bound is tight for the entire space of parameters h, m, and n, where n is the number of nodes. Our lower bounds can be contrasted with the recent near-linear time algorithm for the negative-weight Single-Source Shortest Paths problem, which is the textbook application of the Bellman-Ford algorithm.
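    A minimal sketch of the O(hm) upper bound discussed above (my own illustration of the standard hop-bounded Bellman-Ford, not code from the paper):

```python
def hop_bounded_shortest_path(n, edges, s, t, h):
    """Bellman-Ford capped at h rounds, O(h * m) time: after round i,
    dist[v] is the weight of a shortest s-v path using at most i edges.
    Copying dist between rounds keeps the hop-count semantics exact."""
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    for _ in range(h):
        new = dist[:]
        for u, v, w in edges:  # undirected: relax in both directions
            if dist[u] + w < new[v]:
                new[v] = dist[u] + w
            if dist[v] + w < new[u]:
                new[u] = dist[v] + w
        dist = new
    return dist[t]

# With one hop we must take the heavy direct edge; with two we can go around.
edges = [(0, 1, 1), (1, 2, 1), (0, 2, 5)]
assert hop_bounded_shortest_path(3, edges, 0, 2, h=1) == 5
assert hop_bounded_shortest_path(3, edges, 0, 2, h=2) == 2
```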

    Enhanced photoacoustic spectroscopy sensitivity through intra-cavity OPO excitation

    We report an optical molecular gas sensor exhibiting high levels of selectivity and sensitivity. The outstanding sensitivity demonstrated by our technology is rooted in a novel combination of photoacoustic spectroscopy (PAS) operated within the cavity of a continuous-wave, intra-cavity Optical Parametric Oscillator (OPO). We exploit the very high circulating field present within the resonant down-converted cavity as the excitation source of the photoacoustic effect, conferring an orders-of-magnitude improvement in optical excitation power. Additionally, the wide selectivity of the system arises from the inherent broad tunability and narrow optical linewidth of an OPO. Here we report the use of this technology for the detection of ammonia (NH3) as a simulant target molecule. A 3-D printed miniature PAS cell with a microelectromechanical systems (MEMS) based microphone is used for the gas detection. The resonance frequency of the cell was measured at 17.9 kHz with a Q-factor of 9. The down-converted signal wave resonating within its optical cavity was tuned to 6605.6 cm^-1 (corresponding to a strong local NH3 absorption line) through a combination of phase matching and intra-cavity etalon control. The laser was amplitude modulated at the resonance frequency of the PAS cell, producing an average optical excitation power of ~10 W in the signal arm of the OPO, to induce the photoacoustic effect for only 4 W of primary diode pump power. In this work we show a detection limit at the level of single parts-per-billion (ppb). Additionally, we discuss how this technology could be readily refined to potentially demonstrate a sensitivity of tens of parts-per-quadrillion.
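    As a quick unit check (my own arithmetic, not a figure from the paper), the quoted signal wavenumber corresponds to a vacuum wavelength of about 1.5 µm:

```python
# lambda = 1 / nu_tilde, with 1 cm = 1e7 nm
nu_tilde = 6605.6                 # quoted signal-wave wavenumber, cm^-1
wavelength_nm = 1e7 / nu_tilde
print(f"{wavelength_nm:.1f} nm")  # ~1513.9 nm, i.e. near-infrared
```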

    Deterministic 3SUM-Hardness

    As one of the three main pillars of fine-grained complexity theory, the 3SUM problem explains the hardness of many diverse polynomial-time problems via fine-grained reductions. Many of these reductions are either directly based on or heavily inspired by Pătrașcu's framework involving additive hashing and are thus randomized. Some selected reductions were derandomized in previous work [Chan, He; SOSA'20], but the current techniques are limited and a major fraction of the reductions remains randomized. In this work we gather a toolkit aimed to derandomize reductions based on additive hashing. Using this toolkit, we manage to derandomize almost all known 3SUM-hardness reductions. As technical highlights, we derandomize the hardness reductions to (offline) Set Disjointness, (offline) Set Intersection and Triangle Listing -- these questions were explicitly left open in previous work [Kopelowitz, Pettie, Porat; SODA'16]. The few exceptions to our work fall into a special category of recent reductions based on structure-versus-randomness dichotomies. We expect that our toolkit can be readily applied to derandomize future reductions as well. As a conceptual innovation, our work thereby promotes the theory of deterministic 3SUM-hardness. As our second contribution, we prove that there is a deterministic universe reduction for 3SUM. Specifically, using additive hashing it is a standard trick to assume that the numbers in 3SUM have size at most n^3. We prove that this assumption is similarly valid for deterministic algorithms. (To appear at ITCS 2024)
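    For background, 3SUM itself has a textbook deterministic quadratic algorithm; the derandomization questions above concern the reductions from 3SUM, not the problem itself. A minimal sketch:

```python
def has_3sum(nums):
    """Textbook deterministic O(n^2) 3SUM: sort once, then for each smallest
    element sweep two pointers over the remaining sorted suffix."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1   # sum too small: move the left pointer up
            else:
                hi -= 1   # sum too large: move the right pointer down
    return False

assert has_3sum([-5, 1, 4, 2, -1]) is True   # -5 + 1 + 4 = 0
assert has_3sum([1, 2, 3]) is False
```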

    Preparation of components for a CNG cogeneration unit

    This thesis deals with part of the development of a CNG-fuelled cogeneration unit. The goal of the work was to convert the manual gas throttle control to electronic control, to verify its functionality on an actual combustion engine, and to subsequently use the engine to drive a generator. The work further solves the coupling of an electric motor to the combustion engine so that the electric motor can be used as a generator of electrical energy. The conclusion evaluates the measured values of the generator's output parameters.

    Knapsack and Subset Sum with Small Items

    Knapsack and Subset Sum are fundamental NP-hard problems in combinatorial optimization. Recently there has been a growing interest in understanding the best possible pseudopolynomial running times for these problems with respect to various parameters. In this paper we focus on the maximum item size s and the maximum item value v. We give algorithms that run in time O(n + s^3) and O(n + v^3) for the Knapsack problem, and in time Õ(n + s^{5/3}) for the Subset Sum problem. Our algorithms work for the more general problem variants with multiplicities, where each input item comes with a (binary encoded) multiplicity, which succinctly describes how many times the item appears in the instance. In these variants n denotes the (possibly much smaller) number of distinct items. Our results follow from combining and optimizing several diverse lines of research, notably proximity arguments for integer programming due to Eisenbrand and Weismantel (TALG 2019), fast structured (min,+)-convolution by Kellerer and Pferschy (J. Comb. Optim. 2004), and additive combinatorics methods originating from Galil and Margalit (SICOMP 1991).
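    For orientation, the textbook pseudopolynomial baseline that such results improve upon is the bitset dynamic program below (a sketch of the standard approach, not the paper's proximity/convolution-based algorithms):

```python
def attainable_subset_sums(items, t):
    """Textbook bitset DP for Subset Sum: bit k of the result is 1 iff some
    subset of `items` sums to exactly k (for k <= t).  Python's big integers
    act as the bitset, so each item costs one shift and one OR over t bits."""
    mask = (1 << (t + 1)) - 1
    reach = 1  # only the empty sum 0 is attainable initially
    for x in items:
        reach |= (reach << x) & mask
    return reach

r = attainable_subset_sums([3, 5, 9], 17)
assert (r >> 8) & 1       # 3 + 5 = 8 is attainable
assert not (r >> 10) & 1  # 10 is not
```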