Why is it hard to beat for Longest Common Weakly Increasing Subsequence?
The Longest Common Weakly Increasing Subsequence problem (LCWIS) is a variant
of the classic Longest Common Subsequence problem (LCS). Both problems can be
solved with simple quadratic time algorithms. A recent line of research led to
a number of matching conditional lower bounds for LCS and other related
problems. However, the status of LCWIS remained open.
In this paper we show that LCWIS cannot be solved in strongly subquadratic
time unless the Strong Exponential Time Hypothesis (SETH) is false.
The ideas we developed can also be used to obtain a lower bound based
on the safer assumption of NC-SETH, i.e., a version of SETH that talks about NC
circuits instead of the less expressive CNF formulas.
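To make the quadratic upper bound mentioned above concrete, here is a minimal sketch of the textbook O(nm) dynamic program for two sequences, obtained by relaxing the classical LCIS recurrence to the weakly increasing case; the function and variable names are illustrative and not taken from the paper.

```python
def lcwis_length(a, b):
    """Length of the Longest Common Weakly Increasing Subsequence of a and b.

    Classical O(len(a) * len(b)) dynamic program: dp[j] holds the length of
    the best common weakly increasing subsequence of the processed prefix of
    a and b[:j+1] that ends exactly at b[j].
    """
    dp = [0] * len(b)
    for x in a:
        best = 0  # max dp[j'] over j' < j with b[j'] <= x, from previous rows
        for j, y in enumerate(b):
            prev = dp[j]               # value before this row's update
            if x == y and best + 1 > dp[j]:
                dp[j] = best + 1       # extend a subsequence ending <= x by x
            if y <= x and prev > best:
                best = prev
    return max(dp, default=0)


# Example: for a = b = [1, 1, 2] the answer is 3 (take 1, 1, 2), whereas the
# strictly increasing variant would only give 2.
print(lcwis_length([1, 1, 2], [1, 1, 2]))  # -> 3
```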
Counting Triangles in Large Graphs on GPU
The clustering coefficient and the transitivity ratio are concepts often used
in network analysis, which creates a need for fast practical algorithms for
counting triangles in large graphs. Previous research in this area focused on
sequential algorithms, MapReduce parallelization, and fast approximations.
In this paper we propose a parallel triangle counting algorithm for CUDA GPU.
We describe the implementation details necessary to achieve high performance
and present the experimental evaluation of our approach. Our algorithm achieves
8 to 15 times speedup over the CPU implementation and is capable of finding 3.8
billion triangles in a graph with 89 million edges in less than 10 seconds on the
Nvidia Tesla C2050 GPU.
Comment: 2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
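For background on what such a kernel has to compute, here is a minimal sequential sketch of the standard edge-iterator approach (orient edges from lower- to higher-degree endpoints, then count common out-neighbours along every edge); it is not the paper's CUDA implementation, and all names are illustrative.

```python
from collections import defaultdict

def count_triangles(edges):
    """Count triangles by intersecting neighbour sets along each oriented edge.

    Sequential sketch of the classical edge-iterator algorithm; orienting every
    edge from its lower-degree to its higher-degree endpoint keeps the
    intersection work low on skewed (power-law) graphs.
    """
    adj = defaultdict(set)
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    # Total order on vertices: by degree, ties broken by vertex id.
    rank = {v: (len(adj[v]), v) for v in adj}
    out = {v: {w for w in adj[v] if rank[w] > rank[v]} for v in adj}
    triangles = 0
    for v in out:
        for w in out[v]:
            # Each triangle {v, w, x} is counted exactly once, at its
            # lowest-ranked vertex v.
            triangles += len(out[v] & out[w])
    return triangles


# Example: a 4-clique on vertices 0..3 contains 4 triangles.
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(count_triangles(k4))  # -> 4
```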
Signal Processing in Wireless Communications: Device Fingerprinting and Wide-Band Interference Rejection
The rapid progress of wireless communication technologies in recent years has significantly improved the quality of everyday life. However, with this expansion of wireless communication systems come significant security threats and technological challenges, both of which stem from the fact that the communication medium is shared. The ubiquity of open wireless Internet access networks creates a new avenue for cyber-criminals to impersonate others and act in unauthorized ways. The increasing number of deployed wide-band wireless communication systems entails technological challenges for effective utilization of the shared medium, which implies the need for advanced interference rejection methods. Wireless security and interference rejection in wide-band wireless communications are therefore often considered the two main challenges in wireless network design and research. Important aspects of these challenges are illuminated and addressed in this dissertation.
This dissertation considers signal processing approaches for exploiting or mitigating the effects of non-ideal components in wireless communication systems. In the first part of the dissertation, we introduce and study a novel, model-based approach to wireless device identification that exploits imperfections in the transmitter caused by manufacturing process nonidealities. Previous approaches to device identification based on hardware imperfections range from transient analysis to machine learning but have not provided verifiable accuracy. Here, we detail a model-based approach that uses statistical models of RF transmitter components: the digital-to-analog converter, the power amplifier, and the RF oscillator, all of which are amenable to analysis. Our proposed approach examines the key device characteristics that cause anonymity loss, countermeasures that nodes can apply to regain anonymity, and ways of thwarting such countermeasures. We develop identification algorithms based on statistical signal processing methods and address the challenging scenario in which the units that need to be distinguished from one another are of the same model and from the same manufacturer. Using simulations and measurements of components that are commonly used in commercial communication systems, we show that our anonymity-breaking techniques are effective.
In the second part of the dissertation, we consider innovative approaches for the acquisition of frequency-sparse signals with wide-band receivers when a weak signal of interest is received in the presence of a very strong interference and the effects of nonlinearities in the low-noise amplifier at the receiver must be mitigated. All samples with amplitude above a given threshold, dictated by the linear input range of the receiver, are discarded to avoid the distortion caused by saturation of the low-noise amplifier. Such a sampling scheme, while avoiding nonlinear distortion that cannot be corrected in the digital domain, poses challenges for signal reconstruction techniques, as the samples are taken non-uniformly, but also non-randomly. The considered approaches fall into the field of compressive sensing (CS); however, what differentiates them from conventional CS is that a structure is forced upon the measurement scheme. Such a structure violates the core CS assumption of the measurements' randomness. We consider two different types of structured acquisition: signal-independent and signal-dependent. For the first case, we derive bounds on the number of samples needed for successful CS recovery when samples are drawn at random in predefined groups. For the second case, we consider enhancements of CS recovery methods when only small-amplitude samples of the signal to be recovered are available. Finally, we address the problem of spectral leakage due to the limited block size of block-processing wide-band receivers and propose an adaptive block-size adjustment method, which leads to significant dynamic range improvements.
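Purely to illustrate the signal-dependent acquisition idea described above (keep only the samples that stay within the amplifier's linear range and recover a frequency-sparse signal from those non-uniform, non-random samples), here is a small numpy sketch using orthogonal matching pursuit over a partial DFT dictionary. It is a generic CS baseline, not the recovery methods developed in the dissertation, and the sizes, tones, and threshold are made up.

```python
import numpy as np

def omp_partial_dft(y, kept, n, k):
    """Recover a k-sparse length-n spectrum from time samples y taken at the
    indices `kept`, via orthogonal matching pursuit over a partial DFT matrix."""
    A = np.exp(2j * np.pi * np.outer(kept, np.arange(n)) / n) / np.sqrt(n)
    residual, support = y.astype(complex), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    spectrum = np.zeros(n, dtype=complex)
    spectrum[support] = coef
    return spectrum

# Toy demo: a 3-tone complex baseband signal; samples whose amplitude exceeds
# the (made-up) linear range are discarded before recovery.
n, freqs, threshold = 512, [37, 151, 402], 1.2
true_spec = np.zeros(n, dtype=complex)
true_spec[freqs] = 20.0
x = np.fft.ifft(true_spec) * np.sqrt(n)        # time-domain signal
kept = np.flatnonzero(np.abs(x) < threshold)   # keep sub-threshold samples only
est = omp_partial_dft(x[kept], kept, n, k=3)
print("recovered tones:", sorted(np.flatnonzero(np.abs(est) > 1e-6)))
```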
Tight Conditional Lower Bounds for Longest Common Increasing Subsequence
We consider the canonical generalization of the well-studied Longest Increasing Subsequence problem to multiple sequences, called k-LCIS: Given k integer sequences X_1,...,X_k of length at most n, the task is to determine the length of the longest common subsequence of X_1,...,X_k that is also strictly increasing. Especially for the case of k=2 (called LCIS for short), several algorithms have been proposed that require quadratic time in the worst case.
Assuming the Strong Exponential Time Hypothesis (SETH), we prove a tight lower bound, specifically, that no algorithm solves LCIS in (strongly) subquadratic time. Interestingly, the proof makes no use of normalization tricks common to hardness proofs for similar problems such as LCS. We further strengthen this lower bound to rule out O((nL)^{1-epsilon}) time algorithms for LCIS, where L denotes the solution size, and to rule out O(n^{k-epsilon}) time algorithms for k-LCIS. We obtain the same conditional lower bounds for the related Longest Common Weakly Increasing Subsequence problem.
Bellman-Ford is optimal for shortest hop-bounded paths
This paper is about the problem of finding a shortest s-t path using at
most h edges in edge-weighted graphs. The Bellman--Ford algorithm solves this
problem in O(hm) time, where m is the number of edges. We show that this
running time is optimal, up to subpolynomial factors, under popular
fine-grained complexity assumptions.
More specifically, we show that under the APSP Hypothesis the problem cannot
be solved faster already in undirected graphs with non-negative edge weights.
This lower bound holds even restricted to graphs of arbitrary density and for
arbitrary h <= O(sqrt(m)). Moreover, under a stronger assumption, namely
the Min-Plus Convolution Hypothesis, we can eliminate the restriction
h <= O(sqrt(m)). In other words, the O(hm) bound is tight for the entire space
of parameters h, m, and n, where n is the number of nodes.
Our lower bounds can be contrasted with the recent near-linear time algorithm
for the negative-weight Single-Source Shortest Paths problem, which is the
textbook application of the Bellman--Ford algorithm.
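For reference, the upper bound side discussed above is the textbook hop-bounded variant of Bellman--Ford: h rounds of relaxations over all m edges, i.e. O(hm) time. A minimal sketch follows; the names and the edge-list representation are illustrative.

```python
import math

def hop_bounded_shortest_path(n, edges, s, t, h):
    """Shortest s-t path length using at most h edges (hops), in O(h*m) time.

    `edges` is a list of directed (u, v, weight) triples over vertices 0..n-1.
    """
    dist = [math.inf] * n     # dist[v]: cheapest way to reach v with <= i edges
    dist[s] = 0.0
    for _ in range(h):        # one round of relaxations per allowed hop
        nxt = dist[:]
        for u, v, w in edges:
            if dist[u] + w < nxt[v]:
                nxt[v] = dist[u] + w
        dist = nxt
    return dist[t]


# Example: the direct edge 0 -> 2 costs 10, the two-hop route costs 3, so the
# answer is 10 with h = 1 and 3 with h = 2.
g = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 10.0)]
print(hop_bounded_shortest_path(3, g, 0, 2, 1),
      hop_bounded_shortest_path(3, g, 0, 2, 2))
```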
Enhanced photoacoustic spectroscopy sensitivity through intra-cavity OPO excitation
We report an optical molecular gas sensor exhibiting high levels of selectivity and sensitivity. The outstanding sensitivity demonstrated by our technology is rooted in a novel arrangement in which photoacoustic spectroscopy (PAS) is operated within the cavity of a continuous-wave, intra-cavity Optical Parametric Oscillator (OPO). We exploit the very high circulating field present within the resonant down-converted cavity as the excitation source for the photoacoustic effect, conferring an orders-of-magnitude improvement in optical excitation power. Additionally, the wide selectivity of the system arises from the inherent broad tunability and narrow optical linewidth of an OPO. Here we report the use of this technology for the detection of ammonia (NH3) as a simulant target molecule. A 3D-printed miniature PAS cell with a microelectromechanical systems (MEMS) microphone is used for the gas detection. The resonance frequency of the cell was measured at 17.9 kHz with a Q-factor of 9. The down-converted signal wave resonating within its optical cavity was tuned to 6605.6 cm^-1 (corresponding to a strong local NH3 absorption line) through a combination of phase matching and intra-cavity etalon control. The laser was amplitude modulated at the resonance frequency of the PAS cell, producing an average optical excitation power of ~10 W in the signal arm of the OPO, to induce the photoacoustic effect for only 4 W of primary diode pump power. In this work we show a detection limit at the level of single parts per billion (ppb). Additionally, we discuss how this technology could be readily refined to potentially demonstrate a sensitivity of tens of parts per quadrillion.
Deterministic 3SUM-Hardness
As one of the three main pillars of fine-grained complexity theory, the 3SUM
problem explains the hardness of many diverse polynomial-time problems via
fine-grained reductions. Many of these reductions are either directly based on
or heavily inspired by Pătrașcu's framework involving additive hashing
and are thus randomized. Some selected reductions were derandomized in previous
work [Chan, He; SOSA'20], but the current techniques are limited and a major
fraction of the reductions remains randomized.
In this work we gather a toolkit aimed at derandomizing reductions based on
additive hashing. Using this toolkit, we manage to derandomize almost all known
3SUM-hardness reductions. As technical highlights we derandomize the hardness
reductions to (offline) Set Disjointness, (offline) Set Intersection and
Triangle Listing -- these questions were explicitly left open in previous work
[Kopelowitz, Pettie, Porat; SODA'16]. The few exceptions to our work fall into
a special category of recent reductions based on structure-versus-randomness
dichotomies.
We expect that our toolkit can be readily applied to derandomize future
reductions as well. As a conceptual innovation, our work thereby promotes the
theory of deterministic 3SUM-hardness.
As our second contribution, we prove that there is a deterministic universe
reduction for 3SUM. Specifically, using additive hashing it is a standard trick
to assume that the numbers in 3SUM have size at most O(n^3). We prove that this
assumption is similarly valid for deterministic algorithms.
Comment: To appear at ITCS 202
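To make the "additive hashing" primitive concrete: Pătrașcu-style reductions map numbers through h(x) = ((a*x) mod p) mod m for a random multiplier a, and the point is that h is additive up to one of a constant number of fixed offsets, so a 3SUM solution x + y = z survives the move to a universe of size m. The snippet below only illustrates this (randomized) property empirically; derandomizing reductions built on it is exactly what the paper is about, and the constants here are arbitrary.

```python
import random

p = (1 << 31) - 1            # a prime modulus
m = 1 << 12                  # small target universe
a = random.randrange(1, p)   # random multiplier (the part to be derandomized)
h = lambda x: (a * x % p) % m

# For x + y < p, the quantity h(x) + h(y) - h(x + y) takes only O(1) distinct
# values mod m, so hashed 3SUM instances need to check only those few offsets.
offsets = {(h(x) + h(y) - h(x + y)) % m
           for x, y in ((random.randrange(p // 2), random.randrange(p // 2))
                        for _ in range(100_000))}
print(sorted(offsets))       # e.g. [0, 4095] for these parameters
```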
Preparation of components for a CNG cogeneration unit
This thesis addresses part of the development of a CNG-fuelled cogeneration unit. The goal of the work was to convert the manual gas throttle control to an electronic one, to verify its functionality on an actual internal combustion engine, and to subsequently use the engine as the drive for a generator. The thesis also covers the coupling of an electric motor to the combustion engine so that the electric motor can be used as a generator of electrical energy. The thesis concludes with an evaluation of the measured output parameters of the generator.
Knapsack and Subset Sum with Small Items
Knapsack and Subset Sum are fundamental NP-hard problems in combinatorial optimization. Recently there has been a growing interest in understanding the best possible pseudopolynomial running times for these problems with respect to various parameters. In this paper we focus on the maximum item size s and the maximum item value v. We give algorithms that run in time O(n + s³) and O(n + v³) for the Knapsack problem, and in time Õ(n + s^{5/3}) for the Subset Sum problem. Our algorithms work for the more general problem variants with multiplicities, where each input item comes with a (binary encoded) multiplicity, which succinctly describes how many times the item appears in the instance. In these variants n denotes the (possibly much smaller) number of distinct items. Our results follow from combining and optimizing several diverse lines of research, notably proximity arguments for integer programming due to Eisenbrand and Weismantel (TALG 2019), fast structured (min,+)-convolution by Kellerer and Pferschy (J. Comb. Optim. 2004), and additive combinatorics methods originating from Galil and Margalit (SICOMP 1991).
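As a point of reference for the multiplicity variants mentioned above, here is the classical baseline they improve upon: binary splitting of the (binary-encoded) multiplicities followed by a bitset dynamic program over attainable sums. This is only the standard textbook approach, not the algorithms of the paper, and the names are illustrative.

```python
def subset_sum_multiplicities(items, target):
    """Decide Subset Sum for (size, multiplicity) items via the classical
    baseline: binary splitting plus a bitset DP over attainable sums.

    Binary splitting turns a multiplicity mu into chunks 1, 2, 4, ..., so that
    every count in 0..mu is expressible while adding only O(log mu) pseudo-items.
    """
    reachable = 1                       # bit i is set iff sum i is attainable
    mask = (1 << (target + 1)) - 1      # sums above the target are irrelevant
    for size, mult in items:
        chunk = 1
        while mult > 0:
            take = min(chunk, mult)
            reachable |= (reachable << (size * take)) & mask
            mult -= take
            chunk <<= 1
    return bool((reachable >> target) & 1)


# Example: sizes 3 (x5) and 7 (x2): 3*2 + 7*2 = 20 is attainable, 18 is not.
print(subset_sum_multiplicities([(3, 5), (7, 2)], 20),
      subset_sum_multiplicities([(3, 5), (7, 2)], 18))
```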