How brokers can optimally plot against traders
Traders buy and sell financial instruments in hopes of making a profit, and
brokers are responsible for executing the transaction. Some brokers, known as
market-makers, take the position opposite to the trader's. If the trader buys,
they sell; if the trader sells, they buy. Said differently, brokers make money
whenever their traders lose money. From this somewhat strange mechanism emerge
various conspiracy theories, notably that brokers manipulate prices in order to
maximize their traders' losses. In this paper, our goal is to perform this evil
task optimally. Assuming total control over the price of an asset (ignoring the
usual aspects of finance such as market conditions, external influence or
stochasticity), we show how, in cubic time, given a set of trades each
specified by a stop-loss and a take-profit price, a broker can find a price
movement that maximizes the traders' losses. We also study the same problem
under a model of probabilistic trades.
We finally look at the online trade setting, where broker and trader exchange
turns, each trying to make a profit. We show that the best option for the
trader is to never trade.
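To make the model concrete, the sketch below evaluates one candidate price movement against a set of trades. Everything in it (long-only trades opened at the initial price, the function and variable names) is an illustrative assumption, not the paper's cubic-time algorithm: each trade simply closes at whichever of its stop-loss or take-profit the path reaches first, and the broker pockets the corresponding loss.

```python
def broker_profit(path, trades):
    """path: list of prices over time; trades: list of (stop_loss, take_profit)
    pairs for long trades opened at path[0].  Returns the broker's total
    profit, which equals the traders' total loss."""
    entry = path[0]
    profit = 0.0
    for stop, take in trades:
        for price in path:
            if price <= stop:          # stop-loss reached first: trader loses
                profit += entry - stop
                break
            if price >= take:          # take-profit reached first: trader wins
                profit -= take - entry
                break
        # a trade whose levels are never reached stays open and contributes 0
    return profit

trades = [(90, 120), (95, 105)]
print(broker_profit([100, 106], trades))      # -5.0: 106 hits the take-profit
print(broker_profit([100, 94, 106], trades))  # 5.0: the dip to 94 fires the stop
```

The second call shows why the shape of the movement matters: dipping to 94 before rising closes the (95, 105) trade at its stop-loss rather than its take-profit.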
Hamiltonian chordal graphs are not cycle extendible
In 1990, Hendry conjectured that every Hamiltonian chordal graph is cycle
extendible; that is, the vertices of any non-Hamiltonian cycle are contained in
a cycle of length one greater. We disprove this conjecture by constructing
counterexamples on n vertices for all sufficiently large n. Furthermore, we show that
there exist counterexamples where the ratio of the length of a non-extendible
cycle to the total number of vertices can be made arbitrarily small. We then
consider cycle extendibility in Hamiltonian chordal graphs where certain
induced subgraphs are forbidden, notably the bull.
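Cycle extendibility is easy to test by brute force on small graphs. The sketch below (exponential time, illustrative names) checks the definition directly: a cycle is extendible if its vertex set together with one additional vertex lies on a common cycle.

```python
from itertools import permutations

def has_cycle_through(adj, verts):
    """Brute force: does some cycle visit exactly the vertices in verts?
    adj maps each vertex to its set of neighbours."""
    verts = list(verts)
    first, rest = verts[0], verts[1:]
    for perm in permutations(rest):
        order = [first, *perm]
        if all(order[i + 1] in adj[order[i]] for i in range(len(order) - 1)) \
                and first in adj[order[-1]]:
            return True
    return False

def is_extendible(adj, cycle):
    """Per Hendry's definition: the cycle's vertices plus one extra vertex
    must be contained in a cycle (of length one greater)."""
    outside = set(adj) - set(cycle)
    return any(has_cycle_through(adj, set(cycle) | {v}) for v in outside)
```

A Hamiltonian chordal graph is cycle extendible when every non-Hamiltonian cycle passes this test; the counterexamples in the paper are graphs in which some cycle fails it.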
The Tandem Duplication Distance Is NP-Hard
In computational biology, tandem duplication is an important biological phenomenon which can occur at either the genome or the DNA level. A tandem duplication takes a copy of a genome segment and inserts it right after the segment - this can be represented as the string operation AXB → AXXB. Tandem exon duplications have been found in many species such as human, fly or worm, and have been studied extensively in computational biology.
The Tandem Duplication (TD) distance problem we investigate in this paper is defined as follows: given two strings S and T over the same alphabet, compute the smallest sequence of tandem duplications required to convert S to T. The natural question of whether the TD distance can be computed in polynomial time was posed in 2004 by Leupold et al. and had remained open, despite the fact that tandem duplications have received much attention ever since. In this paper, we prove that this problem is NP-hard, settling the 16-year-old open problem. We further show that this hardness holds even if all characters of S are distinct. This is known as the exemplar TD distance, which is of special relevance in bioinformatics. One of the tools we develop for the reduction is a new problem called the Cost-Effective Subgraph, for which we obtain W[1]-hardness results that might be of independent interest. We finally show that computing the exemplar TD distance between S and T is fixed-parameter tractable. Our results open the door to many other questions, and we conclude with several open problems.
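For small instances, the TD distance can be computed by plain breadth-first search over the duplication operation AXB → AXXB, pruning any intermediate string longer than T. The sketch below is such a baseline (names are illustrative; its worst case is exponential, consistent with the NP-hardness result).

```python
from collections import deque

def td_distance(S, T):
    """Length of the smallest sequence of tandem duplications turning S
    into T, or None if T is unreachable from S."""
    if S == T:
        return 0
    seen, queue = {S}, deque([(S, 0)])
    while queue:
        s, d = queue.popleft()
        for i in range(len(s)):                 # every segment X = s[i:j]
            for j in range(i + 1, len(s) + 1):
                t = s[:i] + s[i:j] * 2 + s[j:]  # AXB -> AXXB
                if len(t) > len(T) or t in seen:
                    continue
                if t == T:
                    return d + 1
                seen.add(t)
                queue.append((t, d + 1))
    return None

print(td_distance("ab", "aabab"))  # 2: ab -> aab -> aabab
```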
Predicting Horizontal Gene Transfers with Perfect Transfer Networks
Horizontal gene transfer inference approaches are usually based on gene sequences: parametric methods search for patterns that deviate from a particular genomic signature, while phylogenetic methods use sequences to reconstruct the gene and species trees. However, it is well-known that sequences have difficulty identifying ancient transfers since mutations have enough time to erase all evidence of such events. In this work, we ask whether character-based methods can predict gene transfers. Their advantage over sequences is that homologous genes can have low DNA similarity but still retain enough important common motifs to share character traits, for instance the same functional or expression profile. A phylogeny in which two separate clades acquired the same character independently might indicate the presence of a transfer, even in the absence of sequence similarity.
We introduce perfect transfer networks, which are phylogenetic networks that can explain the character diversity of a set of taxa. This problem has been studied extensively in the form of ancestral recombination networks, but these only model hybridization events and do not differentiate between direct parents and lateral donors. We focus on tree-based networks, in which edges representing vertical descent are clearly distinguished from those that represent horizontal transmission. Our model is a direct generalization of perfect phylogeny models to such networks. Our goal is to initiate a study on the structural and algorithmic properties of perfect transfer networks. We then show that in polynomial time, one can decide whether a given network is a valid explanation for a set of taxa, and show how, for a given tree, one can add transfer edges to it so that it explains a set of taxa.
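The paper's exact semantics are not reproduced here, but a toy version conveys the flavor. Suppose, purely as illustrative assumptions, that each character is gained exactly once, never lost, inherited along all tree (vertical) edges, and optionally passed along transfer edges. Then, for a network to explain a character, some origin node must exist whose vertical leaf descendants all carry the character, while every leaf carrying it becomes reachable once transfer edges are added. The sketch below checks only this assumed necessary condition; all names are hypothetical.

```python
def reachable(edges, start):
    """Nodes reachable from start; edges: dict node -> list of children."""
    seen, stack = set(), [start]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(edges.get(u, []))
    return seen

def may_explain(tree, transfers, leaves, char_leaves):
    """Toy necessary condition under the assumed semantics: some origin x has
    all of its vertical leaf descendants in char_leaves, and char_leaves is
    reachable from x once transfer edges are added."""
    nodes = set(tree) | {c for cs in tree.values() for c in cs}
    both = {u: tree.get(u, []) + transfers.get(u, []) for u in nodes}
    for x in nodes:
        vertical = reachable(tree, x) & leaves
        with_transfers = reachable(both, x) & leaves
        if vertical <= char_leaves <= with_transfers:
            return True
    return False

tree = {"r": ["a", "b"], "a": ["x", "y"], "b": ["z", "w"]}
transfers = {"a": ["b"]}
print(may_explain(tree, transfers, {"x", "y", "z", "w"}, {"x", "y", "z", "w"}))  # True
```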
The complexity of speedrunning video games
Speedrunning is a popular activity in which the goal is to finish a video game as fast as possible. Players around the world spend hours each day streaming live, perfecting their skills to achieve world records in well-known games such as Super Mario Bros, Castlevania or Mega Man. But human execution is not the only factor in a successful speedrun. Some common techniques such as damage boosting or routing require careful planning to optimize time gains. In this paper, we show that optimizing these mechanics is in fact a profound algorithmic problem, as they lead to novel generalizations of the well-known NP-hard knapsack and feedback arc set problems.
We show that the problem of finding the optimal damage boosting locations in a game admits an FPTAS and is FPT in k + r, where k is the number of enemy types in the game and r the number of health refill locations. However, if the player is allowed to lose a life to regain health, the problem becomes hard to approximate within a factor of 1/2, but admits a (1/2 - epsilon)-approximation with two lives. Damage boosting can also be solved in pseudo-polynomial time. As for routing, we show various hardness results, including W[2]-hardness parameterized by the time lost in a game, even on bounded treewidth stage graphs. On the positive side, we exhibit an FPT algorithm for stage graphs of bounded treewidth and bounded in-degree.
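The pseudo-polynomial claim hints at a knapsack-style dynamic program. As a deliberately simplified illustration (one health bar, no refills or extra lives; the model and names are assumptions, not the paper's algorithm), each boost saves some time but costs damage, and the runner must finish with health above zero:

```python
def best_time_saved(boosts, hp):
    """0/1 knapsack over damage: boosts is a list of (time_saved, damage)
    pairs; the runner survives as long as total damage stays below hp."""
    cap = hp - 1                   # maximum damage we can absorb and live
    dp = [0] * (cap + 1)           # dp[d] = best time saved with <= d damage
    for saved, dmg in boosts:
        for d in range(cap, dmg - 1, -1):
            dp[d] = max(dp[d], dp[d - dmg] + saved)
    return dp[cap]

print(best_time_saved([(10, 3), (7, 2), (4, 2)], hp=5))  # 11: take the two 2-damage boosts
```

The refill locations and extra lives studied in the paper make the real problem considerably richer than this plain knapsack.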
Early identification of important patents through network centrality
One of the most challenging problems in technological forecasting is to
identify as early as possible those technologies that have the potential to
lead to radical changes in our society. In this paper, we use the US patent
citation network (1926-2010) to test how early historically significant
patents can be identified through citation network analysis. We show
that, to effectively uncover these patents shortly after they are
issued, we need to go beyond raw citation counts and take into account both the
citation network topology and temporal information. In particular, an
age-normalized measure of patent centrality, called rescaled PageRank, allows
us to identify significant patents earlier than citation counts or raw PageRank
scores do. In addition, we find that while high-impact patents tend to rely on
other high-impact patents in a similar way as scientific papers do, the
citation dynamics of patents are significantly slower than those of papers,
which makes the early identification of significant patents more challenging
than that of significant papers.
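Rescaled PageRank, following the network-centrality literature this work builds on, z-scores each patent's PageRank against patents issued around the same time. The sketch below uses networkx; the window size and teleportation parameter are assumptions for illustration, not the paper's exact settings.

```python
import statistics
import networkx as nx

def rescaled_pagerank(G, issue_date, window=1000):
    """G: directed citation graph (edge u -> v means u cites v).
    issue_date: dict node -> sortable date.  Each node's PageRank score is
    z-scored against the `window` nodes issued around the same time."""
    pr = nx.pagerank(G, alpha=0.5)   # low alpha is a common choice for citations
    nodes = sorted(G, key=lambda n: issue_date[n])
    rank = {n: i for i, n in enumerate(nodes)}
    rescaled = {}
    for n in nodes:
        lo = max(0, rank[n] - window // 2)
        peers = nodes[lo:lo + window]            # temporal neighbourhood of n
        scores = [pr[p] for p in peers]
        mu = statistics.mean(scores)
        sigma = statistics.pstdev(scores) or 1.0  # guard against zero spread
        rescaled[n] = (pr[n] - mu) / sigma
    return rescaled
```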
Preprocessing Complexity for Some Graph Problems Parameterized by Structural Parameters
Structural graph parameters play an important role in parameterized
complexity, including in kernelization. Notably, vertex cover, neighborhood
diversity, twin-cover, and modular-width have been studied extensively in the
last few years. However, there are many fundamental problems whose
preprocessing complexity is not fully understood under these parameters.
Indeed, the existence of polynomial kernels or polynomial Turing kernels for
famous problems such as Clique, Chromatic Number, and Steiner Tree has only
been established for a subset of structural parameters. In this work, we use
several techniques to obtain a complete preprocessing complexity landscape for
over a dozen fundamental algorithmic problems.
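To make one of these parameters concrete: the neighborhood diversity of a graph is the number of classes of vertices with identical neighborhoods once the pair itself is ignored. The small sketch below computes it; the quadratic grouping works because this relation is an equivalence.

```python
def neighborhood_diversity(adj):
    """Number of vertex 'types': u and v share a type when
    N(u) - {v} == N(v) - {u}.  adj: dict node -> set of neighbours."""
    types = []
    for v in adj:
        for rep in types:
            u = rep[0]                        # compare against a representative
            if adj[u] - {v} == adj[v] - {u}:
                rep.append(v)
                break
        else:
            types.append([v])                 # v starts a new type
    return len(types)

print(neighborhood_diversity({1: {2, 3}, 2: {1, 3}, 3: {1, 2}}))  # K3 -> 1
```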