A hybrid algorithm for the longest common transposition-invariant subsequence problem
The longest common transposition-invariant subsequence (LCTS) problem is a music information retrieval oriented variation of the classic LCS problem. There are only two known efficient approaches to calculating the length of the LCTS, one based on sparse dynamic programming and the other on bit-parallelism. In this work, we propose a hybrid algorithm that picks the better of the two algorithms for individual subproblems. Experiments on music (MIDI), with 32-bit and 64-bit implementations, show that the proposed algorithm outperforms the faster of the two component algorithms by a factor of 1.4–2.0, depending on sequence lengths. Similar, if not better, improvements can be observed for random data with Gaussian distribution. Also for uniformly random data, the hybrid algorithm is the winner if the alphabet is neither too small (at least 32 symbols) nor too large (up to 128 symbols). Part of the success of our scheme is attributed to a quite robust component selection heuristic.
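The two component methods named above are well known: sparse dynamic programming in the Hunt–Szymanski style, and a bit-parallel LCS-length computation. A minimal Python sketch of the hybrid idea might look as follows; the match-density threshold used to select a component is an illustrative assumption, not the tuned heuristic the paper describes, and the inputs are assumed to be integer pitch sequences (e.g. MIDI note numbers).

```python
from bisect import bisect_left
from collections import Counter, defaultdict

def lcs_sparse(a, b):
    """Hunt-Szymanski-style sparse LCS: fast when matching pairs are few."""
    occ = defaultdict(list)           # positions of each symbol in a
    for i, c in enumerate(a):
        occ[c].append(i)
    thresh = []                       # thresh[k] = least end index of a common subsequence of length k+1
    for c in b:
        for i in reversed(occ[c]):    # descending order keeps the updates consistent
            k = bisect_left(thresh, i)
            if k == len(thresh):
                thresh.append(i)
            else:
                thresh[k] = i
    return len(thresh)

def lcs_bitparallel(a, b):
    """Bit-parallel LCS length: one word operation batch per symbol of b."""
    m = len(a)
    mask = (1 << m) - 1
    pm = defaultdict(int)             # match bit-vector per symbol of a
    for i, c in enumerate(a):
        pm[c] |= 1 << i
    L = mask
    for c in b:
        M = pm[c]
        L = ((L + (L & M)) | (L & ~M)) & mask
    return m - bin(L).count("1")      # LCS length = number of zero bits

def lcts_hybrid(a, b, density_threshold=0.1):
    """LCTS: maximise LCS over all transpositions t of b, picking the
    cheaper component algorithm per transposition (illustrative heuristic)."""
    best = 0
    counts_a = Counter(a)
    for t in range(min(a) - max(b), max(a) - min(b) + 1):
        bt = [c + t for c in b]
        r = sum(counts_a[c] for c in bt)          # number of matching pairs
        if r < density_threshold * len(a) * len(b):
            best = max(best, lcs_sparse(a, bt))   # sparse DP wins on few matches
        else:
            best = max(best, lcs_bitparallel(a, bt))
    return best
```

Sparse DP wins when a transposition produces few matching pairs, while bit-parallelism wins when matches are dense, which is why a per-transposition choice can beat either algorithm run alone.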
New algorithms for exact and approximate text matching
This work presents the main results in the domain of text algorithms obtained in the Department of Applied Computer Science (Katedra Informatyki Stosowanej) in the years 2004–2009. The algorithms concern various exact and approximate string matching problems, including the compression-based scenario that has been intensively studied in recent years.
A Comparative Study for String Metrics and the Feasibility of Joining them as Combined Text Similarity Measures
This paper introduces an optimized Damerau–Levenshtein and Dice-coefficient measure using enumeration operations (ODADNEN) that provides a fast string similarity measure while maintaining result accuracy; searching for specific words within a large text is a hard job that takes considerable time and effort, and the string similarity measure plays a critical role in many searching problems. In this paper, different experiments were conducted to handle some spelling mistakes, and an enhanced algorithm for string similarity assessment was proposed. This algorithm combines a set of well-known algorithms with some improvements (e.g. the Dice coefficient was modified to deal with numbers instead of characters under certain conditions). These algorithms were adopted after a number of experimental tests of their suitability. The ODADNEN algorithm was tested using real data, and its performance was compared with the original similarity measures. The results indicated that the most convincing measure is the proposed hybrid one, which uses the Damerau–Levenshtein and Dice distances based on the n-grams of each word; it also requires less processing time than the standard algorithms. Furthermore, it provides efficient results for assessing the similarity between two words without the need to restrict the word length.
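As a rough illustration of joining the two measures (not the paper's ODADNEN implementation, whose enumeration operations and number-handling rules are its own), one can blend a normalised Damerau–Levenshtein similarity with a Dice coefficient over character bigrams; the weight w is an assumption:

```python
def damerau_levenshtein(s, t):
    """Optimal string alignment distance: Damerau-Levenshtein with
    adjacent transpositions, no substring edited twice."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if i > 1 and j > 1 and s[i - 1] == t[j - 2] and s[i - 2] == t[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

def dice_bigrams(s, t):
    """Dice coefficient over character bigrams (n-grams with n = 2)."""
    A = {s[i:i + 2] for i in range(len(s) - 1)}
    B = {t[i:i + 2] for i in range(len(t) - 1)}
    if not A and not B:
        return 1.0
    return 2 * len(A & B) / (len(A) + len(B))

def combined_similarity(s, t, w=0.5):
    """Weighted blend of the two measures; the weight w is a tunable assumption."""
    dl_sim = 1 - damerau_levenshtein(s, t) / max(len(s), len(t), 1)
    return w * dl_sim + (1 - w) * dice_bigrams(s, t)
```

For example, combined_similarity('receive', 'recieve') evaluates to about 0.68 with the default weight: the transposition costs a single edit while half of the bigrams are shared.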
An improved Levenshtein algorithm for spelling correction word candidate list generation
Candidate list generation in spelling correction is the process of finding words from a lexicon that are close to an incorrect word. The most widely used algorithm for generating a candidate list for an incorrect word is based on the Levenshtein distance. However, this algorithm takes too much time when there is a large number of spelling errors, because computing the Levenshtein distance involves creating an array and filling its cells by comparing the characters of the incorrect word with the characters of a word from the lexicon. Since most lexicons contain millions of words, these operations are repeated millions of times for each incorrect word to generate its candidate list. This dissertation improves the Levenshtein algorithm by designing an operational technique that is incorporated into it. The proposed technique reduces the algorithm's processing time without affecting its accuracy: it cuts down the operations required to compute the cell values in the first three rows and columns of the Levenshtein array. The improved algorithm was evaluated against the original one. Experimental results show that the proposed algorithm outperforms the Levenshtein algorithm in processing time by 36.45%, while the accuracy of both algorithms remains the same.
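For reference, the array computation the abstract describes is the textbook dynamic program below, wrapped in a simple candidate generator; the length-difference pre-filter is a common generic shortcut added here for illustration, not the thesis's operational technique, whose exact rules are not reproduced in the abstract.

```python
def levenshtein(s, t):
    """Textbook Levenshtein distance via a (|s|+1) x (|t|+1) array.
    Cell (i, j) holds the edit distance between s[:i] and t[:j]."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                               # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                               # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def candidates(word, lexicon, max_dist=2):
    """Candidate list: lexicon words within max_dist edits of word."""
    out = []
    for w in lexicon:
        if abs(len(w) - len(word)) > max_dist:    # cannot be within max_dist
            continue
        if levenshtein(word, w) <= max_dist:
            out.append(w)
    return out
```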
Orthogonal polynomial ensembles in probability theory
We survey a number of models from physics, statistical mechanics, probability theory and combinatorics, each of which is described in terms of an orthogonal polynomial ensemble. The most prominent example is the Hermite ensemble, the eigenvalue distribution of the Gaussian Unitary Ensemble (GUE); other well-known examples from random matrix theory include the Laguerre ensemble for the spectrum of Wishart matrices. In recent years, a number of further interesting models were found to lead to orthogonal polynomial ensembles, among which are the corner growth model, directed last passage percolation, the PNG droplet, non-colliding random processes, the length of the longest increasing subsequence of a random permutation, and others. Much attention has been paid to universal classes of asymptotic behaviors of these models in the limit of large particle numbers, in particular the spacings between the particles and the fluctuation behavior of the largest particle. Computer simulations suggest that the connections go even further and also comprise the zeros of the Riemann zeta function. The existing proofs require substantial technical machinery and heavy tools from various parts of mathematics, in particular complex analysis, combinatorics and variational analysis. Particularly in the last decade, a number of fine results have been achieved, but it is obvious that a comprehensive and thorough understanding of the matter is still lacking. Hence, it seems an appropriate time to provide a surveying text on this research area.
Comment: Published at http://dx.doi.org/10.1214/154957805100000177 in Probability Surveys (http://www.i-journals.org/ps/) by the Institute of Mathematical Statistics (http://www.imstat.org).
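For concreteness (a standard definition, added here for orientation rather than taken from the abstract): an orthogonal polynomial ensemble of $N$ particles with reference measure $\mu$ on $\mathbb{R}$ is the probability measure

$$\mathbb{P}(\mathrm{d}x_1,\dots,\mathrm{d}x_N) \;=\; \frac{1}{Z_N}\,\prod_{1\le i<j\le N}(x_j - x_i)^2 \,\prod_{i=1}^{N}\mu(\mathrm{d}x_i),$$

where $Z_N$ is a normalising constant. The Hermite ensemble corresponds to a Gaussian weight $\mu(\mathrm{d}x) = e^{-x^2}\,\mathrm{d}x$ (up to scaling conventions) and yields the GUE eigenvalue distribution.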
Machine Annotation of Traditional Irish Dance Music
The work presented in this thesis is validated in experiments using 130 real-world field recordings of traditional music from sessions, classes, concerts and commercial recordings. Test audio includes solo and ensemble playing on a variety of instruments recorded in real-world settings such as noisy public sessions. Results are reported using standard measures from the field of information retrieval (IR), including accuracy, error, precision and recall, and the system is compared to alternative approaches common in the content-based music information retrieval (CBMIR) literature.
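For readers outside IR, these are the standard contingency-table measures; a minimal sketch, where the tp/fp/fn/tn counts would come from comparing machine annotations against ground truth:

```python
def ir_measures(tp, fp, fn, tn):
    """Standard IR measures from true/false positive/negative counts."""
    precision = tp / (tp + fp)                 # fraction of reported events that are correct
    recall = tp / (tp + fn)                    # fraction of true events that are reported
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    error = 1 - accuracy
    return precision, recall, accuracy, error
```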
A Similarity Matrix for Irish Traditional Dance Music
It is estimated that there are between seven and ten thousand Irish traditional dance tunes in existence. As Irish musicians travelled the world they carried their repertoire in their memories and rarely recorded these pieces in writing. Because the music was passed down from generation to generation by ear, the names of these pieces and the melodies themselves were forgotten or changed over time. This has led to problems for musicians and archivists when identifying the names of traditional Irish tunes.
Almost all of this music is now available in ABC notation from online collections. An ABC file is a text file containing a transcription of one or more melodies, the tune title, musical key, time signature and other relevant details.
The principal aim of this project is to define a process by which Irish music can be compared using string distance algorithms. An online survey will then be conducted to assess whether human participants agree with the computer comparisons. Improvements will then be made to the string distance algorithms by taking music theory into account. Two other methods of assessing musical similarity, Breandán Breathnach's Melodic Indexing System and Parsons Code, will be computerised and integrated into a Combined Ranking System (CRS). A hypothesis will be formed based on the results and experiences of creating this system. This hypothesis will be tested with human participants and, if successful, used to achieve the final aim of the project: to construct a similarity matrix.
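A minimal sketch of the comparison stage, under simplifying assumptions: tunes are already reduced to plain pitch strings extracted from their ABC transcriptions, and Python's difflib ratio stands in for whichever string distance algorithm the project finally adopts.

```python
from difflib import SequenceMatcher

def similarity_matrix(tunes):
    """Pairwise similarity of melodies given as plain pitch strings
    (e.g. "GABcdedc..." extracted from ABC transcriptions)."""
    n = len(tunes)
    sim = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = SequenceMatcher(None, tunes[i], tunes[j]).ratio()
            sim[i][j] = sim[j][i] = s   # the measure is symmetric
    return sim
```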
Multivariate Fine-Grained Complexity of Longest Common Subsequence
We revisit the classic combinatorial pattern matching problem of finding a longest common subsequence (LCS). For strings $x$ and $y$ of length $n$, a textbook algorithm solves LCS in time $O(n^2)$, but although much effort has been spent, no $O(n^{2-\varepsilon})$-time algorithm is known. Recent work indeed shows that such an algorithm would refute the Strong Exponential Time Hypothesis (SETH) [Abboud, Backurs, Vassilevska Williams + Bringmann, Künnemann FOCS'15].
Despite the quadratic-time barrier, for over 40 years an enduring scientific interest continued to produce fast algorithms for LCS and its variations. Particular attention was put into identifying and exploiting input parameters that yield strongly subquadratic time algorithms for special cases of interest, e.g., differential file comparison. This line of research was successfully pursued until 1990, at which time significant improvements came to a halt. In this paper, using the lens of fine-grained complexity, our goal is to (1) justify the lack of further improvements and (2) determine whether some special cases of LCS admit faster algorithms than currently known.
To this end, we provide a systematic study of the multivariate complexity of LCS, taking into account all parameters previously discussed in the literature: the input size $n := \max\{|x|, |y|\}$, the length of the shorter string $m := \min\{|x|, |y|\}$, the length $L$ of an LCS of $x$ and $y$, the numbers of deletions $\delta := m - L$ and $\Delta := n - L$, the alphabet size $|\Sigma|$, as well as the numbers of matching pairs $M$ and dominant pairs $d$. For any class of instances defined by fixing each parameter individually to a polynomial in terms of the input size, we prove a SETH-based lower bound matching one of three known algorithms. Specifically, we determine the optimal running time for LCS under SETH as $(n + \min\{d, \delta\Delta, \delta m\})^{1 \pm o(1)}$.
[...]
Comment: Presented at SODA'18. Full version, 66 pages.
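To make the parameter list concrete, the sketch below computes $L$ with the textbook quadratic DP and derives $n$, $m$, $\delta$, $\Delta$ and $M$ for a pair of strings (dominant pairs $d$ are omitted, since computing them is more involved):

```python
from collections import Counter

def lcs_parameters(x, y):
    """Textbook O(n^2) LCS DP, plus the parameters n, m, L, delta, Delta, M."""
    if len(x) < len(y):
        x, y = y, x                              # ensure |x| >= |y|
    n, m = len(x), len(y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    L = dp[n][m]
    cx, cy = Counter(x), Counter(y)
    M = sum(cx[c] * cy[c] for c in cx)           # number of matching pairs
    return {"n": n, "m": m, "L": L,
            "delta": m - L, "Delta": n - L, "M": M}
```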
A general state-based temporal pattern recognition
Time-series and state-sequences are ubiquitous patterns in temporal logic and are widely used to present temporal data in data mining. Generally speaking, there are three known choices for the time primitive: points, intervals, or both points and intervals. In this thesis, a formal characterization of time-series and state-sequences is presented for both complete and incomplete situations, where a state-sequence is defined as a list of sequential data validated on the corresponding time-series. In addition, subsequence matching is addressed to associate the state-sequences, where non-temporal aspects as well as rich temporal aspects, including temporal order, temporal duration and temporal gap, are taken into account.
Firstly, based on typed point-based time-elements and time-series, a formal characterization of time-series and state-sequences is introduced for both complete and incomplete situations. A time-series is formalized as a tetrad (T, R, Tdur, Tgap), which denotes, respectively: the temporal order of time-elements; the temporal relationship between time-elements; the temporal duration of each time-element; and the temporal gap between each adjacent pair of time-elements.
Secondly, benefiting from this formal characterization, a general similarity measurement (GSM) that takes into account both non-temporal and rich temporal information, including temporal order as well as temporal duration and temporal gap, is introduced for subsequence matching. This measurement is general enough to subsume most of the popular existing measurements as special cases. In particular, a new notion of temporal common subsequence is proposed, and a new LCS-based algorithm named Optimal Temporal Common Subsequence (OTCS), which takes rich temporal information into account, is designed. The experimental results on 6 benchmark datasets demonstrate the effectiveness and robustness of GSM and its new case OTCS. Compared with binary-value distance measurements, GSM can distinguish between the distances caused by different states in the same operation; compared with real-penalty distance measurements, it can filter out noise that may push the similarity to abnormal levels.
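The flavour of an LCS-based measure with rich temporal information can be sketched as follows; the (state, duration, gap) encoding and the tolerance thresholds are illustrative assumptions, not the thesis's OTCS algorithm itself.

```python
def temporal_lcs(seq1, seq2, dur_tol=0.5, gap_tol=0.5):
    """LCS over (state, duration, gap) triples: two elements match only if
    the states agree and their durations and preceding gaps are within the
    given tolerances. Tolerances are illustrative assumptions."""
    def match(a, b):
        return (a[0] == b[0]
                and abs(a[1] - b[1]) <= dur_tol
                and abs(a[2] - b[2]) <= gap_tol)

    n, m = len(seq1), len(seq2)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if match(seq1[i - 1], seq2[j - 1]):
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m] / max(n, m, 1)    # normalised similarity in [0, 1]
```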
Finally, two case studies are investigated for temporal pattern recognition: basketball zone-defence detection and video copy detection.
In the case of basketball zone-defence detection, a computational technique and algorithm for detecting zone-defence patterns from basketball videos is introduced, where the Laplacian matrix-based algorithm is extended to account for the effects of zoom and of a single defender's translation in zone-defence graph matching, and a set of character-angle based features is proposed to describe the zone-defence graph. The experimental results show that the approach is useful in helping the coach of the defensive side check whether the players are keeping to the correct zone-defence strategy, as well as in detecting the strategy of the opponent side. It can describe the structural relationship between defender lines in basketball zone-defence, and it performs robustly in both simulation and real-life applications, especially when disturbances exist.
In the case of video copy detection, a framework for subsequence matching is introduced: a hybrid similarity framework, addressing both non-temporal and temporal relationships between state-sequences, in which the relationships are represented by bipartite graphs. The experimental results on real-life video databases demonstrate that the proposed framework is robust to state alignments with differing numbers and values of states, and to various reorderings, including inversion and crossover.
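One way to realise a bipartite-graph comparison between state-sequences is a minimum-cost assignment; the sketch below assumes SciPy's assignment solver and a simple blend of state disagreement and positional offset as the edge cost, not the thesis's exact framework.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def bipartite_similarity(seq1, seq2, alpha=0.5):
    """Match states of two sequences via minimum-cost bipartite assignment.
    The cost blends state disagreement with normalised positional (temporal)
    offset; the weighting alpha is an illustrative assumption."""
    n, m = len(seq1), len(seq2)
    cost = np.zeros((n, m))
    for i, a in enumerate(seq1):
        for j, b in enumerate(seq2):
            state_cost = 0.0 if a == b else 1.0
            temporal_cost = abs(i / n - j / m)   # order/offset penalty
            cost[i, j] = alpha * state_cost + (1 - alpha) * temporal_cost
    rows, cols = linear_sum_assignment(cost)     # optimal assignment
    return 1.0 - cost[rows, cols].mean()         # similarity in [0, 1]
```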