
    How to determine linear complexity and k-error linear complexity in some classes of linear recurring sequences

    Several fast algorithms for the determination of the linear complexity of $d$-periodic sequences over a finite field $\mathbb{F}_q$, i.e. sequences with characteristic polynomial $f(x) = x^d - 1$, have been proposed in the literature. In this contribution, fast algorithms for determining the linear complexity of binary sequences with characteristic polynomial $f(x) = (x-1)^d$ for an arbitrary positive integer $d$, and $f(x) = (x^2+x+1)^{2^v}$, are presented. The result is then utilized to establish a fast algorithm for determining the $k$-error linear complexity of binary sequences with characteristic polynomial $(x^2+x+1)^{2^v}$.
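
    A well-known point of reference for this line of work is the classical Games-Chan algorithm, which computes the linear complexity of a binary sequence whose period length is a power of two (characteristic polynomial $(x-1)^{2^n}$ over GF(2)) in linear time. The Python sketch below implements only that classical base case, not the paper's algorithms for arbitrary $d$ or for $(x^2+x+1)^{2^v}$.

```python
def games_chan(seq):
    """Linear complexity of one period of a binary sequence whose period
    length is a power of two, i.e. characteristic polynomial (x-1)^(2^n)
    over GF(2).  Classical Games-Chan algorithm, linear time in len(seq).

    `seq` is a list of 0/1 integers covering exactly one period.
    """
    n = len(seq)
    assert n > 0 and n & (n - 1) == 0, "period length must be a power of two"
    s, c = list(seq), 0
    while n > 1:
        half = n // 2
        left, right = s[:half], s[half:]
        diff = [a ^ b for a, b in zip(left, right)]
        if any(diff):
            c += half          # complexity gains `half`, recurse on the difference
            s = diff
        else:
            s = left           # halves agree, recurse on the left half
        n = half
    return c + s[0]

print(games_chan([1, 0, 1, 1]))   # 4, the maximum for a period-4 sequence
```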

    Short interval control for the cost estimate baseline of novel high value manufacturing products – a complexity based approach

    Novel high value manufacturing products by default lack the minimum a priori data needed for forecasting cost variance over time using regression-based techniques. Forecasts which attempt to achieve this therefore suffer from significant variance, which in turn places significant strain on budgetary assumptions and financial planning. The authors argue that for novel high value manufacturing products short interval control through continuous revision is necessary until the context of the baseline estimate stabilises sufficiently for extending the time intervals for revision. Case study data from the United States Department of Defense Scheduled Annual Summary Reports (1986-2013) is used to exemplify the approach. In this respect it must be remembered that the context of a baseline cost estimate is subject to a large number of assumptions regarding future plausible scenarios, the probability of such scenarios, and various requirements related to them. These assumptions change over time, and the degree of their change is indicated by the extent to which cost variance follows a forecast propagation curve that has been defined in advance. The presented approach determines the stability of this context by calculating the effort required to identify a propagation pattern for cost variance using the principles of Kolmogorov complexity. Only when that effort remains stable over a sufficient period of time can the revision periods for the cost estimate baseline be changed from continuous to discrete time intervals. The practical implication of the presented approach for novel high value manufacturing products is that attention is shifted from the bottom-up or parametric estimation activity to the continuous management of the context for that cost estimate itself. This in turn enables a faster and more sustainable stabilisation of the estimating context, which then creates the conditions for reducing cost estimate uncertainty in an actionable and timely manner.
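
    Kolmogorov complexity is uncomputable, so any practical measurement of the "effort required to identify a propagation pattern" has to work with a computable proxy. The sketch below is a hypothetical illustration of one such proxy, using compressed length as a stand-in for descriptive complexity and declaring the estimating context stable once that proxy stops moving across revisions; the function and parameter names are invented here and are not taken from the paper.

```python
import zlib

def complexity_proxy(series, precision=2):
    """Compressed length of a cost-variance series, used as a computable
    stand-in for its Kolmogorov complexity (which is uncomputable)."""
    text = ",".join(f"{x:.{precision}f}" for x in series).encode()
    return len(zlib.compress(text, 9))

def context_is_stable(revisions, window=6, tolerance=0.05):
    """Hypothetical stability test: the estimating context is treated as stable
    once the complexity proxy of the last `window` revisions (each a list of
    cost-variance figures) varies by less than `tolerance` in relative terms."""
    if len(revisions) < window:
        return False
    recent = [complexity_proxy(r) for r in revisions[-window:]]
    lo, hi = min(recent), max(recent)
    return (hi - lo) / hi <= tolerance
```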

    An approach for selecting cost estimation techniques for innovative high value manufacturing products

    This paper presents an approach for determining the most appropriate technique for cost estimation of innovative high value manufacturing products depending on the amount of prior data available. Case study data from the United States Scheduled Annual Summary Reports for the Joint Strike Fighter (1997-2010) is used to exemplify how, depending on the attributes of a priori data, certain techniques for cost estimation are more suitable than others. The data attribute focused on is the computational complexity involved in identifying whether or not there are patterns suited for propagation. Computational complexity is calculated based upon established mathematical principles for pattern recognition, which argue that at least 42 data sets are required for the application of standard regression analysis techniques. The paper proposes that below this threshold a generic dependency model and starting conditions should be used and iteratively adapted to the context. In the special case of having fewer than four datasets available, it is suggested that no contemporary cost estimating techniques other than analogy or expert opinion are currently applicable, and alternate techniques must be explored if more quantitative results are desired. By applying the mathematical principles of complexity groups, the paper argues that when fewer than four consecutive datasets are available the principles of topological data analysis should be applied. The preconditions are that the cost variance of at least three cost variance types for one to three discrete time intervals is available, so that it can be quantified based upon its geometrical attributes, visualised as an n-dimensional point cloud, and then evaluated based upon the symmetrical properties of the evolving shape. Further work is suggested to validate the provided decision trees in cost estimation practice.
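
    The thresholds stated in the abstract translate into a simple selection rule. The sketch below is only an illustration of that rule (42 or more datasets for standard regression, a generic dependency model below that, and analogy, expert opinion or topological data analysis below four datasets); the function name and return strings are invented here, and the paper's own decision trees carry additional preconditions.

```python
def select_cost_estimation_technique(n_datasets):
    """Illustrative decision rule for the thresholds given in the abstract.
    Function name and return values are invented for this sketch; the paper's
    decision trees attach further preconditions to each branch."""
    if n_datasets >= 42:
        return "standard regression analysis"
    if n_datasets >= 4:
        return "generic dependency model, iteratively adapted to the context"
    # Fewer than four datasets: analogy or expert opinion, or topological data
    # analysis if >= 3 cost-variance types over 1-3 intervals can be quantified.
    return "analogy / expert opinion, or topological data analysis"

print(select_cost_estimation_technique(10))   # generic dependency model, ...
```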

    Finding approximate palindromes in strings

    We introduce a novel definition of approximate palindromes in strings, and provide an algorithm to find all maximal approximate palindromes in a string with up to $k$ errors. Our definition is based on the usual edit operations of approximate pattern matching, and the algorithm we give, for a string of size $n$ on a fixed alphabet, runs in $O(k^2 n)$ time. We also discuss two implementation-related improvements to the algorithm, and demonstrate their efficacy in practice by means of both experiments and an average-case analysis.
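
    For intuition, the problem can also be attacked by brute force: test every substring against an edit-distance criterion. The sketch below uses an illustrative criterion (edit distance between a substring and its reversal at most $k$), which may differ in detail from the paper's definition, and it is far slower than the $O(k^2 n)$ algorithm described above.

```python
def edit_distance(a, b):
    """Standard Levenshtein distance by dynamic programming, O(|a|*|b|)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def is_k_approx_palindrome(w, k):
    """Illustrative criterion: w is a k-approximate palindrome if the edit
    distance between w and its reversal is at most k."""
    return edit_distance(w, w[::-1]) <= k

def approx_palindromic_substrings(s, k, min_len=2):
    """Brute-force enumeration of all (i, j) with s[i:j] a k-approximate
    palindrome; far slower than the paper's O(k^2 n) algorithm."""
    return [(i, j) for i in range(len(s))
                   for j in range(i + min_len, len(s) + 1)
                   if is_k_approx_palindrome(s[i:j], k)]

print(is_k_approx_palindrome("abca", 2))   # True: "abca" vs "acba" needs 2 edits
```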

    Subclasses of Presburger Arithmetic and the Weak EXP Hierarchy

    It is shown that for any fixed $i > 0$, the $\Sigma_{i+1}$-fragment of Presburger arithmetic, i.e., its restriction to $i+1$ quantifier alternations beginning with an existential quantifier, is complete for $\mathsf{\Sigma}^{\mathsf{EXP}}_{i}$, the $i$-th level of the weak EXP hierarchy, an analogue to the polynomial-time hierarchy residing between $\mathsf{NEXP}$ and $\mathsf{EXPSPACE}$. This result completes the computational complexity landscape for Presburger arithmetic, a line of research which dates back to the seminal work by Fischer & Rabin in 1974. Moreover, we apply some of the techniques developed in the proof of the lower bound in order to establish bounds on sets of naturals definable in the $\Sigma_1$-fragment of Presburger arithmetic: given a $\Sigma_1$-formula $\Phi(x)$, it is shown that the set of non-negative solutions is an ultimately periodic set whose period is at most doubly exponential and that this bound is tight. Comment: 10 pages, 2 figures.
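
    The notion of an ultimately periodic solution set can be made concrete with a toy $\Sigma_1$ example: for $\Phi(x) \equiv \exists y\, \exists z.\; x = 3y + 5z$ over the naturals, the solutions are $\{0, 3, 5, 6, 8, 9, 10, \dots\}$, which repeat with period 1 from the threshold 8 onwards. The snippet below enumerates solutions and checks this periodicity; it illustrates only the definition, not the paper's doubly-exponential bound.

```python
def phi_solutions(limit=60):
    """Non-negative solutions x <= limit of the Sigma_1 formula
    Phi(x) := exists y, z >= 0 . x = 3*y + 5*z   (a toy example)."""
    return {3 * y + 5 * z
            for y in range(limit // 3 + 1)
            for z in range(limit // 5 + 1)
            if 3 * y + 5 * z <= limit}

def ultimately_periodic(sol, threshold, period, limit=60):
    """Check that membership in `sol` repeats with `period` beyond `threshold`."""
    return all((x in sol) == ((x + period) in sol)
               for x in range(threshold, limit - period + 1))

sol = phi_solutions()
print(sorted(x for x in sol if x <= 12))                 # [0, 3, 5, 6, 8, 9, 10, 11, 12]
print(ultimately_periodic(sol, threshold=8, period=1))   # True: every x >= 8 is a solution
```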

    The k-mismatch problem revisited

    We revisit the complexity of one of the most basic problems in pattern matching. In the k-mismatch problem we must compute the Hamming distance between a pattern of length m and every m-length substring of a text of length n, as long as that Hamming distance is at most k. Where the Hamming distance is greater than k at some alignment of the pattern and text, we simply output "No". We study this problem in both the standard offline setting and also as a streaming problem. In the streaming k-mismatch problem the text arrives one symbol at a time and we must give an output before processing any future symbols. Our main results are as follows: 1) Our first result is a deterministic $O(nk^2\log k/m + n\,\text{polylog}\,m)$ time offline algorithm for k-mismatch on a text of length n. This is a factor of k improvement over the fastest previous result of this form from SODA 2000 by Amihood Amir et al. 2) We then give a randomised and online algorithm which runs in the same time complexity but requires only $O(k^2\,\text{polylog}\,m)$ space in total. 3) Next we give a randomised $(1+\epsilon)$-approximation algorithm for the streaming k-mismatch problem which uses $O(k^2\,\text{polylog}\,m/\epsilon^2)$ space and runs in $O(\text{polylog}\,m/\epsilon^2)$ worst-case time per arriving symbol. 4) Finally we combine our new results to derive a randomised $O(k^2\,\text{polylog}\,m)$ space algorithm for the streaming k-mismatch problem which runs in $O(\sqrt{k}\log k + \text{polylog}\,m)$ worst-case time per arriving symbol. This improves the best previous space complexity for streaming k-mismatch from FOCS 2009 by Benny Porat and Ely Porat by a factor of k. We also improve the time complexity of this previous result by an even greater factor to match the fastest known offline algorithm (up to logarithmic factors).
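
    As a baseline for comparison, the offline problem can be solved naively by scanning every alignment and abandoning it as soon as more than k mismatches are seen. The sketch below is only an illustration of the problem statement, not of any of the algorithms above.

```python
def k_mismatch(text, pattern, k):
    """Naive k-mismatch: for every alignment output the Hamming distance if it
    is at most k and "No" otherwise.  Worst-case O(n*m) time, far from the
    offline and streaming bounds quoted in the abstract."""
    m = len(pattern)
    out = []
    for i in range(len(text) - m + 1):
        mismatches = 0
        for a, b in zip(text[i:i + m], pattern):
            if a != b:
                mismatches += 1
                if mismatches > k:      # abandon the alignment once over budget
                    break
        out.append(mismatches if mismatches <= k else "No")
    return out

print(k_mismatch("abracadabra", "abr", 1))   # 0 at the two exact matches, "No" elsewhere
```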

    String Synchronizing Sets: Sublinear-Time BWT Construction and Optimal LCE Data Structure

    Burrows-Wheeler transform (BWT) is an invertible text transformation that, given a text $T$ of length $n$, permutes its symbols according to the lexicographic order of suffixes of $T$. BWT is one of the most heavily studied algorithms in data compression with numerous applications in indexing, sequence analysis, and bioinformatics. Its construction is a bottleneck in many scenarios, and settling the complexity of this task is one of the most important unsolved problems in sequence analysis that has remained open for 25 years. Given a binary string of length $n$, occupying $O(n/\log n)$ machine words, the BWT construction algorithm due to Hon et al. (SIAM J. Comput., 2009) runs in $O(n)$ time and $O(n/\log n)$ space. Recent advancements (Belazzougui, STOC 2014, and Munro et al., SODA 2017) focus on removing the alphabet-size dependency in the time complexity, but they still require $\Omega(n)$ time. In this paper, we propose the first algorithm that breaks the $O(n)$-time barrier for BWT construction. Given a binary string of length $n$, our procedure builds the Burrows-Wheeler transform in $O(n/\sqrt{\log n})$ time and $O(n/\log n)$ space. We complement this result with a conditional lower bound proving that any further progress in the time complexity of BWT construction would yield faster algorithms for the very well studied problem of counting inversions: it would improve the state-of-the-art $O(m\sqrt{\log m})$-time solution by Chan and Pătrașcu (SODA 2010). Our algorithm is based on a novel concept of string synchronizing sets, which is of independent interest. As one of the applications, we show that this technique lets us design a data structure of the optimal size $O(n/\log n)$ that answers Longest Common Extension queries (LCE queries) in $O(1)$ time and, furthermore, can be deterministically constructed in the optimal $O(n/\log n)$ time. Comment: Full version of a paper accepted to STOC 2019.
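
    For readers new to the transform, the object being constructed can be computed in a few lines by sorting all rotations of the text; this textbook construction is quadratic and serves only to show what the sublinear-time algorithm above produces.

```python
def bwt(text, sentinel="$"):
    """Textbook Burrows-Wheeler transform: sort all rotations of text+sentinel
    and read off the last column.  O(n^2 log n) time, shown only to make the
    object of the sublinear-time construction concrete."""
    s = text + sentinel                               # unique smallest end marker
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rotation[-1] for rotation in rotations)

print(bwt("banana"))   # annb$aa
```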

    The Effect of Musical Characteristics, Exposure, and Individual Difference Variables on String Student Musical Preference: Implications for Introducing Western Art Music

    This study explores the influences of various musical, environmental, and personal factors on string students' preferences for selections of Western Art Music. The purpose of this study was to provide insight into the formation of music preferences by young string students in order to allow teachers to introduce Western Art Music most effectively. Participants (n = 498) from northwest Arkansas public school string programs were given the String Student Music Preference Questionnaire (SSMPQ) developed by the author. Part One of the SSMPQ measured preference for six one-minute selections of Western Art Music by Beethoven, Berlioz, Mahler, Saint-Saëns, and Schoenberg. In Part Two, the researcher collected data on participants' age, gender, musical experience, social influence, and listening habits. It was determined that the musical examples with characteristics similar to popular music were most preferred. These characteristics included: fast tempo, steady rhythm, stable dynamics, identifiable instrumentation, and moderate complexity. Age, social influences, and listening habits did not significantly affect preference, while gender and the live attendance portion of the musical experience variables significantly influenced participant preference.