
    Testing of random matrices

    Let $n$ be a positive integer and let $X = [x_{ij}]_{1 \leq i, j \leq n}$ be an $n \times n$ matrix of independent random variables having the joint uniform distribution $\Pr\{x_{ij} = k\} = \frac{1}{n}$ for $1 \leq k \leq n$ $(1 \leq i, j \leq n)$. A realization $\mathcal{M} = [m_{ij}]$ of $X$ is called good if each of its rows and each of its columns contains a permutation of the numbers $1, 2, \ldots, n$. We present and analyse four typical algorithms which decide whether a given realization is good.
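
    As an illustration of the decision problem only (not one of the paper's four algorithms), a minimal Python sketch, assuming the realization is given as a list of $n$ rows of integers from $1$ to $n$:

        def is_good(M):
            """Return True iff every row and every column of M is a permutation of 1..n."""
            n = len(M)
            target = set(range(1, n + 1))
            rows_ok = all(set(row) == target for row in M)
            cols_ok = all({M[i][j] for i in range(n)} == target for j in range(n))
            return rows_ok and cols_ok

        # Example: a good 3x3 realization (a Latin square) and one with a repeated entry in a row.
        assert is_good([[1, 2, 3], [2, 3, 1], [3, 1, 2]])
        assert not is_good([[1, 1, 3], [2, 3, 1], [3, 1, 2]])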

    COORDINATION THROUGH DE BRUIJN SEQUENCES

    Let $\mu$ be a rational distribution over a finite alphabet, and let ( ) be an $n$-periodic sequence whose first $n$ elements are drawn i.i.d. according to $\mu$. We consider automata of bounded size that input and output at stage $t$. We prove the existence of a constant $C$ such that, whenever , with probability close to 1 there exists an automaton of size $m$ such that the empirical frequency of stages such that is close to 1. In particular, one can take , where and .
    Keywords: coordination, complexity, De Bruijn sequences, automata
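
    As background only (independent of the paper's coordination scheme), a standard Lyndon-word routine that generates a de Bruijn sequence, i.e. a cyclic sequence over a $k$-letter alphabet that contains every word of length $n$ exactly once:

        def de_bruijn(k, n):
            """Return a de Bruijn sequence B(k, n) as a list of symbols 0..k-1."""
            a = [0] * (k * n)
            sequence = []

            def db(t, p):
                if t > n:
                    if n % p == 0:
                        sequence.extend(a[1:p + 1])
                else:
                    a[t] = a[t - p]
                    db(t + 1, p)
                    for j in range(a[t - p] + 1, k):
                        a[t] = j
                        db(t + 1, t)

            db(1, 1)
            return sequence

        # Example: B(2, 3) has length 2**3 = 8 and, read cyclically, contains every 3-bit word once.
        print(de_bruijn(2, 3))  # [0, 0, 0, 1, 0, 1, 1, 1]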

    Extreme Scale De Novo Metagenome Assembly

    Metagenome assembly is the process of transforming a set of short, overlapping, and potentially erroneous DNA segments from environmental samples into an accurate representation of the underlying microbiomes' genomes. State-of-the-art tools require big shared-memory machines and cannot handle contemporary metagenome datasets that exceed terabytes in size. In this paper, we introduce the MetaHipMer pipeline, a high-quality and high-performance metagenome assembler that employs an iterative de Bruijn graph approach. MetaHipMer leverages a specialized scaffolding algorithm that produces long scaffolds and accommodates the idiosyncrasies of metagenomes. MetaHipMer is end-to-end parallelized using the Unified Parallel C language and therefore can run seamlessly on shared- and distributed-memory systems. Experimental results show that MetaHipMer matches or outperforms state-of-the-art tools in terms of accuracy. Moreover, MetaHipMer scales efficiently to large concurrencies and is able to assemble previously intractable grand-challenge metagenomes. We demonstrate the unprecedented capability of MetaHipMer by computing the first full assembly of the Twitchell Wetlands dataset, consisting of 7.5 billion reads with a total size of 2.6 TBytes.
    Comment: Accepted to SC1
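
    For orientation only, a toy sketch of the de Bruijn graph data structure that assemblers of this kind build on; it omits error correction, contig generation, scaffolding, and the UPC-based parallelism that MetaHipMer itself provides:

        from collections import defaultdict

        def de_bruijn_graph(reads, k):
            """Nodes are (k-1)-mers; each k-mer in a read adds an edge prefix -> suffix."""
            graph = defaultdict(list)
            for read in reads:
                for i in range(len(read) - k + 1):
                    kmer = read[i:i + k]
                    graph[kmer[:-1]].append(kmer[1:])
            return graph

        # Example: two overlapping reads and k = 4.
        g = de_bruijn_graph(["ACGTAC", "CGTACG"], 4)
        for node, successors in sorted(g.items()):
            print(node, "->", successors)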

    The asymptotical error of broadcast gossip averaging algorithms

    In problems of estimation and control which involve a network, efficient distributed computation of averages is a key issue. This paper presents theoretical and simulation results about the accumulation of errors during the computation of averages by means of iterative "broadcast gossip" algorithms. Using martingale theory, we prove that the expectation of the accumulated error can be bounded from above by a quantity which depends only on the mixing parameter of the algorithm and on a few properties of the network: its size, its maximum degree, and its spectral gap. Both analytical results and computer simulations show that in several network topologies of practical interest the accumulated error goes to zero as the size of the network grows large.
    Comment: 10 pages, 3 figures. Based on a draft submitted to IFACWC201
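
    A minimal simulation sketch of a broadcast gossip iteration on a ring (the update rule and parameter names follow the generic broadcast-gossip formulation, not necessarily the paper's exact notation); because the neighbours overwrite part of their state while the broadcaster keeps its own, the sum of the values is not conserved, which is the source of the accumulated error analysed in the paper:

        import random

        def broadcast_gossip_ring(n, gamma=0.5, steps=10_000, seed=0):
            """Run broadcast gossip on a ring of n nodes; return the final deviation from the true average."""
            rng = random.Random(seed)
            x = [rng.random() for _ in range(n)]
            true_average = sum(x) / n
            for _ in range(steps):
                i = rng.randrange(n)                   # node chosen to broadcast
                for j in ((i - 1) % n, (i + 1) % n):   # its two ring neighbours
                    x[j] = gamma * x[j] + (1 - gamma) * x[i]
            return sum(x) / n - true_average

        print(broadcast_gossip_ring(50))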

    Algorithmic complexity theory detects decreases in the relative efficiency of stock markets in the aftermath of the 2008 financial crisis

    The relative efficiency of financial markets can be evaluated using algorithmic complexity theory. Using this approach, we detect decreases in the efficiency rates of the major stocks listed on the Sao Paulo Stock Exchange in the aftermath of the 2008 financial crisis.
    Keywords: market efficiency, stock markets, econophysics
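
    Kolmogorov complexity is uncomputable, so studies of this kind work with approximations; one common proxy (an illustrative choice here, not necessarily the estimator used in the paper) is the compression ratio of a symbolised return series, where a poorly compressible series looks closer to random and hence more efficient:

        import random
        import zlib

        def relative_efficiency(returns):
            """Compression ratio of the sign-symbolised return series (higher = harder to compress)."""
            symbols = bytes(1 if r > 0 else 0 for r in returns)
            return len(zlib.compress(symbols, 9)) / len(symbols)

        # Example: a deterministic trend compresses far better than i.i.d. noise.
        random.seed(1)
        print(relative_efficiency([1.0] * 1000))                                # low ratio
        print(relative_efficiency([random.gauss(0, 1) for _ in range(1000)]))   # noticeably higher ratio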

    Decreasing Diagrams for Confluence and Commutation

    Like termination, confluence is a central property of rewrite systems. Unlike for termination, however, there exists no known complexity hierarchy for confluence. In this paper we investigate whether the decreasing diagrams technique can be used to obtain such a hierarchy. The decreasing diagrams technique is one of the strongest and most versatile methods for proving confluence of abstract rewrite systems. It is complete for countable systems, and it has many well-known confluence criteria as corollaries. So what makes decreasing diagrams so powerful? In contrast to other confluence techniques, decreasing diagrams employ a labelling of the steps with labels from a well-founded order in order to conclude confluence of the underlying unlabelled relation. Hence it is natural to ask how the size of the label set influences the strength of the technique. In particular, what class of abstract rewrite systems can be proven confluent using decreasing diagrams restricted to 1 label, 2 labels, 3 labels, and so on? Surprisingly, we find that two labels suffice for proving confluence for every abstract rewrite system having the cofinality property, thus in particular for every confluent, countable system. Secondly, we show that this result stands in sharp contrast to the situation for commutation of rewrite relations, where the hierarchy does not collapse. Thirdly, investigating the possibility of a confluence hierarchy, we determine the first-order (non-)definability of the notion of confluence and related properties, using techniques from finite model theory. We find that in particular Hanf's theorem is fruitful for elegant proofs of undefinability of properties of abstract rewrite systems.
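
    As a concrete (if naive) counterpart to the abstract setting, a brute-force Python check of confluence for a finite abstract rewrite system given as a list of steps (a, b) meaning a -> b; the decreasing-diagrams labelling technique studied in the paper is of course not captured by this:

        from collections import defaultdict

        def reachable(graph, start):
            """All objects reachable from start via ->* (including start itself)."""
            seen, stack = {start}, [start]
            while stack:
                x = stack.pop()
                for y in graph[x]:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
            return seen

        def is_confluent(steps):
            """True iff every peak b <-* a ->* c is joinable, i.e. b and c share a common reduct."""
            graph = defaultdict(set)
            objects = set()
            for a, b in steps:
                graph[a].add(b)
                objects.update((a, b))
            reach = {x: reachable(graph, x) for x in objects}
            return all(reach[b] & reach[c]
                       for a in objects
                       for b in reach[a]
                       for c in reach[a])

        # Example: the peak b <- a -> c is joinable only if c can also reach d.
        print(is_confluent([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]))  # True
        print(is_confluent([("a", "b"), ("a", "c"), ("b", "d")]))              # False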

    On a Problem of Steinhaus

    Let $N$ be a positive integer. A sequence $X=(x_1,x_2,\ldots,x_N)$ of points in the unit interval $[0,1)$ is piercing if $\{x_1,x_2,\ldots,x_n\}\cap\left[\frac{i}{n},\frac{i+1}{n}\right)\neq\emptyset$ holds for every $n=1,2,\ldots,N$ and every $i=0,1,\ldots,n-1$. In 1958 Steinhaus asked whether piercing sequences can be arbitrarily long. A negative answer was provided by Schinzel, who proved that any such sequence may have at most $74$ elements. This was later improved to the best possible value of $17$ by Warmus, and independently by Berlekamp and Graham. In this paper we study a more general variant of piercing sequences. Let $f(n)\geq n$ be an infinite nondecreasing sequence of positive integers. A sequence $X=(x_1,x_2,\ldots,x_{f(N)})$ is $f$-piercing if $\{x_1,x_2,\ldots,x_{f(n)}\}\cap\left[\frac{i}{n},\frac{i+1}{n}\right)\neq\emptyset$ holds for every $n=1,2,\ldots,N$ and every $i=0,1,\ldots,n-1$. A special case of $f(n)=n+d$, with $d$ a fixed nonnegative integer, was studied by Berlekamp and Graham. They noticed that for each $d\geq 0$, the maximum length of any $(n+d)$-piercing sequence is finite. Expressing this maximum length as $s(d)+d$, they obtained an exponential upper bound on the function $s(d)$, which was later improved to $s(d)=O(d^3)$ by Graham and Levy. Recently, Konyagin proved that $2d\leq s(d)<200d$ holds for all sufficiently big $d$. Using a different technique based on the Farey fractions and stick-breaking games, we prove here that the function $s(d)$ satisfies $\lfloor c_1 d\rfloor\leq s(d)\leq c_2 d+o(d)$, where $c_1=\frac{\ln 2}{1-\ln 2}\approx 2.25$ and $c_2=\frac{1+\ln 2}{1-\ln 2}\approx 5.52$. We also prove that there exists an infinite $f$-piercing sequence with $f(n)=\gamma n+o(n)$ if and only if $\gamma\geq\frac{1}{\ln 2}\approx 1.44$.
    Comment: 16 page
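
    For concreteness, a small Python check of the $f$-piercing property as defined above (with $f(n)=n$ this is Steinhaus's original notion); the function and variable names are illustrative:

        from math import floor

        def is_f_piercing(x, f, N):
            """Check that for every n <= N the prefix x[:f(n)] hits all n intervals [i/n, (i+1)/n)."""
            for n in range(1, N + 1):
                hit = {floor(point * n) for point in x[:f(n)]}
                if not set(range(n)).issubset(hit):
                    return False
            return True

        # Example with f(n) = n: this length-3 sequence is piercing; by Warmus and
        # Berlekamp-Graham, no piercing sequence can be longer than 17.
        print(is_f_piercing([0.0, 0.5, 0.7], lambda n: n, 3))  # True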

    New developments and applications in quantitative electron spectroscopic imaging of iron in human liver biopsies

    Reliable iron concentration data can be obtained by quantitative analyses of image sequences acquired by electron spectroscopic imaging. A number of requirements are formulated for the successful application of this recently developed type of in situ quantitative analysis, and a demonstration of the procedures is given. By applying the technique, it is established that there are no significant differences in the average iron loading of structures analysed in liver parenchymal cells of a patient with an iron storage disease before and after phlebotomy. This supports the hypothesis that iron unloading is an organelle-specific process. Measurement of the binary morphology, represented by the area and contour ratio of the iron-containing objects, revealed no information about differences between the objects. This finding contradicts the visual suggestion that ferritin clusters are more irregularly shaped than the other iron objects. Likewise, no differences in this sense could be found between the situations before and after phlebotomy. With respect to density appearance, objects with an inhomogeneous iron loading contain, on average, more iron. This observation corresponds well with the visual impression that more heavily loaded structures appear increasingly irregular.