3,834 research outputs found

    Network of Earthquakes and Recurrences Therein

    Full text link
    We quantify the correlation between earthquakes and use it to distinguish relevant, causally connected events. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski (2004). A network of earthquakes is constructed, time ordered and with links between the more correlated ones. Data pertaining to the California region are used in the study. Recurrences of earthquakes are identified by employing correlation thresholds to demarcate the most meaningful ones in each cluster. The distributions of recurrence lengths and recurrence times are then analyzed to extract information about the complex dynamics. We find that the unimodal feature of recurrence lengths helps to associate typical rupture lengths with earthquakes of different magnitudes. The out-degree of the network shows a hub structure rooted in the large-magnitude earthquakes. The in-degree distribution is seen to depend on the density of events in the neighborhood. Power laws are also obtained, with the recurrence-time distribution agreeing with the Omori law.
    Comment: 17 pages, 5 figures
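
    As a concrete illustration, the sketch below computes a Baiesi-Paczuski-style correlation and links each event to its most correlated predecessor. The constants (Gutenberg-Richter b-value, fractal dimension of epicentres) and the exact normalisation are illustrative assumptions, not values taken from the paper.

        import numpy as np

        # Assumed, illustrative constants (not the paper's fitted values).
        B_VALUE = 1.0   # Gutenberg-Richter b-value
        D_F = 1.6       # fractal dimension of epicentre locations

        def correlation(ev_i, ev_j):
            """Correlation c_ij = 1/n_ij between an earlier event i and a later
            event j; n_ij estimates how many events are expected in the
            space-time-magnitude window spanned by the pair, so a small n_ij
            marks a surprising, strongly correlated pair."""
            t_i, x_i, m_i = ev_i
            t_j, x_j, _ = ev_j
            dt = t_j - t_i                                    # elapsed time, > 0
            r = np.linalg.norm(np.asarray(x_j) - np.asarray(x_i))
            n_ij = dt * r**D_F * 10.0**(-B_VALUE * m_i)
            return 1.0 / n_ij

        def build_network(events):
            """Link each event (sorted by time) to its most correlated
            predecessor, yielding a time-ordered list of (i, j) edges."""
            return [(max(range(j), key=lambda i: correlation(events[i], events[j])), j)
                    for j in range(1, len(events))]

        # Toy usage: (time, (x, y), magnitude) triples sorted by time.
        events = [(0.0, (0.0, 0.0), 5.0), (1.0, (0.1, 0.0), 3.0), (2.0, (5.0, 5.0), 4.0)]
        print(build_network(events))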

    Minimizing inner product data dependencies in conjugate gradient iteration

    Get PDF
    The amount of concurrency available in conjugate gradient iteration is limited by the summations required in the inner product computations. The inner product of two vectors of length N requires time c log(N) if N or more processors are available. This paper describes an algebraic restructuring of the conjugate gradient algorithm which minimizes data dependencies due to inner product calculations. After an initial start-up, the new algorithm can perform a conjugate gradient iteration in time c log(log(N)).
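
    For reference, the textbook algorithm below marks the two inner products that serialize each iteration; this is the standard form, not the restructured variant the paper derives.

        import numpy as np

        def cg(A, b, tol=1e-10, max_iter=1000):
            """Textbook conjugate gradient: each iteration contains two inner
            products, and each is a global reduction (a synchronization point)
            that limits the available concurrency."""
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r                    # inner product 1: global reduction
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)     # inner product 2: global reduction
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r                # inner product 1 of next iteration
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        # Toy usage on a small symmetric positive definite system.
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(cg(A, b))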

    On the efficiency of reductions in ÎŒ-SIMD media extensions

    Get PDF
    Many important multimedia applications contain a significant fraction of reduction operations. Although multimedia applications are, in general, characterized by high amounts of data-level parallelism, reductions and accumulations are difficult to parallelize and tolerate increases in instruction latency poorly. This is especially significant for ÎŒ-SIMD extensions such as MMX or AltiVec. To overcome the problem of reductions in ÎŒ-SIMD ISAs, designers tend to include ever more complex instructions able to deal with the most common forms of reductions in multimedia. As the number of processor pipeline stages grows, the number of cycles needed to execute these multimedia instructions increases with every processor generation, severely compromising performance. The paper presents an in-depth discussion of how reductions/accumulations are performed in current ÎŒ-SIMD architectures and evaluates the performance trade-offs for near-future, highly aggressive superscalar processors with three different styles of ÎŒ-SIMD extension. We compare an MMX-like alternative to an MDMX-like extension that has packed accumulators to attack the reduction problem, and also to MOM, a matrix register ISA. We show that while packed accumulators present several advantages, they introduce artificial recurrences that severely degrade performance for processors with a large number of registers and long-latency operations. On the other hand, the paper demonstrates that longer SIMD media extensions such as MOM can take great advantage of accumulators by exploiting the associative parallelism implicit in reductions.
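
    The sketch below illustrates, in scalar Python as a stand-in for ÎŒ-SIMD code, why a reduction carries a serial recurrence and how associativity breaks it; the lane count and data are arbitrary choices for illustration.

        import numpy as np

        def serial_sum(x):
            """One accumulator: each add depends on the previous one, so the
            loop cannot be pipelined past the add latency."""
            acc = 0.0
            for v in x:
                acc = acc + v          # loop-carried dependence on acc
            return acc

        def partial_sums(x, lanes=4):
            """Several independent accumulators (the role SIMD lanes or packed
            accumulators play in hardware): the chains run independently, and
            only the short horizontal step at the end is serial."""
            acc = [0.0] * lanes
            for i, v in enumerate(x):
                acc[i % lanes] += v    # recurrence only within each lane
            return sum(acc)            # final horizontal reduction

        data = np.random.rand(1024)
        assert np.isclose(serial_sum(data), partial_sums(data))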

    Semi-classical Orthogonal Polynomial Systems on Non-uniform Lattices, Deformations of the Askey Table and Analogs of Isomonodromy

    Full text link
    A $\mathbb{D}$-semi-classical weight is one which satisfies a particular linear, first-order homogeneous equation in a divided-difference operator $\mathbb{D}$. It is known that the system of polynomials orthogonal with respect to this weight, together with the associated functions, satisfies a linear, first-order homogeneous matrix equation in the divided-difference operator, termed the spectral equation. Attached to the spectral equation is a structure comprising a number of relations, such as those arising from compatibility with the three-term recurrence relation. Here this structure is elucidated in the general case of quadratic lattices. The simplest examples of $\mathbb{D}$-semi-classical orthogonal polynomial systems are precisely those in the Askey table of hypergeometric and basic hypergeometric orthogonal polynomials. However, within the $\mathbb{D}$-semi-classical class it is entirely natural to define a generalisation of the Askey table weights which involves a deformation with respect to new deformation variables. We completely construct the analogous structures arising from such deformations and their relations with the other elements of the theory. As an example we treat the first non-trivial deformation of the Askey-Wilson orthogonal polynomial system defined by the $q$-quadratic divided-difference operator, the Askey-Wilson operator, and derive the coupled first-order divided-difference equations characterising its evolution in the deformation variable. We show that this system is a member of a sequence of classical solutions to the $E^{(1)}_7$ $q$-Painlevé system.
    Comment: Submitted to Duke Mathematical Journal on 5th April 201
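
    For orientation, a generic Pearson-type form of the defining relation is sketched below; the precise normalisation and lattice conventions are the paper's, and the form shown is an assumption based on the standard statement for divided-difference calculi.

        % Sketch of the defining relation for a \mathbb{D}-semi-classical
        % weight w: W and V are polynomials, and \mathbb{M} denotes the
        % averaging operator companion to \mathbb{D} on the lattice.
        \begin{equation*}
          W(x)\, \mathbb{D}w(x) = V(x)\, \mathbb{M}w(x)
        \end{equation*}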

    In search of lost introns

    Full text link
    Many fundamental questions concerning the emergence and subsequent evolution of eukaryotic exon-intron organization are still unsettled. Genome-scale comparative studies, which can shed light on crucial aspects of eukaryotic evolution, require adequate computational tools. We describe novel computational methods for studying spliceosomal intron evolution. Our goal is to give a reliable characterization of the dynamics of intron evolution. Our algorithmic innovations address the identification of orthologous introns and the likelihood-based analysis of intron data. We discuss a compression method for the evaluation of the likelihood function, which is noteworthy for phylogenetic likelihood problems in general. We prove that after $O(nL)$ preprocessing time, subsequent evaluations take $O(nL/\log L)$ time almost surely in the Yule-Harding random model of $n$-taxon phylogenies, where $L$ is the input sequence length. We illustrate the practicality of our methods by compiling and analyzing a data set involving 18 eukaryotes, more than in any other study to date. The study yields the surprising result that ancestral eukaryotes were fairly intron-rich; for example, the bilaterian ancestor is estimated to have had more than 90% as many introns as vertebrates do now.
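
    The simplest instance of the compression idea is sketched below: identical alignment columns yield identical per-column likelihoods, so each distinct pattern is evaluated once and weighted by its multiplicity. The paper's stronger bound relies on a more refined subtree-level compression; the names and toy scoring function here are illustrative assumptions.

        from collections import Counter

        def compressed_log_likelihood(columns, column_loglik):
            """columns: iterable of alignment columns (tuples of states, one
            entry per taxon). column_loglik: function giving the log-likelihood
            of one column under the model (assumed supplied by the caller)."""
            counts = Counter(columns)
            # Evaluate each distinct pattern once, weighted by its count.
            return sum(mult * column_loglik(pattern)
                       for pattern, mult in counts.items())

        # Toy usage with a fabricated, clearly hypothetical scoring function.
        def toy_loglik(col):
            return -float(sum(c == "1" for c in col))

        toy_columns = [("0", "1", "1"), ("0", "1", "1"), ("1", "1", "0")]
        print(compressed_log_likelihood(toy_columns, toy_loglik))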

    Modulo scheduling with integrated register spilling for clustered VLIW architectures

    Get PDF
    Clustering is a technique to decentralize the design of future wide-issue VLIW cores and enable them to meet technology constraints in terms of cycle time, area and power dissipation. In a clustered design, registers and functional units are grouped in clusters, so new instructions are needed to move data between them. New aggressive instruction scheduling techniques are required to minimize the negative effects of resource clustering and of the delays in moving data around. In this paper we present a novel software pipelining technique that performs instruction scheduling with reduced register requirements, register allocation, register spilling and inter-cluster communication in a single step. The algorithm uses limited backtracking to reconsider previously taken decisions; this backtracking provides additional possibilities for obtaining high-throughput schedules with low spill-code requirements on clustered architectures. We show that the proposed approach outperforms previously proposed techniques and that it scales well regardless of the number of clusters, the number of communication buses and the communication latency. The paper also includes an exploration of some parameters in the design of future clustered VLIW cores.
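
    The core data structure in any modulo scheduler is the modulo reservation table (MRT), sketched below in a deliberately minimal form. The single-resource model, the placement order and the restart-on-failure loop are illustrative assumptions; the paper's algorithm additionally integrates register allocation, spilling and inter-cluster communication, and backtracks at a finer grain.

        def modulo_schedule(ops, ii, num_units):
            """Greedily place each op at the earliest cycle whose slot modulo
            ii still has a free functional unit; return None if this simple
            placement order fails at the given initiation interval ii."""
            mrt = [0] * ii                   # units busy in each modulo slot
            schedule = {}
            for op, earliest in ops:
                for cycle in range(earliest, earliest + ii):
                    if mrt[cycle % ii] < num_units:
                        mrt[cycle % ii] += 1
                        schedule[op] = cycle
                        break
                else:
                    return None              # resource conflict at this II
            return schedule

        # Toy usage: (name, earliest cycle) pairs; grow II until feasible,
        # the crudest possible form of backtracking.
        ops = [("a", 0), ("b", 0), ("c", 1), ("d", 2)]
        ii = 2
        while (sched := modulo_schedule(ops, ii, num_units=1)) is None:
            ii += 1
        print(ii, sched)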

    Moddicom: a Complete and Easily Accessible Library for Prognostic Evaluations Relying on Image Features

    Get PDF
    Decision Support Systems (DSSs) are increasingly exploited in the area of prognostic evaluations. For predicting the effect of therapies on patients, the trend is now to use image features, i.e. information that can be computed automatically from medical images. The application of DSSs as predictive tools is particularly suitable for cancer treatment, given the peculiarities of the disease (which is highly localised and leads to significant social costs) and the large number of images that are available for each patient. Tools exist at the state of the art that allow image features to be handled for prognostic evaluations, but they are not designed for medical experts: they require a strong engineering or computer science background, since they do not integrate all the required functions, such as image retrieval and storage. In this paper we fill this gap by proposing Moddicom, a complete, user-friendly library specifically designed to be used by physicians. A preliminary experimental analysis, performed by a medical expert who used the tool, demonstrates the efficiency and effectiveness of Moddicom.
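
    As a generic illustration of the kind of first-order image features such tools compute (this is not Moddicom's own API), the sketch below derives a few statistics from a region of interest of an intensity image; the bin count and feature set are arbitrary choices.

        import numpy as np

        def first_order_features(image, roi_mask):
            """image: 2-D intensity array; roi_mask: boolean array of the same
            shape selecting the region of interest (ROI)."""
            voxels = image[roi_mask].astype(float)
            counts, _ = np.histogram(voxels, bins=64)
            p = counts[counts > 0] / counts.sum()   # histogram probabilities
            mu, sigma = voxels.mean(), voxels.std()
            return {
                "mean": mu,
                "std": sigma,
                "skewness": ((voxels - mu) ** 3).mean() / sigma**3,
                "entropy": -(p * np.log2(p)).sum(),  # Shannon entropy of histogram
            }

        # Toy usage on synthetic data.
        img = np.random.rand(64, 64)
        mask = np.zeros_like(img, dtype=bool)
        mask[16:48, 16:48] = True
        print(first_order_features(img, mask))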
    • 

    corecore