Developing and applying heterogeneous phylogenetic models with XRate
Modeling sequence evolution on phylogenetic trees is a useful technique in
computational biology. Especially powerful are models which take account of the
heterogeneous nature of sequence evolution according to the "grammar" of the
encoded gene features. However, beyond a modest level of model complexity,
manual coding of models becomes prohibitively labor-intensive. We demonstrate,
via a set of case studies, the new built-in model-prototyping capabilities of
XRate (macros and Scheme extensions). These features allow rapid implementation
of phylogenetic models which would have previously been far more
labor-intensive. XRate's new capabilities for lineage-specific models,
ancestral sequence reconstruction, and improved annotation output are also
discussed. XRate's flexible model-specification capabilities and computational
efficiency make it well-suited to developing and prototyping phylogenetic
grammar models. XRate is available as part of the DART software package:
http://biowiki.org/DART
Comment: 34 pages, 3 figures, glossary of XRate model terminology
Efficient calculation of molecular integrals over London atomic orbitals
The use of London atomic orbitals (LAOs) in a non-perturbative manner enables the determination of gauge-origin invariant energies and properties for molecular species in arbitrarily strong magnetic fields. Central to the efficient implementation of such calculations for molecular systems is the evaluation of molecular integrals, particularly the electron repulsion integrals (ERIs). We present an implementation of several different algorithms for the evaluation of ERIs over Gaussian-type LAOs at arbitrary magnetic field strengths. The efficiency of generalized McMurchie-Davidson (MD), Head-Gordon-Pople (HGP) and Rys quadrature schemes is compared. For the Rys quadrature implementation, we avoid the use of high-precision arithmetic and interpolation schemes in the computation of the quadrature roots and weights, enabling the seamless application of this algorithm to a wide range of magnetic fields. The efficiency of each generalized algorithm is compared by numerical application, classifying the ERIs according to their total angular momenta and evaluating their performance for primitive and contracted basis sets. In common with zero-field integral evaluation, no single algorithm is optimal for all angular momenta; a simple mixed scheme is therefore put forward, which selects the most efficient approach to calculate the ERIs for each shell quartet. The mixed approach is significantly more efficient than the exclusive use of any individual algorithm.
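A common building block behind Gaussian integral schemes such as MD and HGP at zero field is the Gaussian product theorem: the product of two Gaussians is a single Gaussian at a weighted midpoint. As a toy illustration only (this is not the paper's LAO code; the function names are ours), the closed-form 1D s-type overlap can be checked against brute-force quadrature:

```python
import math

def overlap_1d(a, A, b, B):
    """Closed-form overlap of the 1D s-type Gaussians exp(-a (x-A)^2) and
    exp(-b (x-B)^2), via the Gaussian product theorem."""
    p = a + b         # combined exponent
    mu = a * b / p    # reduced exponent
    return math.sqrt(math.pi / p) * math.exp(-mu * (A - B) ** 2)

def overlap_numeric(a, A, b, B, lo=-20.0, hi=20.0, n=200000):
    """Trapezoidal quadrature of the same integral, for verification."""
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.exp(-a * (x - A) ** 2 - b * (x - B) ** 2)
    return s * h
```

The LAO generalization replaces the real exponents with complex, field-dependent ones; the zero-field check above is the sanity baseline such generalized code must reproduce.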
High-precision computation of uniform asymptotic expansions for special functions
In this dissertation, we investigate new methods to obtain uniform asymptotic expansions for the numerical evaluation of special functions to high precision. We first present the fundamental theoretical and computational aspects required for the development and, ultimately, the implementation of such methods. Applying some of these methods, we obtain efficient new convergent and uniform expansions for numerically evaluating the confluent hypergeometric functions and the Lerch transcendent at high precision. In addition, we investigate a new computational scheme for the generalized exponential integral, obtaining one of the fastest and most robust implementations in double-precision floating-point arithmetic.
In this work, we aim to combine new developments in asymptotic analysis with fast and effective open-source implementations. These implementations are comparable to, and often faster than, current open-source and commercial state-of-the-art software for the evaluation of special functions.
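The exponential integral E1 (the n = 1 case of the generalized exponential integral studied here) admits the classical divergent asymptotic expansion E1(x) ~ (e^-x / x) * sum_k (-1)^k k! / x^k. A minimal sketch with optimal truncation, checked against direct quadrature (illustrative only; the dissertation's scheme is more sophisticated):

```python
import math

def e1_asymptotic(x):
    """E1(x) via its divergent asymptotic series, truncated at the smallest
    term (optimal truncation); accurate only for moderately large x."""
    s = 0.0
    term = 1.0  # (-1)^k k! / x^k at k = 0
    k = 0
    while True:
        s += term
        nxt = term * (-(k + 1) / x)
        if abs(nxt) >= abs(term):
            break  # terms stopped shrinking: the series is diverging
        term, k = nxt, k + 1
    return math.exp(-x) / x * s

def e1_numeric(x, upper=60.0, n=200000):
    """Reference value: trapezoidal quadrature of E1(x) = int_x^inf e^-t/t dt,
    truncated at t = x + upper (the remaining tail is negligible)."""
    h = upper / n
    f = lambda t: math.exp(-t) / t
    s = 0.5 * (f(x) + f(x + upper))
    for i in range(1, n):
        s += f(x + i * h)
    return s * h
```

At x = 10 the optimally truncated series is accurate to roughly the size of its smallest term, about 10!/10^10 ≈ 4e-4 in relative terms, which illustrates why uniform and convergent expansions, as pursued in the thesis, are needed for genuinely high precision.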
Simplified Energy Landscape for Modularity Using Total Variation
Networks capture pairwise interactions between entities and are frequently
used in applications such as social networks, food networks, and protein
interaction networks, to name a few. Communities, cohesive groups of nodes,
often form in these applications, and identifying them gives insight into the
overall organization of the network. One common quality function used to
identify community structure is modularity. In Hu et al. [SIAM J. App. Math.,
73(6), 2013], it was shown that modularity optimization is equivalent to
minimizing a particular nonconvex total variation (TV) based functional over a
discrete domain. They solve this problem, assuming the number of communities is
known, using a Merriman-Bence-Osher (MBO) scheme.
We show that modularity optimization is equivalent to minimizing a convex
TV-based functional over a discrete domain, again, assuming the number of
communities is known. Furthermore, we show that modularity has no convex
relaxation satisfying certain natural conditions. We therefore find a
manageable non-convex approximation using a Ginzburg-Landau functional, which
provably converges to the correct energy in the limit of a certain parameter.
We then derive an MBO algorithm with fewer hand-tuned parameters than in Hu et
al. and which is 7 times faster at solving the associated diffusion equation
due to the fact that the underlying discretization is unconditionally stable.
Our numerical tests include a hyperspectral video whose associated graph has
2.9x10^7 edges, which is roughly 37 times larger than was handled in the paper
of Hu et al.
Comment: 25 pages, 3 figures, 3 tables, submitted to SIAM J. App. Math.
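For concreteness, the modularity objective being optimized can be computed directly on a small graph. A stdlib-only sketch of the Newman-Girvan definition (illustrative; this is not the authors' TV/MBO code, and the function name is ours):

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman-Girvan modularity Q of a node partition for a simple
    undirected graph:
    Q = (fraction of intra-community edges)
        - sum over communities c of (sum of degrees in c / 2m)^2."""
    m = len(edges)
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    intra = sum(1 for u, v in edges if community[u] == community[v])
    degsum = defaultdict(int)
    for node, d in deg.items():
        degsum[community[node]] += d
    return intra / m - sum((s / (2 * m)) ** 2 for s in degsum.values())
```

For two triangles joined by a single edge, splitting at the bridge gives Q = 5/14, while the trivial one-community partition gives Q = 0, which is why optimization over partitions is needed.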
25 Years of Self-Organized Criticality: Solar and Astrophysics
Shortly after the seminal paper {\sl "Self-Organized Criticality: An
explanation of 1/f noise"} by Bak, Tang, and Wiesenfeld (1987), the idea has
been applied to solar physics, in {\sl "Avalanches and the Distribution of
Solar Flares"} by Lu and Hamilton (1991). In the following years, an inspiring
cross-fertilization from complexity theory to solar and astrophysics took
place, where the SOC concept was initially applied to solar flares, stellar
flares, and magnetospheric substorms, and later extended to the radiation belt,
the heliosphere, lunar craters, the asteroid belt, the Saturn ring, pulsar
glitches, soft X-ray repeaters, blazars, black-hole objects, cosmic rays, and
boson clouds. The application of SOC concepts has been performed by numerical
cellular automaton simulations, by analytical calculations of statistical
(powerlaw-like) distributions based on physical scaling laws, and by
observational tests of theoretically predicted size distributions and waiting
time distributions. Attempts have been undertaken to import physical models
into the numerical SOC toy models, such as the discretization of
magneto-hydrodynamics (MHD) processes. The novel applications stimulated also
vigorous debates about the discrimination between SOC models, SOC-like, and
non-SOC processes, such as phase transitions, turbulence, random-walk
diffusion, percolation, branching processes, network theory, chaos theory,
fractality, multi-scale, and other complexity phenomena. We review SOC studies
from the last 25 years and highlight new trends, open questions, and future
challenges, as discussed during two recent ISSI workshops on this theme.
Comment: 139 pages, 28 figures, review based on ISSI workshops "Self-Organized Criticality and Turbulence" (2012, 2013, Bern, Switzerland)
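The cellular automaton simulations the review refers to descend from the original Bak-Tang-Wiesenfeld sandpile. A compact sketch of that automaton (illustrative only; not any specific model from the review):

```python
import random

def btw_sandpile(size, drops, seed=0):
    """Bak-Tang-Wiesenfeld sandpile on a size x size grid with open
    boundaries. Grains are dropped at random sites; any site holding 4 or
    more grains topples, sending one grain to each of its four neighbours
    (grains falling off the edge are lost). Returns the final grid and the
    avalanche size (number of topplings) triggered by each drop."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(drops):
        i, j = rng.randrange(size), rng.randrange(size)
        grid[i][j] += 1
        stack = [(i, j)] if grid[i][j] >= 4 else []
        topples = 0
        while stack:
            x, y = stack.pop()
            if grid[x][y] < 4:
                continue  # already relaxed by an earlier topple
            grid[x][y] -= 4
            topples += 1
            if grid[x][y] >= 4:  # can remain unstable after one topple
                stack.append((x, y))
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < size and 0 <= ny < size:
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= 4:
                        stack.append((nx, ny))
        sizes.append(topples)
    return grid, sizes
```

In the self-organized critical state, the recorded avalanche sizes follow the power-law-like distributions whose observational tests the review surveys.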
Gamma-based clustering via ordered means with application to gene-expression analysis
Discrete mixture models provide a well-known basis for effective clustering
algorithms, although technical challenges have limited their scope. In the
context of gene-expression data analysis, a model is presented that mixes over
a finite catalog of structures, each one representing equality and inequality
constraints among latent expected values. Computations depend on the
probability that independent gamma-distributed variables attain each of their
possible orderings. Each ordering event is equivalent to an event in
independent negative-binomial random variables, and this finding guides a
dynamic-programming calculation. The structuring of mixture-model components
according to constraints among latent means leads to strict concavity of the
mixture log likelihood. In addition to its beneficial numerical properties, the
clustering method shows promising results in an empirical study.
Comment: Published at http://dx.doi.org/10.1214/10-AOS805 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
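The central quantity above, the probability that independent gamma variables attain a given ordering, can at least be estimated by simulation. A Monte Carlo sketch (ours, for intuition only; the paper computes these probabilities exactly via the negative-binomial equivalence and dynamic programming):

```python
import random

def ordering_prob_mc(shapes, n=20000, seed=1):
    """Monte Carlo estimate of P(X_1 < X_2 < ... < X_k) for independent
    X_i ~ Gamma(shape_i, scale=1). When all shapes are equal, exchangeability
    makes each of the k! orderings equally likely, with probability 1/k!."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        xs = [rng.gammavariate(a, 1.0) for a in shapes]
        if all(xs[i] < xs[i + 1] for i in range(len(xs) - 1)):
            hits += 1
    return hits / n
```

The equal-shape case (probability 1/k!) gives a quick correctness check; unequal shapes shift mass toward orderings that agree with the ordering of the shape parameters.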
Development and Implementation of Fully 3D Statistical Image Reconstruction Algorithms for Helical CT and Half-Ring PET Insert System
X-ray computed tomography (CT) and positron emission tomography (PET) have become widely used imaging modalities for screening, diagnosis, and image-guided treatment planning. Along with the increased clinical use come increased demands for high image quality with reduced ionizing radiation dose to the patient. Despite their high computational cost, statistical iterative reconstruction algorithms are known to reconstruct high-quality images from noisy tomographic datasets. The overall goal of this work is to design statistical reconstruction software for clinical x-ray CT scanners, and for a novel PET system that utilizes high-resolution detectors within the field of view of a whole-body PET scanner. The complex choices involved in the development and implementation of image reconstruction algorithms are fundamentally linked to the ways in which the data are acquired, and they require detailed knowledge of the various sources of signal degradation. Both of the imaging modalities investigated in this work have their own set of challenges. However, by utilizing an underlying statistical model for the measured data, we are able to use a common framework for this class of tomographic problems. We first present the details of a new fully 3D regularized statistical reconstruction algorithm for multislice helical CT. To reduce the computation time, the algorithm was carefully parallelized by identifying and taking advantage of the specific symmetry found in helical CT. Some basic image quality measures were evaluated using measured phantom and clinical datasets, and they indicate that our algorithm achieves comparable or superior performance over the fast analytical methods considered in this work. Next, we present our fully 3D reconstruction efforts for a high-resolution half-ring PET insert. We found that this unusual geometry requires extensive redevelopment of existing reconstruction methods in PET.
We redesigned the major components of the data modeling process and incorporated them into our reconstruction algorithms. The algorithms were tested using simulated Monte Carlo data and phantom data acquired by a PET insert prototype system. Overall, we have developed new, computationally efficient methods to perform fully 3D statistical reconstructions on clinically sized datasets.
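As a concrete instance of the statistical iterative reconstruction the work builds on, the classical ML-EM update for emission data can be sketched on a toy system (illustrative only; the dissertation's algorithms are regularized, fully 3D, and far more elaborate):

```python
def mlem(A, y, iters=200):
    """Maximum-likelihood expectation-maximization (ML-EM) for the emission
    model y ~ Poisson(A x), with A a dense system matrix given as rows.
    Update: x_j <- (x_j / s_j) * sum_i A_ij * y_i / (A x)_i,
    where s_j = sum_i A_ij is the sensitivity of voxel j."""
    m, n = len(A), len(A[0])
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]  # sensitivity image
    x = [1.0] * n  # uniform positive start (ML-EM preserves positivity)
    for _ in range(iters):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]   # forward projection
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(m)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]  # back-projection
        x = [x[j] * back[j] / sens[j] if sens[j] > 0 else 0.0 for j in range(n)]
    return x
```

On a consistent toy system the iterates converge to the exact emission image; with real noisy data, regularization of the kind developed in this work is what keeps the reconstruction from fitting the noise.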