Summing curious, slowly convergent, harmonic subseries
The harmonic series diverges. But if we delete from the harmonic series all terms whose denominators contain some given string of digits, such as "9", "42", or "314159", then the sum of the remaining terms converges. These series converge far too slowly to compute their sums directly. We describe an algorithm to compute these and related sums to high precision. For example, the sum of the series whose denominators contain no "314159" is approximately 2302582.33386. We explain why this sum is so close to 10^6 log 10 by developing asymptotic estimates for sums that omit strings of length n, as n approaches infinity.
The first author is supported by a Rhodes Scholarship.
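To make the definition concrete, here is a minimal brute-force sketch (illustrative only, not the paper's algorithm) that sums the subseries with the digit "9" excluded. The crawl toward the known limit of roughly 22.92068 shows exactly why direct summation is hopeless and a dedicated high-precision method is needed.

```python
def partial_sum(forbidden: str, n_max: int) -> float:
    """Sum of 1/n for n = 1..n_max whose decimal digits avoid `forbidden`."""
    return sum(1.0 / n for n in range(1, n_max + 1) if forbidden not in str(n))

# The true limit of the no-"9" series is ~22.92068; even a million terms
# fall far short, illustrating the abstract's point about slow convergence.
for n_max in (10**4, 10**5, 10**6):
    print(f"n <= {n_max:>8}: {partial_sum('9', n_max):.5f}")
```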
Seven Sins in Portfolio Optimization
Although modern portfolio theory has been in existence for over 60 years, fund managers often struggle to get its models to produce reliable portfolio allocations without strongly constraining the decision vector by tight bands of strategic allocation targets. The two main root causes of this problem are inadequate parameter estimation and numerical artifacts. When both obstacles are overcome, portfolio models yield excellent allocations. In this paper, which is primarily aimed at practitioners, we discuss the most common mistakes in setting up portfolio models and in solving them algorithmically.
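As a toy illustration of the parameter-estimation problem (all numbers below are hypothetical, not an example from the paper), the following sketch computes unconstrained Markowitz weights proportional to Sigma^-1 mu and shows how a small perturbation of the estimated expected returns can swing the allocation:

```python
# Sensitivity of mean-variance weights to estimation error in expected returns.
# Weights are the unconstrained Markowitz solution w ~ Sigma^-1 mu, normalized
# to sum to 1. All inputs are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.05, 0.06, 0.07])          # "true" expected returns
Sigma = np.array([[0.040, 0.018, 0.012],   # covariance matrix (PD by construction)
                  [0.018, 0.090, 0.024],
                  [0.012, 0.024, 0.160]])

def weights(mu, Sigma):
    w = np.linalg.solve(Sigma, mu)
    return w / w.sum()

print("weights with true mu:     ", weights(mu, Sigma).round(3))
# Perturb the return estimates by 1% absolute noise, well within realistic
# estimation error, and watch the allocation swing.
mu_hat = mu + rng.normal(scale=0.01, size=3)
print("weights with estimated mu:", weights(mu_hat, Sigma).round(3))
```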
Evaluating matrix functions for exponential integrators via Carathéodory-Fejér approximation and contour integrals
Among the fastest methods for solving stiff PDEs are exponential integrators, which require the evaluation of f(A), where A is a negative definite matrix and f is the exponential function or one of the related "φ functions" such as φ1(z) = (e^z - 1)/z. Building on previous work by Trefethen and Gutknecht, Gonchar and Rakhmanov, and Lu, we propose two methods for the fast evaluation of f(A) that are especially useful when shifted systems can be solved efficiently, e.g. by a sparse direct solver. The first method is based on best rational approximations to f on the negative real axis computed via the Carathéodory-Fejér procedure, and we conjecture that the accuracy scales as O(9.28903^(-2n)), where n is the number of complex matrix solves. In particular, three matrix solves suffice to evaluate f(A) to approximately six digits of accuracy. The second method is an application of the trapezoid rule on a Talbot-type contour.
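A hedged sketch of the second method's idea, not the paper's implementation: approximate f(A)b for f = exp by applying the trapezoid rule to the Cauchy integral f(A)b = (1/2πi) ∮ e^z (zI - A)^-1 b dz over a parabolic contour wrapping the negative real axis, so that each quadrature node costs one shifted complex solve. The contour parameters mu, L, and N below are simple ad hoc choices, not the optimized parameters from the paper.

```python
import numpy as np

def expm_times_b(A, b, mu=10.0, L=3.0, N=80):
    """Approximate exp(A) @ b for A with eigenvalues near the negative real axis."""
    theta = np.linspace(-L, L, N)
    h = theta[1] - theta[0]
    z = mu * (1.0 + 1j * theta) ** 2        # parabolic contour around (-inf, 0]
    dz = 2j * mu * (1.0 + 1j * theta)       # z'(theta)
    I = np.eye(A.shape[0])
    acc = np.zeros(A.shape[0], dtype=complex)
    for zk, dzk in zip(z, dz):
        acc += np.exp(zk) * dzk * np.linalg.solve(zk * I - A, b + 0j)
    # Uniform weights suffice: the integrand is ~exp(mu*(1 - L**2)) at the ends.
    return (h / (2j * np.pi)) * acc         # imaginary part ~ 0 for real A, b

A = np.diag([-1.0, -5.0, -20.0])            # toy negative definite matrix
b = np.ones(3)
print(expm_times_b(A, b).real)
print(np.exp(np.diag(A)))                   # exact answer for this diagonal A
```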
Computing the Gamma function using contour integrals and rational approximations
Some of the best methods for computing the gamma function are based on numerical evaluation of Hankel's contour integral. For example, Temme evaluates this integral on steepest-descent contours by the trapezoid rule. Here we investigate a different approach to the integral: the application of the trapezoid rule on Talbot-type contours using optimal parameters recently derived by Weideman for computing inverse Laplace transforms. Relatedly, we also investigate quadrature formulas derived from best approximations to exp(z) on the negative real axis, following Cody, Meinardus and Varga. The two methods are closely related and both converge geometrically. We find that the new methods are competitive with existing ones, even though they are based on generic tools rather than on specific analysis of the gamma function.
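The following sketch (illustrative only, with ad hoc rather than Weideman's optimal contour parameters) shows the basic idea: apply the trapezoid rule to Hankel's integral 1/Γ(z) = (1/2πi) ∮ e^t t^(-z) dt on a parabolic contour that wraps the negative real axis without crossing the branch cut.

```python
import numpy as np
from math import gamma

def recip_gamma(zv, mu=10.0, L=3.0, N=80):
    """Approximate 1/Gamma(zv) via Hankel's contour integral."""
    theta = np.linspace(-L, L, N)
    h = theta[1] - theta[0]
    t = mu * (1.0 + 1j * theta) ** 2        # parabolic Hankel contour
    dt = 2j * mu * (1.0 + 1j * theta)       # t'(theta)
    # Principal branch of t**(-zv) is fine: the contour stays in the slit plane.
    vals = np.exp(t) * t ** (-zv) * dt
    return (h / (2j * np.pi)) * vals.sum()

for zv in (0.5, 4.0):
    print(zv, recip_gamma(zv).real, "vs exact", 1 / gamma(zv))
```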
Talbot quadratures and rational approximations
Many computational problems can be solved with the aid of contour integrals containing e^z in the integrand: examples include inverse Laplace transforms, special functions, functions of matrices and operators, parabolic PDEs, and reaction-diffusion equations. One approach to the numerical quadrature of such integrals is to apply the trapezoid rule on a Hankel contour defined by a suitable change of variables. Optimal parameters for three classes of such contours have recently been derived: (a) parabolas, (b) hyperbolas, and (c) cotangent contours, following Talbot in 1979. The convergence rates for these optimized quadrature formulas are very fast: roughly O(3^(-N)), where N is the number of sample points or function evaluations. On the other hand, convergence at a rate apparently about twice as fast, O(9.28903^(-N)), can be achieved by using a different approach: best supremum-norm rational approximants to e^x for x in (-inf, 0], following Cody, Meinardus and Varga in 1969. (All these rates are doubled in the case of self-adjoint operators or real integrands.) It is shown that the quadrature formulas can be interpreted as rational approximations and the rational approximations as quadrature formulas, and the strengths and weaknesses of the different approaches are discussed in the light of these connections. A MATLAB function is provided for computing Cody-Meinardus-Varga approximants by the method of Carathéodory-Fejér approximation.
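The same trapezoid-on-a-Hankel-contour pattern applies directly to the first application on the list, inverse Laplace transforms. Below is a hedged sketch with ad hoc (non-optimized) parabolic-contour parameters, inverting the test transform F(s) = 1/(s + 1), whose inverse is exp(-t):

```python
import numpy as np

def invlap(F, t, mu=10.0, L=3.0, N=80):
    """Invert a Laplace transform F at time t via a parabolic Hankel contour."""
    theta = np.linspace(-L, L, N)
    h = theta[1] - theta[0]
    s = (mu / t) * (1.0 + 1j * theta) ** 2   # scale the contour with 1/t
    ds = (mu / t) * 2j * (1.0 + 1j * theta)
    return (h / (2j * np.pi)) * np.sum(np.exp(s * t) * F(s) * ds)

for t in (0.5, 1.0, 2.0):
    print(t, invlap(lambda s: 1 / (s + 1), t).real, "vs exact", np.exp(-t))
```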
Updating the QR decomposition of block tridiagonal and block Hessenberg matrices
We present an efficient block-wise update scheme for the QR decomposition of block tridiagonal and block Hessenberg matrices. For example, such matrices come up in generalizations of the Krylov space solvers MinRes, SymmLQ, GMRes, and QMR to block methods for linear systems of equations with multiple right-hand sides. In the non-block case it is very efficient (and, in fact, standard) to use Givens rotations for these QR decompositions. Normally, the same approach is also used with column-wise updates in the block case. However, we show that, even for small block sizes, block-wise updates using (in general, complex) Householder reflections instead of Givens rotations are far more efficient in this case, in particular if the unitary transformations that incorporate the reflections determined by a whole block are computed explicitly. Naturally, the bigger the block size the bigger the savings. We discuss the somewhat complicated algorithmic details of this block-wise update, and present numerical experiments on accuracy and timing for the various options (Givens vs. Householder, block-wise vs. column-wise update, explicit vs. implicit computation of unitary transformations). Our treatment allows variable block sizes and can be adapted to block Hessenberg matrices that do not have the special structure encountered in the above-mentioned block Krylov space solvers.
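A minimal sketch of the block-wise idea (simplified: fixed block size b dividing n, real arithmetic, dense numpy matrices, and none of the paper's refinements such as implicit application or variable block sizes): each subdiagonal block is annihilated by one small Householder-based QR (np.linalg.qr uses Householder reflections internally), whose unitary factor is formed explicitly and applied to the whole trailing block row, rather than by b^2 individual Givens rotations.

```python
import numpy as np

def block_hessenberg_qr(H, b):
    """QR of a block Hessenberg matrix, zeroing one subdiagonal block at a time."""
    n = H.shape[0]
    R = H.astype(float)                      # working copy
    Q = np.eye(n)
    for j in range(0, n - b, b):
        rows = slice(j, j + 2 * b)           # diagonal + subdiagonal block rows
        Qj, _ = np.linalg.qr(R[rows, j:j + b], mode="complete")
        R[rows, j:] = Qj.T @ R[rows, j:]     # one block-wise update of the trailing row
        Q[:, rows] = Q[:, rows] @ Qj         # accumulate Q explicitly
    Qj, Rj = np.linalg.qr(R[n - b:, n - b:])  # triangularize the last diagonal block
    R[n - b:, n - b:] = Rj
    Q[:, n - b:] = Q[:, n - b:] @ Qj
    return Q, R

rng = np.random.default_rng(1)
n, b = 12, 3
H = rng.standard_normal((n, n))
for J in range(n // b):                      # enforce block Hessenberg sparsity
    H[(J + 2) * b:, J * b:(J + 1) * b] = 0.0
Q, R = block_hessenberg_qr(H, b)
print(np.allclose(Q @ R, H), np.allclose(np.tril(R, -1), 0))
```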
Process economics evaluation of Adeno-associated Viral Vector (AAV) manufacturing
With two products currently approved for gene therapy purposes, adeno-associated virus (AAV) manufacturing has seen consistent pressure to develop scalable strategies, particularly during cell culture and purification. Traditionally, cell culture in adherent mode and gradient density-based polishing steps have been adopted for AAV processing. However, these technologies rely on scale-out strategies to increase capacity in a commercial environment, thereby incurring large costs and facility footprints. This presentation offers insights from an advanced economics analysis using a decisional tool developed at UCL to identify the most cost-effective route for large-scale manufacture of AAV. A cost of goods (COG) comparison will be presented between adherent and suspension cell culture, as well as anion-exchange chromatography versus batch ultracentrifugation for the base case. This provides benchmark COG values, the savings that can be achieved by moving to more scalable technologies, and the contributions of materials, labour, QC and facility-related costs. A stochastic cost comparison is used to reveal the impact of key input uncertainties (e.g. productivities) on the robustness of each strategy and the likelihood of achieving target COG values. Next, the case study looks at a broad range of USP (e.g. multi-layer cell factories, fixed-bed reactor, single-use stirred tank bioreactor) and DSP (e.g. packed-bed chromatography, batch and continuous ultracentrifugation) technology options and uses an optimisation algorithm to identify the optimal flowsheet for AAV manufacture in terms of both cost-effectiveness and meeting purity targets. This analysis highlights how the optimal flowsheet changes depending on the purity target that needs to be met, with specific focus upon HCPs, DNA and empty capsids. Finally, the presentation will highlight how the optimal solutions change for different combinations of demand, batch size and AEX yield, highlighting the trade-off between achieving high yields and purities and opting for scalable technologies. These insights help make better decisions early in development to facilitate successful commercialisation.
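As a purely illustrative sketch of the stochastic COG comparison described above (all costs, productivities, and targets below are invented placeholders, not outputs of the UCL tool), one can propagate an uncertain productivity through a one-line COG model and estimate the probability of meeting a target:

```python
# Toy Monte Carlo cost-of-goods comparison between two hypothetical strategies.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
target = 50.0                               # hypothetical COG target, $/dose

def cog(batch_cost, doses_per_batch):
    return batch_cost / doses_per_batch

# Invented batch costs and normally distributed productivities (doses/batch).
adherent   = cog(4.0e6, rng.normal(8.0e4, 2.0e4, n).clip(1e4))
suspension = cog(3.5e6, rng.normal(1.2e5, 4.0e4, n).clip(1e4))

for name, c in (("adherent", adherent), ("suspension", suspension)):
    print(f"{name:10s} median COG = {np.median(c):6.1f} $/dose;"
          f" P(COG <= {target}) = {(c <= target).mean():.2f}")
```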
Effects of Ubiquinol-10 on MicroRNA-146a Expression In Vitro and In Vivo
MicroRNAs (miRs) are involved in key biological processes via suppression of gene expression at the posttranscriptional level. Given these far-reaching functions, subtle modulation of miR expression by certain compounds or nutrients is desirable under particular conditions. Bacterial lipopolysaccharide (LPS) induces a reactive oxygen species-/NF-κB-dependent pathway which increases the expression of the anti-inflammatory miR-146a. We hypothesized that this induction could be modulated by the antioxidant ubiquinol-10. Preincubation of human monocytic THP-1 cells with ubiquinol-10 reduced the LPS-induced expression level of miR-146a to 78.9 ± 13.22%. In liver samples of mice injected with LPS, supplementation with ubiquinol-10 led to a reduction of LPS-induced miR-146a expression to 78.12 ± 21.25%. From these consistent in vitro and in vivo data, we conclude that ubiquinol-10 may fine-tune the inflammatory response via moderate reduction of miR-146a expression.
Process economics evaluation and optimization of adeno-associated virus downstream processing
Adeno-associated virus (AAV) manufacturing has traditionally focused upon lab-scale techniques to culture and purify vector products, leading to limitations in production capacity. The tool presented in this paper assesses the feasibility of using non-scalable technologies at high AAV demands and identifies optimal flowsheets at large scale that meet both cost and purity targets. The decisional tool comprises (a) a detailed process economics model with the relevant mass balance, sizing, and costing equations for AAV upstream and downstream technologies, (b) a built-in Monte Carlo simulation to assess uncertainties, and (c) a brute-force optimization algorithm for rapid investigation into the optimal purification combinations. The results overall highlighted that switching to more scalable upstream and downstream processing alternatives is economically advantageous. The base case analysis showed the cost and robustness advantages of utilizing suspension cell culture over adherent culture, as well as a fully chromatographic purification platform over batch ultracentrifugation. Expanding the set of purification options available gave insights into the optimal combination to satisfy both cost and purity targets. As the purity target increased, the optimal polishing solution moved from non-capsid-purifying multimodal chromatography to anion-exchange chromatography or continuous ultracentrifugation.
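A minimal sketch of component (c), the brute-force search (all technology names, yields, costs, and purities below are invented placeholders, not the paper's data): enumerate every USP/DSP pairing, discard combinations that miss the purity target, and keep the cheapest survivor. Raising the target reproduces, in miniature, the reported shift in the optimal polishing step.

```python
from itertools import product

usp = {"cell factory": (1.0, 3.0), "fixed-bed": (0.9, 2.2), "stirred tank": (1.2, 1.8)}
#      name: (relative yield factor, relative cost) -- invented values
dsp = {"AEX": (0.80, 1.0, 0.95), "batch UC": (0.60, 2.5, 0.99), "continuous UC": (0.70, 1.8, 0.98)}
#      name: (step yield, relative cost, purity achieved) -- invented values

def best_flowsheet(purity_target):
    candidates = []
    for (u, (uy, uc)), (d, (dy, dc, purity)) in product(usp.items(), dsp.items()):
        if purity < purity_target:
            continue                          # fails the purity constraint
        cost_per_dose = (uc + dc) / (uy * dy)  # crude cost over overall yield proxy
        candidates.append((round(cost_per_dose, 2), u, d, purity))
    return min(candidates, default=None)      # cheapest feasible combination

for target in (0.95, 0.98, 0.99):
    print(target, best_flowsheet(target))
```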