255 research outputs found

    Computation of Carson formulas using piecewise approximation of kernel function

    A novel approach for the highly accurate computation of Carson formulas is presented. Carson formulas are used to compute the per-unit-length (pul) self- and mutual impedances of infinitely long parallel conductors. The numerical algorithm described in this paper uses a piecewise approximation of the kernel function that appears in the Carson formula corrections. The approximated kernel function is multiplied by the rest of the integrands in the impedance correction expressions and integrated analytically. Using the proposed algorithm, highly accurate results with any desired n-digit accuracy can easily be obtained. Results computed by the proposed algorithm are compared with the two most commonly used approximation methods over a large frequency range.
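    The core numerical idea, replacing a kernel with a piecewise-linear interpolant so each segment can be integrated in closed form against the remaining factor of the integrand, can be sketched as follows. This is an illustrative toy: the kernel and the exp(-a x) weight below are stand-ins we chose, not the actual Carson correction integrands.

```python
import numpy as np
from scipy.integrate import quad

def piecewise_linear_integral(kernel, a, x_max, n_seg):
    """Integrate kernel(x) * exp(-a*x) over [0, x_max] by replacing
    kernel(x) with a piecewise-linear interpolant on n_seg segments
    and integrating each linear piece analytically."""
    xs = np.linspace(0.0, x_max, n_seg + 1)
    ys = kernel(xs)
    total = 0.0
    for x0, x1, y0, y1 in zip(xs[:-1], xs[1:], ys[:-1], ys[1:]):
        c1 = (y1 - y0) / (x1 - x0)          # slope of the linear piece
        c0 = y0 - c1 * x0                   # intercept
        # antiderivative of (c0 + c1*x) * exp(-a*x)
        F = lambda x: -np.exp(-a * x) * ((c0 + c1 * x) / a + c1 / a**2)
        total += F(x1) - F(x0)
    return total

# sanity check against adaptive quadrature
kernel = lambda x: np.sqrt(x**2 + 1.0) - x   # stand-in for a Carson-type kernel
exact, _ = quad(lambda x: kernel(x) * np.exp(-2.0 * x), 0.0, 20.0)
approx = piecewise_linear_integral(kernel, 2.0, 20.0, 2000)
print(exact, approx)   # the two values should agree to several digits
```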

    Numerical algorithms for the valuation of installment options

    Master's thesis in Financial Mathematics (Mestrado em Matemática Financeira). Installment options are financial derivatives in which part of the initial premium is paid up-front and the other part is paid discretely or continuously in installments during the option's lifetime. This work deals with the numerical valuation of European installment options. Through the study of the continuous case, we show that numerical inversion of the Laplace transform works well for computing the option value. In particular, we investigate the De Hoog algorithm and compare it to other methods for inverse Laplace transformation, namely the Euler summation, Gaver-Stehfest, and Kryzhnyi methods.
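    Of the inversion methods compared, Gaver-Stehfest is the simplest to state: it approximates f(t) from samples of the transform F(s) on the real axis. A minimal sketch of the standard textbook formula follows (our illustration, not code from the thesis; the De Hoog algorithm itself is considerably more involved).

```python
import math

def stehfest_coeffs(N):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j**half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1)**(half + k) * s)
    return V

def gaver_stehfest(F, t, N=12):
    """Approximate the inverse Laplace transform of F(s) at time t > 0."""
    ln2 = math.log(2.0)
    V = stehfest_coeffs(N)
    return ln2 / t * sum(Vk * F((k + 1) * ln2 / t) for k, Vk in enumerate(V))

# sanity check: F(s) = 1/(s+1) inverts to exp(-t)
print(gaver_stehfest(lambda s: 1.0 / (s + 1.0), 1.0), math.exp(-1.0))
```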

    Efficient High-Order Space-Angle-Energy Polytopic Discontinuous Galerkin Finite Element Methods for Linear Boltzmann Transport

    We introduce an hp-version discontinuous Galerkin finite element method (DGFEM) for the linear Boltzmann transport problem. A key feature of this new method is that, while offering arbitrary-order convergence rates, it may be implemented in an almost identical form to standard multigroup discrete ordinates methods, meaning that solutions can be computed efficiently with high accuracy and in parallel within existing software. This method provides a unified discretisation of the space, angle, and energy domains of the underlying integro-differential equation and naturally incorporates both local mesh and local polynomial degree variation within each of these computational domains. Moreover, general polytopic elements can be handled by the method, enabling efficient discretisations of problems posed on complicated spatial geometries. We study the stability and hp-version a priori error analysis of the proposed method by deriving suitable hp-approximation estimates together with a novel inf-sup bound. Numerical experiments highlighting the performance of the method for both polyenergetic and monoenergetic problems are presented.
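    For reference, the underlying integro-differential equation is the linear Boltzmann transport equation, which in one standard notation reads as below (the symbols are our choice and the paper's exact formulation may differ).

```latex
% Linear Boltzmann transport equation in one standard notation (our choice
% of symbols; the paper's exact formulation may differ):
\[
  \Omega \cdot \nabla_{x}\,\psi(x,\Omega,E)
  + \sigma_t(x,E)\,\psi(x,\Omega,E)
  = \int_{0}^{\infty}\!\!\int_{\mathbb{S}^{2}}
      \sigma_s(x,\,\Omega'\!\cdot\Omega,\,E'\!\to E)\,
      \psi(x,\Omega',E')\,\mathrm{d}\Omega'\,\mathrm{d}E'
  + f(x,\Omega,E)
\]
% psi: angular flux; sigma_t, sigma_s: total and scattering cross sections;
% f: source. The hp-DGFEM discretises x, Omega, and E simultaneously.
```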

    Numerical optimal control with applications in aerospace

    This thesis explores various computational aspects of solving nonlinear, continuous-time dynamic optimization problems (DOPs) numerically. Firstly, a direct transcription method for solving DOPs is proposed, named the integrated residual method (IRM). Instead of forcing the dynamic constraints to be satisfied only at a selected number of points as in direct collocation, this new approach alternates between minimizing and constraining the squared norm of the dynamic constraint residuals integrated along the whole solution trajectory. The method is capable of obtaining solutions of higher accuracy on the same mesh compared to direct collocation methods, enabling a flexible trade-off between solution accuracy and optimality, and providing reliable solutions for challenging problems, including those with singular arcs and high-index differential-algebraic equations. A number of techniques are also proposed in this work for the efficient numerical solution of large-scale and challenging DOPs. A general approach for the direct implementation of rate constraints on the discretization mesh is proposed. Unlike conventional approaches that may lead to singular control arcs, the solution of this on-mesh implementation has better numerical properties while achieving computational speedups. Another development relates to the handling of inactive constraints, which do not contribute to the solution of DOPs but increase the problem size and burden the numerical computations. A strategy to systematically remove inactive and redundant constraints under a mesh refinement framework is proposed. The last part of this work focuses on the use of DOPs in aerospace applications, with a number of topics studied. Using example scenarios of intercontinental flights, the benefits of formulating DOPs directly according to problem specifications are demonstrated, with notable savings in fuel usage. The numerical challenges with direct collocation are also identified, with the IRM obtaining solutions of higher accuracy while suppressing singular-arc fluctuations.
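    A minimal sketch of the integrated-residual idea on a toy problem follows. It is our simplification, not the thesis's IRM implementation: the dynamics defect for the toy system x_dot = u is integrated over the whole mesh (trapezoidal quadrature) and constrained, rather than being enforced pointwise at collocation nodes.

```python
import numpy as np
from scipy.optimize import minimize

N = 41
t = np.linspace(0.0, 1.0, N)
h = t[1] - t[0]

def trapezoid(y):
    """Trapezoidal quadrature on the uniform mesh."""
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

def split(z):
    return z[:N], z[N:]                    # samples of x and u on the mesh

def integrated_residual(z):
    x, u = split(z)
    x_dot = np.gradient(x, h)              # finite-difference surrogate
    return trapezoid((x_dot - u) ** 2)     # toy dynamics: x_dot = u

def control_effort(z):
    return trapezoid(split(z)[1] ** 2)     # objective: integral of u^2

constraints = [
    {'type': 'eq',   'fun': lambda z: split(z)[0][0]},          # x(0) = 0
    {'type': 'eq',   'fun': lambda z: split(z)[0][-1] - 1.0},   # x(1) = 1
    {'type': 'ineq', 'fun': lambda z: 1e-6 - integrated_residual(z)},
]

z0 = np.concatenate([t, np.ones(N)])       # straight-line initial guess
sol = minimize(control_effort, z0, constraints=constraints, method='SLSQP')
print(sol.fun)   # ~1.0: the analytic optimum for this toy problem is u(t) = 1
```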

    Reconstruction algorithms for multispectral diffraction imaging

    Thesis (Ph.D.), Boston University. In conventional Computed Tomography (CT) systems, a single X-ray source spectrum is used to radiate an object, and the total transmitted intensity is measured to construct the spatial linear attenuation coefficient (LAC) distribution. Such scalar information is adequate for visualization of interior physical structures, but additional dimensions would be useful to characterize the nature of those structures. By imaging with broadband radiation and collecting energy-sensitive measurements, one can generate images of additional energy-dependent properties that can be used to characterize the nature of specific areas in the object of interest. In this thesis, we explore novel imaging modalities that use broadband sources and energy-sensitive detection to generate images of energy-dependent properties of a region, with the objective of providing high-quality information for material component identification. We explore two classes of imaging problems: 1) excitation using broad-spectrum sub-millimeter radiation in the Terahertz regime and measurement of the diffracted Terahertz (THz) field to construct the spatial distribution of the complex refractive index at multiple frequencies; 2) excitation using broad-spectrum X-ray sources and measurement of coherent-scatter radiation to image the spatial distribution of coherent-scatter form factors. For these modalities, we extend approaches developed for multimodal imaging and propose new reconstruction algorithms that impose regularization structure, such as common object boundaries across reconstructed regions at different frequencies. We also explore reconstruction techniques that incorporate prior knowledge in the form of spectral parametrization and sparse representations over redundant dictionaries, and we examine the advantages and disadvantages of these techniques in terms of image quality and potential for accurate material characterization. We use the proposed reconstruction techniques to explore alternative architectures with reduced scanning time and increased signal-to-noise ratio, including THz diffraction tomography, limited-angle X-ray diffraction tomography, and the use of coded aperture masks. Numerical experiments and Monte Carlo simulations were conducted to compare the performance of the developed methods and to validate the studied architectures as viable options for imaging of energy-dependent properties.
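    One common way to encode the "common object boundaries across frequencies" structure mentioned above is a joint (group) sparsity penalty on the spatial gradients of the per-frequency images. A generic formulation of this kind is sketched below; the notation is ours and is not necessarily the thesis's exact regularizer.

```latex
% Generic multi-frequency reconstruction with a joint-sparsity (group-TV)
% coupling term. A_f, y_f, x_f are the system matrix, data, and image at
% frequency f; the inner sum over pixels i couples gradients across all
% F frequencies so that edges prefer to appear at shared locations.
\[
  \min_{x_1,\dots,x_F}\;
  \sum_{f=1}^{F} \bigl\| A_f x_f - y_f \bigr\|_2^2
  \;+\; \lambda \sum_{i}
  \Bigl( \sum_{f=1}^{F} \bigl\| (\nabla x_f)_i \bigr\|_2^2 \Bigr)^{1/2}
\]
```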

    Development and implementation of efficient noise suppression methods for emission computed tomography

    In PET and SPECT imaging, iterative reconstruction is now widely used due to its capability of incorporating into the reconstruction process a physics model and the Bayesian statistics involved in photon detection. Iterative reconstruction methods rely on regularization terms to suppress image noise and render the radiotracer distribution with good image quality. The choice of regularization method substantially affects the appearance of reconstructed images and is thus a critical aspect of the reconstruction process. Major contributions of this work include the implementation and evaluation of various new regularization methods. Previously, our group developed a preconditioned alternating projection algorithm (PAPA) to optimize the emission computed tomography (ECT) objective function with the non-differentiable total variation (TV) regularizer. The algorithm was modified to optimize the proposed reconstruction objective functions. First, two novel TV-based regularizers, high-order total variation (HOTV) and infimal convolution total variation (ICTV), were proposed as alternatives to the customary TV regularizer in SPECT reconstruction, to reduce the "staircase" artifacts produced by TV. We evaluated both proposed reconstruction methods (HOTV-PAPA and ICTV-PAPA) and compared them with TV-regularized reconstruction (TV-PAPA) and the clinical standard, the Gaussian post-filtered expectation-maximization reconstruction method (GPF-EM), using both Monte Carlo-simulated data and anonymized clinical data. Model-observer studies using Monte Carlo-simulated data indicate that ICTV-PAPA is able to reconstruct images with similar or better lesion detectability than the clinical-standard GPF-EM method, but at lower detected count levels. This implies that switching from GPF-EM to ICTV-PAPA can reduce patient dose while maintaining image quality for diagnostic use. Second, the ℓ1 norm of a discrete cosine transform (DCT)-induced framelet regularization was studied. We decomposed the image into high and low spatial-frequency components, and then preferentially penalized the high spatial-frequency components. The DCT-induced framelet transform of the natural radiotracer distribution image is sparse. By using this property, we were able to effectively suppress image noise without overly compromising spatial resolution or image contrast. Finally, the fractional norm of the first-order spatial gradient was introduced as a regularizer. We implemented the ℓ2/3 and ℓ1/2 norms to suppress image spatial variability. Due to the strong penalty on small differences between neighboring pixels, fractional-norm regularizers suffer from cartoon-like artifacts similar to those of the TV regularizer. However, when penalty weights are properly selected, fractional-norm regularizers outperform TV in terms of noise suppression and contrast recovery.
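    For concreteness, the TV and fractional-norm gradient penalties discussed above can be written down in a few lines. The sketch below gives illustrative definitions only, not the PAPA solver; the forward-difference gradient and the eps smoothing are our choices.

```python
import numpy as np

def gradient(img):
    """Forward-difference spatial gradient of a 2-D image (replicated edge)."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def tv(img):
    """Isotropic total variation: sum of gradient magnitudes."""
    gx, gy = gradient(img)
    return np.sum(np.sqrt(gx**2 + gy**2))

def fractional_norm_penalty(img, p=0.5, eps=1e-8):
    """Fractional (p < 1) norm of the gradient magnitude, in the spirit of
    the l_1/2 and l_2/3 regularizers above (eps avoids the singularity
    of the p-th power at zero gradient)."""
    gx, gy = gradient(img)
    mag = np.sqrt(gx**2 + gy**2 + eps)
    return np.sum(mag**p)

# a noisy square phantom sharply increases both penalties
rng = np.random.default_rng(0)
phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 1.0
noisy = phantom + 0.1 * rng.standard_normal(phantom.shape)
print(tv(phantom), tv(noisy))
print(fractional_norm_penalty(phantom), fractional_norm_penalty(noisy))
```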

    Computational applications in stochastic operations research

    Several computational applications in stochastic operations research are presented where, for each application, a computational engine is used to achieve results that are otherwise overly tedious by hand calculation, or in some cases mathematically intractable. Algorithms and code are developed and implemented, with specific emphasis placed on achieving exact results, and substantiated via Monte Carlo simulation. The code for each application is provided in the software language utilized, and the algorithms are available for coding in another environment. The topics include univariate and bivariate nonparametric random variate generation using a piecewise-linear cumulative distribution function, deriving exact statistical process control chart constants for non-normal sampling, testing probability distribution conformance to Benford's law, and transient analysis of M/M/s queueing systems. The nonparametric random variate generation chapters provide the modeler with a method of generating univariate and bivariate samples when only observed data are available. The method is completely nonparametric and is capable of mimicking multimodal joint distributions. The algorithm is a black box, requiring no decisions from the modeler when generating variates for simulation. The statistical process control chart chapter develops constants for selected non-normal distributions and provides tabulated results for researchers who have identified a given process as non-normal. The constants derived are bias-correction factors for the sample range and sample standard deviation. The Benford conformance testing chapter offers the Kolmogorov-Smirnov test as an alternative to the standard chi-square goodness-of-fit test when testing whether the leading digits of a data set are distributed according to Benford's law. The alternative test has the advantage of being an exact test for all sample sizes, removing the usual sample-size restriction of the chi-square goodness-of-fit test. The transient queueing analysis chapter develops and automates the construction of the sojourn time distribution for the nth customer in an M/M/s queue with k customers initially present at time 0 (k ≥ 0), without the usual limit on the traffic intensity, ρ < 1, providing an avenue to conduct transient analysis on various measures of performance for a given initial number of customers in the system. It also develops and automates the construction of the joint probability distribution function of sojourn times for pairs of customers, allowing calculation of the exact covariance between customer sojourn times.
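    As an example of one of these topics, comparing the empirical leading-digit distribution of a data set against Benford's law with a Kolmogorov-Smirnov-type statistic can be sketched as follows. This is our illustration of the statistic itself; the exact finite-sample null distribution derived in the chapter is not reproduced here.

```python
import math

def benford_cdf(d):
    """Benford CDF for leading digits 1..9: P(D <= d) = log10(d + 1)."""
    return math.log10(d + 1)

def leading_digit(x):
    """First significant digit of a positive number."""
    s = f"{abs(x):e}"          # scientific notation: first char is the digit
    return int(s[0])

def ks_statistic_benford(data):
    """Kolmogorov-Smirnov distance between the empirical leading-digit
    distribution and Benford's law."""
    n = len(data)
    digits = [leading_digit(x) for x in data]
    d_max = 0.0
    for d in range(1, 10):
        emp = sum(1 for g in digits if g <= d) / n
        d_max = max(d_max, abs(emp - benford_cdf(d)))
    return d_max

# powers of 2 are a classic Benford-conforming sequence
sample = [2.0**k for k in range(1, 201)]
print(ks_statistic_benford(sample))   # small value: close to Benford's law
```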