    Incorporating Relaxivities to More Accurately Reconstruct MR Images

    Purpose: To develop a mathematical model that incorporates the magnetic resonance relaxivities into the image reconstruction process in a single step.

    Materials and methods: In magnetic resonance imaging, the complex-valued measurements of the acquired signal at each point in frequency space are expressed as a Fourier transformation of the proton spin density weighted by Fourier encoding anomalies: T2*, T1, and a phase determined by magnetic field inhomogeneity (∆B), according to the MR signal equation. Such anomalies alter the expected symmetry and the signal strength of the k-space observations, resulting in images degraded by warping, blurring, and loss of intensity. Although the T1 tissue relaxation time provides valuable quantitative information on tissue characteristics, the T1 recovery term is typically neglected by assuming a long repetition time. In this study, the linear framework presented in the work of Rowe et al., 2007, and of Nencka et al., 2009 is extended to develop a Fourier reconstruction operation, in terms of a real-valued isomorphism, that incorporates the effects of T2*, ∆B, and T1. This framework provides a way to precisely quantify the statistical properties of the corrected image-space data by offering a linear relationship between the observed frequency-space measurements and the reconstructed, corrected image-space measurements. The model is illustrated both on theoretical data generated by considering T2*, T1, and/or ∆B effects, and on experimentally acquired fMRI data, focusing on the incorporation of T1. A comparison is also made between the activation statistics computed from the reconstructed data with and without the incorporation of T1 effects.

    Results: Accounting for T1 effects in image reconstruction is shown to recover image contrast that exists prior to T1 equilibrium. The incorporation of T1 is also shown to induce negligible correlation in the reconstructed images and to preserve functional activations.

    Conclusion: With the proposed method, the effects of T2* and ∆B can be corrected, and T1 can be incorporated into the time-series image-space data during image reconstruction in a single step. Incorporating T1 provides improved tissue segmentation over the course of the time series and can therefore improve the precision of motion correction and image registration.
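    For orientation, the relaxivity-weighted signal model referred to above can be written, in its generic spoiled gradient-echo/EPI textbook form, as below; this sketch uses a per-sample acquisition time t(k) and is not necessarily the exact equation or notation used in the paper.

```latex
% Generic relaxivity-weighted MR signal equation (illustrative form only):
% T1 recovery, T2* decay, and the Delta-B off-resonance phase weight the
% Fourier kernel, with t(k) the time at which k-space sample k is acquired.
\begin{equation*}
  s(k) = \int \rho(\mathbf{r})
         \left(1 - e^{-T_R / T_1(\mathbf{r})}\right)
         e^{-t(k)/T_2^{*}(\mathbf{r})}\,
         e^{-\mathrm{i}\,\gamma\,\Delta B(\mathbf{r})\,t(k)}\,
         e^{-\mathrm{i}\,2\pi\,k\cdot\mathbf{r}}\,\mathrm{d}\mathbf{r}
\end{equation*}
% The real-valued isomorphism of Rowe et al. stacks real and imaginary
% parts and represents multiplication by i as the 2x2 matrix [[0,-1],[1,0]],
% so the discretized reconstruction becomes a single real-valued linear
% operator applied to the vector of observed k-space measurements.
```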

    Analyzing Boltzmann Samplers for Bose-Einstein Condensates with Dirichlet Generating Functions

    Boltzmann sampling is commonly used to uniformly sample objects of a particular size from large combinatorial sets. For this technique to be effective, one needs to prove that (1) the sampling procedure is efficient and (2) objects of the desired size are generated with sufficiently high probability. We use this approach to give a provably efficient sampling algorithm for a class of weighted integer partitions related to Bose-Einstein condensation from statistical physics. Our sampling algorithm is a probabilistic interpretation of the ordinary generating function for these objects, derived from the symbolic method of analytic combinatorics. Using the Khintchine-Meinardus probabilistic method to bound the rejection rate of our Boltzmann sampler through singularity analysis of Dirichlet generating functions, we offer an alternative approach to analyzing Boltzmann samplers for objects with multiplicative structure. Comment: 20 pages, 1 figure
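    A minimal sketch of the rejection-based Boltzmann sampling idea, specialized to plain (unweighted) integer partitions, is given below; the weighted, Bose-Einstein-related partitions studied in the paper change the per-part distributions and the tuning of the Boltzmann parameter x.

```python
import random

def boltzmann_partition(n, x, max_tries=100_000):
    """Rejection Boltzmann sampler for integer partitions of n (a minimal
    sketch of the technique; the paper handles weighted partitions, which
    changes the per-part distributions and the tuning of x).

    Under the Boltzmann model with parameter 0 < x < 1, the multiplicity of
    part k is geometric: P(m_k = m) = (1 - x**k) * x**(k*m).  Conditioning
    on total size n (the rejection step) yields the uniform distribution on
    partitions of n.  Parts larger than n are skipped, since any such part
    would force a rejection anyway.
    """
    for _ in range(max_tries):
        multiplicities = {}
        size = 0
        for k in range(1, n + 1):
            p = x ** k
            m = 0
            # Sample a geometric multiplicity by repeated Bernoulli trials.
            while random.random() < p:
                m += 1
            if m:
                multiplicities[k] = m
                size += k * m
            if size > n:
                break          # early rejection: already too large
        if size == n:
            return multiplicities
    raise RuntimeError("rejection sampler did not succeed; try tuning x")

if __name__ == "__main__":
    import math
    n = 30
    # Saddle-point tuning for ordinary partitions: x = exp(-pi / sqrt(6 n))
    # makes the target size n likely.
    x = math.exp(-math.pi / math.sqrt(6 * n))
    print(boltzmann_partition(n, x))
```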

    A Survey on Software Testing Techniques using Genetic Algorithm

    The overall aim of the software industry is to ensure the delivery of high-quality software to the end user. To ensure high-quality software, the software must be tested. Testing ensures that software meets user specifications and requirements. However, the field of software testing has a number of underlying issues, such as the effective generation of test cases and the prioritisation of test cases, which need to be tackled. These issues increase the effort, time, and cost of testing. Different techniques and methodologies have been proposed to address these issues. The use of evolutionary algorithms for automatic test generation has been an area of interest for many researchers, and the Genetic Algorithm (GA) is one such evolutionary algorithm. In this paper, we present a survey of GA-based approaches for addressing the various issues encountered during software testing. Comment: 13 pages
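    To make the idea concrete, below is a minimal, self-contained sketch of search-based test-data generation with a GA, in which candidate inputs evolve to minimize a branch-distance fitness for a hypothetical hard-to-reach branch; the target condition, fitness, and GA operators are illustrative choices, not taken from any specific surveyed technique.

```python
import random

def branch_distance(a, b):
    """Branch-distance style fitness: 0 means the hypothetical target branch
    `if a + b == 1000 and a - b == 200:` would be taken."""
    return abs(a + b - 1000) + abs(a - b - 200)

def genetic_test_generation(pop_size=50, generations=200, mutation_rate=0.2):
    """Evolve integer input pairs (a, b) toward covering the target branch."""
    population = [(random.randint(-2000, 2000), random.randint(-2000, 2000))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda ind: branch_distance(*ind))
        if branch_distance(*population[0]) == 0:
            return population[0]                  # covering test case found
        survivors = population[:pop_size // 2]    # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = random.sample(survivors, 2)
            child = (p1[0], p2[1])                # one-point crossover
            if random.random() < mutation_rate:   # small random mutation
                child = (child[0] + random.randint(-10, 10),
                         child[1] + random.randint(-10, 10))
            children.append(child)
        population = survivors + children
    return min(population, key=lambda ind: branch_distance(*ind))

if __name__ == "__main__":
    a, b = genetic_test_generation()
    print("candidate test input:", a, b, "distance:", branch_distance(a, b))
```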

    Adiabatic stability under semi-strong interactions: The weakly damped regime

    We rigorously derive multi-pulse interaction laws for the semi-strong interactions in a family of singularly perturbed and weakly damped reaction-diffusion systems in one space dimension. Most significantly, we show the existence of a manifold of quasi-steady N-pulse solutions and identify a "normal hyperbolicity" condition which balances the asymptotic weakness of the linear damping against the algebraic evolution rate of the multi-pulses. Our main result is the adiabatic stability of the manifolds subject to this normal hyperbolicity condition. More specifically, the spectrum of the linearization about a fixed N-pulse configuration contains essential spectrum that is asymptotically close to the origin as well as semi-strong eigenvalues that move at leading order as the pulse positions evolve. We characterize the semi-strong eigenvalues in terms of the spectrum of an explicit N-by-N matrix, and we rigorously bound the error between the N-pulse manifold and the evolution of the full system, in a polynomially weighted space, as long as the semi-strong spectrum remains strictly in the left half of the complex plane and the essential spectrum is not too close to the origin.
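    For readers less familiar with the setting, a generic template of a singularly perturbed, weakly damped two-component system is sketched below; the scalings, damping exponent, and nonlinearities are placeholders, and the paper's precise family and hypotheses may differ.

```latex
% Illustrative template only -- not the exact family analyzed in the paper.
\begin{align*}
  u_t &= \varepsilon^{2} u_{xx} - u + f(u, v),\\
  v_t &= v_{xx} - \varepsilon^{\theta}\mu\, v + g(u, v),
        \qquad 0 < \varepsilon \ll 1,\quad \theta, \mu > 0.
\end{align*}
% Semi-strong regime: u is localized in pulses of width O(eps) while v
% varies on the O(1) scale and mediates the pulse-pulse interaction; the
% weak damping term eps^theta * mu * v is what must be balanced against the
% slow algebraic drift of the N pulse positions ("normal hyperbolicity").
```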

    Modeling and forecasting electricity spot prices: A functional data perspective

    Classical time series models have serious difficulties in modeling and forecasting the enormous fluctuations of electricity spot prices. Markov regime-switching models are among the models most often used in the electricity literature. These models try to capture the fluctuations of electricity spot prices by using different regimes, each with its own mean and covariance structure; usually one regime is dedicated to moderate prices and another to high prices. However, these models show poor performance, and there is no theoretical justification for this kind of classification. The merit-order model, the most important microeconomic pricing model for electricity spot prices, instead suggests a continuum of mean levels with a functional dependence on electricity demand. We propose a new statistical perspective on modeling and forecasting electricity spot prices that accounts for the merit-order model. In a first step, the functional relation between electricity spot prices and electricity demand is modeled by daily price-demand functions. In a second step, we parameterize the series of daily price-demand functions using a functional factor model. The power of this new perspective is demonstrated by a forecast study that compares our functional factor model with two established classical time series models as well as two alternative functional data models. Comment: Published at http://dx.doi.org/10.1214/13-AOAS652 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
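    A minimal numerical sketch of the two-step approach (daily price-demand curves, then a factor model on their coefficients) follows; the polynomial basis, the AR(1) score forecasts, and the hypothetical arrays demand[d, h] and price[d, h] are simplifying assumptions, not the paper's actual basis, estimators, or data.

```python
import numpy as np

def fit_daily_curves(demand, price, degree=3):
    """Step 1: one price-demand function per day (here a polynomial fit)."""
    n_days = demand.shape[0]
    coefs = np.empty((n_days, degree + 1))
    for d in range(n_days):
        coefs[d] = np.polyfit(demand[d], price[d], degree)
    return coefs

def functional_factor_forecast(coefs, n_factors=2):
    """Step 2: factor decomposition of the daily coefficient vectors and a
    one-day-ahead forecast of the factor scores via simple AR(1) fits."""
    mean = coefs.mean(axis=0)
    centered = coefs - mean
    # Principal components of the daily curves (SVD of the coefficients).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    loadings = vt[:n_factors]                    # (n_factors, n_coef)
    scores = centered @ loadings.T               # (n_days, n_factors)
    next_scores = np.empty(n_factors)
    for j in range(n_factors):
        s = scores[:, j]
        phi = np.dot(s[:-1], s[1:]) / np.dot(s[:-1], s[:-1])  # AR(1) slope
        next_scores[j] = phi * s[-1]
    return mean + next_scores @ loadings         # forecast coefficient vector

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demand = rng.uniform(30, 80, size=(365, 24))            # hypothetical GW
    price = 0.02 * demand**2 - demand + 40 + rng.normal(0, 2, demand.shape)
    coefs = fit_daily_curves(demand, price)
    coef_next = functional_factor_forecast(coefs)
    tomorrow_demand = np.linspace(30, 80, 24)
    print(np.polyval(coef_next, tomorrow_demand))           # forecast prices
```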

    Exact Meander Asymptotics: a Numerical Check

    This note addresses the meander enumeration problem: "Count all topologically inequivalent configurations of a closed planar non-self-intersecting curve crossing a line through a given number of points." We review a description of meanders introduced recently in terms of the coupling to gravity of a two-flavored fully packed loop model. The subsequent analytic predictions for various meandric configuration exponents are checked against exact enumeration, using a transfer matrix method, with excellent agreement. Comment: 48 pages, 24 figures, TeX, harvmac, eps
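    As an illustration of the quantity being enumerated, the snippet below counts closed meanders of small order by brute force, using the standard characterization of a meander as a pair of non-crossing arch systems (above and below the line) whose superposition forms a single closed loop; this naive enumeration is unrelated to the transfer matrix method of the paper and merely reproduces the known small meander numbers 1, 2, 8, 42.

```python
def noncrossing_matchings(points):
    """All non-crossing perfect matchings of a tuple of points on a line."""
    if not points:
        yield []
        return
    first = points[0]
    for idx in range(1, len(points), 2):     # partner must leave even gaps
        partner = points[idx]
        inside, outside = points[1:idx], points[idx + 1:]
        for m_in in noncrossing_matchings(inside):
            for m_out in noncrossing_matchings(outside):
                yield [(first, partner)] + m_in + m_out

def arcs_to_map(arcs):
    """Map each point to its partner within one arch system."""
    nxt = {}
    for a, b in arcs:
        nxt[a], nxt[b] = b, a
    return nxt

def is_single_loop(upper, lower, n_points):
    """Starting at point 0, alternately follow upper and lower arcs and
    check that the resulting closed loop visits every point."""
    up, low = arcs_to_map(upper), arcs_to_map(lower)
    seen, cur, use_upper = 1, up[0], False
    while cur != 0:
        seen += 1
        cur = up[cur] if use_upper else low[cur]
        use_upper = not use_upper
    return seen == n_points

def meander_number(n):
    """Number of closed meanders crossing the line 2n times (naive count)."""
    matchings = list(noncrossing_matchings(tuple(range(2 * n))))
    return sum(is_single_loop(u, v, 2 * n)
               for u in matchings for v in matchings)

if __name__ == "__main__":
    # Known values for orders 1..4 are 1, 2, 8, 42.
    print([meander_number(n) for n in range(1, 5)])
```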

    Video Interpolation using Optical Flow and Laplacian Smoothness

    Non-rigid video interpolation is a common computer vision task. In this paper we present an optical flow approach that adopts a Laplacian cotangent mesh constraint to enhance local smoothness. Similar to Li et al., our approach fits a mesh to the image with a resolution of up to one vertex per pixel and uses angle constraints to ensure sensible local deformations between image pairs. The Laplacian mesh constraints are expressed wholly inside the optical flow optimization and can be applied in a straightforward manner to a wide range of image tracking and registration problems. We evaluate our approach on several benchmark datasets, including the Middlebury and Garg et al. datasets. In addition, we show the application of our method to constructing 3D Morphable Facial Models from dynamic 3D data.
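    A minimal sketch of attaching a Laplacian smoothness term to a per-vertex flow field on an image-aligned mesh follows; it uses a uniform (umbrella) graph Laplacian on a 4-connected grid and a toy quadratic data term rather than the cotangent-weighted Laplacian and full optical-flow energy described above.

```python
import numpy as np

def umbrella_laplacian(flow):
    """Uniform graph Laplacian of a (H, W, 2) per-vertex displacement field
    on a 4-connected grid, with crude (shrinking) boundary handling."""
    lap = -4.0 * flow
    lap[1:, :] += flow[:-1, :]
    lap[:-1, :] += flow[1:, :]
    lap[:, 1:] += flow[:, :-1]
    lap[:, :-1] += flow[:, 1:]
    return lap

def energy_and_gradient(flow, flow_data, lam=1.0):
    """E(f) = ||f - f_data||^2 + lam * (smoothness), where the smoothness
    term is the graph form -f^T L f; its gradient is -2 * lam * L f."""
    lap = umbrella_laplacian(flow)
    data = flow - flow_data
    energy = np.sum(data**2) - lam * np.sum(flow * lap)
    grad = 2.0 * data - 2.0 * lam * lap
    return energy, grad

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noisy = rng.normal(0, 1, size=(32, 32, 2))      # stand-in data-term flow
    flow = noisy.copy()
    for _ in range(200):                            # plain gradient descent
        e, g = energy_and_gradient(flow, noisy, lam=1.0)
        flow -= 0.05 * g
    print("energy after smoothing:", e)
```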