19 research outputs found

    Models of Distorted and Evolving Dark Matter Halos

    We investigate the ability of basis function expansions to reproduce the evolution of a Milky Way-like dark matter halo, extracted from a cosmological zoom-in simulation. For each snapshot, the density of the halo is reduced to a basis function expansion, with interpolation used to recreate the evolution between snapshots. The angular variation of the halo density is described by spherical harmonics, and the radial variation either by biorthonormal basis functions adapted to handle truncated haloes or by splines. High-fidelity orbit reconstructions are attainable using either method with similar computational expense. We quantify how the error in the reconstructed orbits varies with expansion order and snapshot spacing. Despite the many possible biorthonormal expansions, it is hard to beat a conventional Hernquist-Ostriker expansion with a moderate number of terms (≳15 radial and ≳6 angular). As two applications of the developed machinery, we assess the impact of the time-dependence of the potential on (i) the orbits of Milky Way satellites, and (ii) planes of satellites as observed in the Milky Way and other nearby galaxies. Time evolution over the last 5 Gyr introduces an uncertainty in the Milky Way satellites' orbital parameters of ∼15 per cent, comparable to that induced by the observational errors or the uncertainty in the present-day Milky Way potential. On average, planes of satellites grow at similar rates in evolving and time-independent potentials. There can be more, or less, growth in the plane's thickness, if the plane becomes less, or more, aligned with the major or minor axis of the evolving halo. Comment: MNRAS, submitted
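    The interpolation step described in this abstract can be illustrated with a toy sketch (not the paper's machinery): expansion coefficients stored at discrete snapshot times are linearly interpolated, and an orbit is integrated in the resulting time-dependent potential. Here the "expansion" is reduced to a single evolving Plummer monopole, and all numbers are invented.

```python
import numpy as np

# Toy sketch: coefficients of a basis-function expansion are stored at
# snapshot times and linearly interpolated in between; an orbit is then
# integrated in the resulting time-dependent potential. The "expansion"
# here is a single Plummer monopole whose mass and scale radius evolve.

SNAP_TIMES = np.array([0.0, 1.0, 2.0])          # snapshot times (arbitrary units)
SNAP_COEFFS = np.array([[1.0, 1.0],             # (mass, scale radius) per snapshot
                        [1.2, 1.1],
                        [1.5, 1.2]])

def coeffs_at(t):
    """Linear interpolation of expansion coefficients between snapshots."""
    return np.array([np.interp(t, SNAP_TIMES, SNAP_COEFFS[:, k])
                     for k in range(SNAP_COEFFS.shape[1])])

def accel(x, t):
    """Acceleration from the interpolated (Plummer) potential, G = 1."""
    m, a = coeffs_at(t)
    r2 = np.dot(x, x)
    return -m * x / (r2 + a * a) ** 1.5

def leapfrog(x, v, t0, t1, n):
    """Kick-drift-kick integration of an orbit in the evolving potential."""
    dt = (t1 - t0) / n
    t = t0
    for _ in range(n):
        v = v + 0.5 * dt * accel(x, t)
        x = x + dt * v
        t += dt
        v = v + 0.5 * dt * accel(x, t)
    return x, v

# A bound orbit integrated through the evolving snapshots.
x, v = leapfrog(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.8, 0.0]),
                0.0, 2.0, 2000)
```

    In the paper's actual setting the coefficient vector is much larger (spherical-harmonic times radial terms), but the interpolate-then-integrate structure is the same.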

    Accurate photoionisation cross section for He at non-resonant photon energies

    The total single-photon ionisation cross section was calculated for helium atoms in their ground state. Using a full configuration-interaction approach, the photoionisation cross section was extracted from the complex-scaled resolvent. In the energy range from the ionisation threshold to 59 eV our results agree with an earlier B-spline based calculation, in which the continuum is box discretised, to within a relative error of 0.01% in the non-resonant part of the spectrum. Above the He⁺⁺ threshold our results agree, on the other hand, very well with a recent Floquet calculation. Thus our calculation confirms the previously reported deviations from the experimental reference data outside the claimed error estimate. In order to extend the calculated spectrum to very high energies, an analytical hydrogenic-type model tail is introduced that should become asymptotically exact in the limit of infinite photon energy. Its universality is investigated by considering also H⁻, Li⁺, and HeH⁺. With the aid of the tail, corrections to the dipole approximation are estimated. Comment: 20 pages, 7 figures, 2 tables
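    The high-energy extension works by matching a power-law tail to the end of the computed spectrum. A minimal sketch with synthetic data (the hydrogenic asymptote σ ∝ E^(−7/2) is standard; the prefactor and energy values below are invented):

```python
import numpy as np

# Illustrative sketch: extend a computed photoionisation spectrum to
# high energies with a hydrogenic-type power-law tail,
# sigma(E) ~ A * E**(-7/2), fitted to the last reliable points of the
# numerical cross section. The data here are synthetic.

E = np.linspace(100.0, 300.0, 20)      # photon energies (eV), synthetic
sigma = 5.0e3 * E ** -3.5              # synthetic "computed" cross section

# Fit log(sigma) = log(A) + p * log(E); for a hydrogenic tail p -> -7/2.
p, logA = np.polyfit(np.log(E), np.log(sigma), 1)

def tail(E_hi):
    """Extrapolated cross section at high photon energy."""
    return np.exp(logA) * E_hi ** p
```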

    Isotropic Polyharmonic B-Splines: Scaling Functions and Wavelets

    In this paper, we use polyharmonic B-splines to build multidimensional wavelet bases. These functions are nonseparable, multidimensional basis functions that are localized versions of radial basis functions. We show that Rabut's elementary polyharmonic B-splines do not converge to a Gaussian as the order parameter increases, as opposed to their separable B-spline counterparts. Therefore, we introduce a more isotropic localization operator that guarantees this convergence, resulting in the isotropic polyharmonic B-splines. Next, we focus on the two-dimensional quincunx subsampling scheme. This configuration is of particular interest for image processing, because it yields a finer scale progression than the standard dyadic approach. However, up until now, the design of appropriate filters for the quincunx scheme has mainly been done using the McClellan transform. In our approach, we start from the scaling functions, which are the polyharmonic B-splines and, as such, explicitly known, and we derive a family of polyharmonic spline wavelets corresponding to different flavors of the semi-orthogonal wavelet transform; e.g., orthonormal, B-spline, and dual. The filters are automatically specified by the scaling relations satisfied by these functions. We prove that the isotropic polyharmonic B-spline wavelet converges to a combination of four Gabor atoms, which are well separated in the frequency domain. We also show that these wavelets are nearly isotropic and that they behave as an iterated Laplacian operator at low frequencies. We describe an efficient fast Fourier transform-based implementation of the discrete wavelet transform based on polyharmonic B-splines
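    The quincunx subsampling scheme mentioned above keeps only the samples on the sublattice with even coordinate sum, so each transform level halves the number of coefficients rather than quartering it as dyadic subsampling does. A minimal sketch of the lattice selection (illustrative only, not the paper's filter-bank code):

```python
import numpy as np

# Minimal sketch of quincunx subsampling: one transform level keeps the
# samples with i + j even, halving the sample count per level (a finer
# scale progression than dyadic subsampling, which divides by 4).

def quincunx_mask(h, w):
    """Boolean mask selecting the quincunx sublattice (i + j even)."""
    i, j = np.mgrid[0:h, 0:w]
    return (i + j) % 2 == 0

img = np.arange(64, dtype=float).reshape(8, 8)
kept = img[quincunx_mask(8, 8)]   # exactly half of the samples survive
```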

    Least-Squares Image Resizing Using Finite Differences

    We present an optimal spline-based algorithm for the enlargement or reduction of digital images with arbitrary (noninteger) scaling factors. This projection-based approach can be realized thanks to a new finite difference method that allows the computation of inner products with analysis functions that are B-splines of any degree n. A noteworthy property of the algorithm is that the computational complexity per pixel does not depend on the scaling factor a. For a given choice of basis functions, the results of our method are consistently better than those of the standard interpolation procedure; the present scheme achieves a reduction of artifacts such as aliasing and blocking and a significant improvement of the signal-to-noise ratio. The method can be generalized to include other classes of piecewise polynomial functions, expressed as linear combinations of B-splines and their derivatives
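    The degree-0 case of the finite-difference idea can be sketched as follows: the inner product of the signal with a box analysis function of arbitrary width is a finite difference of the signal's running sum, so the cost per output sample does not depend on the scaling factor. This is an illustrative 1-d reconstruction, not the authors' code; higher-degree B-splines follow by iterating the running-sum/difference step.

```python
import numpy as np

# Sketch of least-squares resizing with degree-0 (box) B-splines:
# each output sample is the average of the input over one output cell,
# computed as a finite difference of the running integral, so the work
# per pixel is independent of the (possibly non-integer) scale factor.

def resize_ls0(signal, out_len):
    """Least-squares resize with piecewise-constant (degree-0) B-splines."""
    n = len(signal)
    a = n / out_len                            # scaling factor, may be non-integer
    cum = np.concatenate(([0.0], np.cumsum(signal)))

    def integral(t):                           # integral of the signal over [0, t]
        t = float(np.clip(t, 0.0, n))
        k = int(t)
        return cum[k] + (t - k) * (signal[k] if k < n else 0.0)

    # average over each output cell [i*a, (i+1)*a): one finite difference
    return np.array([(integral((i + 1) * a) - integral(i * a)) / a
                     for i in range(out_len)])

halved = resize_ls0(np.array([0.0, 0.0, 2.0, 2.0]), 2)   # -> [0., 2.]
```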

    Reconstruction of Functions From Non-uniformly Distributed Sampled Data in Shift-Invariant Frame Subspaces

    The focus of this research is to study and implement efficient iterative reconstruction algorithms. Iterative reconstruction algorithms are used to reconstruct bandlimited signals in shift-invariant L2 subspaces from a set of non-uniformly distributed sampled data. The Shannon-Whittaker reconstruction formula commonly used in uniform sampling problems is insufficient for reconstructing functions from non-uniformly distributed sampled data, so new techniques are required. There are many traditional approaches to non-uniform sampling and reconstruction, among which the Adaptive Weights (AW) algorithm is considered the most efficient. Recently, the Partitions of Unity (PoU) algorithm has been suggested to outperform the AW, although there has been little literature covering its numerical performance. A study and analysis of the implementation of the Adaptive Weights (AW) and Partitions of Unity (PoU) reconstruction methods is conducted. The algorithms consider the missing data problem, defined as reconstructing continuous-time (CT) signals from non-uniform samples which result from missing samples on a uniform grid. Essentially, the algorithms convert the non-uniform grid to a uniform grid. The implemented iterative methods construct CT bandlimited functions in frame subspaces. Bandlimited functions are considered to be a superposition of basis functions, named frames. PoU is a variation of AW; they differ in the choice of frame, because each frame produces a different approximation operator and convergence rate. If efficiency is defined in terms of norm convergence and computational time, then of the two methods discussed, the PoU method is the more efficient. The AW method is slower and converges to a higher error than the PoU. However, AW compensates for its slowness and lower accuracy by remaining convergent and robust for large sampling gaps and by being less sensitive to sampling irregularities. 
The impact of additive white Gaussian noise on the performance of the two algorithms is also investigated. The numerical tools utilized in this research consist of the theory of discrete irregular sampling, frames, and iterative techniques. The developed software provides a platform for sampling signals under non-ideal conditions with real devices
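    The missing-data problem described above can be illustrated with a toy iteration (a Papoulis-Gerchberg-type scheme, not the AW or PoU implementations studied in the thesis): alternate between re-imposing the known samples and projecting onto the bandlimited subspace.

```python
import numpy as np

# Toy sketch of the missing-data problem: a bandlimited discrete signal
# with samples missing from a uniform grid is recovered by alternating
# between consistency with the known samples and an FFT low-pass
# projection onto the bandlimited subspace.

rng = np.random.default_rng(0)
N, B = 128, 8                          # grid size, bandwidth (|k| <= B)

def lowpass(x):
    """Orthogonal projection onto the bandlimited subspace."""
    X = np.fft.fft(x)
    X[B + 1:N - B] = 0.0
    return np.fft.ifft(X).real

truth = lowpass(rng.standard_normal(N))   # bandlimited test signal
known = rng.random(N) > 0.4               # ~60% of the samples survive

x = np.zeros(N)
for _ in range(1000):
    x[known] = truth[known]               # re-impose the known samples
    x = lowpass(x)                        # project to the bandlimited space

err = np.max(np.abs(x - truth))
```

    The convergence rate of such iterations depends on the sampling geometry, which is exactly the trade-off (speed versus robustness to large gaps) that distinguishes AW from PoU in the text above.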

    Spherically symmetric continuum approach to the simulation of molecular ionization processes

    Photoelectron and autoionization spectroscopy are widespread tools to analyze the molecular electronic structure. An unambiguous assignment of the experimental data is often only possible with support from theoretical modeling. Unfortunately, the quantum mechanical simulation of the outgoing electron is not feasible for many scientifically interesting molecules. This thesis suggests a spherically symmetric continuum orbital model that takes into account the spherically averaged molecular potential to improve the description of the outgoing electrons

    The multiresolution Fourier transform: a general purpose tool for image analysis

    The extraction of meaningful features from an image forms an important area of image analysis. It enables the task of understanding visual information to be implemented in a coherent and well defined manner. However, although many of the traditional approaches to feature extraction have proved to be successful in specific areas, recent work has suggested that they do not provide sufficient generality when dealing with complex analysis problems such as those presented by natural images. This thesis considers the problem of deriving an image description which could form the basis of a more general approach to feature extraction. It is argued that an essential property of such a description is that it should have locality in both the spatial domain and in some classification space over a range of scales. Using the 2-d Fourier domain as a classification space, a number of image transforms that might provide the required description are investigated. These include combined representations such as a 2-d version of the short-time Fourier transform (STFT), and multiscale or pyramid representations such as the wavelet transform. However, it is shown that these are limited in their ability to provide sufficient locality in both domains and as such do not fulfill the requirement for generality. To overcome this limitation, an alternative approach is proposed in the form of the multiresolution Fourier transform (MFT). This has a hierarchical structure in which the outermost levels are the image and its discrete Fourier transform (DFT), whilst the intermediate levels are combined representations in space and spatial frequency. These levels are defined to be optimal in terms of locality and their resolution is such that within the transform as a whole there is a uniform variation in resolution between the spatial domain and the spatial frequency domain. This ensures that locality is provided in both domains over a range of scales. 
    The MFT is also invertible and amenable to efficient computation via familiar signal processing techniques. Examples and experiments illustrating its properties are presented. The problem of extracting local image features such as lines and edges is then considered. A multiresolution image model based on these features is defined and it is shown that the MFT provides an effective tool for estimating its parameters. The model is also suitable for representing curves and a curve extraction algorithm is described. The results presented for synthetic and natural images compare favourably with existing methods. Furthermore, when coupled with the previous work in this area, they demonstrate that the MFT has the potential to provide a basis for the solution of general image analysis problems
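    The hierarchy the thesis describes, signal at one extreme, plain DFT at the other, combined space/frequency representations in between, can be loosely mimicked in 1-d by short-time Fourier transforms whose window length doubles from level to level. This sketch is only an analogy for intuition, not the MFT itself:

```python
import numpy as np

# Illustrative 1-d analogue of a multiresolution Fourier hierarchy:
# non-overlapping windowed DFTs at several window lengths. Short windows
# give spatial locality; at win = len(signal) the level degenerates to
# the plain DFT, mirroring the outermost MFT levels.

def stft_level(signal, win):
    """Non-overlapping windowed DFTs with window length `win`."""
    n = len(signal) // win
    frames = signal[:n * win].reshape(n, win)
    return np.fft.fft(frames, axis=1)

x = np.sin(2 * np.pi * 8 * np.arange(256) / 256)   # pure 8-cycle tone
levels = {w: stft_level(x, w) for w in (16, 64, 256)}
full = levels[256][0]   # win = 256: the ordinary DFT of the signal
```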

    Non-linear Recovery of Sparse Signal Representations with Applications to Temporal and Spatial Localization

    Foundations of signal processing are heavily based on Shannon's sampling theorem for acquisition, representation and reconstruction. This theorem states that provided a signal contains no frequency components above the Nyquist rate, which is half of the sampling rate, it can be perfectly reconstructed from its samples. Increasing evidence shows that the requirements imposed by Shannon's sampling theorem are too conservative for many naturally-occurring signals, which can be accurately characterized by sparse representations that require lower sampling rates closer to the signal's intrinsic information rates. Finite rate of innovation (FRI) is a new theory that allows underlying sparse signal representations to be extracted while operating at a reduced sampling rate. The goal of this PhD work is to advance reconstruction techniques for sparse signal representations from both theoretical and practical points of view. Specifically, the FRI framework is extended to deal with applications that involve temporal and spatial localization of events, including inverse source problems from radiating fields. We propose a novel reconstruction method using a model-fitting approach that is based on minimizing the fitting error subject to an underlying annihilation system given by Prony's method. First, we showed that this is related to the problem known as structured low-rank matrix approximation, as in the structured total least squares problem. Then, we proposed to solve our problem under three different constraints using the iterative quadratic maximum likelihood algorithm. Our analysis and simulation results indicate that the proposed algorithms improve the robustness of the results with respect to common FRI reconstruction schemes. We have further developed the model-fitting approach to analyze spontaneous brain activity as measured by functional magnetic resonance imaging (fMRI). 
    For this, we considered the noisy fMRI time course for every voxel as a convolution between an underlying activity-inducing signal (i.e., a stream of Diracs) and the hemodynamic response function (HRF). We then validated this method using experimental fMRI data acquired during an event-related study. The results showed for the first time evidence for the practical usage of FRI for fMRI data analysis. We also addressed the problem of retrieving a sparse source distribution from the boundary measurements of a radiating field. First, based on Green's theorem, we proposed a sensing principle that relates the boundary measurements to the source distribution. We focused on characterizing these sensing functions, with particular attention to those that can be derived from holomorphic functions, as they allow control over the spatial decay of the sensing functions. With this selection, we developed an FRI-inspired non-iterative reconstruction algorithm. Finally, we developed an extension to the sensing principle (termed eigensensing) where we choose the spatial eigenfunctions of the Laplace operator as the sensing functions. With this extension, we showed that the eigensensing principle makes it possible to extract partial Fourier measurements of the source functions from boundary measurements. We considered photoacoustic tomography as a potential application of these theoretical developments
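    The annihilation system underlying the model-fitting approach is the classical Prony step: the moments s[k] = Σ a_m u_m^k of K Diracs are annihilated by a filter whose roots are the Dirac locations. A noise-free toy sketch (the thesis's iterative quadratic ML handles the noisy case; locations and amplitudes below are invented):

```python
import numpy as np

# Annihilating-filter (Prony) sketch: from moments s[k] = sum_m a_m*u_m**k,
# a filter h with h[0] = 1 satisfying sum_j h[j]*s[k-j] = 0 has the Dirac
# locations u_m as the roots of its z-transform. Noise-free toy data.

K = 2
u = np.exp(-2j * np.pi * np.array([0.2, 0.55]))    # locations on unit circle
a = np.array([1.0, 0.7])                           # amplitudes
s = np.array([np.sum(a * u ** k) for k in range(2 * K + 1)])

# Build the Toeplitz annihilation system and solve for h[1:], with h[0] = 1.
T = np.array([[s[k - j] for j in range(1, K + 1)] for k in range(K, 2 * K + 1)])
rhs = -s[K:2 * K + 1]
h_tail = np.linalg.lstsq(T, rhs, rcond=None)[0]

roots = np.roots(np.concatenate(([1.0], h_tail)))  # recovered locations
```

    In the noisy setting the moment matrix is only approximately low-rank, which is exactly where the structured low-rank approximation and iterative quadratic ML machinery of the thesis comes in.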