
    Making distributed computing infrastructures interoperable and accessible for e-scientists at the level of computational workflows

    As distributed computing infrastructures evolve, and as their uptake by user communities grows, making different types of infrastructures based on a heterogeneous set of middleware interoperable is becoming crucial. This PhD submission, based on twenty scientific publications, presents a unique solution to the challenge of the seamless interoperation of distributed computing infrastructures at the level of workflows. The submission investigates workflow-level interoperation inside a particular workflow system (intra-workflow interoperation) and between different workflow solutions (inter-workflow interoperation). In both cases the interoperation of workflow component execution and the feeding of data into these workflow components are considered. The framework invented and developed here enables the execution of legacy applications, grid jobs and services on multiple grid systems, the feeding of data from heterogeneous file and data storage solutions to these workflow components, and the embedding of non-native workflows into a hosting meta-workflow. Moreover, the solution provides a high-level user interface that enables e-scientist end-users to conveniently access the interoperable grid solutions without requiring them to study or understand the technical details of the underlying infrastructure. The candidate has also developed an application porting methodology that enables the systematic porting of applications to interoperable and interconnected grid infrastructures, and facilitates the exploitation of the above technical framework.
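
    As a rough illustration of the meta-workflow idea described above, the sketch below models a workflow whose nodes may be grid jobs, services, or embedded non-native sub-workflows, each fed from a storage URL. All names and structure are hypothetical; they are not the candidate's actual framework or its API.

```python
# Hypothetical sketch of a meta-workflow embedding a non-native sub-workflow.
# Names and structure are illustrative only, not the framework described above.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                                    # "grid_job", "service", or "embedded_workflow"
    inputs: dict = field(default_factory=dict)   # logical name -> storage URL

@dataclass
class MetaWorkflow:
    nodes: list
    edges: list                                  # (producer, consumer) dependency pairs

    def run(self):
        # Naive execution: assumes nodes are listed in dependency order.
        # A real engine would resolve the DAG, stage data from the
        # heterogeneous storage URLs, and dispatch each node to the
        # middleware able to execute it.
        for node in self.nodes:
            print(f"dispatching {node.name} ({node.kind}) with inputs {node.inputs}")

wf = MetaWorkflow(
    nodes=[
        Node("preprocess", "grid_job", {"raw": "gsiftp://siteA/data/run1"}),
        Node("analyse", "embedded_workflow", {"clean": "srm://siteB/clean"}),
    ],
    edges=[("preprocess", "analyse")],
)
wf.run()
```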

    Sub-Nyquist Sampling: Bridging Theory and Practice

    Sampling theory encompasses all aspects related to the conversion of continuous-time signals to discrete streams of numbers. The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. In modern applications, an increasing number of functions is being pushed into sophisticated software algorithms, leaving only delicate, finely-tuned tasks to the circuit level. In this paper, we review sampling strategies which target reduction of the ADC rate below the Nyquist rate. Our survey covers classic works from the early 1950s through recent publications of the past several years. The prime focus is bridging theory and practice, that is, to pinpoint the potential of sub-Nyquist strategies to make their way from the mathematics to the hardware. In that spirit, we integrate contemporary theoretical viewpoints, which study signal modeling in a union of subspaces, together with a taste of practical aspects, namely how these avant-garde modalities boil down to concrete signal processing systems. Our hope is that this presentation style will attract the interest of both researchers and engineers, promote the sub-Nyquist premise into practical applications, and encourage further research into this exciting new frontier.
    Comment: 48 pages, 18 figures; to appear in IEEE Signal Processing Magazine
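
    As a toy illustration of the sub-Nyquist premise for signals sparse in frequency, the sketch below takes far fewer random time samples than the Nyquist grid would require and recovers the active tones by orthogonal matching pursuit over a DFT dictionary. This digital toy is an assumption-laden stand-in: the architectures surveyed in the paper are analog front ends, not this construction.

```python
# Minimal sub-Nyquist sketch: recover K tones from M << N random samples
# via orthogonal matching pursuit (OMP). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 512, 64, 3                   # Nyquist grid length, samples taken, sparsity

true_bins = rng.choice(N, K, replace=False)
spec = np.zeros(N, complex)
spec[true_bins] = 1.0                  # K active tones in an N-point spectrum
sig = np.fft.ifft(spec) * N            # time-domain signal on the Nyquist grid

picks = np.sort(rng.choice(N, M, replace=False))
y = sig[picks]                         # M sub-Nyquist measurements
A = np.exp(2j * np.pi * np.outer(picks, np.arange(N)) / N)  # partial DFT dictionary

# OMP: greedily grow the support, re-fit the coefficients by least squares.
support, residual = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

print(sorted(support), sorted(true_bins.tolist()))  # recovered vs true tone indices
```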

    Multidimensional random sampling for Fourier transform estimation

    This research considers the calculation of the Fourier transform of multidimensional signals. The calculations are based on random sampling, where the sampling points are nonuniformly distributed according to strategically selected probability functions, providing opportunities that are unavailable in the uniform sampling environment. The latter imposes a sampling density of at least the Nyquist density; otherwise, alias frequencies occur in the processed bandwidth, which can lead to irresolvable processing problems. Random sampling can mitigate the Nyquist limit that classical uniform-sampling-based approaches endure, for the purpose of performing direct (with no prefiltering or downconversion) Fourier analysis of (high-frequency) signals with unknown spectral support using a low sampling density. Lowering the sampling density while achieving the same signal processing objective can be an efficient, if not essential, way of exploiting system resources in terms of power, hardware complexity and acquisition-processing time. In this research we investigate and devise novel random sampling estimation schemes for the multidimensional Fourier transform. The main focus of the investigation and development is the quality of the estimated Fourier transform in terms of the sampling density, an aspect crucial to the core objective of random sampling: lowering the sampling density. This research was motivated by the applicability of random-sampling-based approaches to determining the Fourier transform in multidimensional Nuclear Magnetic Resonance (NMR) spectroscopy, to resolve the critical issue of its long experimental time.
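
    The basic estimator underlying such schemes can be sketched in a few lines: with sampling instants t_n drawn independently from a density p(t) over the observation window, the sum (1/N) Σ_n x(t_n) e^(-j2πf t_n) / p(t_n) is an unbiased estimate of the Fourier transform, whatever the average sampling density. The NumPy sketch below uses a uniform density over [0, T]; it is illustrative only and is not the estimation schemes devised in this research.

```python
# Random-sampling Fourier estimate of a tone far above the average rate.
import numpy as np

rng = np.random.default_rng(1)
T, N = 1.0, 400                       # window length, number of random samples
f0 = 900.0                            # tone well above the average rate N/T = 400 Hz
x = lambda t: np.cos(2 * np.pi * f0 * t)

t = rng.uniform(0.0, T, N)            # i.i.d. uniform sampling instants, p(t) = 1/T
f = np.arange(0.0, 1200.0, 1.0)       # probe frequencies, far beyond N/(2T)

# Unbiased Monte Carlo estimate: X(f) ~ (T/N) * sum_n x(t_n) exp(-j2pi f t_n)
E = np.exp(-2j * np.pi * np.outer(f, t))
X_hat = (T / N) * (E @ x(t))

print(f[np.argmax(np.abs(X_hat))])    # peak near f0 = 900 Hz, with no alias
```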

    Non-uniform sampling and reconstruction of multi-band signals and its application in wideband spectrum sensing of cognitive radio

    Sampling theories lie at the heart of signal processing devices and communication systems. To accommodate high operating rates while retaining low computational cost, efficient analog-to-digital converters (ADCs) must be developed. Many of the limitations encountered in current converters are due to the traditional assumption that the sampling stage needs to acquire the data at the Nyquist rate, corresponding to twice the signal bandwidth. In this thesis, a method of sampling far below the Nyquist rate for sparse-spectrum multiband signals is investigated. The method is called periodic non-uniform sampling, and it is useful in a variety of applications such as data converters, sensor array imaging and image compression. Firstly, a model of the sampling system in the frequency domain is developed. It relates the Fourier transform of the observed compressed samples to the unknown spectrum of the signal. Next, a reconstruction process based on compressed sensing is presented. We show that the sampling parameters play an important role in the average sampling ratio and the quality of the reconstructed signal. The concept of the condition number and its effect on the reconstructed signal in the presence of noise is introduced, and a feasible approach for choosing a sampling pattern with a low condition number is given. We distinguish between the cases of known-spectrum and unknown-spectrum signals. One of the model parameters is determined by the signal band locations, which, in the case of unknown-spectrum signals, must be estimated from the sampled data. We therefore apply both subspace methods and non-linear least-squares methods to estimate this parameter. We also use information-theoretic criteria (Akaike and MDL) and the exponential fitting test for model order selection in this case.
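
    The role of the sampling pattern can be made concrete with a small sketch. In periodic non-uniform (multi-coset) sampling with period N cells and L kept cosets c_1, ..., c_L, reconstructing the occupied spectral cells amounts to inverting a matrix with entries e^(j2π c_i k / N), and its condition number governs how much noise is amplified. The NumPy sketch below exhaustively searches for a well-conditioned pattern in a toy known-spectrum setting; the sizes and band locations are assumptions, not values from the thesis.

```python
# Choosing a multi-coset sampling pattern with a low condition number.
import numpy as np
from itertools import combinations

N, L = 12, 4                           # sampling period in cells, cosets kept per period
bands = [0, 2, 7]                      # occupied spectral cells (known-spectrum case)

def cond_of(cosets):
    # Reconstruction matrix for multi-coset sampling: A[i, k] = exp(j 2 pi c_i k / N)
    A = np.exp(2j * np.pi * np.outer(cosets, bands) / N)
    return np.linalg.cond(A)

best = min(combinations(range(N), L), key=cond_of)     # exhaustive pattern search
print(best, cond_of(best))                             # well-conditioned pattern
print((0, 1, 2, 3), cond_of((0, 1, 2, 3)))             # bunched pattern: worse
```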

    Sampling and quantization for optimal reconstruction

    Thesis (Ph.D.), Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2011, by Shay Maymon. This thesis develops several approaches for signal sampling and reconstruction given different assumptions about the signal, the type of errors that occur, and the information available about the signal. The thesis first considers the effects of quantization in the environment of interleaved, oversampled multi-channel measurements, with possibly different quantization step sizes in each channel and varied timing offsets between channels. Considering sampling together with quantization in the digital representation of the continuous-time signal is shown to be advantageous. With uniform quantization and equal quantizer step size in each channel, the effective overall signal-to-noise ratio in the reconstructed output is shown to be maximized when the timing offsets between channels are identical, resulting in uniform sampling when the channels are interleaved. However, with different levels of accuracy in each channel, the choice of identical timing offsets between channels is in general not optimal, with better results often achievable with varied timing offsets, corresponding to recurrent nonuniform sampling when the channels are interleaved. Similarly, it is shown that with varied timing offsets, equal quantization step size in each channel is in general not optimal, and a higher signal-to-quantization-noise ratio is often achievable with different levels of accuracy in the quantizers of different channels. Another aspect of this thesis considers nonuniform sampling in which the sampling grid is modeled as a perturbation of a uniform grid. Perfect reconstruction from these nonuniform samples is in general computationally difficult; as an alternative, this work presents a class of approximate reconstruction methods based on the use of time-invariant lowpass filtering, i.e., sinc interpolation. When the average sampling rate is less than the Nyquist rate, i.e., in sub-Nyquist sampling, the artifacts produced when these reconstruction methods are applied to the nonuniform samples can be preferable in certain applications to the aliasing artifacts that occur in uniform sampling. The thesis also explores various approaches to avoiding aliasing in sampling. These approaches exploit additional information about the signal apart from its bandwidth and suggest using alternative pre-processing instead of the traditional linear time-invariant anti-aliasing filtering prior to sampling.
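
    The approximate reconstruction idea is simple enough to sketch: treat each nonuniform sample as if it had been taken on the uniform grid and apply ordinary time-invariant sinc interpolation. The NumPy sketch below does exactly that for a perturbed uniform grid; the signal, rates and perturbation level are assumed for illustration and are not taken from the thesis.

```python
# Sinc interpolation applied to samples from a perturbed uniform grid.
import numpy as np

rng = np.random.default_rng(2)
T = 0.05                                         # nominal sampling period (20 Hz rate)
n = np.arange(200)
tau = rng.uniform(-0.2 * T, 0.2 * T, n.size)     # perturbations of the uniform grid
t_n = n * T + tau                                # actual nonuniform sampling instants

x = lambda t: np.sin(2 * np.pi * 3.0 * t) + 0.5 * np.cos(2 * np.pi * 7.0 * t)
samples = x(t_n)                                 # bandlimited signal (7 Hz < 10 Hz)

# Approximate reconstruction: ignore the perturbations and sinc-interpolate
# as if sample n had been taken at the uniform grid point n*T.
t = np.linspace(1.0, 9.0, 500)                   # interior points, away from edges
x_hat = np.array([np.sum(samples * np.sinc((ti - n * T) / T)) for ti in t])

print(np.max(np.abs(x_hat - x(t))))              # error induced by the grid perturbation
```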

    Theory and realization of novel algorithms for random sampling in digital signal processing

    Random sampling is a technique which overcomes the alias problem in regular sampling. The randomization, however, destroys the symmetry property of the transform kernel of the discrete Fourier transform. Hence, when transforming a randomly sampled sequence to its frequency spectrum, the fast Fourier transform cannot be applied and the computational complexity is N^2. The objectives of this research project are: (1) To devise sampling methods for random sampling such that computation may be reduced while the anti-alias property of random sampling is maintained. Two methods of inserting limited regularities into the randomized sampling grids are proposed: parallel additive random sampling and hybrid additive random sampling, both of which can save at least 75% of the multiplications required. The algorithms also lend themselves to implementation on a multiprocessor system, which further enhances the speed of the evaluation. (2) To study the auto-correlation sequence of a randomly sampled sequence as an alternative means of confirming its anti-alias property. The anti-alias property of the two proposed methods can be confirmed by using convolution in the frequency domain; the same conclusion is also reached by analysing, in the spatial domain, the auto-correlation of such sample sequences. A technique to evaluate the auto-correlation sequence of a randomly sampled sequence with a regular step size is proposed. The technique may also serve as an algorithm to convert a randomly sampled sequence to a regularly spaced sequence having a desired Nyquist frequency. (3) To provide a rapid spectral estimation using a coarse kernel. The approximate method proposed by Mason in 1980, which trades accuracy for speed of computation, is introduced to make random sampling more attractive. (4) To suggest possible applications for random and pseudo-random sampling. To fully exploit its advantages, random sampling has been adopted in measurement instruments where computing a spectrum is either minimal or not required; such applications in instrumentation are easily found in the literature. In this thesis, two applications in digital signal processing are introduced. (5) To suggest an inverse transformation for random sampling so as to complete a two-way process and broaden its scope of application. Apart from the above, a case study of realizing the prime factor algorithm with regular sampling on a transputer network is given in Chapter 2, and a rough estimation of the signal-to-noise ratio for a spectrum obtained from random sampling is found in Chapter 3. Although random sampling is alias-free, problems in computational complexity and noise prevent it from being adopted widely in engineering applications. In the conclusions, criteria for adopting random sampling are put forward and directions for its development are discussed.
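
    The anti-alias behaviour that motivates these sampling schemes is easy to demonstrate numerically. In additive random sampling, each instant is the previous one plus a positive random increment, t_n = t_(n-1) + δ_n, so a tone above half the mean rate no longer folds onto a coherent alias; its alias energy is smeared into a noise floor. The NumPy sketch below contrasts uniform and additive random sampling at the same mean rate; the parameters are illustrative and not from the thesis.

```python
# Additive random sampling vs uniform sampling of a tone above the Nyquist limit.
import numpy as np

rng = np.random.default_rng(3)
N = 2000                               # samples; mean interval 1 s (mean rate 1 Hz)
f0 = 0.9                               # tone above the 0.5 Hz uniform Nyquist limit
x = lambda t: np.cos(2 * np.pi * f0 * t)

# Additive random sampling: t_n = t_{n-1} + delta_n, delta_n ~ U(0.2, 1.8)
t_rand = np.cumsum(rng.uniform(0.2, 1.8, N))
t_unif = np.arange(N, dtype=float)     # uniform sampling at the same mean rate

f = np.arange(0.01, 1.2, 0.005)        # probe frequencies
spectrum = lambda t: np.abs(np.exp(-2j * np.pi * np.outer(f, t)) @ x(t)) / N
idx = lambda v: np.argmin(np.abs(f - v))

for name, t in (("uniform", t_unif), ("random", t_rand)):
    s = spectrum(t)
    print(name, s[idx(0.1)], s[idx(0.9)])
# Uniform: comparable peaks at 0.1 Hz (alias of 0.9 Hz) and 0.9 Hz -> ambiguous.
# Random:  only 0.9 Hz stands out; alias energy is smeared into a noise floor.
```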