
    Collocation Method to Solve Elliptic Equations, Bivariate Poly-Sinc Approximation

    The paper proposes a collocation method to solve bivariate elliptic partial differential equations. The method uses Lagrange approximation based on collocation at Sinc points, non-equidistant interpolation points generated by conformal maps. We prove an upper bound on the error of the bivariate Lagrange approximation at these Sinc points, and we then define a collocation algorithm that uses this approximation to solve elliptic PDEs. We verify the Poly-Sinc technique on several elliptic equations and compare the approximate solutions with exact solutions.
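
    As an illustration of the ingredients named in this abstract, the sketch below (not the authors' code) generates Sinc points on an interval via the conformal map phi(x) = ln((x - a)/(b - x)) and evaluates a one-dimensional Lagrange interpolant at them; the step size h and the test function are illustrative assumptions, and the bivariate case would use a tensor product of such points.

```python
# Minimal sketch: Sinc points from a conformal map plus Lagrange interpolation.
import numpy as np

def sinc_points(a, b, n, h=None):
    """Non-equidistant Sinc points x_k = (a + b*exp(k*h)) / (1 + exp(k*h)), k = -n..n."""
    h = h or np.pi / np.sqrt(n)          # a common Sinc step-size choice (assumed here)
    k = np.arange(-n, n + 1)
    e = np.exp(k * h)
    return (a + b * e) / (1.0 + e)

def lagrange_eval(x_nodes, f_nodes, x):
    """Evaluate the Lagrange interpolant through (x_nodes, f_nodes) at points x."""
    x = np.atleast_1d(x).astype(float)
    vals = np.zeros_like(x)
    for j, xj in enumerate(x_nodes):
        lj = np.ones_like(x)
        for m, xm in enumerate(x_nodes):
            if m != j:
                lj *= (x - xm) / (xj - xm)
        vals += f_nodes[j] * lj
    return vals

# Usage: interpolate f(x) = x*(1-x)*exp(x) on (0, 1) at Sinc points.
xs = sinc_points(0.0, 1.0, 8)
fs = xs * (1 - xs) * np.exp(xs)
print(lagrange_eval(xs, fs, np.array([0.3, 0.7])))
```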

    Concepts for on-board satellite image registration, volume 1

    The NASA-NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. One very specific area of interest is the preprocessing required to register imaging sensor data which have been distorted by anomalies in subsatellite-point position and/or attitude control. The concepts and considerations involved in using state-of-the-art positioning systems such as the Global Positioning System (GPS) in concert with state-of-the-art attitude stabilization and/or determination systems to provide the required registration accuracy are discussed, with emphasis on assessing the accuracy to which a given image picture element can be located and identified, determining the algorithms required to augment the registration procedure, and evaluating the technology impact of performing these procedures on-board the satellite.

    A fast Monte Carlo scheme for additive processes and option pricing

    In this paper, we present a very fast Monte Carlo scheme for additive processes: the computational time is of the same order of magnitude as that of standard algorithms for Brownian motion. We analyze the numerical error sources in detail and propose a technique that reduces the two major sources of error. We also compare our results with a benchmark method: jump simulation with Gaussian approximation. We show an application to additive normal tempered stable processes, a class of additive processes that calibrates the implied volatility surface "exactly". The numerical results are relevant: this fast algorithm is also an accurate tool for pricing path-dependent, discretely monitored options with errors of one bp or below.
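
    To make the setting concrete, the sketch below is a generic Monte Carlo pricer for a discretely monitored option under a toy additive process whose increments are Gaussian with time-dependent volatility; it is not the paper's scheme (which targets additive normal tempered stable processes), and sigma_fn, the monitoring grid, and all numerical values are illustrative assumptions.

```python
# Minimal sketch: Monte Carlo pricing of an arithmetic Asian call under a toy
# additive process (independent, non-stationary Gaussian increments).
import numpy as np

def price_asian_call(s0, strike, r, T, n_steps, n_paths, sigma_fn, seed=0):
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, n_steps + 1)
    dt = np.diff(t)
    sigma = sigma_fn(t[1:])                               # time-dependent volatility
    var = sigma ** 2 * dt                                 # variance of each increment
    drift = (r - 0.5 * sigma ** 2) * dt                   # risk-neutral log-drift
    z = rng.standard_normal((n_paths, n_steps))
    log_s = np.log(s0) + np.cumsum(drift + np.sqrt(var) * z, axis=1)
    payoff = np.maximum(np.exp(log_s).mean(axis=1) - strike, 0.0)   # discrete monitoring
    return np.exp(-r * T) * payoff.mean()

print(price_asian_call(100, 100, 0.02, 1.0, 12, 200_000,
                       sigma_fn=lambda t: 0.2 + 0.1 * t))
```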

    Adaptive explicit time delay, frequency estimations in communications systems

    Ph.D. (Doctor of Philosophy)

    Improved Interpolation in SPH in Cases of Less Smooth Flow

    We introduce a method from Information Field Theory (IFT) [Abramovich et al., 2007] to improve interpolation in Smoothed Particle Hydrodynamics (SPH) in cases of less smooth flow. The method combines wavelet theory with B-splines for interpolation: the idea is to identify any jumps a function may have and then reconstruct the smoother segments between the jumps. Compared with a particularly challenging SPH application, our results demonstrated a superior ability to preserve jumps and to interpolate the smoother segments of the function more accurately. They also demonstrated increased computational efficiency with limited loss in accuracy, as the number of multiplications and the execution time were reduced. Similar benefits were observed for functions with spikes analyzed by the same method, and smaller but similar effects were demonstrated for real-life data sets of a less smooth nature. SPH is widely used in modeling and simulation of the flow of matter, and it offers advantages over grid-based methods in both computational efficiency and accuracy, in particular when dealing with less smooth flow. The results of our research improve the model in cases of less smooth flow, in particular flow with jumps and spikes. Until now such improvements have been sought through modifications to the models' physical equations and/or kernel functions, and they have only partially addressed the issue. By introducing wavelet theory and IFT to a field that, to our knowledge, does not currently utilize these methods, this research lays the groundwork for future research ideas to benefit SPH, among them further development of criteria for wavelet selection, use of smoothing splines for SPH interpolation, and incorporation of Bayesian field theory. Improving the method's accuracy, stability, and efficiency under more challenging conditions, such as flow with jumps and spikes, will benefit applications across a wide area of science. In medicine alone, such improvements will further increase real-time diagnostic, treatment, and training opportunities, because jumps and spikes often characterize significant physiological and anatomical conditions such as pulsatile blood flow, peristaltic intestinal contractions, and the appearance of organ edges in imaging.
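
    The sketch below illustrates the general "detect the jump, then interpolate the smooth pieces" idea in miniature; it is not the thesis implementation. A Haar-like detail (scaled neighbouring differences) stands in for the wavelet analysis, the jump threshold and test signal are assumptions, and each smooth segment is then fitted with its own B-spline so the spline never smears the discontinuity.

```python
# Minimal sketch: jump detection via Haar-like details, then per-segment B-splines.
import numpy as np
from scipy.interpolate import splrep, splev

x = np.linspace(0.0, 2.0, 81)
y = np.sin(3 * x) + (x > 1.0) * 2.0              # smooth signal with one jump

detail = np.abs(np.diff(y)) / np.sqrt(2.0)       # stand-in for level-1 wavelet details
jumps = np.where(detail > 5.0 * np.median(detail))[0] + 1   # segment boundaries

segments = np.split(np.arange(len(x)), jumps)
reconstruction = []
for idx in segments:
    tck = splrep(x[idx], y[idx], k=3)            # cubic B-spline per smooth segment
    xf = np.linspace(x[idx[0]], x[idx[-1]], 200)
    reconstruction.append((xf, splev(xf, tck)))

print("detected jump locations:", x[jumps])
```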

    IVGPR: A New Program for Advanced End-To-End GPR Processing

    Ground penetrating radar (GPR) processing workflows commonly rely on techniques developed for seismic reflection imaging. Although this practice has produced an abundance of reliable results, it is limited to basic applications. As the popularity of GPR continues to surge, a greater number of complex studies demand routines that take into account the unique properties of GPR signals, such as surveys that examine the material properties of subsurface scatterers. The nature of these complicated tasks has created a demand for GPR-specific processing packages flexible enough to tackle new applications. Unlike seismic processing programs, however, their GPR counterparts often offer only a limited set of functionalities. This work produced a new GPR-specific processing package, dubbed IVGPR, that offers over 60 fully customizable procedures. The program was built in modern Fortran, using serial and parallel optimization practices that allow it to achieve high levels of performance. Among its many functions, IVGPR provides the rare opportunity to apply a three-dimensional single-component vector migration routine, which could be of great value for advanced workflows designed to develop and test new true-amplitude and inversion algorithms. Numerous examples given throughout this work demonstrate the effectiveness of key routines in IVGPR, and three case studies show end-to-end applications of the program to field records that produced satisfactory results well suited to interpretation.
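
    IVGPR itself is a Fortran package; the Python sketch below only illustrates the kind of GPR-specific steps such packages typically chain together (dewow, background removal, time gain). Window length, gain exponent, and the random test section are assumptions, not IVGPR defaults or routines.

```python
# Minimal sketch of a tiny GPR processing chain on a samples-by-traces array.
import numpy as np

def dewow(section, window=15):
    """Subtract a running mean along the time axis of each trace."""
    kernel = np.ones(window) / window
    trend = np.apply_along_axis(lambda tr: np.convolve(tr, kernel, mode="same"),
                                axis=0, arr=section)
    return section - trend

def remove_background(section):
    """Subtract the average trace to suppress horizontal ringing."""
    return section - section.mean(axis=1, keepdims=True)

def time_gain(section, dt, power=1.5):
    """Multiply each sample by t**power to compensate geometric spreading."""
    t = (np.arange(section.shape[0]) * dt)[:, None]
    return section * t ** power

# Usage on a synthetic section: 512 time samples (rows) x 200 traces (columns).
raw = np.random.default_rng(0).standard_normal((512, 200))
processed = time_gain(remove_background(dewow(raw)), dt=0.4e-9)
print(processed.shape)
```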

    Compressed Sensing And Joint Acquisition Techniques In Mri

    The relatively long scan times in Magnetic Resonance Imaging (MRI) limit some clinical applications and the ability to collect more information in a reasonable period of time. In practice, 3D imaging requires longer acquisitions, which can lead to a reduction in image quality due to motion artifacts, patient discomfort, increased costs to the healthcare system, and loss of profit to the imaging center. The emphasis in reducing scan time has largely been on limited k-space data acquisition and special reconstruction techniques. Among these approaches are data extrapolation methods such as constrained reconstruction techniques, data interpolation methods such as parallel imaging, and, more recently, Compressed Sensing (CS). In order to recover the image components from far fewer measurements, CS exploits the compressible nature of MR images by imposing randomness in the k-space undersampling scheme. In this work, we explore some intuitive examples of CS reconstruction leading to a primitive algorithm for CS MR imaging. We then demonstrate the application of this algorithm to MR angiography (MRA) with the goal of reducing the scan time. Our reconstructions were comparable to the fully sampled MRA images while providing up to three times faster image acquisition via CS. In recovering the vessels in MRA, CS showed slight shrinkage of both the width and the amplitude of the vessels in the 20% undersampling scheme; the spatial location of the vessels, however, remained intact during CS reconstruction. Another direction we pursue is the introduction of joint acquisition for accelerated multi-data-point MR imaging, such as multi-echo or dynamic imaging. Keyhole imaging and view sharing are two techniques for accelerating dynamic acquisitions, in which some k-space data are shared between neighboring acquisitions. In this work, we combine the concept of CS random sampling with keyhole imaging and view-sharing techniques in order to improve the performance of each method by itself and reduce the scan time. Finally, we demonstrate the application of this new method to multi-echo spin echo (MSE) T2 mapping and compare the results with conventional methods. Our proposed technique can potentially provide up to 2.7 times faster image acquisition. The percentage-difference error maps created from T2 maps generated from jointly acquired and fully sampled images have a histogram with a 5-95 percentile of less than 5% error. This technique can potentially be applied to other dynamic imaging acquisitions, such as multi-flip-angle T1 mapping or time-resolved contrast-enhanced MRA.
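
    The toy sketch below illustrates the CS idea the abstract describes, not the thesis algorithm: k-space is undersampled with a random mask, and reconstruction alternates a sparsity-promoting soft-threshold (assumed here directly in the image domain for simplicity) with data consistency on the measured k-space samples. The phantom, sampling fraction, threshold, and iteration count are illustrative assumptions.

```python
# Minimal sketch: randomly undersampled k-space + iterative soft-thresholding.
import numpy as np

rng = np.random.default_rng(1)
n = 128
image = np.zeros((n, n))
image[40:60, 50:90] = 1.0                      # sparse toy "phantom"

kspace = np.fft.fft2(image)
mask = rng.random((n, n)) < 0.25               # keep ~25% of k-space at random
measured = kspace * mask

recon = np.zeros((n, n), dtype=complex)
threshold = 0.05
for _ in range(50):
    k = np.fft.fft2(recon)
    k = np.where(mask, measured, k)            # enforce data consistency
    recon = np.fft.ifft2(k)
    mag = np.abs(recon)                        # soft-threshold the magnitude
    recon = np.maximum(mag - threshold, 0.0) * np.exp(1j * np.angle(recon))

err = np.linalg.norm(np.abs(recon) - image) / np.linalg.norm(image)
print("relative reconstruction error:", err)
```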

    Validating Stereoscopic Volume Rendering

    The evaluation of stereoscopic displays for surface-based renderings is well established in terms of accurate depth perception and tasks that require an understanding of the spatial layout of the scene. In comparison, direct volume rendering (DVR), which typically produces images with a high number of low-opacity, overlapping features, is only beginning to be critically studied on stereoscopic displays. The properties of the resulting images and the choice of parameters for DVR algorithms make assessing the effectiveness of stereoscopic displays for DVR particularly challenging, and as a result the existing literature is sparse, with inconclusive results. In this thesis, stereoscopic volume rendering is analysed for tasks that require depth perception, including stereo-acuity tasks, spatial search tasks, and observer preference ratings. The evaluations focus on aspects of the DVR rendering pipeline and assess how the parameters of volume resolution, reconstruction filter, and transfer function may alter task performance and the perceived quality of the produced images. The results suggest that the transfer function and the choice of reconstruction filter can affect performance on tasks with stereoscopic displays when all other parameters are kept consistent; further, these were found to affect the sensitivity and bias response of the participants. The studies also show that properties of the reconstruction filters, such as post-aliasing and smoothing, do not correlate well with either task performance or quality ratings. The contributions include guidelines and recommendations on the choice of parameters for increased task performance and quality scores, as well as image-based methods of analysing stereoscopic DVR images.
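
    For readers unfamiliar with the pipeline stage under study, the sketch below shows the generic DVR step the abstract refers to: scalar samples along a ray are mapped through a 1-D transfer function to colour and opacity and combined by front-to-back alpha compositing. The transfer function and the ray samples are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch: transfer function + front-to-back compositing along one ray.
import numpy as np

def transfer_function(scalar):
    """Map scalars in [0, 1] to (rgb, alpha); low values stay nearly transparent."""
    alpha = np.clip(scalar, 0.0, 1.0) ** 2 * 0.3
    rgb = np.stack([scalar, 0.2 * np.ones_like(scalar), 1.0 - scalar], axis=-1)
    return rgb, alpha

def composite_ray(samples):
    """Front-to-back compositing of the samples encountered along one ray."""
    rgb, alpha = transfer_function(samples)
    colour, transmittance = np.zeros(3), 1.0
    for c, a in zip(rgb, alpha):
        colour += transmittance * a * c
        transmittance *= (1.0 - a)
        if transmittance < 1e-3:               # early ray termination
            break
    return colour, 1.0 - transmittance

samples = np.linspace(0.1, 0.9, 64)            # scalars sampled along a ray
print(composite_ray(samples))
```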

    Dynamic Code Selection Method for Content Transfer in Deep Space Network

    Space communications feature large round-trip time delays (for example, between 6.5 and 44 minutes for Mars to Earth and return, depending on the actual distance between the two planets) and highly variable data error rates: a bit error rate (BER) of 10⁻⁵ is very common, and even higher BERs on the order of 10⁻¹ are observed in the deep-space environment. We develop a new content transfer protocol based on RaptorQ codes and turbo codes, together with a real-time channel prediction model, to maximize file transfer from space vehicles to Earth stations. While turbo codes are used to correct channel errors, RaptorQ codes are applied to eliminate the need for negative acknowledgment of the loss of any specific packet. To reduce the effect of channel variation, we develop a practical signal-to-noise ratio (SNR) prediction model that is used to periodically adjust the turbo encoder in the distant source spacecraft. This new protocol, termed the dynamic code selection method (DCSM), is compared with two other methods: a turbo-based genie method (an upper bound on DCSM performance), which assumes that the channel condition is perfectly known in advance, and a static method in which a fixed turbo encoder is used throughout a communication pass. Simulation results demonstrate that the genie method can increase telemetry channel throughput, expressed as the total number of successfully delivered files during a communication pass, by about 20.3% compared with the static approach currently in use, and that DCSM achieves more than 99% of the genie performance.
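
    The sketch below illustrates only the adaptation step such a scheme implies, not the authors' DCSM implementation: a predicted SNR is mapped to the highest-rate turbo code whose SNR threshold it still clears, and the encoder setting is refreshed once per prediction interval. The rate table and thresholds are illustrative assumptions, not mission values.

```python
# Minimal sketch: SNR-driven selection of a turbo code rate.
from bisect import bisect_right

# (minimum SNR in dB, turbo code rate), sorted by threshold -- assumed values.
RATE_TABLE = [(-1.0, "1/6"), (0.5, "1/4"), (2.0, "1/3"), (4.0, "1/2")]

def select_code_rate(predicted_snr_db):
    """Pick the highest-rate code whose SNR threshold is still satisfied."""
    thresholds = [snr for snr, _ in RATE_TABLE]
    idx = bisect_right(thresholds, predicted_snr_db) - 1
    return RATE_TABLE[max(idx, 0)][1]          # fall back to the most robust rate

# Usage: re-select the encoder once per prediction interval during a pass.
for snr in [-0.4, 1.2, 3.1, 5.0]:
    print(f"predicted SNR {snr:+.1f} dB -> turbo rate {select_code_rate(snr)}")
```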

    Combined Industry, Space and Earth Science Data Compression Workshop

    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996, in Snowbird, Utah, in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements and the constraints imposed by the data collection, transmission, distribution, and archival systems.