93 research outputs found

    Scaled Spectroscopy of 1Se and 1Po Highly Excited States of Helium

    In this paper, we examine the properties of the 1Se and 1Po states of helium, combining perimetric coordinates and complex rotation methods. We compute Fourier transforms of quantities of physical interest, among them the average of the operator cos(theta_12), which measures the correlation between the two electrons. Graphs obtained for both 1Se and 1Po states show peaks at the actions of classical periodic orbits, either the "frozen planet" orbit or asymmetric stretch orbits. This observation legitimizes the semiclassical quantization of helium with those orbits only, not just for S states but also for P states, which is a new result. To emphasize the similarity between the S and P states, we show wavefunctions of 1Po states presenting the same structure as 1Se states, namely the "frozen planet" and asymmetric stretch configurations. Comment: RevTeX, 15 pages with 6 figures; 2 large figures are available on request at email address [email protected]. To appear in J. Phys. B (April 1998).

    Lecture 02: Tile Low-rank Methods and Applications (w/review)

    As simulation and analytics enter the exascale era, numerical algorithms, particularly implicit solvers that couple vast numbers of degrees of freedom, must span a widening gap between ambitious applications and the austere architectures available to support them. We present fifteen universals for researchers in scalable solvers: imperatives from computer architecture that scalable solvers must respect, strategies towards achieving them that are currently well established, and additional strategies currently being developed for an effective and efficient exascale software ecosystem. We consider recent generalizations of what it means to “solve” a computational problem, which suggest that we have often been “oversolving” problems at the smaller scales of the past because we could afford to do so. We present innovations that allow us to approach log-linear complexity in storage and operation count in many important algorithmic kernels, creating an opportunity for full applications with optimal scalability.
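
One standard kernel behind tile low-rank methods is compressing an off-diagonal tile into a product of thin factors. The sketch below uses adaptive cross approximation with full pivoting, in pure Python; it is a generic illustration with assumed tolerances and a toy kernel, not code from the lecture:

```python
def aca(A, tol=1e-8, max_rank=None):
    """Adaptive cross approximation with full pivoting: peel rank-1
    terms u_k v_k^T off the residual until its largest entry falls
    below tol. Off-diagonal tiles coming from smooth kernels compress
    to low rank, which is what tile low-rank formats exploit."""
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]                    # residual matrix
    us, vs = [], []
    for _ in range(max_rank or min(m, n)):
        # full pivoting: pick the largest remaining residual entry
        i, j = max(((r, c) for r in range(m) for c in range(n)),
                   key=lambda rc: abs(R[rc[0]][rc[1]]))
        piv = R[i][j]
        if abs(piv) <= tol:
            break
        u = [R[r][j] for r in range(m)]          # pivot column
        v = [R[i][c] / piv for c in range(n)]    # scaled pivot row
        us.append(u)
        vs.append(v)
        for r in range(m):                       # R <- R - u v^T
            for c in range(n):
                R[r][c] -= u[r] * v[c]
    return us, vs                                # rank = len(us)
```

For an m-by-n tile of rank k, storing the factors costs k*(m+n) entries instead of m*n, which is where the storage and operation-count savings come from.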

    An Efficient Maximization Algorithm With Implications in Min-Max Predictive Control

    In this technical note, an algorithm for binary quadratic programs defined by matrices with band structure is proposed. It was shown by T. Alamo, D. M. de la Peña, D. Limon, and E. F. Camacho, "Constrained min-max predictive control: modifications of the objective function leading to polynomial complexity," IEEE Trans. Autom. Control, vol. 50, pp. 710-714, May 2005, that this class of problems arises in robust model predictive control when min-max techniques are applied. Although binary quadratic problems belong to a class of NP-complete problems, the computational burden of the proposed maximization algorithm for band matrices is polynomial in the dimension of the optimization variable and exponential in the band size. Computational results and comparisons on several hundred test problems demonstrate the efficiency of the algorithm.
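
The complexity claim, polynomial in the dimension and exponential only in the band size, can be illustrated by a dynamic program whose state is the window of the last b binary variables. This is a generic sketch for maximizing x^T Q x over x in {0,1}^n with a symmetric banded Q, not the authors' actual algorithm:

```python
def max_banded_bqp(Q, b):
    """Maximize x^T Q x over x in {0,1}^n for a symmetric Q whose
    entries vanish when |i - j| > b (band size b >= 1). Dynamic
    programming over the last b decided bits costs O(n * 2^b * b):
    polynomial in n, exponential only in the band size."""
    n = len(Q)
    best = {(0,) * b: 0.0}                   # zero-padding before x_0
    for i in range(n):
        nxt = {}
        for state, val in best.items():      # state = (x_{i-b},...,x_{i-1})
            for xi in (0, 1):
                gain = Q[i][i] * xi          # x_i^2 = x_i for binary x_i
                for k, xj in enumerate(state):
                    j = i - b + k
                    if j >= 0:               # 2*Q_ij covers both triangles
                        gain += 2 * Q[i][j] * xi * xj
                ns = state[1:] + (xi,)
                if val + gain > nxt.get(ns, float("-inf")):
                    nxt[ns] = val + gain
        best = nxt
    return max(best.values())
```

Because only the last b bits can interact with x_i, the window state captures everything the remaining decisions need, which is exactly why the band structure tames the otherwise NP-complete problem.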

    A Spatial Relations Study of Virus Infected Cells and the Human Immune Response through the V-Proportionality Measurement

    Biotechnological tools have never been stronger than today and the data they provide is absolutely fascinating. As we get a clearer picture of the intricate workings of living systems, effective mathematical and statistical tools become a necessity in order to reach a comprehensive understanding of said systems. The purpose of this thesis is to statistically explore cutting-edge biomedical data taken from virus-infected human tissue samples in the hope of finding interesting correlations amongst the different components in the samples. We will also show that spatial statistical methods can be used to draw valuable and significant conclusions about biological systems. The method of choice for the statistical analysis in this thesis is the V-proportionality measurement. In theory it can distinguish positive, negative, and lack of spatial correlation in datasets through clever use of the Voronoi diagram. The code used for the implementation of the V-proportionality measurement is both explained and provided within the confines of this paper.
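
The thesis's exact V-proportionality formula is not reproduced in this abstract, so the sketch below only illustrates its central building block: Voronoi-cell membership (a point belongs to the cell of its nearest seed) and per-label proportions. The function name and interface are hypothetical:

```python
import math

def voronoi_proportion(seeds, points):
    """Assign each point to the Voronoi cell of its nearest seed,
    which is the defining property of the Voronoi diagram, and
    return the proportion of points landing in cells of each seed
    label (e.g. 'infected' vs 'immune' cell types)."""
    counts = {}
    for p in points:
        _, label = min(seeds, key=lambda s: math.dist(p, s[0]))
        counts[label] = counts.get(label, 0) + 1
    total = len(points)
    return {lab: c / total for lab, c in counts.items()}
```

Comparing such proportions against what a spatially independent point pattern would give is the general route by which Voronoi-based statistics detect positive or negative spatial correlation.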

    Image Restoration for SPIHT (Set Partitioning in Hierarchical Trees) Compression Using the Iterative Lanczos-Hybrid Regularization Method

    Image restoration is the process of reconstructing or recovering the original image from a degraded one so that the result resembles the original. Image compression is one image-processing step that causes such degradation or loss of quality; quality loss occurs in lossy compression, of which the Set Partitioning In Hierarchical Trees (SPIHT) method is one example. To recover image quality so that the result resembles the original, image restoration with the Iterative Lanczos-Hybrid Regularization method is applied. This final project uses grayscale images at several resolutions as test data for SPIHT compression and restoration. On test data with a PSNR of 25 dB, restoration increased the PSNR by an average of 0.91 dB, with a computation time of 187.058 seconds, slower than the compression process. On test data with a PSNR of 35 dB, restoration increased the PSNR by an average of 0.57 dB, with a computation time of 127.418 seconds, faster than the compression process. These results show that restoration with the Iterative Lanczos-Hybrid Regularization method can improve image quality.
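
The PSNR figures quoted above use the standard peak signal-to-noise ratio; a minimal sketch for grayscale images given as flat pixel lists, assuming an 8-bit peak of 255 (not the project's actual code):

```python
import math

def psnr(orig, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    flat lists of grayscale pixel values; higher means closer to
    the original, so restoration aims to raise this number."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, restored)) / len(orig)
    if mse == 0:
        return float("inf")           # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

A gain of 0.91 dB, as reported above, corresponds to roughly a 19% reduction in mean squared error, since PSNR is logarithmic in the MSE.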

    Precision analysis for hardware acceleration of numerical algorithms

    The precision used in an algorithm affects the error and performance of individual computations, the memory usage, and the potential parallelism for a fixed hardware budget. However, when migrating an algorithm onto hardware, the potential improvements obtainable by tuning the precision throughout an algorithm to meet a range or error specification are often overlooked; the major reason is that it is hard to choose a number system which can guarantee that any such specification is met. Instead, the problem is mitigated by opting for IEEE standard double-precision arithmetic so as to be ‘no worse’ than a software implementation. However, flexibility in the number representation is one of the key factors that can be exploited on reconfigurable hardware such as FPGAs, and ignoring this potential significantly limits the achievable performance. To optimise the performance of hardware reliably, we require a method that can tractably calculate tight bounds on the error or range of any variable within an algorithm. Currently only a handful of methods to calculate such bounds exist, and these sacrifice either tightness or tractability, whilst simulation-based methods cannot guarantee their error estimates. This thesis presents a new method to calculate these bounds, taking into account both input ranges and finite-precision effects, which we show to be, in general, tighter than existing methods; this in turn can be used to tune the hardware to the algorithm's specifications. We demonstrate the use of this software to optimise hardware for various algorithms that accelerate the solution of a system of linear equations, which forms the basis of many problems in engineering and science, and show that significant performance gains can be obtained by using this new approach in conjunction with more traditional hardware optimisations.
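
A classical baseline for bounding the range of every variable is interval arithmetic. The minimal sketch below (not the thesis's method) shows the enclosure guarantee, and its tendency to overestimate when a variable appears more than once is precisely the looseness that tighter analyses try to avoid:

```python
class Interval:
    """Minimal interval arithmetic: each operation returns an interval
    guaranteed to enclose every exact result when the operands stay in
    their intervals (finite-precision rounding ignored for brevity)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# For x in [-1, 2], interval multiplication bounds x*x by [-2, 4],
# while the true range is [0, 4]: the dependency problem, because
# the analysis forgets that both operands are the same variable.
x = Interval(-1, 2)
r = x * x
```

Range bounds like these determine how many integer bits a fixed-point variable needs, which is why tighter bounds translate directly into smaller, faster hardware.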

    Towards approximate fair bandwidth sharing via dynamic priority queuing

    We tackle the problem of a network switch enforcing fair bandwidth sharing of the same link among many TCP-like senders. Most existing mechanisms for this problem are based on complex scheduling algorithms, whose implementation becomes very expensive at today's line-rate requirements, i.e. 10-100 Gbit/s per port. We propose a new scheme called FDPA in which we do not modify the scheduler; instead, we use an array of rate estimators to dynamically assign traffic flows to an existing strict-priority scheduler serving only a few queues. FDPA is inspired by recent advances in programmable stateful data planes, and our design uses primitives common in data-plane abstractions such as P4 and OpenFlow. We conducted experiments on a physical 10 Gbit/s testbed, and we present preliminary results showing that FDPA produces fairness comparable to approaches based on scheduling.
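
FDPA's core idea, estimating each flow's rate in the data plane and steering heavy flows to lower-priority queues of an unmodified strict-priority scheduler, might be sketched as follows. The EWMA estimator and the threshold values are illustrative assumptions, not parameters from the paper:

```python
def ewma_rate(prev_rate, bytes_seen, dt, alpha=0.2):
    """Exponentially weighted moving-average rate estimator (bit/s),
    a data-plane-friendly way to track per-flow throughput with a
    single register per flow."""
    inst = 8.0 * bytes_seen / dt          # instantaneous rate
    return (1.0 - alpha) * prev_rate + alpha * inst

def assign_priority(rate_bps, thresholds):
    """Map a flow's estimated rate to one of a few strict-priority
    queues: light flows get high priority, heavy flows low priority,
    approximating fair sharing without changing the scheduler."""
    for queue, limit in enumerate(thresholds):
        if rate_bps <= limit:
            return queue                  # queue 0 = highest priority
    return len(thresholds)                # heaviest flows, lowest priority
```

Because light flows are never stuck behind heavy ones, each heavy flow's throughput is capped by the service left over by higher-priority queues, which pushes the allocation towards fairness.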

    Multiprocessor Out-of-Core FFTs with Distributed Memory and Parallel Disks

    This paper extends an earlier out-of-core Fast Fourier Transform (FFT) method for a uniprocessor under the Parallel Disk Model (PDM) to multiple processors. Four out-of-core multiprocessor methods are examined. Operationally, these methods differ in the size of the mini-butterfly computed in memory and in how the data are organized on the disks and in the distributed memory of the multiprocessor; they also perform differing amounts of I/O and communication. Two of them have the remarkable property that, even though they compute the FFT on a multiprocessor, all interprocessor communication occurs outside the mini-butterfly computations. Performance results on a small workstation cluster indicate that, except for unusual combinations of problem size and memory size, the methods that do not perform interprocessor communication during the mini-butterfly computations require approximately 86% of the time of those that do. Moreover, the faster methods are much easier to implement.
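
The decomposition into in-memory mini-butterflies rests on the classic four-step factorization of an n = n1*n2 transform, in which the transpose between the two passes of small DFTs is what becomes disk I/O or interprocessor communication. A pure-Python sketch, using naive DFTs for clarity rather than the paper's out-of-core implementation:

```python
import cmath

def dft(x):
    """Naive O(n^2) DFT, standing in for an in-memory FFT."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n)
                for j in range(n)) for k in range(n)]

def four_step_fft(x, n1, n2):
    """Four-step factorization of an n = n1*n2 transform: n2 column
    DFTs of size n1, entrywise twiddle scaling, then n1 row DFTs of
    size n2. Each small DFT fits in memory; the implicit transpose
    between the two passes is where I/O or communication happens.
    Input index j = j1*n2 + j2; output index k = k1 + k2*n1."""
    n = n1 * n2
    # columns of x viewed as an n1 x n2 row-major matrix
    cols = [dft([x[r * n2 + c] for r in range(n1)]) for c in range(n2)]
    # twiddle factors w^(j2*k1), w = exp(-2*pi*i/n)
    rows = [dft([cols[c][r] * cmath.exp(-2j * cmath.pi * r * c / n)
                 for c in range(n2)]) for r in range(n1)]
    return [rows[r][c] for c in range(n2) for r in range(n1)]
```

Checking the result against a direct DFT on small inputs confirms the index mapping; in an out-of-core setting, n1 and n2 are chosen so that each size-n1 or size-n2 transform fits in a processor's memory.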