43 research outputs found

    Semiclassical properties and chaos degree for the quantum baker's map

    Get PDF
    We study the chaotic behaviour and the quantum-classical correspondence for the baker's map. The correspondence between quantum and classical expectation values is investigated, and it is shown numerically that it is lost at the logarithmic timescale. The quantum chaos degree is computed, and it is demonstrated that it describes the chaotic features of the model. The correspondence between classical and quantum chaos degrees is also considered. (Comment: 30 pages, 4 figures, accepted for publication in J. Math. Phys.)
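    The quantum baker's map is commonly realized via the Balazs–Voros quantization, in which the propagator is built from discrete Fourier transform matrices. A minimal sketch under that assumption (the paper may use a different quantization or boundary phases):

    ```python
    import numpy as np
    from scipy.linalg import block_diag

    def dft(n):
        # Unitary discrete Fourier transform matrix of size n
        j = np.arange(n)
        return np.exp(-2j * np.pi * np.outer(j, j) / n) / np.sqrt(n)

    def quantum_baker(N):
        # Balazs-Voros quantization of the baker's map on an N-dimensional
        # Hilbert space (N even): B = F_N^dagger . diag(F_{N/2}, F_{N/2}).
        # The block structure mirrors the classical stretch-and-stack action.
        assert N % 2 == 0
        return dft(N).conj().T @ block_diag(dft(N // 2), dft(N // 2))

    B = quantum_baker(16)
    # B is unitary, so repeated application B @ B @ ... propagates a state
    # without loss of norm; expectation values <psi| A |psi> under this
    # evolution are what get compared against classical averages.
    ```

    Iterating `B` on an initial coherent state and tracking expectation values is the standard numerical setup for probing the logarithmic breakdown timescale mentioned above.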

    Prediction based task scheduling in distributed computing

    Full text link

    Distributed Maple: parallel computer algebra in networked environments

    Get PDF
    We describe the design and use of Distributed Maple, an environment for executing parallel computer algebra programs on multiprocessors and heterogeneous clusters. The system embeds kernels of the computer algebra system Maple as computational engines into a networked coordination layer implemented in the programming language Java. On the basis of a comparatively high-level programming model, one may write parallel Maple programs that show good speedups in medium-scaled environments. We report on the use of the system for the parallelization of various functions of the algebraic geometry library CASA and demonstrate how design decisions affect the dynamic behaviour and performance of a parallel application. Numerous experimental results allow comparison of Distributed Maple with other systems for parallel computer algebra.
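    The "start tasks, wait on results" model the abstract describes maps naturally onto futures. A hedged sketch in Python rather than Maple/Java, with a naive integer factorisation standing in for a call into a remote Maple engine (the function name and inputs are illustrative, not from the paper):

    ```python
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def kernel_task(n):
        # Hypothetical stand-in for work shipped to a Maple kernel:
        # trial-division factorisation of an integer.
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    inputs = [2 ** 20, 3 ** 10, 999983]
    # The coordination layer starts independent tasks on engine processes
    # and collects results as they complete; futures mirror that
    # start/wait programming model (here with local threads, not a
    # networked cluster).
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {pool.submit(kernel_task, n): n for n in inputs}
        results = {futures[f]: f.result() for f in as_completed(futures)}
    ```

    In the real system the pool would be a set of Maple kernels spread over a heterogeneous network, but the programming model seen by the user is essentially this.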

    On Search Complexity of Discrete Logarithm

    Get PDF

    HPC-GAP: engineering a 21st-century high-performance computer algebra system

    Get PDF
    Symbolic computation has underpinned a number of key advances in Mathematics and Computer Science. Applications are typically large and potentially highly parallel, making them good candidates for parallel execution at a variety of scales from multi-core to high-performance computing systems. However, much existing work on parallel computing is based around numeric rather than symbolic computations. In particular, symbolic computing presents particular problems in terms of varying granularity and irregular task sizes that do not match conventional approaches to parallelisation. It also presents problems in terms of the structure of the algorithms and data. This paper describes a new implementation of the free open-source GAP computational algebra system that places parallelism at the heart of the design, dealing with the key scalability and cross-platform portability problems. We provide three system layers that deal with the three most important classes of hardware: individual shared memory multi-core nodes, mid-scale distributed clusters of (multi-core) nodes, and full-blown HPC systems, comprising large-scale tightly-connected networks of multi-core nodes. This requires us to develop new cross-layer programming abstractions in the form of new domain-specific skeletons that allow us to seamlessly target different hardware levels. Our results show that, using our approach, we can achieve good scalability and speedups for two realistic exemplars, on high-performance systems comprising up to 32,000 cores, as well as on ubiquitous multi-core systems and distributed clusters. The work reported here paves the way towards full scale exploitation of symbolic computation by high-performance computing systems, and we demonstrate the potential with two major case studies.
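    The key problem named in the abstract is irregular task granularity. A skeleton that hands tasks to a pool one at a time (rather than statically block-partitioning the input) absorbs that irregularity; a minimal Python sketch of such a parallel-map skeleton (the name `par_list` and the workload are illustrative, not HPC-GAP's API):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def par_list(f, xs, workers=4):
        # A minimal ParList-style skeleton: apply f to every element and
        # preserve input order. The pool's dynamic scheduling lets cheap
        # and expensive tasks share workers, instead of a static split
        # that would leave some workers idle on irregular inputs.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(f, xs))

    # Irregular granularity: cost varies by orders of magnitude across
    # inputs, as in symbolic computation where object sizes differ wildly.
    def work(n):
        return sum(i * i for i in range(n))

    out = par_list(work, [10, 100000, 10, 50000])
    ```

    HPC-GAP's actual skeletons additionally span shared-memory, cluster, and HPC layers; this sketch only illustrates the dynamic-scheduling idea at the single-node level.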

    On Hardness of Testing Equivalence to Sparse Polynomials Under Shifts

    Get PDF

    Robust CFAR Detector Based on Truncated Statistics for Polarimetric Synthetic Aperture Radar

    Get PDF
    Constant false alarm rate (CFAR) algorithms using a local training window are widely used for ship detection with synthetic aperture radar (SAR) imagery. However, when the density of targets is high, such as in busy shipping lanes and crowded harbors, the background statistics may be contaminated by the presence of nearby targets in the training window. Recently, a robust CFAR detector based on truncated statistics (TS) was proposed. However, the truncation of data in the form of polarimetric covariance matrices is much more complicated than the truncation of intensity (single-polarization) data. In this article, a polarimetric whitening filter TS CFAR (PWF-TS-CFAR) is proposed to estimate the background parameters accurately in contaminated sea clutter for PolSAR imagery. The detector uses a polarimetric whitening filter (PWF) to turn the multidimensional problem into a 1-D case. It uses truncation to exclude possible statistically interfering outliers and uses TS to model the remaining background samples. The algorithm does not require prior knowledge of the interfering targets, and it is performed iteratively and adaptively to derive better estimates of the polarimetric covariance matrix (although this is computationally expensive). The PWF-TS-CFAR detector provides accurate background clutter modeling, a stable false alarm property, and improved detection performance in high-target-density situations. RADARSAT-2 data are used to verify our derivations, and the results are in line with the theory.
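    The pipeline the abstract describes (whiten to 1-D, truncate the upper tail, estimate the background from what remains, then threshold) can be sketched as follows. This is a heavily simplified stand-in: the paper's iterative truncated-statistics MLE is replaced by a plain truncated moment, the clutter covariance is assumed known rather than estimated, and the truncation depth and Pfa are arbitrary design choices:

    ```python
    import numpy as np
    from scipy.stats import gamma

    rng = np.random.default_rng(0)

    def pwf_statistic(pixels, cov):
        # Polarimetric whitening filter: reduce each d-dim scattering
        # vector k to the scalar y = k^H C^{-1} k (a 1-D test statistic).
        cinv = np.linalg.inv(cov)
        return np.einsum('nd,de,ne->n', pixels.conj(), cinv, pixels).real

    # Simulated 3-channel complex Gaussian sea clutter with a handful of
    # strong "ship" returns contaminating the training window.
    d, n = 3, 5000
    cov = np.eye(d)
    clutter = (rng.standard_normal((n, d))
               + 1j * rng.standard_normal((n, d))) / np.sqrt(2)
    clutter[:50] *= 10.0  # interfering targets

    y = pwf_statistic(clutter, cov)

    # Truncated statistics: censor the upper tail so interfering targets
    # cannot inflate the background parameter estimate.
    t = np.quantile(y, 0.9)        # truncation depth (illustrative choice)
    mean_est = y[y <= t].mean()    # crude truncated-moment estimate

    # Under identity covariance, y is Gamma(shape=d, scale=1); set the
    # CFAR threshold from that assumed model for a nominal Pfa of 1e-3.
    threshold = gamma.ppf(1 - 1e-3, a=d, scale=mean_est / d)
    detections = y > threshold
    ```

    The crude truncated moment underestimates the background mean (the paper's iterative MLE corrects exactly this bias), but even so the contaminating targets stand far enough above the threshold to be detected.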

    A parallel architecture for disk-based computing over the Baby Monster and other large finite simple groups

    Full text link