
    Optimal Partitioned Cyclic Difference Packings for Frequency Hopping and Code Synchronization

    Optimal partitioned cyclic difference packings (PCDPs) are shown to give rise to optimal frequency-hopping sequences and optimal comma-free codes. New constructions for PCDPs, based on almost difference sets and cyclic difference matrices, are given. These produce new infinite families of optimal PCDPs (and hence optimal frequency-hopping sequences and optimal comma-free codes). The existence problem for optimal PCDPs in $\mathbb{Z}_{3m}$, with $m$ base blocks of size three, is also solved for all $m \not\equiv 8, 16 \pmod{24}$.
    Comment: to appear in IEEE Transactions on Information Theory
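    As a concrete illustration of the defining property, the short sketch below (with a toy partition that is not one of the paper's constructions) counts how often each nonzero residue occurs as a within-block difference; a PCDP of index $\lambda$ requires this maximum to be at most $\lambda$, and optimal PCDPs make $\lambda$ as small as possible.

```python
from collections import Counter
from itertools import permutations

def max_difference_multiplicity(blocks, n):
    """Count how often each nonzero residue d (mod n) occurs as a difference
    of two distinct elements of the *same* block, and return the largest such
    count (the index lambda of the packing)."""
    diffs = Counter()
    for block in blocks:
        for a, b in permutations(block, 2):
            diffs[(a - b) % n] += 1
    return max(diffs.values()) if diffs else 0

# Toy partition of Z_9 into three base blocks of size three
# (illustration only, not a construction from the paper).
blocks = [{0, 1, 3}, {2, 5, 6}, {4, 7, 8}]
print(max_difference_multiplicity(blocks, 9))   # -> 3 for this toy partition
```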

    High-rate self-synchronizing codes

    Self-synchronization in the presence of additive noise can be achieved by allocating a certain number of bits of each codeword as markers for synchronization. Difference systems of sets are combinatorial designs which specify the positions of synchronization markers in codewords in such a way that the resulting error-tolerant self-synchronizing codes may be realized as cosets of linear codes. Ideally, difference systems of sets should sacrifice as few bits as possible for a given code length, alphabet size, and error-tolerance capability. However, it seems difficult to attain optimality with respect to known bounds when the noise level is relatively low. In fact, the majority of known optimal difference systems of sets are for exceptionally noisy channels, requiring a substantial number of bits for synchronization. To address this problem, we present constructions for difference systems of sets that allow for higher information rates while sacrificing optimality to only a small extent. Our constructions utilize optimal difference systems of sets as ingredients and, when applied carefully, generate asymptotically optimal ones with higher information rates. We also give direct constructions for optimal difference systems of sets with high information rates and error tolerance that generate binary and ternary self-synchronizing codes.
    Comment: 9 pages, no figures, 2 tables. Final accepted version for publication in the IEEE Transactions on Information Theory. Material presented in part at the International Symposium on Information Theory and its Applications, Honolulu, HI, USA, October 2012.
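    For readers unfamiliar with the underlying design, the sketch below checks the defining property of a difference system of sets: every nonzero residue must arise at least $\rho$ times as a difference between elements of two different subsets. The function name and the toy example are illustrative, not taken from the paper.

```python
from collections import Counter

def min_cross_difference_count(sets, n):
    """For disjoint subsets Q_0, ..., Q_{q-1} of Z_n, count for every nonzero
    residue d how often d = a - b with a and b taken from different subsets,
    and return the smallest such count.  A difference system of sets of
    redundancy rho requires this minimum to be at least rho."""
    counts = Counter({d: 0 for d in range(1, n)})
    for i, Qi in enumerate(sets):
        for j, Qj in enumerate(sets):
            if i == j:
                continue
            for a in Qi:
                for b in Qj:
                    counts[(a - b) % n] += 1
    return min(counts.values())

# Classical toy example: the singletons of the planar difference set {1, 2, 4}
# in Z_7 form a perfect difference system of sets with redundancy 1.
print(min_cross_difference_count([{1}, {2}, {4}], 7))   # -> 1
```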

    Correlations and Clustering in Wholesale Electricity Markets

    We study the structure of locational marginal prices in day-ahead and real-time wholesale electricity markets. In particular, we consider the case of two North American markets and show that the price correlations contain information on the locational structure of the grid. We study various clustering methods and introduce a type of correlation function based on event synchronization for spiky time series, and another based on string correlations of location names provided by the markets. This allows us to reconstruct aspects of the locational structure of the grid.
    Comment: 30 pages, several pictures
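    A simplified version of the event-synchronization idea is sketched below: two spiky series are considered correlated to the extent that their spikes occur within a short time window of each other. This is a generic symmetric variant, not the exact estimator used in the paper.

```python
import numpy as np

def event_synchronization(times_x, times_y, tau):
    """Fraction of events (e.g. price spikes) that have a counterpart in the
    other series within +/- tau.  Returns a value in [0, 1]; a generic sketch
    of event synchronization, not the paper's estimator."""
    tx = np.asarray(times_x, dtype=float)
    ty = np.asarray(times_y, dtype=float)
    if tx.size == 0 or ty.size == 0:
        return 0.0
    c_xy = sum(np.any(np.abs(ty - t) <= tau) for t in tx)  # x-events matched in y
    c_yx = sum(np.any(np.abs(tx - t) <= tau) for t in ty)  # y-events matched in x
    return (c_xy + c_yx) / (tx.size + ty.size)

# Two nodes whose price spikes mostly coincide within one hour.
print(event_synchronization([1, 5, 20, 40], [1.5, 21, 40.2], tau=1.0))  # ~0.86
```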

    NOMAD: Non-locking, stOchastic Multi-machine algorithm for Asynchronous and Decentralized matrix completion

    We develop an efficient parallel distributed algorithm for matrix completion, named NOMAD (Non-locking, stOchastic Multi-machine algorithm for Asynchronous and Decentralized matrix completion). NOMAD is a decentralized algorithm with non-blocking communication between processors. One of the key features of NOMAD is that the ownership of a variable is asynchronously transferred between processors in a decentralized fashion. As a consequence, it is a lock-free parallel algorithm. In spite of being asynchronous, the variable updates of NOMAD are serializable, that is, there is an equivalent update ordering in a serial implementation. NOMAD outperforms synchronous algorithms which require explicit bulk synchronization after every iteration: our extensive empirical evaluation shows that our algorithm not only performs well in a distributed setting on commodity hardware, but also outperforms state-of-the-art algorithms on an HPC cluster in both multi-core and distributed-memory settings.
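    The per-entry update that NOMAD distributes is the standard stochastic gradient step for low-rank matrix completion. The sketch below shows only this serial kernel (variable names and hyper-parameters are illustrative); the paper's contribution is the asynchronous, lock-free passing of factor ownership between workers, which is not reproduced here.

```python
import numpy as np

def sgd_matrix_completion(ratings, num_rows, num_cols, rank=10,
                          lr=0.01, reg=0.05, epochs=20, seed=0):
    """Serial stochastic gradient descent for matrix completion: for each
    observed entry (i, j, r), nudge the row factor W[i] and column factor H[j]
    to reduce the regularized squared error."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((num_rows, rank))   # row factors
    H = 0.1 * rng.standard_normal((num_cols, rank))   # column factors
    for _ in range(epochs):
        for i, j, r in ratings:                       # observed entries
            err = r - W[i] @ H[j]
            W[i] += lr * (err * H[j] - reg * W[i])
            H[j] += lr * (err * W[i] - reg * H[j])
    return W, H

# Tiny example: three observed entries of a 3 x 2 matrix.
W, H = sgd_matrix_completion([(0, 0, 5.0), (1, 1, 3.0), (2, 0, 4.0)],
                             num_rows=3, num_cols=2, rank=2)
```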

    One machine, one minute, three billion tetrahedra

    This paper presents a new scalable parallelization scheme to generate the 3D Delaunay triangulation of a given set of points. Our first contribution is an efficient serial implementation of the incremental Delaunay insertion algorithm. A simple dedicated data structure, an efficient sorting of the points, and an optimized insertion algorithm allow us to accelerate reference implementations by a factor of three. Our second contribution is a multi-threaded version of the Delaunay kernel that is able to insert vertices concurrently. Moore curve coordinates are used to partition the point set, avoiding heavy synchronization overheads. Conflicts are managed by modifying the partitions with a simple rescaling of the space-filling curve. The performance of our implementation has been measured on three different processors, an Intel Core i7, an Intel Xeon Phi, and an AMD EPYC, on which we have been able to compute 3 billion tetrahedra in 53 seconds. This corresponds to a generation rate of over 55 million tetrahedra per second. We finally show how this very efficient parallel Delaunay triangulation can be integrated in a Delaunay refinement mesh generator which takes as input the triangulated surface boundary of the volume to mesh.
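    The spatial sorting step can be illustrated with a space-filling-curve key. The sketch below uses the simpler Morton (Z-order) key rather than the Moore curve used in the paper, purely to show how points are ordered so that successive Delaunay insertions stay spatially close to one another.

```python
import numpy as np

def morton_key_3d(x, y, z, bits=10):
    """Interleave the bits of quantized x, y, z grid coordinates into a single
    Morton (Z-order) key."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (3 * b)
        key |= ((y >> b) & 1) << (3 * b + 1)
        key |= ((z >> b) & 1) << (3 * b + 2)
    return key

def sort_points_spatially(points, bits=10):
    """Quantize points onto a 2^bits grid and sort them by Morton key so that
    consecutive insertions into the triangulation are spatially local."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    grid = ((pts - lo) / span * (2**bits - 1)).astype(int)
    keys = [morton_key_3d(x, y, z, bits) for x, y, z in grid]
    return pts[np.argsort(keys)]

ordered = sort_points_spatially(np.random.rand(1000, 3))
```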

    Parallel processing for scientific computations

    The main contribution of the effort in the last two years is the introduction of the MOPPS system. After an extensive literature search, we introduced the system described next. MOPPS employs a new solution to the problem of managing programs which solve scientific and engineering applications in a distributed processing environment. With this solution, autonomous computers cooperate efficiently in solving large scientific problems. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs. The program manager learns from experience.
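    A minimal sketch of the idea behind such a knowledge base (class and method names are hypothetical, not taken from the MOPPS implementation): record the observed runtime of each program under each scheduling choice, and prefer the historically fastest choice for future runs, trying unmeasured choices first.

```python
from collections import defaultdict

class ProgramManager:
    """Toy knowledge-base scheduler: remembers how long each program took
    under each scheduling choice and reuses the best-known choice."""

    def __init__(self, choices):
        self.choices = list(choices)          # e.g. candidate machine sets
        self.history = defaultdict(list)      # (program, choice) -> runtimes

    def record(self, program, choice, runtime):
        self.history[(program, choice)].append(runtime)

    def best_choice(self, program):
        untried = [c for c in self.choices if not self.history[(program, c)]]
        if untried:
            return untried[0]                 # explore choices never measured
        def mean_runtime(c):
            runs = self.history[(program, c)]
            return sum(runs) / len(runs)
        return min(self.choices, key=mean_runtime)
```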