
    Preamble design using embedded signalling for OFDM broadcast systems based on reduced-complexity distance detection

    No full text
    The second-generation digital terrestrial television broadcasting standard (DVB-T2) adopts the so-called P1 symbol as the preamble for initial synchronization. The P1 symbol also carries a number of basic transmission parameters, including the fast Fourier transform size and whether the single-input/single-output or the multiple-input/single-output mode is used, in order to configure the receiver appropriately for the subsequent processing. In this contribution, an improved preamble design is proposed, in which a pair of training sequences is inserted in the frequency domain and the distance between them is used for transmission-parameter signalling. At the receiver, only a low-complexity correlator is required to detect the signalling. Both the coarse carrier frequency offset and the signalling can be estimated simultaneously from the same correlation. Compared to the standardised P1 symbol, the proposed preamble design significantly reduces receiver complexity while retaining high robustness in frequency-selective fading channels. Furthermore, we demonstrate that the proposed design achieves better signalling performance than the standardised P1 symbol, despite reducing the numbers of multiplications and additions by about 40% and 20%, respectively.
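    The distance-based signalling idea can be sketched as follows (an illustrative sketch, not the paper's exact algorithm; the function and variable names are mine): two copies of a training sequence are placed in the frequency domain with a separation that encodes a parameter value, and the receiver slides a lag-d self-correlator over the symbol for each candidate distance and picks the distance with the largest peak. Because the metric is a magnitude, it is insensitive to the common phase rotation that a carrier frequency offset introduces between the two copies, which is why CFO and signalling can be read off the same correlation.

```python
import numpy as np

def detect_signalling_distance(rx, seq_len, candidates):
    """Pick the candidate distance d whose lag-d self-correlation of the
    frequency-domain symbol `rx` is largest (illustrative sketch).
    `seq_len` is the length of the embedded training sequence."""
    best_d, best_metric = None, -1.0
    for d in candidates:
        for start in range(len(rx) - d - seq_len + 1):
            a = rx[start:start + seq_len]
            b = rx[start + d:start + d + seq_len]
            metric = abs(np.vdot(a, b))  # |sum(conj(a) * b)|, phase-insensitive
            if metric > best_metric:
                best_metric, best_d = metric, d
    return best_d
```

    With the two training-sequence copies 20 subcarriers apart and candidate distances {10, 20, 30}, the correlator peaks at 20; only multiply-accumulate operations are needed, which is the low-complexity property the abstract emphasises.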

    Phase diagram of QCD at finite temperature and chemical potential from lattice simulations with dynamical Wilson quarks

    Full text link
    We present the first results for lattice QCD at finite temperature $T$ and chemical potential $\mu$ with four flavors of Wilson quarks. The calculations are performed using the imaginary chemical potential method at $\kappa = 0$, 0.001, 0.15, 0.165, 0.17 and 0.25, where $\kappa$ is the hopping parameter, related to the bare quark mass $m$ and lattice spacing $a$ by $\kappa = 1/(2ma+8)$. Such a method allows us to do large-scale Monte Carlo simulations at imaginary chemical potential $\mu = i\mu_I$. By analytic continuation of the data with $\mu_I < \pi T/3$ to real values of the chemical potential, we expect at each $\kappa \in [0, \kappa_{\mathrm{chiral}}]$ a transition line on the $(\mu, T)$ plane, in a region relevant to the search for the quark-gluon plasma in heavy-ion collision experiments. The transition is first order at small or large quark mass, and becomes a crossover at intermediate quark mass. Comment: Published version.
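    The quoted relation between the hopping parameter and the bare quark mass is easy to sanity-check numerically (a minimal sketch using only the formula stated in the abstract; the function names are mine):

```python
def kappa(ma):
    """Hopping parameter of Wilson quarks from the bare quark mass in
    lattice units: kappa = 1 / (2*m*a + 8)."""
    return 1.0 / (2.0 * ma + 8.0)

def bare_mass(k):
    """Inverse relation: m*a = 1/(2*kappa) - 4."""
    return 1.0 / (2.0 * k) - 4.0

# At zero bare mass, kappa = 1/8 = 0.125.  The simulated points 0.15,
# 0.165 and 0.17 therefore correspond to negative bare mass, as expected
# from the additive mass renormalization of Wilson quarks.
```

    For instance, $\kappa = 0.25$ corresponds to $ma = 1/0.5 - 4 = -2$ in lattice units.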

    Chiral Spin Liquid in a Frustrated Anisotropic Kagome Heisenberg Model

    Get PDF

    Learning Gradual Typing Performance

    Get PDF
    Gradual typing has emerged as a promising typing discipline for reconciling static and dynamic typing, which have complementary strengths and shortcomings. Thanks to these promises, gradual typing has gained tremendous momentum in both industry and academia. However, a main challenge in gradual typing is that the performance of its programs can often be unpredictable, and adding or removing the type of a single parameter may lead to wild performance swings. Many approaches have been proposed to optimize gradual typing performance, but little work has been done to aid the understanding of the performance landscape of gradual typing or to navigate the migration process (which adds type annotations to make programs more static) so as to avert performance slowdowns. Motivated by this situation, this work develops a machine-learning-based approach to predict the performance of each possible way of adding type annotations to a program. On top of that, many supports for program migration could be developed, such as finding the most performant neighbor of any given configuration. Our approach gauges the runtime overheads of dynamic type checks inserted by gradual typing and uses that information to train a machine learning model, which is then used to predict the running time of gradual programs. We have evaluated our approach on 12 Python benchmarks under both guarded and transient semantics. For guarded semantics, our evaluation results indicate that with only 40 training instances generated from each benchmark, the predicted times for all other instances differ on average by 4% from the measured times. For transient semantics, the time difference ratio is higher, but the absolute time difference is often within 0.1 seconds.
