
    The Petz (lite) recovery map for scrambling channel

    We study properties of the Petz recovery map in chaotic systems, such as the Hayden-Preskill setup for evaporating black holes and the SYK model. Since these systems exhibit the phenomenon called scrambling, we expect that the expression of the recovery channel $\mathcal{R}$ gets simplified, given by just the adjoint $\mathcal{N}^{\dagger}$ of the original channel $\mathcal{N}$ which defines the time evolution of the states in the code subspace embedded into the physical Hilbert space. We check this phenomenon in two examples. The first one is the Hayden-Preskill setup described by Haar random unitaries. We compute the relative entropy $S(\mathcal{R}[\mathcal{N}[\rho]]\,\|\,\rho)$ and show that it vanishes when the decoupling is achieved. We further show that the simplified recovery map is equivalent to the protocol proposed by Yoshida and Kitaev. The second example is the SYK model, where the two-dimensional code subspace is defined by an insertion of a fermionic operator and the system is evolved by the SYK Hamiltonian. We check the recovery phenomenon by relating some matrix elements of the output density matrix, $\langle T|\mathcal{R}[\mathcal{N}[\rho]]|T'\rangle$, to Rényi-two modular flowed correlators, and show that they coincide with the elements of the input density matrix with small error after twice the scrambling time. Comment: 47 pages with 19 figures
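    For orientation, the standard forms of the objects named in this abstract (conventions assumed here, not quoted from the paper): the full Petz recovery map for a channel $\mathcal{N}$ with reference state $\sigma$ is

    $$
    \mathcal{R}_{\sigma,\mathcal{N}}(X) \;=\; \sigma^{1/2}\,\mathcal{N}^{\dagger}\!\left(\mathcal{N}(\sigma)^{-1/2}\, X\, \mathcal{N}(\sigma)^{-1/2}\right)\sigma^{1/2},
    $$

    and the "Petz lite" simplification discussed above keeps only the adjoint, $\mathcal{R}_{\text{lite}} \propto \mathcal{N}^{\dagger}$. The recovery criterion uses the relative entropy $S(\rho\,\|\,\sigma) = \operatorname{Tr}\,\rho\,(\log\rho - \log\sigma)$, which is nonnegative and vanishes only for $\rho = \sigma$, so $S(\mathcal{R}[\mathcal{N}[\rho]]\,\|\,\rho) = 0$ means the code state is recovered exactly.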

    Efficient Implementation of RLS-Based Adaptive Filters on nVIDIA GeForce Graphics Processing Unit

    This paper presents an efficient implementation of RLS-based adaptive filters with a large number of taps on an nVIDIA GeForce graphics processing unit (GPU) and the CUDA software development environment. Modification of the order and the combination of calculations reduces the number of accesses to slow off-chip memory. Assigning tasks to multiple threads also takes the memory access order into account. Multiple shader processor arrays are used to handle a large matrix. For an 8192-tap case, the GPU program is almost 30 times faster than a CPU program. Real-time processing is possible for an 8 kHz-sampling, 512-tap case by using 32 shader processors, which is only 25% of a GeForce 8800GTS.
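    For context on the per-sample cost that motivates the GPU offload, the standard exponentially weighted RLS recursion (textbook form, not reproduced from the paper) for an N-tap filter with input vector $x(n)$, desired signal $d(n)$, weight vector $w(n)$, inverse correlation matrix $P(n)$ and forgetting factor $\lambda$ is

    $$
    k(n) = \frac{P(n-1)\,x(n)}{\lambda + x^{T}(n)\,P(n-1)\,x(n)}, \qquad
    e(n) = d(n) - w^{T}(n-1)\,x(n),
    $$
    $$
    w(n) = w(n-1) + k(n)\,e(n), \qquad
    P(n) = \lambda^{-1}\!\left[P(n-1) - k(n)\,x^{T}(n)\,P(n-1)\right].
    $$

    The $N \times N$ matrix-vector products in $k(n)$ and $P(n)$ make the cost per sample $O(N^{2})$, which is why a several-thousand-tap filter benefits from keeping $P$ on the GPU and minimizing off-chip memory traffic.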

    Implementation of RLS-based Adaptive Filters on nVIDIA GeForce Graphics Processing Unit

    This paper presents an efficient implementation of RLS-based adaptive filters with a large number of taps on an nVIDIA GeForce graphics processing unit (GPU) and the CUDA software development environment. Modification of the order and the combination of calculations reduces the number of accesses to slow off-chip memory. Assigning tasks to multiple threads also takes the memory access order into account. For a 4096-tap case, the GPU program is almost three times faster than a CPU program.

    Computationally efficient implementation of sparse-tap FIR adaptive filters with tap-position control on Intel IA-32 processors

    Kanazawa University, Institute of Science and Engineering, Electrical and Computer Engineering

    Implementation of large-scale fir adaptive filters on NVIDIA GeForce graphics processing unit

    Kanazawa University, Institute of Science and Engineering, Electrical and Computer Engineering. This paper presents implementations of an FIR adaptive filter with a large number of taps on an nVIDIA GeForce graphics processing unit (GPU) and the CUDA software development environment. In order to overcome the long access latency of slow off-chip memory, a reduction of memory accesses by re-ordering and vector load/store operations and an increase of the number of threads are introduced. A tree adder is introduced to reduce the cost of summing the thread outputs. Simultaneous execution of multiple filters is also examined. On a low-cost platform such as an Atom/ION nettop, the GPU accelerates the computation by almost three times. For simultaneous multiple simulations, such as ensemble averaging, a GPU with a large number of processing elements outperforms a dual-core CPU: almost six times faster for 16 runs. © 2010 IEEE
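    To make the "tree adder" and coalesced-access ideas concrete, here is a minimal CUDA sketch (not the authors' code; the kernel name fir_dot_kernel and the TAPS and BLOCK values are assumptions for illustration): one thread block computes one FIR output sample by letting each thread accumulate a strided slice of the tap range and then summing the per-thread partial results with a shared-memory tree reduction.

    // Illustrative sketch only (not from the paper). One thread block computes
    // a single FIR output y = sum_k w[k] * x[k] over TAPS coefficients.
    #include <cstdio>
    #include <cuda_runtime.h>

    #define TAPS  4096      // assumed filter length for the example
    #define BLOCK 256       // assumed threads per block

    __global__ void fir_dot_kernel(const float *w, const float *x, float *y)
    {
        __shared__ float partial[BLOCK];

        // Strided accumulation: consecutive threads read consecutive words,
        // so global-memory loads are coalesced.
        float acc = 0.0f;
        for (int k = threadIdx.x; k < TAPS; k += blockDim.x)
            acc += w[k] * x[k];
        partial[threadIdx.x] = acc;
        __syncthreads();

        // Tree adder: sum BLOCK partial results in log2(BLOCK) steps
        // instead of a serial loop of BLOCK additions.
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (threadIdx.x < stride)
                partial[threadIdx.x] += partial[threadIdx.x + stride];
            __syncthreads();
        }

        if (threadIdx.x == 0)
            *y = partial[0];
    }

    int main()
    {
        float *w, *x, *y;                       // unified memory for brevity
        cudaMallocManaged(&w, TAPS * sizeof(float));
        cudaMallocManaged(&x, TAPS * sizeof(float));
        cudaMallocManaged(&y, sizeof(float));
        for (int k = 0; k < TAPS; ++k) { w[k] = 1.0f / TAPS; x[k] = 1.0f; }

        fir_dot_kernel<<<1, BLOCK>>>(w, x, y);
        cudaDeviceSynchronize();
        printf("y = %f\n", *y);                 // expect 1.0

        cudaFree(w); cudaFree(x); cudaFree(y);
        return 0;
    }

    In a full filter one block would be launched per output sample (or a second reduction stage would combine per-block sums); the point here is only that the summation cost per block drops from a serial chain of BLOCK additions to about log2(BLOCK) synchronized steps.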

    Implementation of stereophonic acoustic echo canceller on nVIDIA GeForce graphics processing unit

    Kanazawa University, Institute of Science and Engineering, Electrical and Computer Engineering. This paper presents an implementation of a stereophonic acoustic echo canceller on an nVIDIA GeForce graphics processor and the CUDA software development environment. For efficiency, fast shared memory has been used as much as possible. A tree adder is introduced to reduce the cost of summing the thread outputs. The performance evaluation results suggest that even a low-cost GPU with a small number of shader processors greatly helps echo cancellation for low-cost PC-based teleconferencing. ©2009 IEEE.
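    For reference, the usual stereophonic echo-cancellation signal model (standard textbook form, not quoted from the paper): with loudspeaker signal vectors $x_{1}(n)$ and $x_{2}(n)$, adaptive filters $w_{1}(n)$ and $w_{2}(n)$, and microphone signal $d(n)$, the canceller outputs the residual

    $$
    e(n) \;=\; d(n) \;-\; w_{1}^{T}(n)\,x_{1}(n) \;-\; w_{2}^{T}(n)\,x_{2}(n),
    $$

    and adapts both filters to minimize it; the two long dot products per microphone channel are the kind of summations that the shared-memory tree adder targets on the GPU.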
