
    Energy Efficient Power Allocation for Distributed Antenna System over Shadowed Nakagami Fading Channel

    In this paper, the energy efficiency (EE) of a downlink distributed antenna system (DAS) with multiple receive antennas is investigated over a composite fading channel that accounts for path loss, shadow fading, and Nakagami-m fading. Our aim is to maximize the EE, defined as the ratio of the transmission rate to the total consumed power, under a maximum transmit power constraint at each remote antenna. Starting from this definition and using an upper bound on the average EE, the optimization objective is formulated. Based on this objective, a suboptimal energy-efficient power allocation (PA) scheme is derived from the Karush-Kuhn-Tucker (KKT) conditions, and closed-form PA coefficients are obtained. The developed scheme achieves EE performance close to that of the existing optimal scheme, and it has lower complexity because only statistical channel information and fewer iterations are required. Moreover, it includes the scheme for the composite Rayleigh channel as a special case. Simulation results confirm the effectiveness of the developed scheme.
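
    A schematic form of the EE maximization described above, with notation assumed here rather than taken from the paper (per-antenna powers p_i, circuit power P_c, achievable rate R(p), N remote antennas):

    ```latex
    % Assumed schematic of the EE power-allocation problem, not the paper's exact model
    \max_{\mathbf{p} \ge 0} \;
      \eta_{\mathrm{EE}}(\mathbf{p}) = \frac{R(\mathbf{p})}{\sum_{i=1}^{N} p_i + P_c}
    \quad \text{s.t.} \quad p_i \le P_{\max,i}, \; i = 1,\dots,N
    ```

    The KKT conditions of a fractional program of this shape are what the abstract refers to when it mentions deriving closed-form PA coefficients.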

    A Dynamic Equivalent Energy Storage Model of Natural Gas Networks for Joint Optimal Dispatch of Electricity-Gas Systems

    The development of energy conversion techniques enhances the coupling between the gas network and the power system. However, challenges remain in the joint optimal dispatch of electricity-gas systems. The dynamic model of the gas network, described by partial differential equations, is complex and computationally demanding for power system operators. Furthermore, information privacy concerns and power system operators' limited access to detailed gas network models make it necessary to quantify the equivalent energy storage capacity of gas networks. This paper proposes a multi-port energy storage model with time-varying capacity that represents the dynamic gas state transformation and operational constraints in a compact and intuitive form. The model can be easily integrated into the optimal dispatch problem of the power system. Test cases demonstrate that the proposed model ensures feasible control strategies and significantly reduces the computational burden while maintaining high accuracy in the joint optimal dispatch of electricity-gas systems. In contrast, the existing static equivalent model fails to capture the full flexibility of the gas network and may yield infeasible results.
    Comment: 12 pages, 8 figures
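
    A minimal sketch of what a multi-port, time-varying equivalent storage constraint set could look like inside the power-system dispatch; the symbols are assumptions for illustration, not the paper's notation (g_{m,t}: net gas withdrawal at port m, E_t: equivalent stored energy, time-varying bounds supplied by the gas operator):

    ```latex
    % Assumed schematic of a time-varying equivalent storage model
    E_{t+1} = E_t - \Delta t \sum_{m=1}^{M} g_{m,t}, \qquad
    \underline{E}_t \le E_t \le \overline{E}_t, \qquad
    \underline{g}_{m,t} \le g_{m,t} \le \overline{g}_{m,t}
    ```

    Bounds that vary with t are what allow such an equivalent to encode linepack dynamics that a static storage equivalent cannot represent.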

    ZeroPrompt: Streaming Acoustic Encoders are Zero-Shot Masked LMs

    In this paper, we present ZeroPrompt (Figure 1-(a)) and the corresponding Prompt-and-Refine strategy (Figure 3), two simple but effective training-free methods to decrease the Token Display Time (TDT) of streaming ASR models without any accuracy loss. The core idea of ZeroPrompt is to append zeroed content to each chunk during inference, which acts like a prompt that encourages the model to predict future tokens before they are spoken. We argue that streaming acoustic encoders naturally have the modeling ability of masked language models, and our experiments demonstrate that ZeroPrompt is engineering-cheap and can be applied to streaming acoustic encoders on any dataset without accuracy loss. Specifically, compared with our baseline models, we achieve a 350-700 ms reduction in First Token Display Time (TDT-F) and a 100-400 ms reduction in Last Token Display Time (TDT-L), with theoretically and experimentally equal WER on both the Aishell-1 and Librispeech datasets.
    Comment: accepted by Interspeech 202
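
    A minimal sketch of the chunk-padding step described above, assuming a PyTorch streaming front end; the function and argument names are illustrative, not the paper's API:

    ```python
    import torch

    def zero_prompt_chunk(chunk_feats: torch.Tensor, prompt_frames: int) -> torch.Tensor:
        """Append zeroed frames to a streaming chunk (ZeroPrompt-style padding).

        chunk_feats: (batch, time, feat_dim) acoustic features of the current chunk.
        prompt_frames: number of all-zero frames appended as a "prompt" so the
        encoder can speculate on tokens that have not been spoken yet.
        """
        batch, _, feat_dim = chunk_feats.shape
        zeros = torch.zeros(batch, prompt_frames, feat_dim,
                            dtype=chunk_feats.dtype, device=chunk_feats.device)
        return torch.cat([chunk_feats, zeros], dim=1)

    # Tokens decoded from the zeroed region are provisional; in the Prompt-and-Refine
    # strategy they are refined once real audio for that region arrives.
    ```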

    LightGrad: Lightweight Diffusion Probabilistic Model for Text-to-Speech

    Recent advances in neural text-to-speech (TTS) models have brought thousands of TTS applications into daily life, where models are deployed in the cloud to provide services for customers. Among these models are diffusion probabilistic models (DPMs), which can be trained stably and are more parameter-efficient than other generative models. Because transmitting data between customers and the cloud introduces high latency and the risk of exposing private data, deploying TTS models on edge devices is preferred. When deploying DPMs on edge devices, two practical problems arise. First, current DPMs are not lightweight enough for resource-constrained devices. Second, DPMs require many denoising steps at inference time, which increases latency. In this work, we present LightGrad, a lightweight DPM for TTS. LightGrad is equipped with a lightweight U-Net diffusion decoder and a training-free fast sampling technique, reducing both model parameters and inference latency. Streaming inference is also implemented in LightGrad to further reduce latency. Compared with Grad-TTS, LightGrad achieves a 62.2% reduction in parameters and a 65.7% reduction in latency while preserving comparable speech quality on both Chinese Mandarin and English with 4 denoising steps.
    Comment: Accepted by ICASSP 202
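
    For orientation, a generic few-step deterministic reverse-diffusion loop of the kind that fast samplers compress; this is a schematic sketch under assumed names (denoiser, alpha_bars) and is not LightGrad's actual sampling technique:

    ```python
    import torch

    @torch.no_grad()
    def few_step_sample(denoiser, shape, alpha_bars, device="cpu"):
        """Generic DDIM-style deterministic sampler run with only a few steps.

        denoiser(x_t, t) is assumed to predict the noise added at step t.
        alpha_bars is a short schedule of cumulative alphas, e.g. 4 values
        for 4 denoising steps, ordered from t=0 (least noisy) to t=T-1.
        """
        x = torch.randn(shape, device=device)                 # start from pure noise
        for i in range(len(alpha_bars) - 1, -1, -1):          # iterate noisy -> clean
            a_t = alpha_bars[i]
            a_prev = alpha_bars[i - 1] if i > 0 else torch.ones_like(a_t)
            t = torch.full((shape[0],), i, device=device)
            eps = denoiser(x, t)                              # predicted noise
            x0_hat = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()    # predicted clean sample
            x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps  # deterministic update
        return x
    ```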