
    Data and Energy Integrated Communication Networks for Wireless Big Data

    This paper describes a new type of communication network called data and energy integrated communication networks (DEINs), which integrates the two traditionally separate processes of wireless information transfer (WIT) and wireless energy transfer (WET), enabling the co-transmission of data and energy. In particular, the radio-frequency energy transmission is intended for energy harvesting (EH) rather than information decoding. One driving force behind DEINs is wireless big data, which originates from wireless sensors that produce large volumes of small data items. These sensors are typically powered by batteries that drain sooner or later and must then be removed to be replaced or recharged. EH has emerged as a technology for charging batteries wirelessly, in a contactless way. Recent research has attempted to combine WET with WIT, typically under the label of simultaneous wireless information and power transfer. Such work in the literature largely focuses on the communication side of the wireless network, with particular emphasis on power allocation. The DEIN network proposed in this paper regards the convergence of WIT and WET as a full system that considers not only the physical layer but also the higher layers, such as medium access control and information routing. After describing the DEIN concept and its high-level architecture and protocol stack, the paper presents two use cases focusing on the lower and higher layers of a DEIN, respectively. The lower-layer use case concerns a fair resource allocation algorithm, whereas the higher-layer use case introduces an efficient data forwarding scheme combined with EH. The two case studies aim to explain the DEIN concept more concretely. Some future research directions and challenges are also pointed out.
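
    The abstract does not specify the fair resource allocation algorithm, so the following is only a minimal sketch of the kind of lower-layer decision a DEIN controller might make: splitting a time frame between a WET phase (all sensors harvest) and equal per-node WIT slots, choosing the split that maximizes the worst node's throughput (max-min fairness). All names (Node, allocate_frame, channel_gain, the harvest-then-transmit rate model) are assumptions for illustration, not the paper's method.

```python
# Hypothetical harvest-then-transmit frame allocation with max-min fairness.
from dataclasses import dataclass
import math


@dataclass
class Node:
    channel_gain: float   # channel power gain between access point and node
    harvest_eff: float    # energy-harvesting conversion efficiency


def throughput(node: Node, wet_time: float, wit_time: float,
               tx_power: float = 1.0, noise: float = 1e-3) -> float:
    """Rate (bits/Hz) if the node spends all harvested energy in its WIT slot."""
    if wit_time <= 0.0:
        return 0.0
    harvested = node.harvest_eff * tx_power * node.channel_gain * wet_time
    snr = (harvested / wit_time) * node.channel_gain / noise
    return wit_time * math.log2(1.0 + snr)


def allocate_frame(nodes: list[Node], frame: float = 1.0,
                   steps: int = 200) -> tuple[float, float]:
    """Grid-search the WET/WIT split that maximizes the minimum node throughput."""
    best_split, best_min = 0.0, -1.0
    for k in range(1, steps):
        wet_time = frame * k / steps
        wit_each = (frame - wet_time) / len(nodes)   # equal WIT slot per node
        worst = min(throughput(n, wet_time, wit_each) for n in nodes)
        if worst > best_min:
            best_min, best_split = worst, wet_time
    return best_split, best_min


if __name__ == "__main__":
    sensors = [Node(channel_gain=g, harvest_eff=0.6) for g in (0.9, 0.5, 0.2)]
    wet, min_rate = allocate_frame(sensors)
    print(f"WET phase: {wet:.2f} of the frame, worst-case rate: {min_rate:.3f} bits/Hz")
```

    The max-min criterion is chosen here only because the abstract emphasizes fairness; a real DEIN scheduler would also have to account for the higher-layer concerns (MAC and routing) that the paper highlights.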

    Revisit Weakly-Supervised Audio-Visual Video Parsing from the Language Perspective

    We focus on the weakly-supervised audio-visual video parsing (AVVP) task, which aims to identify and temporally locate all events in the audio and visual modalities. Previous works concentrate only on denoising the video-level overall labels across modalities, but overlook segment-level label noise, where adjacent video segments (i.e., 1-second video clips) may contain different events. Recognizing the events within a segment is challenging, however, because its label could be any combination of the events that occur in the video. To address this issue, we tackle AVVP from the language perspective, since language can freely describe how various events appear in each segment, beyond fixed labels. Specifically, we design language prompts that describe all possible cases of event appearance for each video. The similarity between the language prompts and each segment is then calculated, and the event of the most similar prompt is taken as the segment-level label. In addition, to handle mislabeled segments, we propose dynamic re-weighting of unreliable segments to adjust their labels. Experiments show that our simple yet effective approach outperforms state-of-the-art methods by a large margin.
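
    As a hedged illustration of the pseudo-labelling idea described in the abstract (not the authors' implementation), the sketch below assigns each segment the event set of its most similar language prompt and flags near-ties as unreliable, in the spirit of the dynamic re-weighting mentioned above. It assumes pre-computed segment features and prompt text embeddings; all array names, shapes, and the margin heuristic are illustrative assumptions.

```python
# Illustrative segment-level pseudo-labelling from prompt similarity.
import numpy as np


def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Cosine similarity between rows of a (S x D) and rows of b (P x D)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T


def segment_pseudo_labels(segment_feats: np.ndarray,
                          prompt_embs: np.ndarray,
                          prompt_events: list[set[str]],
                          margin: float = 0.05):
    """Label each segment with the event set of its most similar prompt.

    Segments whose best and second-best prompts are nearly tied (within
    `margin`) are flagged as unreliable, so a training loop could down-weight
    or re-label them.
    """
    sims = cosine_sim(segment_feats, prompt_embs)        # (S, P)
    best = sims.argmax(axis=1)
    sorted_sims = np.sort(sims, axis=1)                  # ascending per row
    reliable = (sorted_sims[:, -1] - sorted_sims[:, -2]) >= margin
    labels = [prompt_events[i] for i in best]
    return labels, reliable


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(10, 512))                   # 10 one-second segments
    prompts = rng.normal(size=(4, 512))                  # 4 candidate prompts
    events = [set(), {"speech"}, {"dog"}, {"speech", "dog"}]
    labels, reliable = segment_pseudo_labels(feats, prompts, events)
    print(labels[:3], reliable[:3])
```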