Reusing Wireless Power Transfer for Backscatter-assisted Cooperation in WPCN
This paper studies a novel user cooperation method in a wireless powered
communication network (WPCN), where a pair of closely located devices first
harvest wireless energy from an energy node (EN) and then use the harvested
energy to transmit information to an access point (AP). In particular, we
consider two energy-harvesting users that first exchange their messages and
then transmit cooperatively to the AP using space-time block codes.
Interestingly, we exploit the short distance between the two users and achieve
the information exchange with an energy-conserving backscatter technique.
Meanwhile, the considered backscatter-assisted method can effectively reuse
wireless power transfer for simultaneous information exchange during the
energy harvesting phase. Specifically, we maximize the common throughput by
optimizing the time allocated to energy and information transmission.
Simulation results show that the proposed user cooperation scheme can
effectively improve the throughput fairness compared to some representative
benchmark methods.
Comment: The paper has been accepted for publication in MLICOM 201
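The common-throughput maximization over the time allocation can be sketched as a one-dimensional search. Everything below is an illustrative assumption, not the paper's formulation: the channel gains `H`, energy-node power `P_EN`, harvesting efficiency `ETA`, and noise power `NOISE` are made-up values, and a simple linear energy-harvesting model is used.

```python
import math

# Hypothetical parameters (illustrative, not from the paper).
H = [0.05, 0.02]   # effective channel gains of the two users
P_EN = 1.0         # energy node (EN) transmit power, W
ETA = 0.6          # energy-harvesting efficiency (linear EH model)
NOISE = 1e-3       # receiver noise power

def common_throughput(tau):
    """Minimum of the two users' throughputs when a fraction tau of the
    block is spent harvesting and (1 - tau) transmitting to the AP."""
    t_tx = 1.0 - tau
    rates = []
    for h in H:
        energy = ETA * P_EN * tau * h    # energy harvested during tau
        snr = (energy / t_tx) * h / NOISE  # spend all energy on transmission
        rates.append(t_tx * math.log2(1.0 + snr))
    return min(rates)  # common (fair) throughput

# Grid search over the harvesting-time fraction tau in (0, 1).
best_tau = max((i / 1000 for i in range(1, 1000)), key=common_throughput)
print(f"tau = {best_tau:.3f}, common throughput = "
      f"{common_throughput(best_tau):.4f} bits/s/Hz")
```

In this simplified model the objective is unimodal in the time split, so a fine grid (or bisection on the derivative) suffices; the paper's actual problem additionally allocates time to the backscatter information-exchange phase.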
Throughput Maximization for Ambient Backscatter Communication: A Reinforcement Learning Approach
Ambient backscatter (AB) communication is an emerging wireless communication
technology that enables wireless devices (WDs) to communicate without requiring
active radio transmission. In an AB communication system, a WD switches between
communication and energy harvesting modes. The harvested energy is used to
power the device's operations, e.g., circuit power consumption and sensing.
In this paper, we focus on maximizing the throughput performance of an AB
communication system by adaptively selecting the operating mode in a fading
channel environment.
channel environment. We model the problem as an infinite-horizon Markov
Decision Process (MDP) and accordingly obtain the optimal mode switching policy
by the value iteration algorithm, given the channel distributions. Meanwhile,
when knowledge of the channel distribution is absent, a Q-learning (QL) method
is applied to learn a suboptimal strategy through the device's repeated
interaction with the environment. Finally, our simulations show that the
proposed QL method can achieve close-to-optimal throughput performance and
significantly outperforms the representative benchmark methods.
Comment: The paper has been accepted by IEEE ITNEC 201
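The Q-learning mode-switching idea can be sketched as follows. This is a minimal toy model, not the paper's system: the quantized Rayleigh-style fading, the battery levels, the per-slot energy costs, and the reward (a channel-dependent rate for communicating, zero for harvesting) are all assumptions chosen for illustration.

```python
import random
import math

random.seed(0)

N_CH, N_B = 4, 3              # quantized channel states; battery levels 0..2
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
ACTIONS = (0, 1)              # 0 = harvest energy, 1 = communicate

# Q-table over (channel state, battery level) pairs.
Q = {(c, b): [0.0, 0.0] for c in range(N_CH) for b in range(N_B)}

def sample_channel():
    # Rayleigh-style fading gain, quantized into N_CH levels (illustrative).
    return min(N_CH - 1, int(random.expovariate(1.0) / 0.5))

def step(state, action):
    """Toy environment: communicating spends one battery unit and earns a
    channel-dependent rate; harvesting earns nothing but refills one unit."""
    ch, b = state
    if action == 1 and b > 0:
        reward, b = math.log2(1.0 + ch), b - 1
    else:
        reward, b = 0.0, min(N_B - 1, b + 1)
    return reward, (sample_channel(), b)

state = (sample_channel(), 0)
for _ in range(50000):
    feasible = ACTIONS if state[1] > 0 else (0,)  # must harvest when empty
    if random.random() < EPS:                     # epsilon-greedy exploration
        action = random.choice(feasible)
    else:
        action = max(feasible, key=lambda a: Q[state][a])
    reward, nxt = step(state, action)
    # Standard Q-learning temporal-difference update.
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
    state = nxt

# With a full battery and the best channel, communicating should dominate.
best = max(ACTIONS, key=lambda a: Q[(N_CH - 1, N_B - 1)][a])
print("action in best channel state with full battery:", best)
```

The learned greedy policy mirrors the intuition in the abstract: communicate when the channel is good and the battery permits, otherwise harvest. With known channel distributions, the same MDP could instead be solved exactly by value iteration over this state space.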