
    User Association Optimisation in HetNets: Algorithms and Performance

    PhD thesis. The fifth generation (5G) of mobile networks is expected to deliver significantly higher transmission rates and energy efficiency than existing networks. Heterogeneous networks (HetNets), where various low-power base stations (BSs) are underlaid in a macro-cellular network, are likely to become the dominant theme in the wireless evolution towards 5G. However, the complexity of HetNet scenarios poses substantial challenges to user association design. This thesis focuses on user association optimisation for different HetNets scenarios. First, a user association policy is designed for conventional grid-powered HetNets via game theory. An optimal user association algorithm is proposed to improve the downlink (DL) system performance. To address the uplink-downlink (UL-DL) asymmetry issue in HetNets, a joint UL and DL user association algorithm is further developed to enhance both UL and DL energy efficiency. In addition, an opportunistic user association algorithm for multi-service HetNets is proposed to provide quality of service (QoS) for delay-constrained traffic while allocating resources fairly to best-effort traffic. Second, driven by increasing environmental concerns, a user association policy is designed for green HetNets with renewable-energy-powered BSs. In this scenario, the proposed adaptive user association algorithm adapts the user association decision to the amount of renewable energy harvested by the BSs. Third, HetNets with hybrid energy sources are investigated, as BSs powered by both the power grid and renewable energy sources are superior in supporting uninterrupted service as well as achieving green communications. In this context, an optimal user association algorithm is developed to achieve a tradeoff between average traffic delay and on-grid energy consumption.
Additionally, a two-dimensional optimisation over user association and green energy allocation is proposed to minimise both total and peak on-grid energy consumption, as well as to enhance QoS provision. Thorough theoretical analysis underpins the development of all proposed algorithms, and their performance is evaluated via comprehensive simulations.
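The flavour of utility-based user association described above can be sketched with a toy greedy algorithm: each user attaches to the base station that maximises a log-rate utility penalised by current load. The utility form, the `alpha` weight, and the function name are illustrative assumptions, not the thesis's actual algorithms.

```python
import math

def load_aware_association(rates, alpha=1.0):
    """Greedy load-aware user association sketch (hypothetical utility).

    rates[u][b]: achievable rate of user u on base station b.
    Attaching u to b scores log(rate) - alpha * log(1 + load_b),
    a proportional-fairness-style form; users are served in order.
    """
    n_bs = len(rates[0])
    load = [0] * n_bs            # number of users already attached per BS
    assoc = []
    for user_rates in rates:
        best_b, best_util = None, -math.inf
        for b in range(n_bs):
            if user_rates[b] <= 0:
                continue          # BS out of range for this user
            util = math.log(user_rates[b]) - alpha * math.log(1 + load[b])
            if util > best_util:
                best_b, best_util = b, util
        assoc.append(best_b)
        load[best_b] += 1
    return assoc
```

With three users and two BSs, the load penalty keeps the third user on the lightly loaded small cell even though both BSs are reachable.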

    User relay assisted traffic shifting in LTE-advanced systems

    To deal with uneven load distribution, mobility load balancing adjusts the handover region to shift edge users from a hot-spot cell to less-loaded neighbouring cells. However, shifted users suffer reduced signal power from the neighbouring cells, which may degrade link quality. This paper employs a user relaying model and proposes a user relay assisted traffic shifting (URTS) scheme to address this problem. In URTS, a shifted user selects a suitable non-active user as a relay to forward its data, thus enhancing the shifted user's link quality. Since the user relaying model consumes the relay user's energy, a utility function is designed for relay selection to trade off the shifted user's link quality improvement against the relay user's energy consumption. Simulation results show that the URTS scheme improves the SINR and throughput of shifted users while keeping the relay user's energy consumption at an acceptable level.
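A minimal sketch of the relay-selection trade-off described above, assuming a simple linear utility; the weight `beta`, the candidate inputs, and the function name are hypothetical, not the paper's exact design:

```python
def select_relay(candidates, beta=0.5):
    """URTS-style relay selection sketch (hypothetical linear utility).

    candidates: list of (sinr_gain, energy_cost) tuples, one per
    non-active user that could relay the shifted user's data.
    Utility = sinr_gain - beta * energy_cost; return the index of
    the best candidate, or None if no candidate has positive utility
    (i.e. relaying is not worth the energy it costs).
    """
    best_idx, best_util = None, 0.0
    for idx, (gain, cost) in enumerate(candidates):
        util = gain - beta * cost
        if util > best_util:
            best_idx, best_util = idx, util
    return best_idx
```

Returning `None` when every utility is non-positive mirrors the trade-off in the abstract: a relay is only used when the link-quality gain outweighs the energy cost.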

    Caching deployment algorithm based on user preference in device-to-device networks

    In cache-enabled D2D communication networks, the cache space in a mobile terminal is small relative to the huge volume of multimedia content. As such, the strategy for caching diverse contents across multiple cache-enabled mobile terminals, namely caching deployment, has a substantial impact on network performance. In this paper, a user preference aware caching deployment algorithm is proposed for D2D caching networks. First, based on the concept of user preference, user interest similarity is defined, which can be used to evaluate how similar two users' preferences are. Then, the content cache utility of a mobile terminal is defined by taking into consideration the communication coverage of this mobile terminal and the user interest similarity of its adjacent mobile terminals. A logarithmic utility maximisation problem for caching deployment is formulated. Subsequently, we relax this problem and obtain a low-complexity near-optimal solution via a dual decomposition method. The convergence of the proposed caching deployment algorithm is validated by simulation results. Compared with existing caching placement methods, the proposed algorithm achieves significant improvement in cache hit ratio, content access delay, and traffic offloading gain.
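One plausible instantiation of the "user interest similarity" above is cosine similarity between users' content-preference vectors (one entry per content item); the paper's exact definition may differ.

```python
import math

def interest_similarity(pref_a, pref_b):
    """Cosine similarity between two users' content-preference vectors.

    A hypothetical stand-in for the paper's user interest similarity:
    1.0 for identical interests, 0.0 for disjoint interests.
    """
    dot = sum(a * b for a, b in zip(pref_a, pref_b))
    norm_a = math.sqrt(sum(a * a for a in pref_a))
    norm_b = math.sqrt(sum(b * b for b in pref_b))
    if norm_a == 0 or norm_b == 0:
        return 0.0                # a user with no recorded preferences
    return dot / (norm_a * norm_b)
```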

    User preference aware caching deployment for device-to-device caching networks

    Content caching in device-to-device (D2D) cellular networks can be utilised to improve content delivery efficiency and reduce the traffic load of cellular networks. In such cache-enabled D2D cellular networks, how to cache diverse contents across the multiple cache-enabled mobile terminals, namely the caching deployment, has a substantial impact on network performance. In this paper, a user preference aware caching deployment algorithm is proposed for D2D caching networks. First, user interest similarity is defined based on user preference. Then, the content cache utility of a mobile terminal is defined by taking into consideration the transmission coverage region of this mobile terminal and the user interest similarity of its adjacent mobile terminals. A general cache utility maximisation problem with joint caching deployment and cache space allocation is formulated, into which a logarithmic utility function is integrated. In doing so, the caching deployment and the cache space allocation can be decoupled by equal cache space allocation. Subsequently, we relax the logarithmic utility maximisation problem and obtain a low-complexity near-optimal solution via a dual decomposition method. Compared with existing caching placement methods, the proposed algorithm achieves significant improvement in cache hit ratio, content access delay, and traffic offloading gain.
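The equal-cache-space decoupling above suggests a simple baseline: once every terminal has the same cache budget, each terminal independently caches the contents with the highest utility. This greedy sketch (inputs and names assumed) is a stand-in for, not a reproduction of, the paper's dual-decomposition solution:

```python
def greedy_cache_deployment(utilities, cache_size):
    """Greedy caching deployment under equal cache space (sketch).

    utilities[t][c]: cache utility of content c at terminal t, e.g.
    combining coverage and neighbour interest similarity as in the
    abstract. Each terminal caches the cache_size highest-utility
    contents; returns one sorted content-index list per terminal.
    """
    deployment = []
    for per_terminal in utilities:
        ranked = sorted(range(len(per_terminal)),
                        key=lambda c: per_terminal[c],
                        reverse=True)
        deployment.append(sorted(ranked[:cache_size]))
    return deployment
```

Because terminals decide independently here, popular content may be duplicated on neighbouring terminals; the paper's joint formulation exists precisely to coordinate such choices.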

    Transformer-QEC: Quantum Error Correction Code Decoding with Transferable Transformers

    Quantum computing has the potential to solve problems that are intractable for classical systems, yet the high error rates in contemporary quantum devices often exceed tolerable limits for useful algorithm execution. Quantum Error Correction (QEC) mitigates this by employing redundancy, distributing quantum information across multiple data qubits and utilizing syndrome qubits to monitor their states for errors. The syndromes are subsequently interpreted by a decoding algorithm to identify and correct errors in the data qubits. This task is complex due to the multiplicity of error sources affecting both data and syndrome qubits as well as syndrome extraction operations. Additionally, identical syndromes can emanate from different error sources, necessitating a decoding algorithm that evaluates syndromes collectively. Although machine learning (ML) decoders such as multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs) have been proposed, they often focus on local syndrome regions and require retraining when adjusting for different code distances. We introduce a transformer-based QEC decoder which employs self-attention to achieve a global receptive field across all input syndromes. It incorporates a mixed loss training approach, combining both local physical error and global parity label losses. Moreover, the transformer architecture's inherent adaptability to variable-length inputs allows for efficient transfer learning, enabling the decoder to adapt to varying code distances without retraining. Evaluation on six code distances and ten different error configurations demonstrates that our model consistently outperforms non-ML decoders, such as Union Find (UF) and Minimum Weight Perfect Matching (MWPM), and other ML decoders, thereby achieving the best logical error rates. Moreover, transfer learning can save over 10x in training cost.
    Comment: Accepted to ICCAD 2023, FAST ML for Science Workshop; 7 pages, 8 figures.
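The "global receptive field" claim above can be illustrated with a bare-bones single-head self-attention pass over syndrome embeddings: every output token is a weighted mixture of all input tokens, so each decoded position sees every syndrome. Identity Q/K/V projections are used for brevity; a real decoder such as the one in the paper learns these projections.

```python
import math

def self_attention(tokens):
    """Single-head self-attention sketch over syndrome embeddings.

    tokens: list of equal-length vectors, one per syndrome bit.
    Uses identity Q/K/V projections (a simplification): for each
    query token, compute scaled dot-product scores against every
    token, softmax them, and return the weighted sum of all tokens.
    """
    d = len(tokens[0])
    outputs = []
    for query in tokens:
        scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
                  for key in tokens]
        # numerically stable softmax over the scores
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        outputs.append([sum(w * tok[j] for w, tok in zip(weights, tokens))
                        for j in range(d)])
    return outputs
```

Even in this toy form, each output row depends on every input row, which is exactly the property that lets a transformer decoder evaluate syndromes collectively rather than in local patches.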

    Refined Qingkailing Protects MCAO Mice from Endoplasmic Reticulum Stress-Induced Apoptosis with a Broad Time Window

    In the current study, we investigated the effect of refined Qingkailing (RQKL) on ischemia-reperfusion-induced brain injury in mice. Methods. Ischemia-reperfusion brain injury was induced in mice by middle cerebral artery occlusion (MCAO). RQKL solution was administered at different doses (0, 1.5, 3, and 6 mL/kg body weight) at the onset of ischemia, and at a dose of 1.5 mL/kg at different time points (0, 1.5, 3, 6, and 9 h after MCAO). Neurological function and brain infarction were examined, and cell apoptosis and ROS in the prefrontal cortex were evaluated 24 h after MCAO; western blot and intracellular calcium were also assessed. Results. RQKL at all doses improved neurological function and decreased brain infarction, with significant effects in the 0, 1.5, 3, and 6 h groups. Moreover, RQKL reduced apoptosis through reduction of caspase-3 expression and restraint of eIF2a phosphorylation and caspase-12 activation. It was also able to reduce ROS and modulate intracellular calcium in the brain. Conclusion. RQKL can prevent ischemia-induced brain injury with a time window of 6 h, and its mechanism may be related to suppression of ER stress-mediated apoptotic signaling.