
    Toward Autonomous Power Control in Semi-Grant-Free NOMA Systems: A Power Pool-Based Approach

    In this paper, we design a resource block (RB)-oriented power pool (PP) for semi-grant-free non-orthogonal multiple access (SGF-NOMA) in the presence of residual errors resulting from imperfect successive interference cancellation (SIC). In the proposed method, the base station (BS) allocates one orthogonal RB to each grant-based (GB) user, determines the acceptable received power from grant-free (GF) users on that RB, and broadcasts the corresponding threshold. Each GF user, acting as an agent, tries to find the optimal transmit power and RB without affecting the quality-of-service (QoS) and ongoing transmission of the GB user. To this end, we formulate the transmit power and RB allocation problem as a stochastic Markov game to design the desired PPs and maximize the long-term system throughput. The problem is then solved using multi-agent (MA) deep reinforcement learning algorithms, namely double deep Q-networks (DDQN) and Dueling DDQN, chosen for their enhanced capabilities in value estimation and policy learning, with the latter performing best in environments with large state and action spaces. The agents (GF users) take actions, specifically adjusting power levels and selecting RBs, to maximize the cumulative reward (throughput). Simulation results indicate that the proposed algorithm is computationally scalable, incurs minimal signaling overhead, and achieves notable gains in system throughput compared to existing SGF-NOMA systems. We examine the effect of SIC error levels on the sum rate and user transmit power, revealing a decrease in sum rate and an increase in user transmit power as QoS requirements and error variance escalate. We demonstrate that PPs can benefit new (untrained) users joining the network and that the proposed scheme outperforms conventional SGF-NOMA without PPs in spectral efficiency.
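    To make the value-estimation architecture named above concrete, the following is a minimal sketch, assuming a joint discrete action space of power levels and RBs, of a dueling Q-network for a single grant-free agent; the state dimension, layer sizes, and variable names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    """Dueling Q-network over a joint (power level, RB) action space."""
    def __init__(self, state_dim, num_power_levels, num_rbs, hidden=128):
        super().__init__()
        self.num_rbs = num_rbs
        self.num_actions = num_power_levels * num_rbs
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)                     # state-value stream V(s)
        self.advantage = nn.Linear(hidden, self.num_actions)  # advantage stream A(s, a)

    def forward(self, state):
        h = self.feature(state)
        v, a = self.value(h), self.advantage(h)
        # Dueling combination: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=-1, keepdim=True)

# Greedy action selection for one agent (epsilon-greedy exploration omitted).
net = DuelingQNetwork(state_dim=8, num_power_levels=5, num_rbs=4)
state = torch.randn(1, 8)                        # hypothetical local observation
action = int(net(state).argmax(dim=-1))
power_idx, rb_idx = divmod(action, net.num_rbs)  # decode the joint (power, RB) action
```

    The dueling split lets an agent learn how valuable its current channel state is separately from the relative merit of each (power, RB) choice, which is why this architecture tends to cope better with the large action spaces described above.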

    Deep generative models for network data synthesis and monitoring

    Measurement and monitoring are fundamental tasks in all networks, enabling downstream management and optimization of the network. Although networks inherently generate abundant monitoring data, accessing and effectively measuring those data is another story. The challenges exist in many aspects. First, network monitoring data are inaccessible to external users, and it is hard to provide a high-fidelity dataset without leaking commercially sensitive information. Second, it can be very expensive to carry out effective data collection covering a large-scale network system, given the growing size of networks, e.g., the number of cells in a radio network and the number of flows in an Internet Service Provider (ISP) network. Third, it is difficult to ensure fidelity and efficiency simultaneously in network monitoring, as the resources available in network elements to support measurement functions are too limited to implement sophisticated mechanisms. Finally, understanding and explaining the behavior of the network becomes challenging due to its size and complex structure. Various emerging optimization-based solutions (e.g., compressive sensing) and data-driven solutions (e.g., deep learning) have been proposed for the aforementioned challenges. However, the fidelity and efficiency of existing methods cannot yet meet current network requirements. The contributions made in this thesis significantly advance the state of the art in network measurement and monitoring techniques. Throughout the thesis, we leverage cutting-edge machine learning technology, namely deep generative modeling. First, we design and realize APPSHOT, an efficient city-scale network traffic sharing system built on a conditional generative model, which only requires open-source contextual data during inference (e.g., land use information and population distribution). Second, we develop an efficient drive testing system, GENDT, based on a generative model that combines graph neural networks, conditional generation, and quantified model uncertainty to enhance the efficiency of mobile drive testing. Third, we design and implement DISTILGAN, a high-fidelity, efficient, versatile, and real-time network telemetry system based on latent GANs and spectral-temporal networks. Finally, we propose SPOTLIGHT, an accurate, explainable, and efficient anomaly detection system for the Open RAN (Radio Access Network). The lessons learned through this research are summarized, and interesting topics are discussed for future work in this domain. All proposed solutions have been evaluated with real-world datasets and applied to support different applications in real systems.
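    As a rough illustration of conditional generation over contextual data, the sketch below shows a toy generator that maps random noise plus contextual features (e.g., a land-use mix) to a synthetic traffic profile; the network shape, feature names, and dimensions are assumptions for illustration and do not reflect the actual APPSHOT architecture.

```python
import torch
import torch.nn as nn

class ConditionalTrafficGenerator(nn.Module):
    """Maps noise + contextual features to a non-negative synthetic traffic profile."""
    def __init__(self, noise_dim=16, context_dim=4, traffic_dim=24, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, traffic_dim), nn.Softplus(),  # keep traffic volumes non-negative
        )

    def forward(self, noise, context):
        # Conditioning: concatenate the noise vector with the contextual features.
        return self.net(torch.cat([noise, context], dim=-1))

gen = ConditionalTrafficGenerator()
context = torch.tensor([[0.3, 0.1, 0.5, 0.1]])  # hypothetical land-use / population mix for one cell
profile = gen(torch.randn(1, 16), context)      # one synthetic 24-hour traffic profile
```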

    Reliable indoor optical wireless communication in the presence of fixed and random blockers

    The rapid advancement of smartphones has led to exponential growth in the number of internet users, which is expected to reach 71% of the global population by the end of 2027. This, in turn, has driven demand for wireless data and internet devices capable of providing energy-efficient, reliable, and high-speed wireless data services. Light-fidelity (LiFi), one of the optical wireless communication (OWC) technologies, is envisioned as a promising solution to accommodate these demands. However, the indoor LiFi channel is highly environment-dependent and can be influenced by several crucial factors (e.g., the presence of people and furniture, the random orientation of users' devices, and the limited field of view (FOV) of optical receivers), which may contribute to blockage of the line-of-sight (LOS) link. In this thesis, it is investigated whether deep learning (DL) techniques can effectively learn the distinct features of the indoor LiFi environment in order to provide superior performance compared to conventional channel estimation techniques (e.g., minimum mean square error (MMSE) and least squares (LS)). This advantage is particularly evident when access to real-time channel state information (CSI) is restricted, and it comes at the cost of collecting large, meaningful datasets to train the DL neural networks and of the associated offline training time. Two DL-based schemes are designed for signal detection and resource allocation, and it is shown that the proposed methods offer performance close to the optimal conventional schemes and demonstrate substantial gains in bit-error ratio (BER) and throughput, especially in more realistic or complex indoor environments. Performance analysis of LiFi networks under the influence of fixed and random blockers is essential, and efficient solutions capable of diminishing the blockage effect are required. In this thesis, a CSI acquisition technique for a reconfigurable intelligent surface (RIS)-aided LiFi network is proposed to significantly reduce the dimension of the decision variables required for RIS beamforming. Furthermore, it is shown that several RIS attributes, such as shape, size, height, and distribution, play important roles in increasing network performance. Finally, the performance analysis for an RIS-aided realistic indoor LiFi network is presented. The proposed RIS configuration shows outstanding performance in reducing the network outage probability under the effects of blockages, random device orientation, limited receiver FOV, furniture, and user behavior. Establishing a LOS link that achieves uninterrupted wireless connectivity in a realistic indoor environment can be challenging. In this thesis, an analysis of link blockage is presented for an indoor LiFi system considering fixed and random blockers. In particular, novel analytical frameworks for the coverage probability of single-source and multi-source configurations are derived. Using the proposed analytical frameworks, link blockages in the indoor LiFi network are carefully investigated, and it is shown that the incorporation of multiple sources and RISs can significantly reduce the LOS coverage blockage probability in indoor LiFi systems.
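    To make the conventional baselines mentioned above concrete, the following is a minimal sketch of LS and an MMSE-style estimate of a single LiFi channel gain from known pilot symbols; the pilot pattern, noise level, and prior variance are illustrative assumptions rather than values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = 0.8                      # true optical channel gain (hypothetical)
pilots = np.ones(16)              # known unit-amplitude pilot symbols
noise = 0.1 * rng.standard_normal(16)
received = h_true * pilots + noise

# LS estimate: h_ls = (x^T y) / (x^T x)
h_ls = pilots @ received / (pilots @ pilots)

# MMSE-style estimate: shrink the LS solution toward zero using the noise
# variance and an assumed prior variance of the channel gain.
noise_var, prior_var = 0.01, 1.0
h_mmse = (prior_var / (prior_var + noise_var / (pilots @ pilots))) * h_ls

print(f"LS: {h_ls:.3f}, MMSE: {h_mmse:.3f}")
```

    A DL-based estimator would replace these closed-form rules with a trained network, which is where the offline data-collection and training cost discussed above comes in.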

    Securing NextG networks with physical-layer key generation: A survey

    As the development of next-generation (NextG) communication networks continues, a tremendous number of devices are accessing the network and the amount of information is exploding. However, with the increase in sensitive data that must be transmitted and stored confidentially in the network, wireless network security risks are further amplified. Physical-layer key generation (PKG) has received extensive attention in security research due to its solid information-theoretic security proof, ease of implementation, and low cost. Nevertheless, the application of PKG in NextG networks is still at a preliminary exploration stage. Therefore, we survey existing research and discuss (1) the performance advantages of PKG compared to cryptographic schemes, (2) the principles and processes of PKG, as well as research progress in previous network environments, and (3) new application scenarios and development potential for PKG in NextG communication networks, particularly analyzing the effect and prospects of PKG in massive multiple-input multiple-output (MIMO), reconfigurable intelligent surfaces (RISs), artificial intelligence (AI)-enabled networks, integrated space-air-ground networks, and quantum communication. Moreover, we summarize open issues and provide new insights into the development trends of PKG in NextG networks.
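    As a concrete illustration of one step in the PKG process referred to above, the sketch below quantizes correlated channel measurements held by two parties into key bits and reports their disagreement rate before reconciliation; the synthetic measurements and the simple median-threshold quantizer are illustrative assumptions, not a specific scheme from the survey.

```python
import numpy as np

rng = np.random.default_rng(42)
channel = rng.standard_normal(128)                  # shared reciprocal channel samples
alice = channel + 0.05 * rng.standard_normal(128)   # Alice's noisy observation
bob = channel + 0.05 * rng.standard_normal(128)     # Bob's noisy observation

def quantize(samples):
    # One-bit quantization against the median: above -> 1, below -> 0.
    return (samples > np.median(samples)).astype(int)

key_a, key_b = quantize(alice), quantize(bob)
mismatch = np.mean(key_a != key_b)
print(f"Key disagreement rate before reconciliation: {mismatch:.2%}")
```

    In a full PKG pipeline the remaining mismatched bits would be corrected by information reconciliation and the agreed key compressed by privacy amplification.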

    Optimization of Beyond 5G Network Slicing for Smart City Applications

    Transitioning from the current fifth-generation (5G) wireless technology, the advent of beyond 5G (B5G) signifies a pivotal stride toward sixth-generation (6G) communication technology. B5G, at its essence, harnesses end-to-end (E2E) network slicing (NS) technology, enabling the simultaneous accommodation of multiple logical networks with distinct performance requirements on a shared physical infrastructure. At the forefront of this implementation lies the critical process of network slice design, a phase central to the realization of efficient smart city networks. This thesis addresses this key phase of the network slicing life cycle, emphasizing the analysis and formulation of optimal procedures for configuring, customizing, and allocating E2E network slices. The focus extends to catering to the unique demands of smart city applications, encompassing critical areas such as emergency response, smart buildings, and video surveillance. By addressing the intricacies of network slice design, the study navigates the complexities of tailoring slices to meet specific application needs, thereby contributing to the seamless integration of diverse services within the smart city framework. Addressing the core challenge of NS, which involves allocating virtual networks onto the physical topology with optimal resource allocation, the thesis introduces a dual-objective integer linear programming (ILP) optimization problem formulated to jointly minimize embedding cost and latency. However, given the NP-hard nature of this ILP, finding an efficient alternative becomes a significant hurdle. In response, the thesis introduces a novel heuristic approach: the matroid-based modified greedy breadth-first search (MGBFS) algorithm. This algorithm leverages matroid properties to guide virtual network embedding and resource allocation, aiming to provide near-optimal solutions while overcoming the computational complexity of the ILP formulation. The proposed MGBFS algorithm not only addresses the connectivity, cost, and latency constraints but also outperforms the benchmark model, delivering solutions remarkably close to optimal. This approach represents a substantial advancement in the optimization of smart city applications, promising heightened connectivity, efficiency, and resource utilization within the evolving landscape of B5G-enabled communication technology.
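    For intuition about greedy, breadth-first virtual network embedding, the sketch below maps virtual nodes onto the cheapest feasible physical nodes in BFS order; it is not the thesis's MGBFS algorithm (it ignores matroid structure, link mapping, and latency), and the topologies, capacities, and costs are illustrative assumptions.

```python
from collections import deque

# Hypothetical physical substrate: per-node capacity and unit embedding cost.
physical = {"p1": {"cap": 4, "cost": 1.0},
            "p2": {"cap": 2, "cost": 0.5},
            "p3": {"cap": 3, "cost": 0.8}}
# Hypothetical slice request: virtual topology and per-node resource demand.
virtual_edges = {"v1": ["v2", "v3"], "v2": ["v1"], "v3": ["v1"]}
virtual_demand = {"v1": 2, "v2": 1, "v3": 2}

def greedy_bfs_embed(root="v1"):
    mapping, used = {}, set()
    queue, seen = deque([root]), {root}
    while queue:
        v = queue.popleft()
        # Feasible hosts: unused physical nodes with enough spare capacity.
        candidates = [p for p in physical
                      if p not in used and physical[p]["cap"] >= virtual_demand[v]]
        if not candidates:
            return None  # embedding fails; a full algorithm would backtrack
        host = min(candidates, key=lambda p: physical[p]["cost"])  # cheapest feasible host
        mapping[v] = host
        used.add(host)
        for nbr in virtual_edges[v]:  # continue the breadth-first traversal
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return mapping

print(greedy_bfs_embed())  # {'v1': 'p2', 'v2': 'p3', 'v3': 'p1'}
```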

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics, and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    Modern computing: Vision and challenges

    Over the past six decades, the computing systems field has experienced significant transformations, profoundly impacting society through developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have continuously evolved and adapted to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, and edge computing and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. As such, to maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers of new model emergence and expansion, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments such as serverless computing, quantum computing, and on-device AI at the edge. Trends emerge when one traces the technological trajectory, including the rapid obsolescence of frameworks due to business and technical constraints, a move toward specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.