
    Reliable indoor optical wireless communication in the presence of fixed and random blockers

    The rapid advancement of smartphones has driven exponential growth in internet users, expected to reach 71% of the global population by the end of 2027. This in turn has raised the demand for wireless devices capable of energy-efficient, reliable, high-speed data transmission. Light-fidelity (LiFi), one of the optical wireless communication (OWC) technologies, is envisioned as a promising solution to accommodate these demands. However, the indoor LiFi channel is highly environment-dependent and can be influenced by several crucial factors (e.g., the presence of people and furniture, random orientation of users' devices, and the limited field of view (FOV) of optical receivers) that may block the line-of-sight (LOS) link. This thesis investigates whether deep learning (DL) techniques can learn the distinct features of the indoor LiFi environment and thereby outperform conventional channel estimation techniques such as minimum mean square error (MMSE) and least squares (LS) estimation. The gain is most pronounced when access to real-time channel state information (CSI) is restricted, and comes at the cost of collecting large, meaningful datasets and of the offline training time required by the DL neural networks. Two DL-based schemes are designed for signal detection and resource allocation; the proposed methods approach the performance of the optimal conventional schemes and achieve substantial gains in bit-error ratio (BER) and throughput, especially in more realistic or complex indoor environments. Performance analysis of LiFi networks under the influence of fixed and random blockers is therefore essential, and efficient solutions capable of diminishing the blockage effect are required.
    In this thesis, a CSI acquisition technique for a reconfigurable intelligent surface (RIS)-aided LiFi network is proposed that significantly reduces the dimension of the decision variables required for RIS beamforming. Furthermore, it is shown that several RIS attributes, such as shape, size, height, and distribution, play important roles in improving network performance. Finally, a performance analysis of an RIS-aided realistic indoor LiFi network is presented. The proposed RIS configuration performs outstandingly in reducing the network outage probability under the effects of blockages, random device orientation, limited receiver FOV, furniture, and user behavior. Establishing a LOS link that provides uninterrupted wireless connectivity in a realistic indoor environment can be challenging. This thesis therefore presents an analysis of link blockage for an indoor LiFi system considering fixed and random blockers. In particular, novel analytical frameworks for the coverage probability of single-source and multi-source deployments are derived. Using these frameworks, link blockages in the indoor LiFi network are carefully investigated, and it is shown that incorporating multiple sources and an RIS can significantly reduce the LOS blockage probability in indoor LiFi systems.
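    As a minimal sketch of the conventional baselines the thesis compares against, the snippet below contrasts pilot-based LS and MMSE estimation of a flat channel gain. The channel gain, SNR, and prior variance are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Toy pilot-based LS vs. MMSE channel estimation for a flat channel.
# All parameter values below are illustrative assumptions.
rng = np.random.default_rng(0)
N = 1000                          # number of pilot symbols
h_true = 0.8                      # assumed flat channel gain
snr_db = 10
sigma2 = 10 ** (-snr_db / 10)     # noise variance for unit-power pilots

x = np.ones(N)                    # unit-power pilot sequence
y = h_true * x + np.sqrt(sigma2) * rng.standard_normal(N)

# Least squares: ignores noise and channel statistics
h_ls = (x @ y) / (x @ x)

# MMSE: shrinks the LS estimate using an assumed prior channel
# variance and the effective noise variance of the estimate
var_h = 1.0
h_mmse = var_h / (var_h + sigma2 / N) * h_ls
```

MMSE needs the channel and noise statistics that LS does not, which is exactly the kind of prior knowledge a trained DL estimator can absorb implicitly from data.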

    Securing NextG networks with physical-layer key generation: A survey

    As next-generation (NextG) communication networks develop, a tremendous number of devices are accessing the network and the amount of information is exploding. However, with the increase in sensitive data that must be transmitted and stored confidentially, wireless network security risks are further amplified. Physical-layer key generation (PKG) has received extensive attention in security research due to its solid information-theoretic security proof, ease of implementation, and low cost. Nevertheless, applications of PKG in NextG networks are still at a preliminary, exploratory stage. Therefore, we survey existing research and discuss (1) the performance advantages of PKG over cryptographic schemes, (2) the principles and processes of PKG and research progress in previous network environments, and (3) new application scenarios and development potential for PKG in NextG communication networks, particularly analyzing its effect and prospects in massive multiple-input multiple-output (MIMO), reconfigurable intelligent surfaces (RISs), artificial intelligence (AI)-enabled networks, integrated space-air-ground networks, and quantum communication. Moreover, we summarize open issues and provide new insights into the development trends of PKG in NextG networks.
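    The core PKG principle the survey builds on can be sketched in a few lines: two parties quantize correlated measurements of a reciprocal channel into a shared bit string. The sample count, noise level, and median-threshold quantizer below are illustrative assumptions, not a scheme from the survey.

```python
import numpy as np

# Minimal reciprocity-based PKG sketch: Alice and Bob observe the
# same fading samples plus independent measurement noise, then
# quantize them into key bits. Parameters are illustrative.
rng = np.random.default_rng(1)
n = 128

channel = rng.standard_normal(n)                  # common fading samples
alice = channel + 0.05 * rng.standard_normal(n)   # Alice's noisy probes
bob = channel + 0.05 * rng.standard_normal(n)     # Bob's noisy probes

def quantize(samples):
    # 1-bit quantization against the median, a common PKG baseline
    return (samples > np.median(samples)).astype(int)

key_a, key_b = quantize(alice), quantize(bob)
agreement = np.mean(key_a == key_b)
# residual mismatches are removed later by information reconciliation
```

In a full PKG pipeline, information reconciliation and privacy amplification follow this quantization step to produce identical, uniformly random keys.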

    Optimization of Beyond 5G Network Slicing for Smart City Applications

    Transitioning from current fifth-generation (5G) wireless technology, the advent of beyond-5G (B5G) networks marks a pivotal stride toward sixth-generation (6G) communication technology. At its essence, B5G harnesses end-to-end (E2E) network slicing (NS), enabling multiple logical networks with distinct performance requirements to coexist on a shared physical infrastructure. At the forefront of this implementation lies network slice design, a phase central to the realization of efficient smart city networks. This thesis focuses on this key stage of the network slicing life cycle, emphasizing the analysis and formulation of optimal procedures for configuring, customizing, and allocating E2E network slices. The focus extends to the unique demands of smart city applications, encompassing critical areas such as emergency response, smart buildings, and video surveillance. By addressing the intricacies of network slice design, the study navigates the complexities of tailoring slices to specific application needs, thereby contributing to the seamless integration of diverse services within the smart city framework. To address the core challenge of NS, namely allocating virtual networks onto the physical topology with optimal resource allocation, the thesis formulates a dual-objective integer linear programming (ILP) problem that jointly minimizes embedding cost and latency. However, given the NP-hard nature of this ILP, finding an efficient alternative becomes a significant hurdle. In response, this thesis introduces a novel heuristic: the matroid-based modified greedy breadth-first search (MGBFS) algorithm, which leverages matroid properties to guide virtual network embedding and resource allocation.
    By introducing this heuristic, the research provides near-optimal solutions while overcoming the computational complexity of the ILP formulation. The proposed MGBFS algorithm not only satisfies the connectivity, cost, and latency constraints but also outperforms the benchmark model, delivering solutions remarkably close to optimal. This approach represents a substantial advancement in the optimization of smart city applications, promising improved connectivity, efficiency, and resource utilization within the evolving landscape of B5G-enabled communication technology.
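    To make the embedding problem concrete, here is a toy greedy baseline that places each virtual node on the cheapest physical node with enough residual capacity. It illustrates only the problem structure; it is not the thesis's MGBFS algorithm, and all names and numbers are hypothetical.

```python
# Toy greedy virtual-network embedding baseline (not MGBFS):
# place the most demanding virtual nodes first, each on the
# cheapest physical node with sufficient residual capacity.
def greedy_embed(virtual_demands, physical_capacity, node_cost):
    placement, residual = {}, dict(physical_capacity)
    for v, demand in sorted(virtual_demands.items(), key=lambda kv: -kv[1]):
        candidates = [p for p, cap in residual.items() if cap >= demand]
        if not candidates:
            return None                      # embedding infeasible
        best = min(candidates, key=lambda p: node_cost[p])
        placement[v] = best
        residual[best] -= demand
    return placement

# Hypothetical request: two virtual nodes onto two physical nodes
placement = greedy_embed({"v1": 4, "v2": 2},
                         {"p1": 5, "p2": 5},
                         {"p1": 1.0, "p2": 2.0})
```

A real slice-design solver must additionally embed virtual links under latency constraints, which is what makes the joint ILP formulation NP-hard.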

    Analysis and Design of Non-Orthogonal Multiple Access (NOMA) Techniques for Next Generation Wireless Communication Systems

    The current surge in wireless connectivity, anticipated to amplify significantly in future wireless technologies, brings a new wave of users. Given the impracticality of endlessly expanding bandwidth, there is a pressing need for communication techniques that efficiently serve this growing user base with limited resources. Multiple access (MA) techniques, notably orthogonal multiple access (OMA), have long addressed bandwidth constraints. However, as user numbers escalate, OMA's orthogonality becomes limiting for emerging wireless technologies. Non-orthogonal multiple access (NOMA) employs superposition coding to serve more users within the same bandwidth as OMA: users are allocated different power levels, and their signals are then separated at the receiver by exploiting the power gap between them, offering superior spectral efficiency and massive connectivity. This thesis examines the integration of NOMA with cooperative relaying, EXtrinsic Information Transfer (EXIT) chart analysis, and deep learning for enhancing 6G-and-beyond communication systems. The adopted methodology aims to optimize system performance, from bit-error rate (BER) versus signal-to-noise ratio (SNR) to overall system efficiency and data rates. In the cooperative relaying context, NOMA notably improved diversity gains, demonstrating the superiority of combining NOMA with cooperative relaying over NOMA alone. With EXIT chart analysis, NOMA achieved low BER at mid-range SNR as well as optimal user fairness in the power allocation stage. Additionally, a trained neural network enhanced signal detection for NOMA in the deep learning scenario, producing a simpler detector that addresses NOMA's complex receiver problem.
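    The power-domain superposition and successive interference cancellation (SIC) described above can be sketched for two users over a noiseless toy channel. The 0.8/0.2 power split and BPSK symbols are illustrative assumptions.

```python
import numpy as np

# Two-user power-domain NOMA sketch: the far user gets more power;
# the near user removes the far user's signal via SIC. Power split
# and symbols are illustrative; the channel is noiseless for clarity.
p_far, p_near = 0.8, 0.2
bits_far = np.array([1, -1, 1])      # BPSK symbols for the far user
bits_near = np.array([-1, 1, 1])     # BPSK symbols for the near user

# Superposition coding: both signals share the same band
tx = np.sqrt(p_far) * bits_far + np.sqrt(p_near) * bits_near

# Far user: decode directly, treating the near user's signal as noise
far_hat = np.sign(tx)

# Near user (SIC): decode the far user's symbols, subtract them,
# then decode its own symbols from the residual
residual = tx - np.sqrt(p_far) * np.sign(tx)
near_hat = np.sign(residual)
```

The SIC chain at the near user is exactly the receiver complexity that the thesis's deep-learning detector aims to simplify.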

    Global Growth and Trends of In-Body Communication Research—Insight From Bibliometric Analysis

    A bibliometric analysis was conducted to examine research on in-body communication. This study aimed to assess research growth across countries, identify influential authors for potential international collaboration, investigate research challenges, and explore future prospects for in-body communication. A total of 148 English-language articles from journals and conference proceedings, covering the period from 2006 until August 2023, were gathered from the Scopus database. VOSviewer 1.6.19 and Tableau Cloud were used to analyze the data. The analysis reveals that research on in-body communication has fluctuated but tends to increase overall. The United States, Finland, and Japan were the top three countries in publication quantity, while researchers from Norway, Finland, and Morocco received the highest numbers of citations. The University of Oulu in Finland has emerged as a productive institution in this field. Collaborative research opportunities exist with the countries mentioned above or with authors who have expertise in this topic. The dominant research topic within this field is ultra-wideband (UWB) technology. One future challenge is the exploration of optical wireless communication (OWC) as a potential medium for in-body devices, such as electronics implanted in the human body, including improving performance to meet the requirements of in-body communication devices. Additionally, this paper provides further insights into the progress of research on OWC for in-body communication conducted in our laboratory.

    Intelligent ultrasound hand gesture recognition system

    With the booming development of technology, hand gesture recognition has become a hotspot in human-computer interaction (HCI) systems. Ultrasound hand gesture recognition is an innovative method that has attracted ample interest due to its strong real-time performance, low cost, large field of view, and illumination independence. Well-investigated HCI applications include external digital pens, game controllers on smart mobile devices, and web browser control on laptops. This thesis probes gesture recognition systems on multiple platforms to study how system performance behaves with various gesture features. The contributions of this thesis span smartphone acoustic field and hand model simulation, real-time gesture recognition on smart devices with a speed categorization algorithm, fast-reaction gesture recognition based on temporal neural networks, and an angle-of-arrival-based gesture recognition system. Firstly, a novel pressure-acoustic simulation model is developed to examine its potential for acoustic gesture recognition. The model establishes a new system for acoustic verification, using simulations that mimic real-world sound elements to replicate a sound-pressure environment as authentically as possible. This system is fine-tuned through sensitivity tests within the simulation and validated against real-world measurements. Following this, the study constructs novel simulations for acoustic applications, informed by the verified acoustic field distribution, to assess their effectiveness on specific devices. Furthermore, a simulation is designed to understand the effects of sound-device placement and hand-reflected sound waves. Moreover, a feasibility test of phase-control modification is conducted, revealing the practical applications and boundaries of this model.
    Mobility and system accuracy are two significant factors that determine gesture recognition performance. Since smartphones have high-quality acoustic hardware, novel algorithms were developed to distinguish gestures using the built-in speakers and microphones, yielding a portable gesture recognition system with high accuracy. The proposed system adopts the short-time Fourier transform (STFT) and machine learning to capture hand movement and determine gestures with a pretrained neural network. To differentiate gesture speeds, a dedicated neural network was designed as part of the classification algorithm. The final accuracy reaches 96% across nine gestures and three speed levels, and comparative evaluation shows the accuracy outperforms state-of-the-art systems. Furthermore, a fast-reaction gesture recognition system based on temporal neural networks was designed. Traditional ultrasound gesture recognition adopts convolutional neural networks, which have flaws in response time and discontinuous operation; moreover, overlap intervals in network processing cause cross-frame failures that greatly reduce system performance. To mitigate these problems, a novel fast-reaction system that slices signals into short time intervals was designed, adopting a convolutional recurrent neural network (CRNN) that computes gesture features over short windows and combines them over time. The results show the reaction time reduced significantly from 1 s to 0.2 s, with accuracy improving to 100% for six gestures. Lastly, an acoustic sensor array was built to investigate the angle information of performed gestures. The direction of a gesture is a significant feature for classification, enabling the same gesture performed in different directions to represent different actions.
    Previous studies mainly focused on gesture types and analysis approaches (e.g., the Doppler effect and channel impulse response), while gesture direction was not extensively studied. An acoustic gesture recognition system based on both speed information and gesture direction was therefore developed, achieving 94.9% accuracy across ten gestures from two directions. The system was evaluated comparatively across multiple neural network structures, and the results confirmed that incorporating the additional angle information improved performance. In summary, the work presented in this thesis validates the feasibility of recognizing hand gestures using remote ultrasonic sensing across multiple platforms. The acoustic simulation explores the smartphone acoustic field distribution and response in the context of hand gesture recognition. The smartphone system demonstrates accurate recognition through ultrasound signals and analyzes classification speed. The fast-reaction system offers a more optimized solution to the cross-frame issue using temporal neural networks, reducing response latency to 0.2 s. The speed- and angle-based system provides an additional feature for gesture recognition. The established work will accelerate the development of intelligent hand gesture recognition, enrich the available gesture features, and contribute to further research on various gestures and application scenarios.
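    The STFT front end described above can be sketched with one analysis window: an ultrasonic tone reflected off a moving hand returns Doppler-shifted, and the shift appears as a displaced spectral peak. The 20 kHz carrier, 150 Hz shift, and window length are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

# Toy Doppler feature extraction: locate the spectral peak of one
# windowed frame of a reflected tone. Parameters are illustrative.
fs = 48_000                          # sample rate (Hz)
t = np.arange(fs) / fs               # 1 s of samples
f_tx, doppler = 20_000, 150          # emitted tone and assumed shift (Hz)
rx = np.cos(2 * np.pi * (f_tx + doppler) * t)

win = 2048
frame = rx[:win] * np.hanning(win)   # one Hann-windowed analysis frame
spectrum = np.abs(np.fft.rfft(frame))
peak_hz = np.argmax(spectrum) * fs / win   # frequency of strongest bin
```

Stacking such frames over time yields the spectrogram whose trajectory encodes gesture speed and direction for the classifier.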

    GFDM Pulse Shaping Optimization Based Genetic Algorithm

    Generalized frequency division multiplexing (GFDM) is one of the candidate schemes for 5G and beyond. Its multicarrier modulation structure consists of independent blocks, each containing sub-carriers and sub-symbols. The sub-carriers are filtered with a prototype pulse-shaping filter that is shifted in time and frequency. This work uses a genetic algorithm (GA) to assign the best parameters of the pulse-shaping filter, using the error as a cost function. The algorithm assigns its parameters based on minimum error through iterative processing until it reaches the pulse values that give the smallest error. This method reduces the error caused by the loss of orthogonality and thus improves performance. It first searches for the filter values that yield the lowest error and then adopts these values in building the GFDM transmitter and receiver. Compared with the traditional method, this approach reduced the bit-error rate (BER) to 0.0107 at an SNR of 10 and 0.0033 at an SNR of 25. Thus, this is a new method for building a GFDM transceiver system.
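    The GA loop described above can be sketched generically: evolve a parameter vector to minimize an error cost via selection and mutation. The quadratic cost and GA settings below are illustrative stand-ins, not the paper's GFDM pulse-shaping cost function.

```python
import numpy as np

# Minimal genetic algorithm: keep the fittest individuals, breed
# mutated children, repeat. Cost and settings are illustrative.
rng = np.random.default_rng(2)
target = np.array([0.5, -0.2, 0.8])    # assumed optimal filter parameters

def cost(params):
    # stand-in error cost; the paper's cost measures GFDM error
    return float(np.sum((params - target) ** 2))

pop = rng.uniform(-1, 1, size=(40, 3))  # random initial population
for _ in range(200):
    order = np.argsort([cost(p) for p in pop])
    parents = pop[order[:10]]           # selection: keep the 10 fittest
    children = (parents[rng.integers(0, 10, 30)]
                + 0.05 * rng.standard_normal((30, 3)))  # mutation
    pop = np.vstack([parents, children])  # elitism keeps the best intact

best = min(pop, key=cost)
```

Because the elite survive unmutated, the best cost never increases between generations, which is what drives convergence toward the minimum-error pulse values.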

    Programming Wireless Security through Learning-Aided Spatiotemporal Digital Coding Metamaterial Antenna

    The advancement of future large-scale wireless networks necessitates cost-effective and scalable security solutions. Conventional cryptographic methods, due to their computational and key-management complexity, cannot fulfill the low-latency and scalability requirements of these networks. Physical-layer (PHY) security has been put forth as a cost-effective alternative to cryptographic mechanisms that can circumvent the need for explicit key exchange between communication devices, because it relies on the physics of signal transmission to provide security. In this work, a space-time-modulated digitally coded metamaterial (MTM) leaky-wave antenna (LWA) is proposed that enables PHY security by achieving directional modulation (DM) using a coding sequence optimized by a machine learning-aided branch-and-bound (B&B) algorithm. From a theoretical perspective, it is first shown that the proposed space-time MTM antenna architecture can achieve DM through both spatial and spectral manipulation of the orthogonal frequency division multiplexing (OFDM) signal received by a user equipment. Simulation results are then provided as proof of principle, demonstrating the applicability of the approach for achieving DM in various communication settings. To further validate the simulation results, a prototype of the proposed architecture controlled by a field-programmable gate array (FPGA) is realized, which achieves DM via an optimized coding sequence, computed by the learning-aided branch-and-bound algorithm, corresponding to the states of the MTM LWA's unit cells. Experimental results confirm the theory behind the space-time-modulated MTM LWA in achieving DM, observed via both the spectral harmonic patterns and bit-error-rate (BER) measurements.
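    The directional-modulation idea can be illustrated with a generic phased array: per-symbol weights deliver a clean constellation only toward the intended angle, while a random component in the steering vector's null space scrambles it elsewhere. The 4-element linear array, angles, and BPSK stream below are illustrative assumptions, not the paper's MTM LWA.

```python
import numpy as np

# Toy directional modulation on a 4-element linear array. For each
# symbol, the weight vector is a matched component (delivering the
# symbol toward the intended angle) plus a random null-space
# component that distorts every other direction. Illustrative only.
rng = np.random.default_rng(3)
N, d = 4, 0.5                          # elements, spacing in wavelengths

def steer(theta):
    return np.exp(2j * np.pi * d * np.arange(N) * np.sin(theta))

a0 = steer(np.deg2rad(20))             # intended user direction
a1 = steer(np.deg2rad(-45))            # eavesdropper direction
symbols = np.array([1, -1, 1, -1, 1])  # BPSK stream

rx_user, rx_eve = [], []
for s in symbols:
    w = a0.conj() * s / N              # matched part: a0 @ w == s
    r = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    r -= (a0 @ r) / N * a0.conj()      # project into a0's null space
    rx_user.append(a0 @ (w + r))       # clean symbol at the user
    rx_eve.append(a1 @ (w + r))        # scrambled off-axis
```

This captures the security property the prototype measures experimentally: low BER toward the intended user and a corrupted constellation at other angles.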

    Causal Reasoning: Charting a Revolutionary Course for Next-Generation AI-Native Wireless Networks

    Despite the basic premise that next-generation wireless networks (e.g., 6G) will be artificial intelligence (AI)-native, most existing efforts to date remain either qualitative or incremental extensions of existing "AI for wireless" paradigms. Indeed, creating AI-native wireless networks faces significant technical challenges due to the limitations of data-driven, training-intensive AI: the black-box nature of AI models; their curve-fitting character, which can limit their ability to reason and adapt; their reliance on large amounts of training data; and the energy inefficiency of large neural networks. In response, this article presents a comprehensive, forward-looking vision that addresses these shortcomings by introducing a novel framework for building AI-native wireless networks, grounded in the emerging field of causal reasoning. Causal reasoning, founded on causal discovery, causal representation learning, and causal inference, can help build explainable, reasoning-aware, and sustainable wireless networks. Toward fulfilling this vision, we first highlight several wireless networking challenges that can be addressed by causal discovery and representation, including ultra-reliable beamforming for terahertz (THz) systems, near-accurate physical twin modeling for digital twins, training data augmentation, and semantic communication. We showcase how incorporating causal discovery can assist in achieving dynamic adaptability, resilience, and cognition in addressing these challenges. Furthermore, we outline potential frameworks that leverage causal inference to achieve the overarching objectives of future-generation networks, including intent management, dynamic adaptability, human-level cognition, reasoning, and the critical element of time sensitivity.