Opportunistic secure transmission for wireless relay networks with modify-and-forward protocol
This paper investigates physical-layer security in cooperative wireless networks (CWNs) where the data transmission between nodes can be realised via either direct transmission (DT) or relaying transmission (RT) schemes. Inspired by the concept of physical-layer network coding (PNC), a secure PNC-based modify-and-forward (SPMF) scheme is developed to cope with the imperfect shared knowledge of the message modification between relay and destination in the conventional modify-and-forward (MF). In this paper, we first derive the secrecy outage probability (SOP) of the SPMF scheme, which is shown to be a general expression for deriving the SOP of any MF scheme. By comparing the SOPs of various schemes, the usage of the relay is shown to be not always necessary; it may even degrade performance, depending on the target secrecy rate and the quality of the channel links. To this end, we then propose an opportunistic secure transmission protocol to minimise the SOP of the CWNs. In particular, an optimisation problem is developed in which secrecy rate thresholds (SRTs) are determined to find an optimal scheme among various DT and RT schemes for achieving the lowest SOP. Furthermore, the conditions for the existence of SRTs are derived with respect to various channel conditions to determine whether the relay can be relied on in practice.
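The secrecy outage condition above can be sketched numerically. The following is a minimal Monte-Carlo estimate of the SOP for the simple direct-transmission case over Rayleigh fading (a simplified stand-in, not the paper's SPMF scheme): an outage occurs whenever the instantaneous secrecy capacity falls below the target secrecy rate. All parameter values are illustrative.

```python
import math
import random

def sop_direct_mc(snr_d: float, snr_e: float, rate_s: float,
                  trials: int = 20_000, seed: int = 1) -> float:
    """Monte-Carlo estimate of the secrecy outage probability (SOP) for
    direct transmission over Rayleigh fading.

    snr_d, snr_e : average SNRs of the legitimate and eavesdropper links
    rate_s       : target secrecy rate in bits/s/Hz
    An outage occurs when log2((1+g_d)/(1+g_e)) < rate_s.
    """
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        # Rayleigh fading -> exponentially distributed instantaneous SNR
        g_d = rng.expovariate(1.0 / snr_d)
        g_e = rng.expovariate(1.0 / snr_e)
        c_s = max(math.log2((1.0 + g_d) / (1.0 + g_e)), 0.0)
        if c_s < rate_s:
            outages += 1
    return outages / trials
```

Raising the target secrecy rate can only increase the outage count, which is exactly why rate thresholds decide when relaying pays off.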
An analytical channel model for emerging wireless networks-on-chip
Recently, wireless Networks-on-Chip (WiNoCs) have been proposed to overcome the scalability and performance limitations of traditional multi-hop wired NoC architectures. However, the adaptation of wireless technology for on-chip communication is still in its infancy. Consequently, several challenges, such as simulation and design tools that consider the technological constraints imposed by the wireless channel, are yet to be addressed. To this end, in this paper, we propose an efficient channel model for WiNoCs which takes into account practical issues and constraints of the propagation medium, such as transmission frequency, operating temperature, ambient pressure and distance between the on-chip antennas. The proposed channel model demonstrates that the total path loss of the wireless channel in WiNoCs suffers not only from dielectric propagation loss (DPL) but also from molecular absorption attenuation (MAA), which reduces the reliability of the system.
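The two loss components can be sketched as follows. This is an illustrative decomposition only, assuming a simplified low-loss dielectric form for the DPL and a Beer-Lambert form for the MAA; the relative permittivity, loss tangent and absorption coefficient values are hypothetical placeholders, not the paper's fitted parameters.

```python
import math

C = 3e8  # speed of light in vacuum (m/s)

def dpl_db(freq_hz: float, dist_m: float, eps_r: float = 3.9,
           tan_delta: float = 0.01) -> float:
    """Dielectric propagation loss (illustrative form): free-space-style
    spreading inside the dielectric plus attenuation from the loss tangent."""
    n = math.sqrt(eps_r)  # refractive index of the dielectric
    spreading = 20 * math.log10(4 * math.pi * freq_hz * dist_m * n / C)
    # attenuation constant of a low-loss dielectric (Np/m), converted to dB
    alpha = math.pi * freq_hz * n * tan_delta / C
    return spreading + 20 * math.log10(math.e) * alpha * dist_m

def maa_db(k_f: float, dist_m: float) -> float:
    """Molecular absorption attenuation via the Beer-Lambert law, where
    k_f (1/m) is the medium absorption coefficient at the frequency."""
    return 10 * math.log10(math.e) * k_f * dist_m

def total_path_loss_db(freq_hz: float, dist_m: float, k_f: float) -> float:
    """Total path loss = DPL + MAA, the decomposition used in the model."""
    return dpl_db(freq_hz, dist_m) + maa_db(k_f, dist_m)
```

Both terms grow with distance, so even the millimetre-scale links inside a chip package accumulate a measurable MAA penalty at absorption peaks.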
Digital twin for O-RAN towards 6G
In future wireless systems of beyond 5G and 6G, addressing diverse applications with varying quality requirements is essential. Open Radio Access Network (O-RAN) architectures offer the potential for dynamic resource adaptation based on traffic demands. However, achieving real-time resource orchestration remains a challenge. Simultaneously, Digital Twin (DT) technology holds promise for testing and analysing complex systems, offering a unique platform for addressing dynamic operation and automation in O-RAN architectures. Yet, developing DTs for complex 5G/6G networks poses challenges, including data exchange, availability of ML model training data, network dynamics, processing power limitations, interdisciplinary collaboration needs, and a lack of standardised methodologies. This paper provides an overview of the Open RAN architecture, its trends and challenges, and proposes DT concepts for O-RAN with solution examples showcasing their integration into the framework.
Deep-NC: a secure image transmission using deep learning and network coding
Visual communications have played an important part in our daily life as a non-verbal way of conveying information using symbols, gestures and images. With the advances of technology, people can visually communicate with each other in a number of forms via digital communications. Recently, Image Super-Resolution (ISR) with Deep Learning (DL) has been developed to reproduce the original image from its low-resolution version, which allows us to reduce the image size to save transmission bandwidth. Although many benefits can be realised, image transmission over wireless media experiences inevitable loss due to environment noise and inherent hardware issues. Moreover, data privacy is of vital importance, especially when an eavesdropper can easily overhear the communications over the air. To this end, this paper proposes a secure ISR protocol, namely Deep-NC, for image communications based on DL and Network Coding (NC). Specifically, two schemes, namely Per-Image Coding (PIC) and Per-Pixel Coding (PPC), are designed to protect the sharing of private images from the eavesdropper. Although the PPC scheme achieves a better performance than the PIC scheme for the entire image, it requires a higher computational complexity on every pixel of the image. In the proposed Deep-NC, the intended user can easily recover the original image, achieving a much higher performance in terms of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) than that at the eavesdropper. Simulation results show that an improvement of up to 32 dB in the PSNR can be obtained when the eavesdropper does not have any knowledge of the parameters and the reference image used in the mixing schemes. Furthermore, the original image can be downscaled to a much lower resolution, significantly saving transmission bandwidth with negligible performance loss.
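The role of the secret parameters and the reference image can be illustrated with a toy per-pixel mixing scheme. This is a simplified stand-in for the paper's PPC mixing, not its actual construction: pixels are flattened to a list of 0-255 values, and the key stream derived from a secret seed is a hypothetical placeholder for the mixing parameters shared by sender and intended receiver.

```python
import random

def mix_image(img, ref, seed):
    """Per-pixel mixing (illustrative stand-in for the PPC scheme): each
    pixel of the private image is combined with the shared reference image
    and a secret key stream, modulo 256."""
    rng = random.Random(seed)
    return [(p + r + rng.randrange(256)) % 256 for p, r in zip(img, ref)]

def unmix_image(enc, ref, seed):
    """Inverse operation: only a receiver that knows both the reference
    image and the secret seed recovers the original pixels exactly."""
    rng = random.Random(seed)
    return [(e - r - rng.randrange(256)) % 256 for e, r in zip(enc, ref)]
```

An eavesdropper lacking the seed or the reference image sees only the mixed pixels, which is the intuition behind the reported PSNR gap between the intended user and the eavesdropper.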
Optimisation of server selection for maximising utility in Erlang-loss systems
This paper undertakes the challenge of the server selection problem in the Erlang-loss system (ELS). We propose a novel approach to server selection in the ELS, taking into account probabilistic modelling to reflect the practical scenario in which user arrivals vary over time. The proposed framework is divided into three stages: i) developing a new method for server selection based on the M/M/n/n queuing model with probabilistic arrivals; ii) combining the server allocation results with further research on utility-maximising server selection to optimise system performance; and iii) designing a heuristic approach to efficiently solve the developed optimisation problem. Simulation results show that by using this framework, Internet Service Providers (ISPs) can significantly improve QoS and increase revenue through optimal server allocation in their data centre networks.
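The M/M/n/n model underlying the framework has a classical closed form: the Erlang-B blocking probability. A minimal sketch using the standard numerically stable recursion (the QoS-target search is an illustrative add-on, not the paper's heuristic):

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability of an M/M/n/n Erlang-loss system via the
    standard recursion:
        B(0, A) = 1,  B(k, A) = A*B(k-1, A) / (k + A*B(k-1, A)),
    where the offered load A = arrival rate / service rate (Erlangs)."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

def min_servers(offered_load: float, target_blocking: float) -> int:
    """Smallest number of servers keeping blocking below a QoS target
    (illustrative dimensioning use of the formula)."""
    n = 0
    while erlang_b(n, offered_load) > target_blocking:
        n += 1
    return n
```

For example, one server offered one Erlang blocks half of the arrivals, while two servers already cut blocking to 0.2; this steep drop is what makes the allocation of servers across demands worth optimising.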
An efficient pest classification in smart agriculture using transfer learning
To this day, agriculture remains very important and plays a considerable role in supporting daily life and the economy in most countries. It is the source not only of food supply but also of raw materials for other industries, e.g. plastics and fuel. Currently, farmers face the challenge of producing sufficient crops for an expanding human population and growing economies, while maintaining the quality of agricultural products. Pest invasions, however, are a major threat to crop growth, causing crop loss and economic consequences. If left untreated, even in a small area, they can quickly spread to other healthy areas or nearby countries. Pest control is therefore crucial to reduce crop loss. In this paper, we introduce an efficient method based on a deep learning approach to classify pests from images captured in the crops. The proposed method is implemented on various EfficientNet models and is shown to achieve considerably high accuracy on a complex dataset while requiring only a few iterations in the training process.
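The reason few training iterations suffice is the transfer-learning recipe: the pretrained backbone is frozen and only a small classification head is trained. The sketch below captures that idea in pure Python with a logistic-regression head; the "features" stand in for the output of a frozen backbone such as an EfficientNet (the actual backbone, dataset and hyperparameters here are hypothetical simplifications).

```python
import math
import random

def train_head(features, labels, dim, lr=0.5, epochs=200, seed=0):
    """Train only a logistic-regression classification head on features
    produced by a frozen, pretrained backbone (the features are given
    directly here; in practice they would come from a network whose
    weights are not updated). Training only the head is what lets
    transfer learning converge in few iterations."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.01, 0.01) for _ in range(dim)]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid probability
            g = p - y                        # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Binary decision from the trained head."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

With only `dim + 1` trainable parameters instead of millions, each "iteration" is cheap and the optimisation surface is convex, which mirrors the fast convergence reported for the fine-tuned EfficientNet models.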
Internet traffic prediction using recurrent neural networks
Network traffic prediction (NTP) represents an essential component in planning large-scale networks, which are in general unpredictable and must adapt to unforeseen circumstances. In small to medium-size networks, the administrator can anticipate fluctuations in traffic without the need for forecasting tools, but in large-scale networks, where hundreds of new users can be added in a matter of weeks, more efficient forecasting tools are required to avoid congestion and over-provisioning. Network and hardware resources are, however, limited, and hence resource allocation is critical for NTP with scalable solutions. To this end, in this paper, we propose an efficient NTP by optimising recurrent neural networks (RNNs) to analyse the traffic patterns that occur inside flow time series and predict future samples based on the history of the traffic used for training. The traffic predicted with the proposed RNNs is compared with the real values stored in the database in terms of mean squared error, mean absolute error and categorical cross entropy. Furthermore, the real traffic samples for NTP training are compared with those from other techniques, such as the autoregressive integrated moving average (ARIMA) and the AdaBoost regressor, to validate the effectiveness of the proposed method. It is shown that the proposed RNN achieves a better performance than both the ARIMA and AdaBoost regressor when more samples are employed.
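The forecasting pipeline described above (sliding history windows, one-step-ahead prediction, MSE/MAE evaluation) can be sketched end-to-end. A tiny linear autoregressive model trained by gradient descent stands in for the RNN here, so the example stays self-contained; it illustrates the windowing and metrics, not the paper's RNN architecture.

```python
def make_windows(series, window):
    """Turn a traffic time series into (history window, next sample)
    pairs for one-step-ahead forecasting, as in RNN training."""
    xs, ys = [], []
    for i in range(len(series) - window):
        xs.append(series[i:i + window])
        ys.append(series[i + window])
    return xs, ys

def mse(pred, true):
    """Mean squared error between predictions and real samples."""
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)

def mae(pred, true):
    """Mean absolute error between predictions and real samples."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def ar_predict(series, window, lr=0.01, epochs=500):
    """Tiny linear autoregressive predictor trained with stochastic
    gradient descent; a stand-in for the RNN so the full pipeline
    (windowing -> training -> evaluation) can be shown in one place."""
    xs, ys = make_windows(series, window)
    w = [0.0] * window
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return [sum(wi * xi for wi, xi in zip(w, x)) for x in xs]
```

Swapping `ar_predict` for an RNN (or ARIMA, or an AdaBoost regressor) changes only the model step; the window construction and the MSE/MAE comparison against stored real traffic stay identical, which is what makes the methods directly comparable.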
On the handover security key update and residence management in LTE networks
In LTE networks, key update and residence management have been investigated as an effective solution to cope with desynchronisation attacks in mobility management entity (MME) handovers. In this paper, we first analyse the impacts of the key update interval (KUI) and MME residence interval (MRI) on the handover performance in terms of the number of exposed packets (NEP) and the signalling overhead rate (SOR). By deriving the bounds of the NEP and SOR over the KUI and MRI, it is shown that there exists a tradeoff between the NEP and the SOR, while our aim is to minimise both of them simultaneously. This accordingly motivates us to propose a multiobjective optimisation problem to find the optimal KUI and MRI that minimise both the NEP and SOR. By introducing a relative importance factor between the SOR and NEP, along with their derived bounds, we further transform the proposed optimisation problem into a single-objective optimisation problem which can be solved via a simple numerical method. In particular, the results show that an accuracy of up to 1 second is achieved with the proposed approach while requiring a lower complexity compared to the conventional approach employing iterative searches.
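The scalarisation step can be sketched as follows: the two objectives are combined through the relative importance factor w into f = w*NEP + (1-w)*SOR, which a simple numerical search then minimises over the two intervals. The cost models below are hypothetical toys capturing only the tradeoff's shape (longer key lifetimes expose more packets but need less signalling), not the paper's derived bounds.

```python
def scalarised_search(nep, sor, kui_grid, mri_grid, w):
    """Turn the two-objective (NEP, SOR) problem into the single
    objective f = w*NEP + (1-w)*SOR with importance factor w in [0,1],
    then grid-search the key-update interval (KUI) and MME residence
    interval (MRI). Returns (best f, best KUI, best MRI)."""
    best = None
    for kui in kui_grid:
        for mri in mri_grid:
            f = w * nep(kui, mri) + (1 - w) * sor(kui, mri)
            if best is None or f < best[0]:
                best = (f, kui, mri)
    return best

# Hypothetical toy cost models illustrating the tradeoff only:
# longer intervals expose more packets but reduce signalling.
nep = lambda kui, mri: kui + 0.5 * mri          # exposed packets
sor = lambda kui, mri: 10.0 / kui + 4.0 / mri   # signalling overhead
```

Sweeping w from 0 to 1 traces out the NEP/SOR tradeoff curve, so an operator can pick the interval pair matching its security-versus-overhead priority.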
On the nanocommunications at THz band in graphene-enabled wireless network-on-chip
One of the main challenges towards the growing computation-intensive applications with scalable bandwidth requirements is the deployment of a dense number of on-chip cores within a chip package. To this end, this paper investigates the Wireless Network-on-Chip (WiNoC), which is enabled by graphene-based nanoantennas (GNAs) in the Terahertz frequency band. We first develop a channel model between the GNAs taking into account the practical issues of the propagation medium, such as transmission frequency, operating temperature, ambient pressure and distance between the GNAs. In the Terahertz band, not only dielectric propagation loss (DPL) but also molecular absorption attenuation (MAA), caused by various molecules and their isotopologues within the chip package, constitutes the loss of signal transmission. We further propose an optimal power allocation to achieve the channel capacity subject to a transmit power constraint. By analysing the effects of the MAA on the path loss and channel capacity, the proposed channel model shows that the MAA significantly degrades the performance at certain frequency ranges, e.g. 1.21 THz, 1.28 THz and 1.45 THz, by up to 31.8% compared to the conventional channel model, even when the GNAs are located very close to each other, at only 0.01 mm apart. More specifically, at a transmission frequency of 1 THz, the channel capacity of the proposed model is shown to be much lower than that of the conventional model over the whole range of temperature and ambient pressure, by up to 26.8% and 25%, respectively. Finally, simulation results are provided to verify the analytical findings.
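Capacity-achieving power allocation over parallel sub-channels under a total power constraint is the classical water-filling solution, which is a natural reading of the optimisation described above (the specific gain values below are illustrative, not the paper's measured channel). Sub-channels hit hard by molecular absorption have small gains and may receive no power at all.

```python
def water_filling(gains, power_budget, tol=1e-9):
    """Water-filling power allocation: p_i = max(mu - 1/g_i, 0), with the
    water level mu found by bisection so that sum(p_i) equals the budget.
    `gains` are the per-sub-channel SNR gains g_i = |h_i|^2 / noise."""
    lo, hi = 0.0, power_budget + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(mu - 1.0 / g, 0.0) for g in gains)
        if used > power_budget:
            hi = mu  # water level too high: over budget
        else:
            lo = mu  # water level too low: budget not exhausted
    return [max(lo - 1.0 / g, 0.0) for g in gains]
```

For gains [1.0, 0.5, 0.1] and a budget of 2, the water level settles at 2.5: the strongest sub-channel gets 1.5, the middle one 0.5, and the absorption-degraded one is switched off entirely, which is how the MAA peaks translate into capacity loss.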