
    Securing NextG networks with physical-layer key generation: A survey

    As the development of next-generation (NextG) communication networks continues, a tremendous number of devices are accessing the network and the amount of information is exploding. However, as more sensitive data that requires confidentiality is transmitted and stored in the network, wireless network security risks are further amplified. Physical-layer key generation (PKG) has received extensive attention in security research due to its solid information-theoretic security proof, ease of implementation, and low cost. Nevertheless, the application of PKG in NextG networks is still at a preliminary exploration stage. Therefore, we survey existing research and discuss (1) the performance advantages of PKG compared to cryptographic schemes, (2) the principles and processes of PKG, as well as research progress in previous network environments, and (3) new application scenarios and development potential for PKG in NextG communication networks, particularly analyzing the effect and prospects of PKG in massive multiple-input multiple-output (MIMO), reconfigurable intelligent surfaces (RISs), artificial intelligence (AI) enabled networks, integrated space-air-ground networks, and quantum communication. Moreover, we summarize open issues and provide new insights into the development trends of PKG in NextG networks.
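
    As a hedged illustration of the PKG process the survey outlines (channel probing, quantisation, information reconciliation, privacy amplification), the sketch below turns reciprocal received-signal-strength measurements into bit strings at two endpoints. The single-threshold quantiser, noise model, and variable names are assumptions for illustration only, not a scheme from the survey.

```python
import numpy as np

# Alice and Bob probe a reciprocal fading channel; each observes the same
# channel trace plus independent receiver noise.
rng = np.random.default_rng(0)
channel = rng.normal(size=256)
alice_rssi = channel + 0.1 * rng.normal(size=256)
bob_rssi = channel + 0.1 * rng.normal(size=256)

def quantise(rssi):
    # Single-threshold quantiser: 1 if a sample exceeds the trace mean,
    # else 0 (practical schemes add guard bands and multi-bit quantisation).
    return (rssi > rssi.mean()).astype(int)

alice_bits, bob_bits = quantise(alice_rssi), quantise(bob_rssi)
mismatch = np.mean(alice_bits != bob_bits)  # removed later by information reconciliation
```

    The few disagreeing bits left by noise are what the reconciliation stage corrects, before privacy amplification compresses the agreed string into the final key.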

    Modern computing: Vision and challenges

    Over the past six decades, the field of computing systems has experienced significant transformations, profoundly impacting society through developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have continuously evolved and adapted to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, and edge computing and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. As such, to maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers of the emergence and expansion of new models, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments such as serverless computing, quantum computing, and on-device AI at the edge. Trends emerge when one traces the technological trajectory, including the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.

    Securing the Internet of Things: A Study on Machine Learning-Based Solutions for IoT Security and Privacy Challenges

    The Internet of Things (IoT) is a rapidly growing technology that connects and integrates billions of smart devices, generating vast volumes of data and impacting various aspects of daily life and industrial systems. However, the inherent characteristics of IoT devices, including limited battery life, universal connectivity, resource-constrained design, and mobility, make them highly vulnerable to cybersecurity attacks, which are increasing at an alarming rate. As a result, IoT security and privacy have gained significant research attention, with a particular focus on developing anomaly detection systems. In recent years, machine learning (ML) has made remarkable progress, evolving from a lab novelty to a powerful tool in critical applications, and has been proposed as a promising solution for addressing IoT security and privacy challenges. In this article, we conducted a study of the existing security and privacy challenges in the IoT environment. Subsequently, we present the latest ML-based models and solutions to address these challenges, summarizing them in a table that highlights the key parameters of each proposed model. Additionally, we thoroughly studied the available datasets related to IoT technology. Through this article, readers will gain a detailed understanding of IoT architecture, security attacks, and ML-based countermeasures built on the available datasets. We also discuss future research directions for ML-based IoT security and privacy. Our aim is to provide valuable insights into the current state of research in this field and contribute to the advancement of IoT security and privacy.
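
    As a minimal, assumed illustration of the kind of ML-based anomaly detection the article surveys (not a model proposed by its authors), the sketch below fits an Isolation Forest on synthetic benign traffic-flow features and flags deviating flows; the feature set and traffic statistics are invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy per-flow features: [packets/s, mean packet size (bytes), mean inter-arrival (s)].
rng = np.random.default_rng(1)
benign = rng.normal([100, 512, 0.05], [10, 50, 0.01], size=(500, 3))
attack = rng.normal([900, 64, 0.001], [50, 8, 0.0005], size=(20, 3))   # flood-like flows

detector = IsolationForest(contamination=0.05, random_state=1).fit(benign)
labels = detector.predict(np.vstack([benign[:5], attack]))  # +1 = normal, -1 = anomaly
```

    In practice such detectors are trained and evaluated on the public IoT intrusion datasets the article catalogues rather than on synthetic flows.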

    A survey of trends and motivations regarding Communication Service Providers' metro area network implementations

    The relevance of research on telecommunications networks is predicated upon the implementations it explicitly claims or implicitly subsumes. This paper supports researchers through a survey of Communications Service Providers' current implementations within the metro area, and of the trends expected to shape the next-generation metro area network. The survey is composed of a quantitative component, complemented by a qualitative component carried out among field experts. Among the several findings, it has been found that service providers with large subscriber bases are less agile in their response to technological change than those with smaller subscriber bases; thus, copper media remain an important component in the set of access network technologies. On the other hand, service providers with large subscriber bases are strongly committed to deploying distributed access architectures, notably using remote access nodes such as remote OLT and remote MAC-PHY. This study also shows that the extent of remote node deployment for multi-access edge computing is about the same as that for distributed access architectures, indicating that these two aspects of metro area networks are likely to be co-deployed.

    Accurate quantum transport modelling and epitaxial structure design of high-speed and high-power In0.53Ga0.47As/AlAs double-barrier resonant tunnelling diodes for 300-GHz oscillator sources

    Terahertz (THz) wave technology is envisioned as an appealing and conceivable solution for several potential high-impact applications, including sixth generation (6G) and beyond consumer-oriented ultra-broadband multi-gigabit wireless data-links, as well as high-resolution imaging, radar, and spectroscopy apparatuses employable in biomedicine, industrial processes, security/defence, and material science. Despite the technological challenges posed by the THz gap, recent scientific advancements suggest the practical viability of THz systems. However, the development of transmitters (Tx) and receivers (Rx) based on compact semiconductor devices operating at THz frequencies is urgently needed to meet the performance requirements of emerging THz applications. Although there are several promising candidates, including high-speed III-V transistors and photo-diodes, resonant tunnelling diode (RTD) technology offers a compact and high-performance option in many practical scenarios. However, the main weakness of the technology is currently the low output power capability of RTD THz Tx, which is mainly caused by underdeveloped and non-optimal device and circuit design approaches. Indeed, indium phosphide (InP) RTD devices can nowadays deliver only up to around 1 mW of radio-frequency (RF) power at around 300 GHz. In the context of THz wireless data-links, this severely impacts Tx performance, limiting communication distance and data transfer capabilities which, at the current time, are of the order of a few tens of gigabits per second below around 1 m. However, recent research suggests that several milliwatts of output power are required to achieve bit-rate capabilities of several tens of gigabits per second and beyond, and to reach several metres of communication distance in common operating conditions. Currently, the short-term target is set to 5-10 mW of output power at around 300 GHz carrier waves, which would allow bit-rates in excess of 100 Gb/s, as well as wireless communications well above 5 m distance, in first-stage short-range scenarios. In order to reach it, maximisation of the RTD high-frequency RF power capability is of utmost importance. Despite that, reliable epitaxial structure design approaches, as well as accurate physics-based numerical simulation tools, aimed at RF power maximisation in the 300 GHz band are currently lacking. This work proposes practical solutions to address these issues. First, a physics-based simulation methodology was developed to accurately and reliably simulate the static current-voltage (I-V) characteristic of indium gallium arsenide/aluminium arsenide (InGaAs/AlAs) double-barrier RTD devices. The approach relies on the non-equilibrium Green's function (NEGF) formalism implemented in the Silvaco Atlas technology computer-aided design (TCAD) simulation package, requires a low computational budget, and allows correct modelling of In0.53Ga0.47As/AlAs RTD devices, which are pseudomorphically grown on lattice-matched InP substrates and commonly employed in oscillators working at around 300 GHz.
By selecting the appropriate physical models, retrieving the correct material parameters, and suitably discretising the associated heterostructure spatial domain through finite elements, it is shown, by comparing simulation data with experimental results, that the developed numerical approach can reliably compute several quantities of interest that characterise the negative differential resistance (NDR) region of the DC I-V curve, including peak current, peak voltage, and voltage swing, all of which are key parameters in RTD oscillator design. The demonstrated simulation approach was then used to study the impact of epitaxial structure design parameters, including those characterising the double-barrier quantum well as well as the emitter and collector regions, on the electrical properties of the RTD device. In particular, a comprehensive simulation analysis was conducted, and the retrieved output trends were discussed based on the heterostructure band diagram, transmission coefficient energy spectrum, charge distribution, and DC current density-voltage (J-V) curve. General design guidelines aimed at enhancing the maximum RF power gain capability of the RTD device are then deduced and discussed. To validate the proposed epitaxial design approach, an In0.53Ga0.47As/AlAs double-barrier RTD epitaxial structure providing several milliwatts of RF power was designed using the developed simulation methodology and experimentally investigated through the microfabrication of RTD devices and subsequent high-frequency characterisation up to 110 GHz. The analysis, which included fabrication optimisation, reveals an expected RF power performance of up to around 5 mW and 10 mW at 300 GHz for 25 μm² and 49 μm² RTD devices, respectively, which is up to five times higher than the current state of the art. Finally, in order to prove the practical employability of the proposed RTDs in oscillator circuits realised using low-cost photo-lithography, both coplanar waveguide and microstrip inductive stubs were designed through a full three-dimensional electromagnetic simulation analysis. In summary, this work makes an important contribution to the rapidly evolving field of THz RTD technology and demonstrates the practical feasibility of realising high-power 300-GHz RTD devices, which will underpin the future development of Tx systems capable of the power levels required by forthcoming THz applications.
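
    For orientation, a commonly used first-order estimate ties the maximum RF power an RTD can deliver to exactly the NDR quantities the simulations extract: approximating the I-V characteristic as a cubic over the NDR span gives $P_{RF,\max} \approx \frac{3}{16}\,\Delta I\,\Delta V$, where $\Delta I$ is the peak-to-valley current difference and $\Delta V$ the available voltage swing. This is an approximate rule of thumb from the RTD oscillator literature rather than a result of this thesis, but it makes clear why the epitaxial design guidelines target maximising both quantities simultaneously.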

    Tiny Machine Learning Environment: Enabling Intelligence on Constrained Devices

    Running machine learning (ML) algorithms on constrained devices at the extreme edge of the network is problematic due to the computational overhead of ML algorithms, the resources available on the embedded platform, and the application budget (i.e., real-time requirements, power constraints, etc.). This has required the development of specific solutions and development tools for what is now referred to as TinyML. In this dissertation, we focus on improving the deployment and performance of TinyML applications, taking into consideration the aforementioned challenges, especially memory requirements. This dissertation contributed to the construction of the Edge Learning Machine (ELM) environment, a platform-independent open-source framework that provides three main TinyML services, namely shallow ML, self-supervised ML, and binary deep learning on constrained devices. In this context, this work includes the following steps, which are reflected in the thesis structure. First, we present a performance analysis of state-of-the-art shallow ML algorithms, including dense neural networks, implemented on mainstream microcontrollers. The comprehensive analysis in terms of algorithms, hardware platforms, datasets, preprocessing techniques, and configurations shows performance similar to that of a desktop machine and highlights the impact of these factors on overall performance. Second, despite the assumption that the scarcity of resources permits only model inference in TinyML, we have gone a step further and enabled self-supervised on-device training on microcontrollers and tiny IoT devices by developing the Autonomous Edge Pipeline (AEP) system. AEP achieves accuracy comparable to the typical TinyML paradigm, i.e., models trained on resource-abundant devices and then deployed on microcontrollers. Next, we present the development of a memory allocation strategy for convolutional neural network (CNN) layers that optimizes memory requirements. This approach reduces the memory footprint without affecting accuracy or latency. Moreover, e-skin systems share the main requirements of the TinyML field: enabling intelligence with low memory, low power consumption, and low latency. Therefore, we designed an efficient Tiny CNN architecture for e-skin applications. The architecture leverages the memory allocation strategy presented earlier and provides better performance than existing solutions. A major contribution of the thesis is CBin-NN, a library of functions for implementing extremely efficient binary neural networks on constrained devices. The library outperforms state-of-the-art NN deployment solutions by drastically reducing memory footprint and inference latency. All the solutions proposed in this thesis have been implemented on representative devices and tested in relevant applications, whose results are reported and discussed. The ELM framework is open source, and this work is becoming a useful, versatile toolkit for the IoT and TinyML research and development community.
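
    As a rough, assumed illustration of why layer-wise buffer planning matters on microcontrollers (this is not the thesis's actual allocator), the snippet below estimates peak activation memory for a sequential CNN under a simple two-buffer scheme in which only one layer's input and output tensors are alive at any time:

```python
def peak_activation_bytes(layer_io_sizes, bytes_per_element=1):
    """layer_io_sizes: list of (input_elements, output_elements) per layer.
    With a two-buffer (ping-pong) scheme only one layer's input and output
    tensors coexist, so the peak is the largest such pair."""
    return max((i + o) * bytes_per_element for i, o in layer_io_sizes)

# Toy 3-layer CNN with int8 activations (shapes are illustrative only).
layers = [(32 * 32 * 3, 16 * 16 * 8), (16 * 16 * 8, 8 * 8 * 16), (8 * 8 * 16, 10)]
print(peak_activation_bytes(layers))  # activation RAM needed, in bytes
```

    Strategies that reuse or overlap buffers more aggressively drive this peak, and hence the required SRAM, down further, which is the kind of optimisation the dissertation targets.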

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, which radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulations employing the actual properties of a commercial LC medium.
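
    As background to the fixed-frequency scanning mechanism (standard leaky-wave theory, not a result specific to this paper), the main-beam direction of an LWA follows $\sin\theta \approx \beta/k_0$, where $\beta$ is the phase constant of the leaky mode and $k_0$ the free-space wavenumber; biasing the LC changes its effective permittivity and hence $\beta$, so the beam is steered without changing the operating frequency.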

    Adaptive vehicular networking with Deep Learning

    Vehicular networks have been identified as a key enabler for future smart traffic applications aiming to improve on-road safety, increase road traffic efficiency, or provide advanced infotainment services for on-board comfort. However, the requirements of smart traffic applications also place demands on vehicular network quality in terms of high data rates, low latency, and reliability, while simultaneously meeting the challenges of sustainability, green network development goals, and energy efficiency. The advances in vehicular communication technologies, combined with the peculiar characteristics of vehicular networks, have challenged traditional networking solutions designed around fixed parameters using complex mathematical optimisation. These challenges necessitate greater intelligence to be embedded in vehicular networks to realise adaptive network optimisation. One promising solution is the use of Machine Learning (ML) algorithms to extract hidden patterns from collected data, thus formulating adaptive network optimisation solutions with strong generalisation capabilities. In this thesis, an overview of the underlying technologies, applications, and characteristics of vehicular networks is presented, followed by the motivation for using ML and a general introduction to ML background. Additionally, a literature review of ML applications in vehicular networks is presented, drawing on the state of the art in ML technology adoption. Three key challenging research topics have been identified, centred around network optimisation and ML deployment aspects. The first research question and contribution focus on mobile Handover (HO) optimisation as vehicles pass between base stations; a Deep Reinforcement Learning (DRL) handover algorithm is proposed and evaluated against the currently deployed method. Simulation results suggest that the proposed algorithm can guarantee optimal HO decisions in a realistic simulation setup. The second contribution explores distributed radio resource management optimisation. Two versions of a Federated Learning (FL) enhanced DRL algorithm are proposed and evaluated against other state-of-the-art ML solutions. Simulation results suggest that the proposed solution outperforms the other benchmarks in overall resource utilisation efficiency, especially in generalisation scenarios. The third contribution looks at energy efficiency optimisation on the network side against a backdrop of sustainability and green networking. A cell switching algorithm was developed based on a Graph Neural Network (GNN) model; the proposed scheme achieves almost 95% of the normalised energy efficiency of the “ideal” optimal benchmark and can be applied to many more general network configurations than the state-of-the-art ML benchmark.
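
    As a minimal sketch of the kind of learning-based handover decision loop described above, the code below uses epsilon-greedy Q-learning with a linear Q-function over per-cell signal measurements; the state definition, reward, and the linear approximator (a simplified stand-in for the deep network a DRL agent would use) are all assumptions for illustration, not the thesis's algorithm.

```python
import numpy as np

class LinearQHandover:
    """Epsilon-greedy Q-learning with a linear Q-function over per-cell
    measurements (illustrative stand-in for a deep Q-network)."""
    def __init__(self, n_cells, n_features, lr=0.01, eps=0.1):
        self.w = np.zeros((n_cells, n_features))  # one weight row per candidate cell
        self.lr, self.eps = lr, eps

    def act(self, state):
        if np.random.rand() < self.eps:            # occasional exploration
            return np.random.randint(len(self.w))
        return int(np.argmax(self.w @ state))      # greedy handover target

    def update(self, state, action, reward, next_state, gamma=0.9):
        target = reward + gamma * np.max(self.w @ next_state)
        td_err = target - self.w[action] @ state
        self.w[action] += self.lr * td_err * state  # temporal-difference step

# One decision step: state = e.g. normalised RSRP of serving and candidate cells.
agent = LinearQHandover(n_cells=3, n_features=3)
s = np.array([0.7, 0.4, 0.9])
a = agent.act(s)
agent.update(s, a, reward=1.0, next_state=np.array([0.6, 0.5, 0.95]))
```

    In a full DRL formulation the linear approximator is replaced by a neural network and the reward encodes handover success, throughput, and ping-pong penalties.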