
    Revolutionizing Future Connectivity: A Contemporary Survey on AI-empowered Satellite-based Non-Terrestrial Networks in 6G

    Full text link
    Non-Terrestrial Networks (NTN) are expected to be a critical component of 6th Generation (6G) networks, providing ubiquitous, continuous, and scalable services. Satellites emerge as the primary enabler for NTN, leveraging their extensive coverage, stable orbits, scalability, and adherence to international regulations. However, satellite-based NTN presents unique challenges, including long propagation delay, high Doppler shift, frequent handovers, spectrum sharing complexities, and intricate beam and resource allocation, among others. The integration of NTNs into existing terrestrial networks in 6G introduces further novel challenges, such as task offloading, network routing, and network slicing. To tackle all these obstacles, this paper proposes Artificial Intelligence (AI) as a promising solution, harnessing its ability to capture intricate correlations among diverse network parameters. We begin by providing a comprehensive background on NTN and AI, highlighting the potential of AI techniques in addressing various NTN challenges. Next, we present an overview of existing works, emphasizing AI as an enabling tool for satellite-based NTN, and explore potential research directions. Furthermore, we discuss ongoing research efforts that aim to enable AI in satellite-based NTN through software-defined implementations, while also discussing the associated challenges. Finally, we conclude by providing insights and recommendations for enabling AI-driven satellite-based NTN in future 6G networks.
    Comment: 40 pages, 19 figures, 10 tables, survey
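    The scale of the Doppler problem the survey highlights is easy to make concrete. The sketch below is purely illustrative and not drawn from the paper; the orbital speed and carrier frequency are assumed example values for a low Earth orbit link.

```python
# Illustrative back-of-the-envelope calculation (not from the survey itself):
# worst-case Doppler shift seen by a ground terminal from a LEO satellite.
# The orbital speed and carrier frequency below are assumed example values.

C = 299_792_458.0  # speed of light, m/s

def max_doppler_shift(orbital_velocity_mps: float, carrier_hz: float) -> float:
    """Worst-case Doppler shift f_d = (v / c) * f_c, reached when the
    satellite moves directly toward or away from the terminal."""
    return orbital_velocity_mps / C * carrier_hz

if __name__ == "__main__":
    v = 7_560.0   # ~orbital speed at 500 km altitude, m/s (assumed)
    fc = 2.0e9    # S-band carrier, 2 GHz (assumed)
    print(f"max Doppler shift: {max_doppler_shift(v, fc) / 1e3:.1f} kHz")
    # ~50 kHz -- orders of magnitude above typical terrestrial Doppler,
    # which is why satellite NTN links need dedicated compensation.
```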

    Distributed multi-agent based traffic management system

    Get PDF
    Ph.D. (Doctor of Philosophy)

    Machine Learning-Enabled Resource Allocation for Underlay Cognitive Radio Networks

    Get PDF
    Due to the rapid growth of new wireless communication services and applications, much attention has been directed to frequency spectrum resources and the way they are regulated. Considering that the radio spectrum is a limited natural resource, supporting the ever-increasing demands for higher capacity and higher data rates for diverse sets of users, services, and applications is a challenging task that requires innovative technologies capable of providing new ways of efficiently exploiting the available radio spectrum. Consequently, dynamic spectrum access (DSA) has been proposed as a replacement for static spectrum allocation policies. DSA is implemented in three modes: interweave, overlay, and underlay [1]. The key enabling technology for DSA is cognitive radio (CR), which is among the most prominent technologies for the next generation of wireless communication systems. Unlike a conventional radio, which is restricted to operating in designated spectrum bands, a CR has the capability to operate in different spectrum bands owing to its ability to sense and understand its wireless environment, learn from past experience, and proactively change its transmission parameters as needed. These capabilities are provided by an intelligent software package called the cognitive engine (CE). In general, the CE manages radio resources to accomplish cognitive functionalities, allocating and adapting the radio resources to optimize the performance of the network. The cognitive functionality of the CE can be achieved by leveraging machine learning techniques. This thesis therefore explores the application of two machine learning techniques in enabling the cognition capability of the CE: neural network-based supervised learning and reinforcement learning. Specifically, this thesis develops resource allocation algorithms that leverage machine learning to solve the resource allocation problem for heterogeneous underlay cognitive radio networks (CRNs). The proposed algorithms are evaluated through extensive simulation runs.

    The first resource allocation algorithm uses a neural network-based learning paradigm to provide a fully autonomous and distributed underlay DSA scheme in which each CR operates based on predicting the effect of its transmission on a primary network (PN). The scheme is based on a CE with an artificial neural network that predicts the adaptive modulation and coding configuration for the primary link nearest to a transmitting CR, without any exchange of information between the primary and secondary networks. By managing the effect of the secondary network (SN) on the primary network, the presented technique keeps the relative change in average primary-network throughput within a prescribed maximum value, while also finding transmit settings for the CRs that yield throughput as large as the primary network's interference limit allows. The second resource allocation algorithm uses reinforcement learning and aims to distributively maximize the average quality of experience (QoE) across CR transmissions with different traffic types while satisfying a primary-network interference constraint. To best satisfy the QoE requirements of delay-sensitive traffic, a cross-layer resource allocation algorithm is derived and its performance is compared against a physical-layer algorithm in terms of meeting end-to-end traffic delay constraints.
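    As a rough illustration of the reinforcement-learning scheme just summarized: the abstract does not spell out the state, action, and reward design, so the discretized power levels, the interference model, and the single-step (contextual-bandit style) update in the sketch below are all assumptions, not the thesis's actual algorithm.

```python
import random

# Hypothetical sketch of RL-based underlay power control. A CR learns which
# transmit power to use per sensed channel state; actions whose interference
# at the primary network exceeds a cap are penalized. All constants assumed.

POWER_LEVELS = [0.1, 0.2, 0.4, 0.8, 1.6]   # candidate transmit powers (W)
INTERFERENCE_CAP = 1.0                      # max interference the PN tolerates
ALPHA, EPSILON = 0.1, 0.1                   # learning rate, exploration rate

def reward(power: float, gain: float) -> float:
    # Heavy penalty on violating the cap; otherwise a concave,
    # throughput-like reward (diminishing returns in power).
    if power * gain > INTERFERENCE_CAP:
        return -10.0
    return power ** 0.5

def channel_state(gain: float) -> int:
    # Coarse discretization of the sensed channel gain toward the PN.
    return 0 if gain < 0.5 else (1 if gain < 1.5 else 2)

# One Q-value per (channel state, power level) pair.
q = {(s, a): 0.0 for s in range(3) for a in range(len(POWER_LEVELS))}

for _ in range(5000):
    gain = random.uniform(0.1, 2.0)          # stand-in for sensed CSI
    s = channel_state(gain)
    if random.random() < EPSILON:            # epsilon-greedy exploration
        a = random.randrange(len(POWER_LEVELS))
    else:
        a = max(range(len(POWER_LEVELS)), key=lambda x: q[(s, x)])
    # Single-step episodes, so the update has no bootstrap term.
    q[(s, a)] += ALPHA * (reward(POWER_LEVELS[a], gain) - q[(s, a)])

for s in range(3):
    best = max(range(len(POWER_LEVELS)), key=lambda x: q[(s, x)])
    print(f"state {s}: transmit at {POWER_LEVELS[best]} W")
```

    The learned policy is state-dependent: when the channel toward the primary receiver is weak the agent can afford high power, and as the gain grows it backs off to stay under the cap.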
    Moreover, to accelerate the learning performance of the presented algorithms, the idea of transfer learning is integrated. The philosophy behind transfer learning is to allow well-established, expert cognitive agents (i.e., base stations or mobile stations in the context of wireless communications) to teach newly activated, naive agents. The exchange of learned information is used to improve the learning performance of a distributed CR network, and this thesis further identifies best practices for transferring knowledge between CRs so as to reduce the communication overhead. The investigations in this thesis propose a novel technique that can accurately predict the modulation scheme and channel coding rate used in a primary link without the need to exchange information between the two networks (e.g., access to feedback channels), while succeeding in the main goal of determining the transmit power of the CRs such that the interference they create remains below the maximum threshold the primary network can sustain with minimal effect on its average throughput. The investigations also provide both physical-layer and cross-layer machine learning-based algorithms to address the challenge of resource allocation in underlay cognitive radio networks, resulting in better learning performance and reduced communication overhead.
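    The abstract leaves the transfer mechanism unspecified; one simple and common choice, assumed here purely for illustration, is for a newly activated CR to initialize its Q-table from a discounted copy of an expert neighbour's.

```python
# Minimal sketch of Q-table transfer between CR agents. The concrete
# mechanism (discounted copy of an expert's table) is an assumption;
# the thesis may use a different scheme.

def transfer_q_table(expert_q: dict, blend: float = 0.8) -> dict:
    """Initialize a naive agent's Q-table from an expert's.

    blend < 1 discounts the expert's values so the newcomer can still
    overwrite stale knowledge as it gathers its own experience.
    """
    return {sa: blend * value for sa, value in expert_q.items()}

# Usage: instead of starting from all-zero Q-values, a newly activated CR
# starts from the discounted expert table, typically needing far fewer
# exploration episodes to reach the same policy quality.
# new_q = transfer_q_table(expert_q=q)   # q as in the sketch above
```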

    Artificial Intelligence Applications to Critical Transportation Issues

    Full text link

    IoT in smart communities, technologies and applications.

    Get PDF
    The Internet of Things (IoT) is a system that integrates different devices and technologies, removing the necessity of human intervention. This enables the capacity to have smart (or smarter) cities around the world. By hosting different technologies and allowing interactions between them, the Internet of Things has spearheaded the development of smart city systems for sustainable living and increased comfort and productivity for citizens. The IoT for smart cities has many different domains and draws upon various underlying systems for its operation. In this work, we provide a holistic coverage of the Internet of Things in smart cities by discussing the fundamental components that make up the IoT smart city landscape, the technologies that enable these domains to exist, the most prevalent practices and techniques used in these domains, and the challenges that the deployment of IoT systems for smart cities encounters and that need to be addressed for ubiquitous use of smart city applications. It also presents a coverage of optimization methods and applications from a smart city perspective enabled by the Internet of Things. Towards this end, a mapping is provided of the most frequently encountered applications of computational optimization within IoT smart cities for five popular optimization methods: ant colony optimization, genetic algorithms, particle swarm optimization, artificial bee colony optimization, and differential evolution. For each application identified, the algorithms used, the objectives considered, and the nature of the formulation and constraints taken into account are specified and discussed. Lastly, the data setup used by each covered work is also mentioned, and directions for future work are identified.

    Within the smart health domain of IoT smart cities, human activity recognition has been a key study topic in the development of cyber-physical systems and assisted living applications. In particular, inertial sensor-based systems have become increasingly popular because they do not restrict users' movement and are relatively simple to implement compared to other approaches. Fall detection is one of the most important tasks in human activity recognition. With an increasingly aging world population and an inclination of the elderly to live alone, the need to incorporate dependable fall detection schemes in smart devices such as phones and watches has gained momentum. Therefore, differentiating between falls and activities of daily living (ADLs) has been the focus of researchers in recent years, with very good results. However, one aspect of fall detection that has not been investigated much is direction- and severity-aware fall detection. Since a fall detection system aims to detect falls and notify medical personnel, knowing the nature of the accident could be of added value to health professionals tending to a patient who has suffered a fall. In this regard, as a case study for smart health, four experiments have been conducted for the task of fall detection with direction and severity consideration on two publicly available datasets. These experiments tackle the problem at increasing levels of complexity (the first considers a fall-only scenario and the others a combined activity-of-daily-living and fall scenario) and also present methodologies that outperform the state-of-the-art techniques discussed. Lastly, recommendations for future research are provided.
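    As a point of reference for the fall detection case study, a classical baseline operates on the accelerometer signal magnitude vector: a dip toward free fall followed shortly by an impact spike. The sketch below is that generic heuristic, not the learned, direction- and severity-aware methodology the work above develops; the thresholds and window size are assumptions.

```python
import math

# Baseline threshold detector on the accelerometer signal magnitude vector
# (SMV). Classical heuristic for illustration only; thresholds are assumed.

FREE_FALL_G = 0.4   # SMV dip below ~0.4 g suggests free fall
IMPACT_G = 2.5      # SMV spike above ~2.5 g suggests impact
WINDOW = 50         # samples allowed between dip and spike (~1 s at 50 Hz)

def smv(ax: float, ay: float, az: float) -> float:
    """Signal magnitude vector in g, assuming axis readings in g."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples: list[tuple[float, float, float]]) -> bool:
    """Flag a fall when a free-fall dip is followed shortly by an impact."""
    magnitudes = [smv(*s) for s in samples]
    for i, m in enumerate(magnitudes):
        if m < FREE_FALL_G:
            window = magnitudes[i + 1 : i + 1 + WINDOW]
            if any(v > IMPACT_G for v in window):
                return True
    return False
```

    Direction and severity awareness, the focus of the case study, would additionally require examining which axes carry the dip and spike and how large the impact peak is, which is where learned models take over from fixed thresholds.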

    Reinforcement Learning

    Get PDF
    Brains rule the world, and brain-like computation is increasingly used in computers and electronic devices. Brain-like computation is about processing and interpreting data, or directly putting forward and performing actions. Learning is a very important aspect of it. This book is on reinforcement learning, which involves performing actions to achieve a goal. The first 11 chapters of this book describe and extend the scope of reinforcement learning; the remaining 11 chapters show that it is already widely used in numerous fields. Reinforcement learning can tackle control tasks that are too complex for traditional, hand-designed, non-learning controllers. As learning computers can deal with technical complexities, the task of human operators remains to specify goals at increasingly higher levels. This book shows that reinforcement learning is a very dynamic area in terms of both theory and applications, and it should stimulate and encourage new research in this field.
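    As a minimal concrete instance of "performing actions to achieve a goal" (illustrative only, not an example taken from the book), tabular Q-learning on a one-dimensional corridor:

```python
import random

# Minimal tabular Q-learning: start at cell 0, reward at cell 4.
# The agent learns to walk right purely from trial and error.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                      # step left / step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning, discount, exploration
q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(500):
    s = 0
    while s != GOAL:
        if random.random() < EPSILON:   # occasionally explore
            a = random.randrange(2)
        else:                           # otherwise act greedily
            a = 0 if q[s][0] > q[s][1] else 1
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Standard Q-learning update with bootstrapped next-state value.
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
        s = s2

print(["LR"[0 if q[s][0] > q[s][1] else 1] for s in range(N_STATES)])
# Expected learned policy: move right everywhere on the way to the goal.
```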

    Spatial and temporal background modelling of non-stationary visual scenes

    Get PDF
    PhD
    The prevalence of electronic imaging systems in everyday life has become increasingly apparent in recent years. Applications are to be found in medical scanning, automated manufacture, and, perhaps most significantly, surveillance. Metropolitan areas, shopping malls, and road traffic management all employ, and benefit from, an unprecedented quantity of video cameras for monitoring purposes. But the high cost and limited effectiveness of employing humans as the final link in the monitoring chain has driven scientists to seek solutions based on machine vision techniques. Whilst the field of machine vision has enjoyed consistently rapid development in the last 20 years, some of the most fundamental issues still remain to be solved in a satisfactory manner. Central to a great many vision applications is the concept of segmentation, and in particular, most practical systems perform background subtraction as one of the first stages of video processing. This involves separation of 'interesting foreground' from the less informative but persistent background. But the definition of what is 'interesting' is somewhat subjective, and liable to be application specific. Furthermore, the background may be interpreted as including the visual appearance of normal activity of any agents present in the scene, human or otherwise. Thus a background model might be called upon to absorb lighting changes, moving trees and foliage, or normal traffic flow and pedestrian activity, in order to effect what might be termed in 'biologically-inspired' vision as pre-attentive selection. This challenge is one of the Holy Grails of the computer vision field, and consequently the subject has received considerable attention.

    This thesis sets out to address some of the limitations of contemporary methods of background segmentation by investigating methods of inducing local mutual support amongst pixels in three starkly contrasting paradigms: (1) locality in the spatial domain, (2) locality in the short-term time domain, and (3) locality in the domain of cyclic repetition frequency. Conventional per-pixel models, such as those based on Gaussian Mixture Models, offer no spatial support between adjacent pixels at all. At the other extreme, eigenspace models impose a structure in which every image pixel bears the same relation to every other pixel. But Markov Random Fields permit the definition of arbitrary local cliques by construction of a suitable graph, and are used here to facilitate a novel structure capable of exploiting probabilistic local co-occurrence of adjacent Local Binary Patterns. The result is a method exhibiting strong sensitivity to multiple learned local pattern hypotheses, whilst relying solely on monochrome image data. Many background models enforce temporal consistency constraints on a pixel in an attempt to confirm background membership before it is accepted as part of the model, and typically some control over this process is exercised by a learning rate parameter. But in busy scenes, a true background pixel may be visible for a relatively small fraction of the time and in a temporally fragmented fashion, thus hindering such background acquisition. However, support in terms of temporal locality may still be achieved by using Combinatorial Optimization to derive short-term background estimates which induce a similar consistency, but are considerably more robust to disturbance.
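    The conventional per-pixel Gaussian Mixture Model baseline that the thesis builds beyond is readily reproduced; a minimal sketch using OpenCV's stock MOG2 implementation follows, where the video filename is a placeholder.

```python
import cv2

# Per-pixel GMM background subtraction -- the conventional baseline the
# thesis contrasts with its spatially supported models. "video.avi" is a
# placeholder; any video of a mostly static scene will do.

cap = cv2.VideoCapture("video.avi")
mog2 = cv2.createBackgroundSubtractorMOG2(
    history=500,        # frames used to estimate each pixel's mixture
    varThreshold=16,    # squared Mahalanobis distance to call "foreground"
    detectShadows=True, # mark shadows as gray (127) instead of foreground
)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog2.apply(frame)   # 0 = background, 255 = foreground
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(30) & 0xFF == 27:   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

    Note that each pixel's mixture is maintained independently here, which is exactly the absence of spatial support between neighbours that the thesis's MRF-based structure is designed to remedy.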
    A novel technique is presented here in which the short-term estimates act as 'pre-filtered' data from which a far more compact eigen-background may be constructed.

    Many scenes contain elements exhibiting repetitive periodic behaviour; road junctions controlled by traffic signals are among these, yet little is to be found in the literature regarding the explicit modelling of such periodic processes in a scene. Previous work focussing on gait recognition has demonstrated approaches based on recurrence of self-similarity, by which local periodicity may be identified. The present work harnesses and extends this method in order to characterize scenes displaying multiple distinct periodicities by building a spatio-temporal model. The model may then be used to highlight abnormality in scene activity. Furthermore, a Phase Locked Loop technique with a novel phase detector is detailed, enabling such a model to maintain correct synchronization with scene activity in spite of noise and drift of periodicity.

    This thesis contends that these three approaches are all manifestations of the same broad underlying concept: local support in each of the space, time, and frequency domains, and furthermore, that this support can be harnessed practically, as is demonstrated experimentally.
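    The recurrence-of-self-similarity idea can be shown in stripped-down form: reduce each frame to a scalar activity value and read the scene's dominant period off the autocorrelation of that signal. Using the mean absolute frame difference as the activity signal is an assumption of this sketch; the thesis builds a far richer spatio-temporal model and adds Phase Locked Loop tracking on top.

```python
import numpy as np

# Simplified recurrence-of-self-similarity: estimate a scene's dominant
# period from the autocorrelation of a per-frame activity signal.

def activity_signal(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) grayscale stack -> per-frame activity, shape (T-1,)."""
    return np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2))

def dominant_period(signal: np.ndarray, min_lag: int = 2) -> int:
    """Lag of the strongest autocorrelation peak beyond min_lag.

    Assumes the signal spans at least two full cycles of the periodicity.
    """
    x = signal - signal.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1 :]  # lags 0..T-2
    acf /= acf[0]                                        # acf[0] == 1
    return int(np.argmax(acf[min_lag : len(x) // 2]) + min_lag)

# Usage: for a traffic-signal junction filmed at 25 fps, a dominant period
# of ~1500 lags would correspond to a ~60 s signal cycle.
```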