
    Spectrum cartography techniques, challenges, opportunities, and applications: A survey

    Spectrum cartography finds applications in several areas, such as cognitive radios, spectrum-aware communications, machine-type communications, the Internet of Things, connected vehicles, wireless sensor networks, and radio frequency management systems. This paper presents a survey of state-of-the-art spectrum cartography techniques for the construction of various radio environment maps (REMs). Following a brief overview of spectrum cartography, the techniques used to construct REMs, such as the channel gain map, power spectral density map, power map, spectrum map, power propagation map, radio frequency map, and interference map, are reviewed. We compare the performance of the different spectrum cartography methods in terms of mean absolute error, mean square error, normalized mean square error, and root mean square error. The information presented in this paper aims to serve as a practical reference guide to the various spectrum cartography methods for constructing different REMs. Finally, some open issues and challenges for future research and development are discussed.
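
    As a quick reference for the comparison above, the following is a minimal Python sketch (not taken from the survey) of how the four error metrics could be computed for a reconstructed radio environment map against a reference map; all array names are illustrative.

```python
# Minimal sketch: MAE, MSE, NMSE and RMSE between a reference REM and an estimate.
import numpy as np

def rem_errors(rem_true, rem_est):
    """Return the four error metrics for a reconstructed radio environment map."""
    rem_true = np.asarray(rem_true, dtype=float).ravel()
    rem_est = np.asarray(rem_est, dtype=float).ravel()
    err = rem_est - rem_true
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    nmse = mse / np.mean(rem_true ** 2)   # one common normalisation convention
    rmse = np.sqrt(mse)
    return {"MAE": mae, "MSE": mse, "NMSE": nmse, "RMSE": rmse}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.uniform(-100, -40, size=(50, 50))              # e.g. a power map in dBm
    estimate = truth + rng.normal(0, 2.0, size=truth.shape)    # a noisy reconstruction
    print(rem_errors(truth, estimate))
```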

    Real-Time Machine Learning for Quickest Detection

    Safety-critical Cyber-Physical Systems (CPS) require real-time machine learning for control and decision making. One promising solution is to use deep learning to discover useful patterns for event detection from heterogeneous data. However, deep learning algorithms encounter challenges in CPS with assurability requirements: 1) decision explainability, 2) real-time and quickest event detection, and 3) time-efficient incremental learning. To address these obstacles, I developed a real-time Machine Learning Framework for Quickest Detection (MLQD). Specifically, I first propose the zero-bias neural network, which removes decision bias and preferabilities from regular neural networks and provides an interpretable decision process. Second, I characterize the latent space of the zero-bias neural network and derive a method to mathematically convert a Deep Neural Network (DNN) classifier into a performance-assured binary abnormality detector. In this way, I can seamlessly integrate the data processing capability of deep neural networks with Quickest Detection (QD) and provide a real-time sequential event detection paradigm. Third, I identify concept interference (confusion) in latent space as a critical factor that impedes the incremental learning of neural networks, prove that minimizing this interference requires the concept representation vectors (class fingerprints) within the latent space to be organized orthogonally, and use these findings to devise a new incremental learning strategy that allows deep neural networks in CPS to evolve efficiently without retraining. All my algorithms are evaluated on real-world applications: ADS-B (Automatic Dependent Surveillance-Broadcast) signal identification and spoofing detection in the aviation communication system. Finally, I discuss current trends in MLQD and conclude this dissertation by presenting future research directions and applications. In summary, the innovations of this dissertation are as follows: i) I propose the zero-bias neural network, which provides transparent latent space characteristics, and apply it to solve the wireless device identification problem. ii) I discover and prove the orthogonal memory organization mechanism in artificial neural networks and apply this mechanism to time-efficient incremental learning. iii) I discover and mathematically prove the converging-point theorem, with which we can predict the latent space topological characteristics and estimate the topological maturity of neural networks. iv) I bridge the gap between machine learning and quickest detection with assurable performance.
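
    The following is a minimal, hypothetical sketch of one plausible reading of a "zero-bias" classification head: class scores are cosine similarities between the latent feature and per-class fingerprint vectors, with no bias term, so the decision depends only on direction in latent space. It illustrates the idea described above and is not the dissertation's exact layer; all names and sizes are illustrative.

```python
# Illustrative zero-bias classification head: cosine similarity to class fingerprints,
# with no bias parameter anywhere in the decision.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZeroBiasHead(nn.Module):
    def __init__(self, feature_dim: int, num_classes: int):
        super().__init__()
        # One fingerprint vector per class; no bias term.
        self.fingerprints = nn.Parameter(torch.randn(num_classes, feature_dim))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Normalise both sides so each logit is a pure cosine similarity.
        f = F.normalize(features, dim=-1)
        w = F.normalize(self.fingerprints, dim=-1)
        return f @ w.t()   # (batch, num_classes) cosine scores

if __name__ == "__main__":
    head = ZeroBiasHead(feature_dim=128, num_classes=10)
    x = torch.randn(4, 128)   # latent features from any backbone
    print(head(x).shape)      # torch.Size([4, 10])
```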

    Improving the domain generalization and robustness of neural networks for medical imaging

    Deep neural networks are powerful tools to process medical images, with great potential to accelerate clinical workflows and facilitate large-scale studies. However, in order to achieve satisfactory performance at deployment, these networks generally require massive amounts of labeled data collected from various domains (e.g., hospitals, scanners), which are rarely available in practice. The main goal of this work is to improve the domain generalization and robustness of neural networks for medical imaging when labeled data is limited. First, we develop multi-task learning methods to exploit auxiliary data to enhance networks. We first present a multi-task U-net that performs image classification and MR atrial segmentation simultaneously. We then present a shape-aware multi-view autoencoder together with a multi-view U-net, which enables extracting useful shape priors from complementary long-axis and short-axis views in order to assist the left ventricular myocardium segmentation task on short-axis MR images. Experimental results show that the proposed networks successfully leverage complementary information from auxiliary tasks to improve model generalization on the main segmentation task. Second, we consider utilizing unlabeled data. We first present an adversarial data augmentation method with bias fields to improve semi-supervised learning for general medical image segmentation tasks. We further explore a more challenging setting where the source and the target images are from different data distributions. We demonstrate that an unsupervised image style transfer method can bridge the domain gap, successfully transferring the knowledge learned from labeled balanced Steady-State Free Precession (bSSFP) images to unlabeled Late Gadolinium Enhancement (LGE) images, achieving state-of-the-art performance on a public multi-sequence cardiac MR segmentation challenge. Third, for scenarios with limited training data from a single domain, we first propose a general training and testing pipeline to improve cardiac image segmentation across various unseen domains. We then present a latent space data augmentation method with a cooperative training framework to further enhance model robustness against unseen domains and imaging artifacts.
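
    To illustrate the form of the bias-field perturbation mentioned above, here is a minimal sketch that applies a smooth multiplicative bias field to an MR slice as an augmentation. The thesis optimizes the field adversarially, whereas this sketch simply samples it at random; all names and sizes are illustrative.

```python
# Minimal sketch: random smooth multiplicative bias-field augmentation for an MR slice.
import numpy as np
from scipy.ndimage import zoom

def random_bias_field(shape, control_points=4, strength=0.3, rng=None):
    """Smooth multiplicative field around 1.0, built by upsampling a coarse random grid."""
    rng = np.random.default_rng() if rng is None else rng
    coarse = 1.0 + strength * rng.uniform(-1.0, 1.0, size=(control_points,) * len(shape))
    factors = [s / c for s, c in zip(shape, coarse.shape)]
    return zoom(coarse, factors, order=3)   # cubic interpolation up to image size

def augment(image, rng=None):
    field = random_bias_field(image.shape, rng=rng)
    return image * field

if __name__ == "__main__":
    img = np.random.rand(160, 160)   # stand-in for an MR slice
    aug = augment(img, rng=np.random.default_rng(0))
    print(img.mean(), aug.mean())
```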

    A Tutorial on Environment-Aware Communications via Channel Knowledge Map for 6G

    Sixth-generation (6G) mobile communication networks are expected to have dense infrastructures, large-dimensional channels, cost-effective hardware, diversified positioning methods, and enhanced intelligence. Such trends bring both new challenges and opportunities for the practical design of 6G. On one hand, acquiring channel state information (CSI) in real time for all wireless links becomes quite challenging in 6G. On the other hand, there would be numerous data sources in 6G containing high-quality location-tagged channel data, making it possible to better learn the local wireless environment. By exploiting such new opportunities and for tackling the CSI acquisition challenge, there is a promising paradigm shift from the conventional environment-unaware communications to the new environment-aware communications based on the novel approach of the channel knowledge map (CKM). This article aims to provide a comprehensive tutorial overview of environment-aware communications enabled by CKM to fully harness its benefits for 6G. First, the basic concept of CKM is presented, and a comparison of CKM with various existing channel inference techniques is provided. Next, the main techniques for CKM construction are discussed, including both the model-free and model-assisted approaches. Furthermore, a general framework is presented for utilizing CKM to achieve environment-aware communications, followed by some typical CKM-aided communication scenarios. Finally, important open problems in CKM research are highlighted and potential solutions are discussed to inspire future work.
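
    As an illustration of the location-tagged channel data idea, the sketch below shows one simple model-free way a channel gain map could be built and queried, using k-nearest-neighbour inverse-distance interpolation over tagged measurements. This is an assumed toy construction for illustration, not the tutorial's method; all names are illustrative.

```python
# Minimal sketch: a channel gain map interpolated from location-tagged measurements.
import numpy as np

class ChannelGainMap:
    def __init__(self, locations, gains_db, k=5):
        self.locations = np.asarray(locations, dtype=float)   # (N, 2) measurement positions
        self.gains_db = np.asarray(gains_db, dtype=float)     # (N,) measured gains in dB
        self.k = k

    def query(self, location):
        """Predict the channel gain (dB) at an unmeasured location by inverse-distance
        weighting of the k nearest tagged measurements."""
        d = np.linalg.norm(self.locations - np.asarray(location, dtype=float), axis=1)
        idx = np.argsort(d)[: self.k]
        w = 1.0 / (d[idx] + 1e-6)
        return float(np.sum(w * self.gains_db[idx]) / np.sum(w))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    locs = rng.uniform(0, 100, size=(500, 2))   # location-tagged channel data
    gains = -30.0 - 20.0 * np.log10(np.linalg.norm(locs - 50.0, axis=1) + 1.0)
    ckm = ChannelGainMap(locs, gains, k=5)
    print(ckm.query([60.0, 55.0]))
```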

    Underwater simulation and mapping using imaging sonar through ray theory and Hilbert maps

    Mapping, sometimes as part of a SLAM system, is an active topic of research and has remarkable solutions using laser scanners, but most underwater mapping is focused on 2D maps, treating the environment as a floor plan, or on 2.5D maps of the seafloor. The difficulty of underwater mapping originates in its sensor, i.e. the sonar. In contrast to lasers (LIDARs), sonars are imprecise, high-noise sensors. Besides their noise, imaging sonars have a wide sound beam, effectively performing a volumetric measurement. The first part of this dissertation develops an underwater simulator for high-frequency single-beam imaging sonars capable of replicating multipath, directional gain, and typical noise effects in arbitrary environments. The simulation relies on a ray-theory-based method, and explanations of how this theory follows from first principles under the short-wavelength assumption are provided. In the second part of this dissertation, the simulator is combined with a continuous mapping algorithm based on Hilbert maps. Hilbert maps arise as a machine learning technique over Hilbert spaces, using feature maps, applied to the mapping context. The embedding of a sonar response in such a map is a contribution of this work. A qualitative comparison between the simulator ground truth and the reconstructed map reveals Hilbert maps as a promising technique for mapping with noisy sensors and also indicates some characteristics of the surroundings that are hard to distinguish, e.g. corners and non-smooth features.
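
    To make the Hilbert map idea concrete, here is a minimal sketch of occupancy mapping as logistic regression over radial-basis feature maps, trained on occupied/free points such as those that could be labelled from simulated sonar returns. It is a generic illustration, not the dissertation's implementation; all names are illustrative.

```python
# Minimal sketch: a Hilbert map as logistic regression over RBF feature maps.
import numpy as np
from sklearn.linear_model import SGDClassifier

def rbf_features(points, centres, gamma=2.0):
    """Feature map: one radial basis function per inducing centre."""
    d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: points inside a disc are "occupied" (1), outside are "free" (0).
    pts = rng.uniform(-1, 1, size=(2000, 2))
    labels = (np.linalg.norm(pts, axis=1) < 0.5).astype(int)
    centres = rng.uniform(-1, 1, size=(100, 2))   # inducing points

    clf = SGDClassifier(loss="log_loss", max_iter=1000, tol=1e-4)
    clf.fit(rbf_features(pts, centres), labels)

    query = np.array([[0.0, 0.0], [0.9, 0.9]])
    print(clf.predict_proba(rbf_features(query, centres)))   # occupancy probabilities
```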

    Deep generative modelling of the imaged human brain

    Human-machine symbiosis is a very promising opportunity for the field of neurology, given that the interpretation of the imaged human brain is not a trivial feat for either entity. However, before machine learning systems can be used in real-world clinical situations, many issues with automated analysis must first be solved. In this thesis I aim to address what I consider the three biggest hurdles to the adoption of automated machine learning interpretative systems. For each issue, I will first explain its importance to the reader given the overarching narratives of both neurology and machine learning, and then showcase my proposed solutions through the use of deep generative models of the imaged human brain. I start by addressing what is an uncontroversial and universal sign of intelligence: the ability to extrapolate knowledge to unseen cases. Human neuroradiologists have studied the anatomy of the healthy brain and can therefore, with some success, identify most pathologies present on an imaged brain, even without having ever been previously exposed to them. Current discriminative machine learning systems require vast amounts of labelled data in order to accurately identify diseases. In this first part I provide a generative framework that permits machine learning models to more efficiently leverage unlabelled data for better diagnoses with either no labels or small amounts of them. Second, I address a major ethical concern in medicine: equitable evaluation of all patients, regardless of demographics or other identifying characteristics. This is, unfortunately, something that even human practitioners fail at, making the matter ever more pressing: unaddressed biases in data will become biases in the models. To address this concern I suggest a framework through which a generative model synthesises demographically counterfactual brain imaging to successfully reduce the proliferation of demographic biases in discriminative models. Finally, I tackle the challenge of spatial anatomical inference, a task at the centre of the field of lesion-deficit mapping, which, given brain lesions and associated cognitive deficits, attempts to discover the true functional anatomy of the brain. I provide a new Bayesian generative framework and implementation that allows for greatly improved results on this challenge, hopefully paving part of the road towards a greater and more complete understanding of the human brain.
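
    As a generic illustration of the first contribution (leveraging unlabelled data through a generative model), the sketch below shows a small variational autoencoder whose latent codes could later feed a classifier trained on only a few labels. Sizes and names are illustrative assumptions; this is not the thesis's architecture.

```python
# Minimal sketch: a small VAE fitted on unlabelled image patches.
import torch
import torch.nn as nn

class SmallVAE(nn.Module):
    def __init__(self, in_dim=32 * 32, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction term plus KL divergence to a standard normal prior.
    rec = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

if __name__ == "__main__":
    x = torch.rand(8, 32 * 32)   # stand-in for unlabelled image patches
    model = SmallVAE()
    recon, mu, logvar = model(x)
    print(vae_loss(x, recon, mu, logvar).item())
```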