
    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling wireless-network applications, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to help readers clarify the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks. Comment: 46 pages, 22 figures
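
    The reinforcement-learning branch surveyed in this article is often introduced through simple channel-access examples. Below is a minimal, hypothetical sketch (not from the article) of tabular Q-learning applied to a toy cognitive-radio channel-selection task; the two-channel environment, idle probabilities and all constants are illustrative assumptions.

        import random

        # Toy cognitive-radio environment (assumed for illustration):
        # each channel is idle with a fixed but unknown probability.
        IDLE_PROB = [0.2, 0.8]          # channel 1 is usually free
        N_CHANNELS = len(IDLE_PROB)

        ALPHA, EPSILON, EPISODES = 0.1, 0.1, 5000
        q = [0.0] * N_CHANNELS          # one Q-value per channel (single-state task)

        for _ in range(EPISODES):
            # epsilon-greedy channel selection
            if random.random() < EPSILON:
                a = random.randrange(N_CHANNELS)
            else:
                a = max(range(N_CHANNELS), key=q.__getitem__)
            # reward 1 for transmitting on an idle channel, 0 otherwise
            r = 1.0 if random.random() < IDLE_PROB[a] else 0.0
            q[a] += ALPHA * (r - q[a])  # stateless Q-learning update

        print("learned channel values:", [round(v, 2) for v in q])

    Run over many episodes, the value estimates converge toward the channels' idle probabilities, so the agent learns to prefer the mostly-free channel without any prior model of the environment.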

    Visible Light Communication (VLC)

    Visible light communication (VLC) using light-emitting diodes (LEDs) or laser diodes (LDs) has been envisioned as one of the key enabling technologies for 6G and Internet of Things (IoT) systems, owing to its appealing advantages, including abundant and unregulated spectrum resources, freedom from electromagnetic interference (EMI) and high security. However, despite these advantages, VLC faces several technical challenges, such as the limited bandwidth and severe nonlinearity of opto-electronic devices, link blockage and user mobility. Therefore, significant efforts are needed from the global VLC community to develop the technology further. This Special Issue, “Visible Light Communication (VLC)”, provides an opportunity for researchers worldwide to share new ideas and cutting-edge techniques for addressing the above-mentioned challenges. The 16 papers published in this Special Issue represent fascinating progress in VLC across various contexts, including general indoor and underwater scenarios, and the emerging application of machine learning/artificial intelligence (ML/AI) techniques in VLC.

    MorphIC: A 65-nm 738k-Synapse/mm² Quad-Core Binary-Weight Digital Neuromorphic Processor with Stochastic Spike-Driven Online Learning

    Recent trends in the field of neural network accelerators investigate weight quantization as a means of increasing the resource and power efficiency of hardware devices. As full on-chip weight storage is necessary to avoid the high energy cost of off-chip memory accesses, memory-reduction requirements for weight storage have pushed toward the use of binary weights, which have been demonstrated to incur only a limited accuracy reduction in many applications when quantization-aware training techniques are used. In parallel, spiking neural network (SNN) architectures are being explored to further reduce power when processing sparse event-based data streams, while on-chip spike-based online learning appears to be a key feature for applications constrained in power and resources during the training phase. However, designing power- and area-efficient spiking neural networks still requires the development of specific techniques to leverage on-chip online learning on binary weights without compromising the synapse density. In this work, we demonstrate MorphIC, a quad-core binary-weight digital neuromorphic processor embedding a stochastic version of the spike-driven synaptic plasticity (S-SDSP) learning rule and a hierarchical routing fabric for large-scale chip interconnection. The MorphIC SNN processor embeds a total of 2k leaky integrate-and-fire (LIF) neurons and more than two million plastic synapses in an active silicon area of 2.86 mm² in 65-nm CMOS, achieving a high density of 738k synapses/mm². MorphIC demonstrates an order-of-magnitude improvement in the area-accuracy tradeoff on the MNIST classification task compared to previously proposed SNNs, with no penalty in the energy-accuracy tradeoff. Comment: This document is the paper as accepted for publication in the IEEE Transactions on Biomedical Circuits and Systems journal (2019); the fully-edited paper is available at https://ieeexplore.ieee.org/document/876400
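
    To make the abstract's ingredients concrete, the following is a minimal software sketch, under assumed constants, of a leaky integrate-and-fire neuron with binary weights and a stochastic spike-driven weight update; it is a simplified illustration in the spirit of S-SDSP, not the actual MorphIC hardware rule.

        import numpy as np

        rng = np.random.default_rng(0)

        N_IN, V_TH, LEAK, P_UPDATE = 64, 8.0, 0.9, 0.1  # assumed constants
        w = rng.integers(0, 2, N_IN)  # binary synaptic weights in {0, 1}
        v = 0.0                       # membrane potential

        for t in range(100):
            spikes = (rng.random(N_IN) < 0.05).astype(int)  # random input spikes
            v = LEAK * v + w @ spikes                       # leaky integration
            if v >= V_TH:                                   # output spike fired
                v = 0.0                                     # reset membrane
                # Stochastic spike-driven plasticity (simplified flavour):
                # with probability P_UPDATE per synapse, copy the input
                # activity, potentiating active inputs and depressing
                # inactive ones.
                mask = rng.random(N_IN) < P_UPDATE
                w = np.where(mask, spikes, w)

    Keeping the weights binary is what allows dense on-chip storage (one bit per plastic synapse), with the stochastic gating of updates compensating for the lost weight resolution.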

    Detection of Intestinal Bleeding in Wireless Capsule Endoscopy using Machine Learning Techniques

    Gastrointestinal (GI) bleeding is very common in humans and may lead to fatal consequences. GI bleeding can usually be identified using a flexible wired endoscope. In 2001, a newer diagnostic tool, wireless capsule endoscopy (WCE), was introduced. It is a swallowable capsule-shaped device with a camera that captures thousands of color images and wirelessly sends them back to a data recorder. The physicians then analyze those images to identify any GI abnormalities. However, this requires a long screening time, which may increase the danger to patients in emergency cases. It is therefore necessary to use a real-time detection tool to identify bleeding in the GI tract. Each material has its own spectral ‘signature’, showing distinct characteristics at specific wavelengths of light [33]. Therefore, by evaluating its optical characteristics, the presence of blood can be detected. In this study, three main hardware designs were presented: one using a two-wavelength optical sensor and two others using six-wavelength spectral sensors with AS7262 and AS7263 chips, respectively, to determine the optical characteristics of blood and non-blood samples. The goal of the research is to develop a machine learning model that differentiates blood samples (BS) from non-blood samples (NBS) by exploring their optical properties. In this experiment, 10 levels of crystallized bovine hemoglobin solutions were used as BS, and 5 food colors (red, yellow, orange, tan and pink) at different concentrations, totaling 25 samples, were used as NBS. These blood and non-blood samples were also combined with pig intestine to mimic an in-vivo experimental environment. The collected samples were strictly separated into training and testing data. Different spectral features were analyzed to obtain optical information about the samples. Based on its performance on the most significant spectral-wavelength features, the k-nearest neighbors (k-NN) algorithm was finally chosen for automated bleeding detection. The proposed k-NN classifier distinguishes BS from NBS with an accuracy of 91.54% using two-wavelength features and around 89% using three combined-wavelength features in the visible and near-infrared spectral regions. The research also indicates that it is possible to deploy tiny optical detectors to detect GI bleeding in a WCE system, which could eliminate the need for time-consuming image post-processing steps.
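
    As a rough illustration of the final classification stage, here is a minimal k-NN sketch on synthetic two-wavelength features; the reflectance values below are invented stand-ins, not the paper's measurements, and scikit-learn is assumed.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)

        # Synthetic stand-in for two-wavelength optical features: blood
        # samples (label 1) are assumed to reflect less than non-blood
        # samples (label 0) at both wavelengths.
        blood = rng.normal([0.2, 0.3], 0.05, (100, 2))
        non_blood = rng.normal([0.6, 0.7], 0.05, (100, 2))
        X = np.vstack([blood, non_blood])
        y = np.array([1] * 100 + [0] * 100)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
        print(f"test accuracy: {clf.score(X_te, y_te):.2%}")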

    Learning to detect chest radiographs containing lung nodules using visual attention networks

    Machine learning approaches hold great potential for the automated detection of lung nodules in chest radiographs, but training the algorithms requires very large amounts of manually annotated images, which are difficult to obtain. Weak labels indicating whether a radiograph is likely to contain pulmonary nodules are typically easier to obtain at scale by parsing historical free-text radiological reports associated with the radiographs. Using a repository of over 700,000 chest radiographs, in this study we demonstrate that promising nodule detection performance can be achieved using weak labels through convolutional neural networks for radiograph classification. We propose two network architectures for the classification of images likely to contain pulmonary nodules, using both weak labels and manually delineated bounding boxes when these are available. Annotated nodules are used at training time to drive a visual attention mechanism that informs the model about its localisation performance. The first architecture extracts saliency maps from high-level convolutional layers and compares the estimated position of a nodule against the ground truth, when this is available. A corresponding localisation error is then back-propagated along with the softmax classification error. The second approach consists of a recurrent attention model that learns to observe a short sequence of smaller image portions through reinforcement learning. When a nodule annotation is available at training time, the reward function is modified accordingly so that exploring portions of the radiograph away from a nodule incurs a larger penalty. Our empirical results demonstrate the potential advantages of these architectures in comparison to competing methodologies.
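
    The first architecture's training signal can be summarised as a classification loss plus a localisation penalty derived from the attention map. The PyTorch sketch below is an assumed illustration of that idea (shapes, names and the 0.5 weighting are invented), not the authors' exact implementation.

        import torch
        import torch.nn.functional as F

        def combined_loss(logits, labels, attention_map, nodule_mask=None):
            """Softmax classification loss plus optional localisation penalty.

            logits        : (B, 2) class scores (nodule / no nodule)
            attention_map : (B, H, W) saliency map from a high-level conv layer
            nodule_mask   : (B, H, W) binary ground-truth mask, or None when
                            only a weak image-level label is available.
            """
            loss = F.cross_entropy(logits, labels)
            if nodule_mask is not None:
                # Penalise attention mass that falls outside the annotated
                # nodule; both errors then back-propagate together.
                attn = attention_map.flatten(1).softmax(dim=1)
                outside = attn * (1.0 - nodule_mask.flatten(1))
                loss = loss + 0.5 * outside.sum(dim=1).mean()
            return loss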