Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks hold substantial potential for supporting a broad range of compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big-data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML, elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling wireless-network applications, including heterogeneous networks (HetNets), cognitive radios (CRs), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in understanding the motivation and methodology of the various ML algorithms, so that they can invoke them for hitherto unexplored services and scenarios of future wireless networks.
EDDense-Net: Fully Dense Encoder Decoder Network for Joint Segmentation of Optic Cup and Disc
Glaucoma is an eye disease that causes damage to the optic nerve, which can lead to visual loss and permanent blindness. Early glaucoma detection is therefore critical in order to avoid permanent blindness. The estimation of the cup-to-disc ratio (CDR) during an examination of the optic disc (OD) is used for the diagnosis of glaucoma. In this paper, we present EDDense-Net, a network for the joint segmentation of the optic cup (OC) and the OD. The encoder and decoder in this network are made up of dense blocks, with a grouped convolutional layer in each block, allowing the network to acquire and convey spatial information from the image while simultaneously reducing the network's complexity. To reduce the loss of spatial information, the optimal number of filters was utilised in all convolution layers. In semantic segmentation, Dice pixel classification is employed in the decoder to alleviate the problem of class imbalance. The proposed network was evaluated on two publicly available datasets, where it outperformed existing state-of-the-art methods in terms of accuracy and efficiency. For the diagnosis and analysis of glaucoma, this method can be used as a second-opinion system to assist medical ophthalmologists.
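
As an illustration of the Dice-based pixel classification mentioned above, here is a minimal NumPy sketch of the soft Dice coefficient; the function name and the smoothing constant are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def soft_dice(pred, target, smooth=1e-6):
        # Soft Dice coefficient between a predicted probability map and a
        # binary ground-truth mask; 1.0 means perfect overlap. `smooth`
        # guards against division by zero on empty masks (illustrative value).
        pred = pred.ravel().astype(np.float64)
        target = target.ravel().astype(np.float64)
        intersection = np.sum(pred * target)
        return (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)

Minimising 1 - soft_dice as a loss counteracts class imbalance, since the score is normalised by the foreground area rather than by the total pixel count.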
Segmentation of Photovoltaic Module Cells in Electroluminescence Images
High-resolution electroluminescence (EL) images captured in the infrared spectrum allow the quality of photovoltaic (PV) modules to be inspected visually and non-destructively. Currently, however, such a visual inspection requires trained experts to discern different kinds of defects, which is time-consuming and expensive. Automated segmentation of cells is therefore a key step in automating the visual inspection workflow. In this work, we propose a robust automated segmentation method for extracting individual solar cells from EL images of PV modules. This enables controlled studies on large amounts of data to understand the effects of module degradation over time, a process not yet fully understood. The proposed method infers, in several steps, a high-level solar module representation from low-level edge features. An important step in the algorithm is to formulate the segmentation problem in terms of lens calibration by exploiting the plumbline constraint. We evaluate our method on a dataset of various solar module types containing a total of 408 solar cells with various defects. Our method robustly solves this task with a median weighted Jaccard index of 94.47% and an F1 score of 97.54%, both indicating a very high similarity between automatically segmented and ground-truth solar cell masks.
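
For reference, a minimal NumPy sketch of the (unweighted) Jaccard index used to compare a predicted cell mask against ground truth; the weighting scheme used in the paper's evaluation is not reproduced here.

    import numpy as np

    def jaccard_index(pred_mask, gt_mask):
        # Intersection-over-union between two binary masks.
        pred = pred_mask.astype(bool)
        gt = gt_mask.astype(bool)
        union = np.logical_or(pred, gt).sum()
        if union == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return float(np.logical_and(pred, gt).sum() / union)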
Programming models, compilers, and runtime systems for accelerator computing
Accelerators, such as GPUs and Intel Xeon Phis, have become the workhorses of high-performance computing. Typically, the accelerators act as co-processors with discrete memory spaces. They possess massive parallelism, along with many other unique architectural features. In order to obtain high performance, these features must be carefully exploited, which requires considerable programmer expertise. This thesis presents new programming models, together with the necessary compiler and runtime systems, to ease the accelerator programming process while obtaining high performance.
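
To make the co-processor model concrete, here is a minimal Python sketch of the explicit data movement that discrete accelerator memory spaces impose; CuPy is an assumption here for illustration, not a system from the thesis.

    import numpy as np
    import cupy as cp  # assumption: an NVIDIA GPU with CuPy installed

    host_a = np.random.rand(1_000_000).astype(np.float32)

    dev_a = cp.asarray(host_a)    # explicit copy: host memory -> device memory
    dev_b = cp.sqrt(dev_a) * 2.0  # the computation runs on the accelerator
    host_b = cp.asnumpy(dev_b)    # explicit copy back: device -> host

Hiding or automating exactly this kind of transfer, while keeping the massive parallelism exploitable, is what accelerator programming models and their compilers and runtimes aim for.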
Visual Clutter Study for Pedestrian Using Large Scale Naturalistic Driving Data
Some pedestrian crashes are due to the driver's late or difficult perception of a pedestrian's appearance. Recognizing pedestrians while driving is a complex cognitive activity. Visual clutter analysis can be used to study the factors that affect human visual search efficiency and to help design advanced driver assistance systems for better decision making and user experience. In this thesis, we propose a pedestrian perception evaluation model that can quantitatively analyze pedestrian perception difficulty using naturalistic driving data. An efficient detection framework was developed to locate pedestrians within large-scale naturalistic driving data. Visual clutter analysis was used to study the factors that may affect the driver's ability to perceive a pedestrian's appearance. Candidate factors were explored in a designed exploratory study on naturalistic driving data, and a bottom-up, image-based pedestrian clutter metric was proposed to quantify pedestrian perception difficulty in naturalistic driving data. Based on the proposed bottom-up clutter metric and a top-down estimator based on pedestrian appearance, a Bayesian probabilistic pedestrian perception evaluation model was then constructed to simulate the pedestrian perception process.
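
As a concrete example of a bottom-up clutter measure, here is a minimal SciPy/NumPy sketch of local edge density; this is a generic stand-in, not the specific pedestrian clutter metric proposed in the thesis, and the threshold value is an illustrative assumption.

    import numpy as np
    from scipy import ndimage

    def edge_density_clutter(gray_image, threshold=0.1):
        # Fraction of pixels whose gradient magnitude exceeds `threshold`;
        # a crude bottom-up proxy for visual clutter (illustrative only).
        img = gray_image.astype(np.float64)
        img = (img - img.min()) / (np.ptp(img) + 1e-12)  # normalise to [0, 1]
        gx = ndimage.sobel(img, axis=1)
        gy = ndimage.sobel(img, axis=0)
        magnitude = np.hypot(gx, gy)
        return float((magnitude > threshold).mean())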
On-line anomaly detection with advanced independent component analysis of multi-variate residual signals from causal relation networks
Anomaly detection in today's industrial environments is an ambitious challenge: detecting, at an early stage, possible faults or problems that may turn into severe production waste, defects, or damage to system components. Data-driven anomaly detection in multi-sensor networks relies on models that are extracted from multi-sensor measurements and that characterize the anomaly-free reference situation; significant deviations from these models indicate potential anomalies. In this paper, we propose a new approach based on causal relation networks (CRNs), which represent the inner causes and effects between sensor channels (or sensor nodes) in the form of partial sub-relations, and we evaluate its functionality and performance on two distinct production phases within a micro-fluidic chip manufacturing scenario. The partial relations are modeled by non-linear (fuzzy) regression models that characterize the (local) degree of influence of the single causes on the effects. An advanced analysis of the multi-variate residual signals obtained from the partial relations in the CRNs is conducted: it employs independent component analysis (ICA) to characterize hidden structures in the fused residuals through independent components (latent variables), as obtained through the demixing matrix. A significant change in the energy content of the latent variables, detected through automated control limits, indicates an anomaly. Possible noise content in the residuals is suppressed, to decrease the likelihood of false alarms, by performing the residual analysis solely on the dominant parts of the demixing matrix. Our approach detected, with negligible delay, anomalies in the process that caused bad-quality chips (with the occurrence of malfunctions), based on the process data recorded by multiple sensors in two production phases: injection molding and bonding, which are carried out independently, with completely different process parameter settings and on different machines (and can hence be seen as two distinct use cases). Our approach furthermore (i) produced lower false-alarm rates than several related, well-known state-of-the-art methods for (unsupervised) anomaly detection and (ii) required much lower parametrization effort (in fact, none at all). Both aspects are essential for the usability of an anomaly detection approach.
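
A minimal scikit-learn sketch of the ICA-based residual monitoring idea: fit a demixing on anomaly-free reference residuals, then flag new samples whose latent-component energy exceeds a control limit. The function names, the number of components, and the 3-sigma limit are illustrative assumptions, not the paper's exact procedure.

    import numpy as np
    from sklearn.decomposition import FastICA

    def fit_monitor(residuals_ref, n_components=3):
        # residuals_ref: (n_samples, n_channels) multi-variate residuals
        # from the anomaly-free reference phase.
        ica = FastICA(n_components=n_components, random_state=0)
        sources = ica.fit_transform(residuals_ref)   # latent variables
        energy = np.sum(sources ** 2, axis=1)        # per-sample energy
        limit = energy.mean() + 3.0 * energy.std()   # simple 3-sigma control limit
        return ica, limit

    def is_anomalous(ica, limit, residuals_new):
        sources = ica.transform(residuals_new)
        energy = np.sum(sources ** 2, axis=1)
        return energy > limit  # True where latent energy exceeds the limit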
Contributions to the segmentation of dermoscopic images
Master's thesis. Master's degree in Biomedical Engineering. Faculdade de Engenharia. Universidade do Porto. 201