13 research outputs found
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions about the proper functioning of the networks. Among these tools, machine learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity that optical networks have faced in recent years. This increase in complexity is due to the introduction of a large number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, and coding schemes) enabled by coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper, we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.
A Novel Graph Neural Network-based Framework for Automatic Modulation Classification in Mobile Environments
Automatic modulation classification (AMC) refers to a signal processing procedure through which the modulation type and order of an observed signal are identified without any prior information about the communications setup. AMC has been recognized as one of the essential measures in various communications research fields such as intelligent modem design, spectrum sensing and management, and threat detection. Most of the AMC research literature accounts only for the noise that affects the received signal, which makes the resulting models applicable only to stationary environments. However, a more practical, real-world application of AMC is found in mobile environments, where a larger number of distorting effects is present. Hence, in this dissertation, we have developed a solution in which the distorting effects of mobile environments, e.g., multipath, Doppler shift, and frequency, phase, and timing offsets, do not influence the identification of the modulation type and order. This solution has two major parts: recording an emulated dataset in mobile environments with real-world parameters (MIMOSigRef-SD), and developing an efficient feature-based AMC classifier. The latter comprises two modules: feature extraction and classification. The feature extraction module runs on a dynamic spatio-temporal graph convolutional neural network architecture, which tackles the challenges of statistical pattern recognition of received samples and the assignment of constellation points. After the feature space is organized in the classification module, a support vector machine is trained to perform the classification. The robust feature extraction module enables the developed solution to outperform other state-of-the-art AMC platforms in terms of classification accuracy and efficiency, an important factor for real-world implementations.
We validated the performance of our developed solution in a prototyping and field-testing process in environments similar to MIMOSigRef-SD. Therefore, taking all aspects into consideration, our developed solution is deemed to be more practical and feasible for implementation in the next generations of communication systems.
Advisor: Hamid R. Sharif-Kashan
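As a rough illustration of the classification stage described in the abstract above, a support vector machine can be trained on extracted feature vectors. The sketch below uses synthetic Gaussian features as stand-ins for the GNN-extracted ones; the modulation names, feature dimension, and all parameter choices are illustrative assumptions, not details taken from the dissertation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_features = 100, 16
modulations = ["BPSK", "QPSK", "16QAM"]  # illustrative class labels

# Synthetic feature vectors: one Gaussian blob per modulation class,
# standing in for features produced by the feature extraction module.
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(n_per_class, n_features))
               for i in range(len(modulations))])
y = np.repeat(modulations, n_per_class)

# Train an SVM on the organized feature space and score on held-out data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy: %.2f" % clf.score(X_te, y_te))
```

With well-separated feature blobs like these, the SVM classifies the held-out samples almost perfectly; the hard part in practice is the feature extraction that produces such separable representations.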
Data Clustering and Partial Supervision with Some Parallel Developments
by Sameh A. Salem
Clustering is an important and irreplaceable step in the search for structures in data. Many different clustering algorithms have been proposed, yet the sources of variability in most of them affect the reliability of their results. Moreover, the majority rely on the number of clusters as one of the input parameters; unfortunately, in many scenarios this knowledge is not available. In addition, clustering algorithms are very computationally intensive, which makes scaling up to large datasets a major challenge. This thesis offers possible solutions to these problems.
First, new measures, called clustering performance measures (CPMs), for assessing the reliability of a clustering algorithm are introduced. These CPMs can be used to evaluate: 1) clustering algorithms that have a structural bias toward a certain type of data distribution as well as those that have no such bias, and 2) clustering algorithms that have an initialisation dependency as well as those that produce a unique solution for a given set of parameter values, with no initialisation dependency.
Then, a novel clustering algorithm, the RAdius-based Clustering ALgorithm (RACAL), is proposed. RACAL uses a distance-based principle to map the distributions of the data, assuming that clusters are determined by a distance parameter, without having to specify the number of clusters. Furthermore, RACAL is enhanced by a validity index to choose the best clustering result, i.e., the result with compact clusters and wide cluster separations, for a given input parameter. Comparisons with other clustering algorithms indicate the applicability and reliability of the proposed algorithm. Additionally, an adaptive partial supervision strategy is proposed for use in conjunction with RACAL to make it act as a classifier. Results from RACAL with partial supervision, RACAL-PS, indicate its robustness in classification. A parallel version of RACAL (P-RACAL) is also proposed. The parallel evaluations of P-RACAL indicate that it is scalable in terms of speedup and scaleup, which gives it the ability to handle large, high-dimensional datasets in a reasonable time.
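The radius-driven idea described above can be illustrated with a toy sketch. This is a deliberate simplification, not RACAL itself: a point joins the first cluster whose centroid lies within a distance parameter, otherwise it seeds a new cluster, so the number of clusters emerges from the data rather than being specified up front.

```python
import numpy as np

def radius_cluster(points, radius):
    """Toy radius-based clustering (illustrative only, not RACAL itself):
    a point joins the first cluster whose centroid lies within `radius`,
    otherwise it seeds a new cluster."""
    centroids, members = [], []
    for p in points:
        for i, c in enumerate(centroids):
            if np.linalg.norm(p - c) <= radius:
                members[i].append(p)
                centroids[i] = np.mean(members[i], axis=0)  # update centroid
                break
        else:  # no existing cluster is close enough: seed a new one
            centroids.append(np.asarray(p, dtype=float))
            members.append([p])
    return centroids, members

rng = np.random.default_rng(1)
# Two well-separated blobs; a radius of 1.0 should recover two clusters.
data = np.vstack([rng.normal(0, 0.2, (50, 2)), rng.normal(5, 0.2, (50, 2))])
cents, mems = radius_cluster(data, radius=1.0)
print("clusters found:", len(cents))
```

Varying the distance parameter changes how many clusters emerge, which is exactly why a validity index is useful for selecting the best result.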
Next, a novel clustering algorithm, which achieves clustering without any control of cluster sizes, is introduced. This algorithm, called the Nearest Neighbour Clustering Algorithm (NNCA), uses the same concept as the K-Nearest Neighbour (KNN) classifier, with the advantage that it needs no training set and is completely unsupervised. Additionally, NNCA is augmented with a partial supervision strategy, NNCA-PS, to act as a classifier. Comparisons with other methods indicate the robustness of the proposed method in classification. Experiments in a parallel environment also indicate the suitability and scalability of the parallel NNCA, P-NNCA, in handling large datasets.
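A simplified reading of the nearest-neighbour clustering idea can be sketched as follows. This is an illustrative toy, not the thesis algorithm: each point adopts the cluster of its nearest already-labelled neighbour when that neighbour is closer than a threshold `theta` (an assumed parameter introduced here for the sketch), otherwise it starts a new cluster, so no training set and no preset number of clusters are needed.

```python
import numpy as np

def nnca_sketch(points, theta):
    """Simplified nearest-neighbour clustering sketch: a point adopts the
    cluster label of its nearest already-labelled neighbour when that
    neighbour is within theta; otherwise it starts a new cluster."""
    labels = [0]
    next_label = 1
    for i in range(1, len(points)):
        d = np.linalg.norm(points[:i] - points[i], axis=1)
        j = int(d.argmin())                  # nearest labelled neighbour
        if d[j] <= theta:
            labels.append(labels[j])
        else:
            labels.append(next_label)
            next_label += 1
    return labels

rng = np.random.default_rng(2)
# Two tight, well-separated blobs; theta = 0.5 should yield two clusters.
data = np.vstack([rng.normal(0, 0.1, (30, 2)), rng.normal(4, 0.1, (30, 2))])
labels = nnca_sketch(data, theta=0.5)
print("clusters found:", len(set(labels)))
```

As with the radius-based sketch above, cluster membership follows local neighbour relations, which is the KNN-style concept the abstract refers to.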
Further investigations on more challenging data are carried out. In this context, microarray data are considered. In such data, the number of clusters is not clearly defined, which points directly towards clustering algorithms that do not require knowledge of the number of clusters; the efficacy of one of these algorithms is therefore examined. Finally, a novel integrated clustering performance measure (ICPM) is proposed as a guideline for choosing the clustering algorithm best able to extract useful biological information from a particular dataset.
Supplied by The British Library - 'The world's knowledge'
D13.1 Fundamental issues on energy- and bandwidth-efficient communications and networking
Deliverable D13.1 of the European project NEWCOM#. The report presents the current status of the research area of energy- and bandwidth-efficient communications and networking and highlights the fundamental issues still open for further investigation. Furthermore, the report presents the Joint Research Activities (JRAs) that will be performed within WP1.3. For each activity there is a description, an identification of its adherence to the identified fundamental open issues, a presentation of the initial results, and a roadmap for the planned joint research work in each topic. Preprint
5G Outlook – Innovations and Applications
5G Outlook - Innovations and Applications is a collection of recent research and development in the area of fifth-generation mobile technology (5G), the future of wireless communications. Plenty of novel ideas and knowledge about 5G are presented in this book, as well as diverse applications ranging from health science to business modeling. The chapter authors contributed from various countries and organizations, and the chapters were also presented at the 5th IEEE 5G Summit held in Aalborg on July 1, 2016. The book starts with a comprehensive introduction to 5G and its need and requirements. Then millimeter waves are discussed as a promising spectrum for 5G technology. The book continues with novel and inspiring ideas for future wireless communication usage and networks. Further, some technical issues in signal processing and network design for 5G are presented. Finally, the book ends with different applications of 5G in distinct areas. Topics widely covered in this book are: • 5G technology from past to present to the future • Millimeter waves and their characteristics • Signal processing and network design issues for 5G • Applications, business modeling, and several novel ideas for the future of 5G
Bioinspired metaheuristic algorithms for global optimization
This paper presents a concise comparison study of newly developed bioinspired algorithms for global optimization problems. Three different metaheuristic techniques, namely Accelerated Particle Swarm Optimization (APSO), the Firefly Algorithm (FA), and the Grey Wolf Optimizer (GWO), are investigated and implemented in the Matlab environment. These methods are compared on four unimodal and multimodal nonlinear functions in order to find global optimum values. Computational results indicate that GWO outperforms the other intelligent techniques, and that all of the aforementioned algorithms can be successfully used for the optimization of continuous functions.
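To make the comparison concrete, the Grey Wolf Optimizer highlighted above can be sketched with its standard textbook update rule: the three best wolves (alpha, beta, delta) guide the rest of the pack toward promising regions. The sketch is in Python rather than Matlab, and the benchmark, population size, and iteration count are illustrative choices, not parameters taken from the paper.

```python
import numpy as np

def gwo(f, dim, n_wolves=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Minimal Grey Wolf Optimizer sketch (standard update rule)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(f, 1, X)
        order = fitness.argsort()
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1.0 - t / iters)          # coefficient decreasing 2 -> 0
        new = np.empty_like(X)
        for i in range(n_wolves):
            cand = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a
                C = 2.0 * r2
                D = np.abs(C * leader - X[i])    # distance to the leader
                cand.append(leader - A * D)      # move relative to leader
            # Average the three leader-guided moves, clipped to the bounds.
            new[i] = np.clip(np.mean(cand, axis=0), lb, ub)
        X = new
    fitness = np.apply_along_axis(f, 1, X)
    return X[fitness.argmin()], float(fitness.min())

# Sphere function: a standard unimodal benchmark with minimum 0 at the origin.
sphere = lambda x: float(np.sum(np.square(x)))
best_x, best_f = gwo(sphere, dim=5)
print("best sphere value found:", best_f)
```

On a smooth unimodal function like the sphere, the pack contracts around the best wolves as the coefficient `a` decays, driving the best value close to the true optimum of 0.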