Linear Support Vector Machines for Error Correction in Optical Data Transmission
Reduction of bit error rates in optical transmission systems is an important task that is difficult to achieve. As speeds increase, so does the difficulty of reducing bit error rates. Channels have differing characteristics, which may change over time, and any error correction employed must be capable of operating at extremely high speeds. In this paper, a linear support vector machine is used to classify large-scale data sets of simulated optical transmission data in order to demonstrate its effectiveness at reducing bit error rates and its adaptability to the specifics of each channel. For the classification, LIBLINEAR is used, which is related to the popular LIBSVM classifier. It is found that it is possible to reduce the error rate on a very noisy channel to about 3 bits in a thousand. This is done by a linear separator that can be built in hardware and can operate at the high speed required of an operationally useful decoder.
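As a rough illustration of the approach, the sketch below trains scikit-learn's LinearSVC, which wraps the same LIBLINEAR library the paper uses, on synthetic two-level samples standing in for the simulated optical data. The noise model, signal levels, and parameters are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical stand-in for simulated optical transmission data:
# each received sample sits near one of two levels (bit 0 or 1)
# plus Gaussian channel noise.
n = 5000
bits = rng.integers(0, 2, n)
samples = bits + rng.normal(0.0, 0.2, n)  # noisy channel (sigma is a guess)
X = samples.reshape(-1, 1)

# LinearSVC uses the LIBLINEAR solver referenced in the paper.
clf = LinearSVC(C=1.0).fit(X[:4000], bits[:4000])
pred = clf.predict(X[4000:])
ber = np.mean(pred != bits[4000:])
print(f"bit error rate on held-out samples: {ber:.4f}")
```

The learned separator is a single threshold on one feature here, which is exactly the kind of linear decision rule that can be realised in high-speed hardware.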
Continuous-variable entanglement distillation over a pure loss channel with multiple quantum scissors
Entanglement distillation is a key primitive for distributing high-quality entanglement between remote locations. Probabilistic noiseless linear amplification based on the quantum scissors is a candidate for entanglement distillation from noisy continuous-variable (CV) entangled states. Being a non-Gaussian operation, the quantum scissors is challenging to analyze. We present a derivation of the non-Gaussian state heralded by multiple quantum scissors in a pure loss channel with two-mode squeezed vacuum input. We choose the reverse coherent information (RCI), a proven lower bound on the distillable entanglement of a quantum state under one-way local operations and classical communication (LOCC), as our figure of merit. We evaluate a Gaussian lower bound on the RCI of the heralded state. We show that it can exceed the unlimited two-way LOCC-assisted direct transmission entanglement distillation capacity of the pure loss channel. The optimal heralded Gaussian RCI with two quantum scissors is found to be significantly higher than that with a single quantum scissors, albeit at the cost of a decreased success probability. Our results fortify the possibility of a quantum repeater scheme for CV quantum states using the quantum scissors.
Comment: accepted for publication in Physical Review
Reducing Errors in Optical Data Transmission Using Trainable Machine Learning Methods
Reducing the Bit Error Ratio (BER) and improving the performance of modern coherent optical communication systems is a significant issue. As the distance travelled by the information signal increases, the bit error ratio degrades. Machine learning (ML) techniques have been used in applications associated with optical communication systems. The most common are artificial neural networks, Bayesian analysis, and support vector machines (SVMs). This thesis investigates how to improve the bit error ratio in optical data transmission using a trainable machine learning method, namely a Support Vector Machine (SVM). The SVM is a successful machine learning method for pattern recognition, and it outperformed the conventional threshold method based on measuring the phase value of each symbol's central sample. So that the described system can be implemented in hardware, this thesis focuses on applications of the SVM with a linear kernel, because a linear separator is easier to build in hardware at the high speed required of the decoder.
In this thesis, using an SVM to reduce the bit error ratio of signals that travel over various distances has been investigated thoroughly. In particular, attention has been paid to using the neighbouring information of each symbol being decoded. To further improve the bit error ratio, the wavelet transform (WT) technique was employed to reduce the noise of distorted optical signals; however, the method did not bring the sort of improvements that the wavelet literature had led me to expect.
It has been found that the most significant improvement in bit error ratio over the current threshold method comes from using a number of neighbours on either side of the symbol being decoded. This works much better than using more information from the symbol itself.
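The thesis's key finding, that neighbouring symbols carry useful decoding information, can be sketched as follows. The channel model (simple inter-symbol interference plus noise), the window size, and all parameters are illustrative assumptions, not the thesis's experimental setup.

```python
import numpy as np
from sklearn.svm import LinearSVC

def neighbour_features(samples, k):
    """Feature vector for each symbol: its own sample plus k neighbouring
    samples on either side (edges padded by repetition)."""
    padded = np.pad(samples, k, mode="edge")
    return np.stack(
        [padded[i:i + len(samples)] for i in range(2 * k + 1)], axis=1
    )

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 6000)
clean = bits.astype(float)
# Illustrative channel: each symbol leaks into its neighbours (ISI),
# plus additive Gaussian noise.
isi = 0.3 * np.roll(clean, 1) + 0.3 * np.roll(clean, -1)
received = clean + isi + rng.normal(0, 0.2, clean.size)

errors = {}
for k in (0, 2):  # no neighbours vs. two neighbours on each side
    X = neighbour_features(received, k)
    clf = LinearSVC(C=1.0).fit(X[:5000], bits[:5000])
    errors[k] = np.mean(clf.predict(X[5000:]) != bits[5000:])
    print(f"k={k} neighbours: error rate {errors[k]:.4f}")
```

With neighbours included, the linear SVM effectively learns an equalising filter that cancels the interference, which is why the error rate drops.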
Adaptive pattern recognition by mini-max neural networks as a part of an intelligent processor
In this decade and progressing into the 21st century, NASA will have missions including the Space Station and Earth-related planetary sciences. To support these missions, a high degree of sophistication in machine automation and an increasing data-processing throughput rate are necessary. Meeting these challenges requires intelligent machines, designed to support the necessary automation in remote space and hazardous environments. There are two approaches to designing these intelligent machines. One is the knowledge-based expert system approach, namely AI. The other is a non-rule approach based on parallel and distributed computing for adaptive fault tolerance, namely Neural or Natural Intelligence (NI). The union of AI and NI is the solution to the problem stated above. The NI segment of this union extracts features automatically by applying Cauchy simulated annealing to a mini-max cost energy function. The features discovered by NI can then be passed to the AI system for further processing, and vice versa. This passing increases reliability, for AI can follow the NI-formulated algorithm exactly, and can provide the context knowledge base as the constraints of neurocomputing. The mini-max cost function that solves for the unknown features can furthermore give us a top-down architectural design of neural networks by means of a Taylor series expansion of the cost function. A typical mini-max cost function consists of the sample variance of each class in the numerator, and the separation of the class centers in the denominator. Thus, when the total cost energy is minimized, the conflicting goals of intraclass clustering and interclass segregation are achieved simultaneously.
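A mini-max cost of the kind described, class variance in the numerator and center separation in the denominator, can be sketched as below. The exact functional form used in the paper may differ; this is only the variance-over-separation structure the abstract states.

```python
import numpy as np

def minimax_cost(features, labels):
    """Illustrative mini-max cost: total intraclass sample variance
    divided by the summed squared separation of class centers.
    Minimising it clusters each class (small numerator) while
    segregating the classes (large denominator)."""
    classes = np.unique(labels)
    centers = np.array([features[labels == c].mean(axis=0) for c in classes])
    intra = sum(features[labels == c].var(axis=0).sum() for c in classes)
    diffs = centers[:, None, :] - centers[None, :, :]
    inter = (diffs ** 2).sum() / 2  # each pair counted once
    return intra / inter

rng = np.random.default_rng(2)
labels = np.array([0] * 50 + [1] * 50)
# Tight clusters should score a lower (better) cost than loose ones.
tight = np.concatenate([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
loose = np.concatenate([rng.normal(0, 1.0, (50, 2)), rng.normal(3, 1.0, (50, 2))])
print(minimax_cost(tight, labels), minimax_cost(loose, labels))
```

Minimising this single scalar simultaneously rewards intraclass clustering and interclass segregation, which is the "conflicting goals achieved at once" point of the abstract.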
Proposal of a health care network based on big data analytics for PDs
Health care networks for Parkinson's disease (PD) have already been proposed in the literature, but most of them are not able to analyse the vast volume of data generated from medical examinations and collected and organised in a pre-defined manner. In this work, the authors propose a novel health care network based on big data analytics for PD. The main goal of the proposed architecture is to support clinicians in the objective assessment of typical PD motor issues and alterations. The proposed health care network is able to retrieve a vast volume of acquired heterogeneous data from a data warehouse and train an ensemble SVM to classify and rate the motor severity of a PD patient. Once the network is trained, it is able to analyse the data collected during motor examinations of a PD patient and generate a diagnostic report on the basis of the previously acquired knowledge. Such a diagnostic report represents a tool both to monitor the follow-up of the disease for each patient and to give clinicians robust advice about its severity.
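An ensemble of SVMs for severity rating, the core classifier of the proposed network, can be sketched with a bagging ensemble. The synthetic features, the four-level severity scale, and every parameter below are assumptions for illustration; the paper's real feature set comes from motor examinations.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Hypothetical stand-in for motor-examination features: each patient's
# feature vector drifts with a severity class 0-3.
severity = rng.integers(0, 4, 400)
features = severity[:, None] + rng.normal(0, 0.8, (400, 5))

# Ensemble SVM: several SVMs, each trained on a bootstrap sample,
# vote on the severity rating.
ensemble = BaggingClassifier(SVC(kernel="rbf", C=1.0),
                             n_estimators=10, random_state=0)
ensemble.fit(features[:300], severity[:300])
acc = ensemble.score(features[300:], severity[300:])
print(f"held-out severity accuracy: {acc:.2f}")
```

Bagging is one common way to build an SVM ensemble; the paper may combine its SVMs differently.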
A survey on fiber nonlinearity compensation for 400 Gbps and beyond optical communication systems
Optical communication systems represent the backbone of modern communication networks. Since their deployment, different fiber technologies, such as dispersion-shifted fibers and dispersion-compensating fibers, have been used to deal with optical fiber impairments. In recent years, thanks to the introduction of coherent detection based systems, fiber impairments can be mitigated using digital signal processing (DSP) algorithms. Coherent systems are used in the current 100 Gbps wavelength-division multiplexing (WDM) standard technology. They allow an increase in spectral efficiency by using multi-level modulation formats, and are combined with DSP techniques to combat the linear fiber distortions. In addition to linear impairments, the next-generation 400 Gbps/1 Tbps WDM systems are also more affected by fiber nonlinearity due to the Kerr effect. At high input power, the fiber nonlinear effects become more significant and their compensation is required to improve transmission performance. Several approaches have been proposed to deal with fiber nonlinearity. In this paper, after a brief description of the Kerr-induced nonlinear effects, a survey of fiber nonlinearity compensation (NLC) techniques is provided. We focus on the well-known NLC techniques and discuss their performance, as well as their implementation and complexity. An extension of the inter-subcarrier nonlinear interference canceler approach is also proposed. A performance evaluation of the well-known NLC techniques and the proposed approach is provided in the context of Nyquist and super-Nyquist superchannel systems.
Comment: Accepted in the IEEE Communications Surveys and Tutorials
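The best-known NLC technique covered by such surveys is digital back-propagation (DBP), which runs the received signal through a split-step model of the fiber with negated parameters. A minimal single-polarisation sketch follows; the fiber parameters are illustrative, and loss/amplification and noise are omitted, so back-propagation inverts this toy link exactly.

```python
import numpy as np

def split_step(field, h, n_steps, beta2, gamma, dt):
    """Forward propagation over n_steps segments of length h with the
    simple split-step method: a linear (dispersion) sub-step in the
    frequency domain, then a nonlinear (Kerr phase) sub-step."""
    w = 2 * np.pi * np.fft.fftfreq(field.size, dt)
    lin = np.exp(0.5j * beta2 * w ** 2 * h)
    for _ in range(n_steps):
        field = np.fft.ifft(np.fft.fft(field) * lin)
        field = field * np.exp(1j * gamma * np.abs(field) ** 2 * h)
    return field

def back_propagate(field, h, n_steps, beta2, gamma, dt):
    """Digital back-propagation: undo each segment in reverse order
    with negated parameters (nonlinear sub-step first, then linear)."""
    w = 2 * np.pi * np.fft.fftfreq(field.size, dt)
    lin = np.exp(-0.5j * beta2 * w ** 2 * h)
    for _ in range(n_steps):
        field = field * np.exp(-1j * gamma * np.abs(field) ** 2 * h)
        field = np.fft.ifft(np.fft.fft(field) * lin)
    return field

rng = np.random.default_rng(4)
tx = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 256)  # QPSK symbols
rx = split_step(tx, h=1.0, n_steps=20, beta2=-0.2, gamma=0.05, dt=1.0)
eq = back_propagate(rx, h=1.0, n_steps=20, beta2=-0.2, gamma=0.05, dt=1.0)
residual = np.max(np.abs(eq - tx))
print(f"max residual after back-propagation: {residual:.2e}")
```

In practice DBP's cost grows with the number of steps per span, which is why the survey's complexity comparisons matter.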
Mitigation of Nonlinear Impairments by Using Support Vector Machine and Nonlinear Volterra Equalizer
A support vector machine (SVM) based detection is applied to different equalization schemes for a data center interconnect link using coherent 64 GBd 64-QAM over 100 km standard single mode fiber (SSMF). Without any prior knowledge or heuristic assumptions, the SVM is able to learn and capture the transmission characteristics from only a short training data set. We show that, with the use of suitable kernel functions, the SVM can create nonlinear decision thresholds and reduce the errors caused by nonlinear phase noise (NLPN), laser phase noise, I/Q imbalances and so forth. In order to apply the SVM to 64-QAM we introduce a binary coding SVM, which provides a binary multiclass classification with reduced complexity. We investigate the performance of this SVM and show how it can improve the bit-error rate (BER) of the entire system. After 100 km the fiber-induced nonlinear penalty is reduced by 2 dB at a BER of 3.7 × 10^-3. Furthermore, we apply a nonlinear Volterra equalizer (NLVE), which is based on the nonlinear Volterra theory, as another method for mitigating nonlinear effects. The combination of SVM and NLVE reduces the large computational complexity of the NLVE and allows more accurate compensation of nonlinear transmission impairments.
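The abstract's central point, that a suitable kernel lets the SVM form nonlinear decision thresholds, can be sketched on a toy constellation. For brevity the sketch uses QPSK rather than the paper's 64-QAM, and the power-dependent phase rotation standing in for nonlinear phase noise is an illustrative assumption.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
# QPSK constellation (the paper uses 64-QAM) with amplitude jitter,
# distorted by a power-dependent phase rotation: a crude stand-in
# for nonlinear phase noise, which spirals the constellation.
symbols = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
labels = rng.integers(0, 4, 4000)
tx = symbols[labels] * (1 + rng.normal(0, 0.25, 4000))
rx = tx * np.exp(2.0j * np.abs(tx) ** 2)
rx += 0.05 * (rng.normal(size=4000) + 1j * rng.normal(size=4000))

# Train SVMs on the received I/Q samples with linear and RBF kernels.
X = np.column_stack([rx.real, rx.imag])
accs = {}
for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, C=10.0).fit(X[:3000], labels[:3000])
    accs[kernel] = clf.score(X[3000:], labels[3000:])
    print(f"{kernel} kernel accuracy: {accs[kernel]:.3f}")
```

Straight decision lines cannot follow the spiralled class regions, while the RBF kernel bends its thresholds around them, which is the effect the paper exploits for 64-QAM.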
Reduction of Nonlinear Intersubcarrier Intermixing in Coherent Optical OFDM by a Fast Newton-Based Support Vector Machine Nonlinear Equalizer
A fast Newton-based support vector machine (N-SVM) nonlinear equalizer (NLE) is experimentally demonstrated, for the first time, in 40 Gb/s 16-quadrature amplitude modulated coherent optical orthogonal frequency division multiplexing over 2000 km of transmission. It is shown that the N-SVM-NLE extends the optimum launched optical power by 2 dB compared to the benchmark Volterra-based NLE. The performance improvement by the N-SVM is due to its ability to tackle both deterministic fiber-induced nonlinear effects and the interaction between nonlinearities and stochastic noise (e.g., polarization-mode dispersion). An N-SVM is more tolerant to intersubcarrier nonlinear crosstalk effects than a Volterra-based NLE, especially when applied across all subcarriers simultaneously. In contrast to the conventional SVM, the proposed algorithm has reduced classifier complexity, offering lower computational load and execution time. For a low C-parameter of 4 (a penalty parameter related to complexity), an execution time of 1.6 s is required for the N-SVM to effectively mitigate nonlinearities. Compared to the conventional SVM, the computational load of the N-SVM is ~6 times lower.
Multivariate NIR studies of seed-water interaction in Scots Pine Seeds (Pinus sylvestris L.)
This thesis describes seed-water interaction using near infrared (NIR) spectroscopy, multivariate regression models and Scots pine seeds. The presented research covers classification of seed viability, prediction of seed moisture content, selection of NIR wavelengths and interpretation of seed-water interaction modelled and analysed by principal component analysis, ordinary least squares (OLS), partial least squares (PLS), bi-orthogonal least squares (BPLS) and genetic algorithms. The potential of using multivariate NIR calibration models for seed classification was demonstrated using filled viable and non-viable seeds, which could be separated with an accuracy of 98-99%. It was also shown that multivariate NIR calibration models gave low errors (0.7% and 1.9%) in prediction of seed moisture content for bulk seed and single seeds, respectively, using either NIR reflectance or transmittance spectroscopy. Genetic algorithms selected three to eight wavelength bands in the NIR region, and these narrow bands gave about the same prediction of seed moisture content (0.6% and 1.7%) as using the whole NIR interval in the PLS regression models. The selected regions were simulated as NIR filters in OLS regression, resulting in predictions of the same quality (0.7% and 2.1%). This finding opens possibilities to apply NIR sensors in fast and simple spectrometers for the determination of seed moisture content. Near infrared (NIR) radiation interacts with overtones of vibrating bonds in polar molecules. The resulting spectra contain chemical and physical information. This offers good possibilities not only to measure seed-water interactions but also to interpret processes within seeds. It is shown that seed-water interaction involves transitions and changes mainly in the covalent bonds O-H, C-H, C=O and N-H, emanating from ongoing physiological processes such as seed respiration and protein metabolism.
I propose that BPLS analysis, which has orthonormal loadings and orthogonal scores while giving the same predictions as conventional PLS regression, should be used as a standard to harmonise the interpretation of NIR spectra.