107 research outputs found

    Image Compression Using Cascaded Neural Networks

    Get PDF
    Images form an increasingly large part of modern communications, bringing the need for efficient and effective compression. Techniques developed for this purpose include transform coding, vector quantization, and neural networks. In this thesis, a new neural network method is used to achieve image compression. This work extends the use of 2-layer neural networks to a combination of cascaded networks, each with one node in the hidden layer. A redistribution of the gray levels in the training phase is implemented in a random fashion to make the minimization of the mean square error applicable to a broad range of images. The computational complexity of this approach is analyzed in terms of the overall number of weights and overall convergence. Image quality is measured objectively, using peak signal-to-noise ratio, and subjectively, using human perception. The effects of different image contents and compression ratios are assessed. Results show the performance superiority of cascaded neural networks over fixed-architecture training paradigms, especially at high compression ratios. The proposed method is implemented in MATLAB, and the results obtained, such as compression ratio and computing time of the compressed images, are presented.
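In the linear case, a cascade of one-hidden-node networks can be read as a sequence of rank-1 approximations: each stage encodes a block of pixels to a single value, decodes it back, and hands the residual to the next stage. A minimal pure-Python sketch of that reading follows; it is not the thesis's MATLAB implementation, and the block size, iteration count, and initialization are illustrative assumptions.

```python
import random
random.seed(1)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rank1_stage(blocks, iters=100):
    # Fit one encode/decode weight vector (a single hidden node) with a
    # power-iteration-style loop; with linear units this extracts the
    # dominant component of the blocks.
    n = len(blocks[0])
    w = [random.random() for _ in range(n)]
    for _ in range(iters):
        codes = [dot(b, w) for b in blocks]               # encode each block
        w = [sum(c * b[i] for c, b in zip(codes, blocks)) # re-estimate weights
             for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / (norm + 1e-12) for x in w]
    return w

def cascade(blocks, stages=2):
    # Train each stage on the residual left by the previous stages.
    ws = []
    residuals = [list(b) for b in blocks]
    for _ in range(stages):
        w = rank1_stage(residuals)
        ws.append(w)
        for b in residuals:
            c = dot(b, w)                  # one-node code for this block
            for i in range(len(b)):
                b[i] -= c * w[i]           # subtract this stage's reconstruction
    return ws, residuals
```

Compression comes from storing, per block, one code per stage plus the shared weight vectors instead of every pixel; more stages trade ratio for fidelity.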

    Thermocouple signal conditioning using artificial neural networks.

    Get PDF
    Masters Degree. University of KwaZulu-Natal, Durban. Thermocouples are probably the most widely used temperature sensing devices in industrial applications, owing to their relatively high accuracy. Thermocouples sense temperature via thermoelectric voltages arising from temperature differences between the hot and cold junctions of the thermocouple. The generated thermoelectric voltage is nonlinear in form. Linear approximations in the conversion of thermoelectric voltages into temperature readings compromise the accuracy of the derived temperature values, requiring further processing of the thermocouple voltage for improved temperature measurements. Moreover, undetected variations in the cold-junction temperature can further worsen the accuracy of the temperature sensor. The current study researched the enhancement of the accuracy of thermocouple temperature measurement subject to both random variations in the reference-junction temperature and nonlinearities, with validation of the design process using T, R, E, and J thermocouples. To this end, the ITS-90 thermocouple tables, based on a fixed 0°C reference-junction temperature, were not adequate for the study, so the thermocouple polynomial equations for the T, R, E, and J thermocouples were simulated in MATLAB, with randomly generated cold-junction temperature values, to produce augmented ITS-90 tables for the four thermocouples studied. Results show that the augmented thermocouple tables matched the ITS-90 tables accurately when the reference-junction temperature was set to 0°C. Data samples were generated from each of the augmented thermocouple tables for neural network studies. Half of the data samples for each thermocouple were used to train ‘table-lookup’ Multilayer Perceptron (MLP) neural networks in MATLAB.
Each neural network used the cold-junction temperature and thermoelectric voltage as inputs, with the corresponding hot-junction temperature as the target output. The validation process for the augmented ITS-90 thermocouple tables showed that the E, T, R, and J thermocouples could all reproduce the hot-junction temperature within 0.01% of the values in the ITS-90 tables. The performance results for the neural networks showed that the E-type thermocouple network had a worst-case error within 0.2% in reproducing the hot-junction temperature. The J-type network showed a worst-case error within 0.1%, while the T- and R-type networks produced worst-case errors within 0.04% of the values generated by the augmented ITS-90 tables. For the practical validation of the development presented in this thesis, the structure of each trained MLP was coded as a subroutine on an Arduino Uno microcontroller. The hot junction of the thermocouple was placed in an oven regulated by a TTM-004 controller, while the cold junction remained at the ambient temperature of the laboratory, monitored by an LM35 temperature sensor connected to one of the inputs of the microcontroller. The experimental results showed that the oven temperature was evaluated to within 2%, 4%, and 3% by the signal conditioning unit using the T-type, J-type, and E-type thermocouples, respectively.
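The core relation behind cold-junction compensation is that the measured emf depends on both junctions. A minimal sketch under an assumed *linear* emf model with sensitivity S illustrates both the (cold-junction temperature, voltage) → hot-junction temperature mapping the MLP learns and why an undetected cold-junction shift corrupts the reading; the value of S below is an illustrative type-T-like magnitude, not an ITS-90 coefficient, and the thesis trains an MLP on the augmented tables precisely because the real curves are nonlinear.

```python
S = 0.043  # assumed sensitivity in mV per degC (illustrative, not ITS-90)

def emf(t_hot_c, t_cold_c, s=S):
    # linearized thermoelectric voltage between hot and cold junctions
    return s * (t_hot_c - t_cold_c)

def hot_junction_temp(emf_mv, t_cold_c, s=S):
    # invert the linear model; the thesis's MLP learns this mapping
    # (cold-junction temperature, emf) -> hot-junction temperature directly
    return t_cold_c + emf_mv / s
```

Feeding the same voltage with a cold-junction estimate that is 5°C too high shifts the recovered hot-junction temperature by the same 5°C, which is why the cold junction is monitored (by the LM35 in the practical setup above).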

    Neural networks: from the perceptron to deep nets

    Full text link
    Artificial neural networks have been studied through the prism of statistical mechanics as disordered systems since the 1980s, starting from the simple models of Hopfield's associative memory and the single-neuron perceptron classifier. Assuming data is generated by a teacher model, asymptotic generalisation predictions were originally derived using the replica method, and the online learning dynamics has been described in the large-system limit. In this chapter, we review the key original ideas of this literature along with their heritage in the ongoing quest to understand the efficiency of modern deep learning algorithms. One goal of current and future research is to characterize the bias of learning algorithms toward well-generalising minima in complex overparametrized loss landscapes with many solutions perfectly interpolating the training data. Works on perceptrons, two-layer committee machines, and kernel-like learning machines shed light on these benefits of overparametrization. Another goal is to understand the advantage of depth, now that models commonly feature tens or hundreds of layers. While replica computations apparently fall short in describing general deep neural network learning, studies of simplified linear or untrained models, as well as the derivation of scaling laws, provide the first elements of answers.
    Comment: Contribution to the book Spin Glass Theory and Far Beyond: Replica Symmetry Breaking after 40 Years; Chap. 2
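The teacher-student setup mentioned above is easy to simulate: a fixed teacher perceptron labels random inputs, a student perceptron is trained on those labels, and generalisation is measured as agreement with the teacher on fresh data. A small illustrative sketch (dimensions, sample counts, and epoch count are arbitrary choices, not values from the chapter):

```python
import random
random.seed(0)

N = 50          # input dimension
P_TRAIN = 400   # training examples, i.e. load alpha = P/N = 8

def sign(x):
    return 1 if x >= 0 else -1

def output(x, w):
    return sign(sum(a * b for a, b in zip(x, w)))

# the teacher generates the labels for random Gaussian inputs
teacher = [random.gauss(0, 1) for _ in range(N)]
train = [[random.gauss(0, 1) for _ in range(N)] for _ in range(P_TRAIN)]

# classic mistake-driven perceptron rule for the student
student = [0.0] * N
for _ in range(20):  # epochs
    for x in train:
        y = output(x, teacher)
        if output(x, student) != y:
            student = [s + y * xi for s, xi in zip(student, x)]

# generalisation: agreement with the teacher on fresh inputs
test_set = [[random.gauss(0, 1) for _ in range(N)] for _ in range(500)]
acc = sum(output(x, student) == output(x, teacher) for x in test_set) / 500
```

The statistical mechanics literature characterises exactly how `acc` approaches 1 as the load P/N grows in the large-N limit.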

    Investigation of the CasCor family of learning algorithms

    Get PDF

    Data mining using intelligent systems : an optimized weighted fuzzy decision tree approach

    Get PDF
    Data mining aims to analyze observational datasets to find relationships and to present the data in ways that are both understandable and useful. In this thesis, existing intelligent systems techniques such as the Self-Organizing Map, Fuzzy C-Means, and decision trees are used to analyze several datasets. These techniques provide flexible information processing capability for handling real-life situations. The thesis is concerned with the design, implementation, testing, and application of these techniques to those datasets. It also introduces a hybrid intelligent systems technique, the Optimized Weighted Fuzzy Decision Tree (OWFDT), with the aim of improving Fuzzy Decision Trees (FDT) and solving practical problems. The thesis first proposes an optimized weighted fuzzy decision tree, incorporating Fuzzy C-Means to fuzzify the input instances while keeping the expected labels crisp. This leads to a different output-layer activation function and weight connection in the neural network (NN) structure obtained by mapping the FDT to the NN. A momentum term was also introduced into the learning process to train the weight connections while avoiding oscillation or divergence. A new reasoning mechanism has also been proposed to combine the constructed tree with the weights optimized in the learning process. The thesis also compares the OWFDT with two benchmark algorithms, Fuzzy ID3 and weighted FDT. Six datasets ranging from material science to medical and civil engineering were introduced as case study applications. These datasets involve classification of composite material failure mechanisms, classification of electrocorticography (ECoG)/electroencephalogram (EEG) signals, eye bacteria prediction, and wave overtopping prediction.
Different intelligent systems techniques were used to cluster the patterns and predict the classes, while OWFDT was used to design classifiers for all the datasets. In the material dataset, the Self-Organizing Map and Fuzzy C-Means were used to cluster the acoustic event signals and assign those events to different failure mechanisms; after this clustering, OWFDT was introduced to design a classifier for the acoustic event signals. For the eye bacteria dataset, bagging was used to improve the classification accuracy of Multilayer Perceptrons and Decision Trees. Applying bootstrap aggregating (bagging) to Decision Trees also helped to select the most important sensors (features) so that the dimension of the data could be reduced. The most important features were used to grow the OWFDT, addressing the curse-of-dimensionality problem. The last dataset, concerned with wave overtopping, was used to benchmark OWFDT against other intelligent systems techniques, such as the Adaptive Neuro-Fuzzy Inference System (ANFIS), the Evolving Fuzzy Neural Network (EFuNN), the Genetic Neural Mathematical Method (GNMM), and Fuzzy ARTMAP. Through analyzing these datasets with these intelligent systems techniques, it has been shown that patterns can be found and classes can be assigned by combining the techniques. OWFDT has also demonstrated its efficiency and effectiveness compared with a conventional fuzzy Decision Tree and a weighted fuzzy Decision Tree.
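The fuzzification step above relies on Fuzzy C-Means: unlike hard clustering, every instance receives a graded membership in each cluster, and those memberships feed the fuzzy decision tree. A minimal 1-D sketch of the standard FCM update (the deterministic spread initialization and constants are choices for this sketch, not the thesis's configuration):

```python
def fcm(points, c=2, m=2.0, iters=50):
    """Fuzzy C-Means on a list of scalars: returns (centers, memberships)."""
    # deterministic spread initialization over the sorted data (sketch choice)
    srt = sorted(points)
    centers = [srt[(2 * j + 1) * len(srt) // (2 * c)] for j in range(c)]
    u = []
    for _ in range(iters):
        u = []
        for p in points:
            d = [abs(p - ctr) + 1e-9 for ctr in centers]  # distances to centers
            # standard FCM membership: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
            u.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(c))
                      for j in range(c)])
        # update centers as membership-weighted means
        centers = [
            sum(u[i][j] ** m * points[i] for i in range(len(points)))
            / sum(u[i][j] ** m for i in range(len(points)))
            for j in range(c)
        ]
    return centers, u
```

The memberships for each point sum to 1, so a point midway between two clusters enters the tree with roughly half weight in each, rather than a crisp label.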

    Intelligent packet error prediction for enhanced radio network performance

    Get PDF
    In cellular communication systems such as 4G and 5G, data packets (user-plane payload) quite often fail to be delivered successfully to the user equipment (UE). Because the network must re-transmit each failed data packet, failed packets introduce latency into the network. In some applications such latency might be tolerable by the UE, but in applications that require ultra-reliable low-latency communication (URLLC), latency becomes a critical issue. To cope with this issue, wireless networks typically rely on re-transmissions upon receiver request, or use naïve approaches such as packet duplication, transmitting data packets more than once to ensure that at least one copy arrives without error. In this thesis, we explore the feasibility of an intelligent solution to this issue, using network data with machine learning and neural networks to predict whether a data packet will fail to transmit in the next transmission time interval (TTI). Our research includes a detailed systematic study of which radio parameters to choose from the raw data (log files) and of data preprocessing. From our experiments we also determine how many past values of these radio parameters are useful for predicting packet failure in the next TTI. Moreover, we list the network parameters useful for making such a prediction and compare their contributions to the model. Finally, we show that intelligent packet error prediction can be done using machine learning that forecasts packet failure in the next TTI with sufficient accuracy. We compare the performance of different machine learning algorithms and show that boosted decision trees (XGBoost) perform best on the given dataset.
Compared to the naïve approaches used in cellular communication to avoid packet failures, our solution based on intelligent packet error prediction indicates promising practical applications in cellular networks for enhanced radio network performance, particularly in URLLC.
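The "how many past values" question above is, mechanically, a lag-feature construction: stack the last k TTIs of radio measurements into one vector and label it with whether the packet in the following TTI failed. A small sketch of that preprocessing step (function and variable names are illustrative, not from the thesis's pipeline):

```python
def make_lag_features(series, labels, k):
    """series: list of per-TTI measurement tuples (e.g. SINR, CQI, ...);
    labels: 1 if the packet in that TTI failed, else 0.
    Returns (X, y) where X[i] stacks the k TTIs preceding the one predicted."""
    X, y = [], []
    for t in range(k, len(series)):
        window = [v for tti in series[t - k:t] for v in tti]  # flatten past k TTIs
        X.append(window)
        y.append(labels[t])
    return X, y
```

The resulting (X, y) pairs are what a classifier such as boosted decision trees would be trained on; sweeping k then answers how much history helps the prediction.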