11 research outputs found

    Development of a model for assessing and forecasting the state of soils in rural-urban areas based on an artificial neural network

    The authors propose a mathematical model in the form of an artificial neural network that makes it possible to assess and forecast the concentration of pollutants in the soil as a function of traffic flow parameters and the engineering characteristics of the adjacent road. The model is implemented using the Neural Network Toolbox application package and functions of the MATLAB system
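    As a rough sketch of the kind of model described here, a small feed-forward regression network can map traffic and road parameters to a pollutant concentration. The sketch below uses Python and scikit-learn rather than MATLAB's Neural Network Toolbox, and the input features (traffic intensity, share of heavy vehicles, distance from the road) and pollutant values are illustrative assumptions, not the authors' data.

    # Hypothetical sketch: predict a soil pollutant concentration from traffic
    # flow and road parameters with a small multilayer perceptron.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Assumed columns: vehicles per hour, share of heavy vehicles, distance to road (m).
    X = np.array([[1200, 0.15, 10],
                  [800,  0.05, 50],
                  [2500, 0.30, 5],
                  [400,  0.02, 100]], dtype=float)
    y = np.array([95.0, 40.0, 180.0, 20.0])   # e.g. lead concentration, mg/kg (made up)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                       random_state=0))
    model.fit(X, y)
    print(model.predict([[1500, 0.20, 20]]))  # forecast for a new road segment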

    Toward a Robust Sparse Data Representation for Wireless Sensor Networks

    Compressive sensing has been successfully used for optimized operations in wireless sensor networks. However, raw data collected by sensors may be neither originally sparse nor easily transformed into a sparse data representation. This paper addresses the problem of transforming source data collected by sensor nodes into a sparse representation with a few nonzero elements. Our contributions that address three major issues include: 1) an effective method that extracts population sparsity of the data, 2) a sparsity ratio guarantee scheme, and 3) a customized learning algorithm of the sparsifying dictionary. We introduce an unsupervised neural network to extract an intrinsic sparse coding of the data. The sparse codes are generated at the activation of the hidden layer using a sparsity nomination constraint and a shrinking mechanism. Our analysis using real data samples shows that the proposed method outperforms conventional sparsity-inducing methods
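    The shrinkage idea named in the abstract can be illustrated with a generic tied-weight autoencoder whose hidden activations pass through a soft-threshold, so that small codes become exactly zero. This is only a minimal numpy sketch of the general mechanism, not the authors' architecture or learning algorithm; the data, dimensions and training schedule are assumptions.

    # Minimal sketch: sparse codes via a soft-threshold (shrinkage) hidden layer.
    import numpy as np

    def soft_threshold(z, theta):
        # Shrinkage: push small activations to exactly zero -> sparse codes.
        return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

    rng = np.random.default_rng(0)
    n, d, k = 200, 16, 32                    # samples, input dim, code dim (assumed)
    X = rng.normal(size=(n, 4)) @ rng.normal(size=(4, d))   # compressible, not sparse

    W = rng.normal(scale=0.1, size=(d, k))   # tied encoder/decoder weights
    theta, lr = 0.1, 0.01
    for _ in range(500):
        H = soft_threshold(X @ W, theta)     # sparse codes at the hidden layer
        err = H @ W.T - X                    # reconstruction error
        mask = (H != 0).astype(float)        # sub-gradient through the shrinkage
        grad = X.T @ ((err @ W) * mask) + err.T @ H
        W -= lr * grad / n

    H = soft_threshold(X @ W, theta)
    print("fraction of zero code entries:", np.mean(H == 0.0))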

    ARTIFICIAL NEURAL NETWORKS PRUNING APPROACH FOR GEODETIC VELOCITY FIELD DETERMINATION

    There has been a need for geodetic network densification since the early days of traditional surveying. In order to densify geodetic networks in a way that will produce the most effective reference frame improvements, the crustal velocity field must be modelled. Artificial Neural Networks (ANNs) are widely used as function approximators in diverse fields of geoinformatics, including velocity field determination. Deciding the number of hidden neurons required for the implementation of an arbitrary function is one of the major problems of ANNs that still deserves further exploration. Generally, the number of hidden neurons is decided on the basis of experience. This paper attempts to quantify the significance of pruning away hidden neurons in the ANN architecture for velocity field determination. An initial back propagation artificial neural network (BPANN) with 30 hidden neurons is trained on the training data and the resulting BPANN is applied to the test and validation data. The number of hidden neurons is subsequently decreased, in pairs from 30 to 2, to achieve the best predicting model. These pruned BPANNs are retrained and applied to the test and validation data. Some existing methods for selecting the number of hidden neurons are also used. The results are evaluated in terms of the root mean square error (RMSE) over a study area for optimizing the number of hidden neurons in estimating densification point velocity by BPANN
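    The pruning search described above (start with 30 hidden neurons, shrink the hidden layer in pairs, keep the size with the lowest RMSE) can be sketched as a simple loop. The synthetic coordinates and velocities below, and the use of scikit-learn instead of the authors' BPANN implementation, are assumptions made only to illustrate the procedure.

    # Sketch: prune hidden neurons in pairs from 30 to 2 and pick the best RMSE.
    import numpy as np
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(300, 2))                  # stand-in point coordinates
    y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]         # stand-in velocity surface
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

    rmse = {}
    for n_hidden in range(30, 1, -2):               # 30, 28, ..., 4, 2
        net = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=5000,
                           random_state=0).fit(X_tr, y_tr)
        rmse[n_hidden] = mean_squared_error(y_val, net.predict(X_val)) ** 0.5

    best = min(rmse, key=rmse.get)
    print(f"best hidden-layer size: {best}  (validation RMSE {rmse[best]:.4f})")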

    JMASM 55: MATLAB Algorithms and Source Codes of 'cbnet' Function for Univariate Time Series Modeling with Neural Networks (MATLAB)

    Artificial Neural Networks (ANN) can be designed as a nonparametric tool for time series modeling, and MATLAB serves as a powerful environment for ANN modeling. Although the Neural Network Time Series Tool (ntstool) is useful for modeling time series, more detailed functions are needed in order to obtain more detailed and comprehensive analysis results. For these purposes, the cbnet function has been developed, with properties such as an input lag generator, a step-ahead forecaster, a trial-and-error based network selection strategy, alternative network selection with various performance measures, and a global repetition feature to obtain more alternative networks, and its MATLAB algorithms and source codes are introduced. A detailed comparison with ntstool is carried out, showing that the cbnet function addresses the shortcomings of ntstool
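    The two core ingredients mentioned here, an input lag generator and a step-ahead forecaster, can be sketched outside MATLAB as well. The following Python snippet is only an assumed re-expression of the idea; the lag order, network size and toy series are not taken from the paper.

    # Sketch: build a lagged design matrix from a univariate series and forecast one step ahead.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def lag_matrix(series, n_lags):
        # Input lag generator: row t is [y_t, ..., y_{t+n_lags-1}], target y_{t+n_lags}.
        X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
        return X, series[n_lags:]

    rng = np.random.default_rng(2)
    series = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.normal(size=200)
    X, y = lag_matrix(series, n_lags=4)

    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
    print(net.predict(series[-4:].reshape(1, -1)))   # one-step-ahead forecast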

    Comparison of file sanitization techniques in USB based on average file entropy values

    Nowadays, technology has become so advanced that many electronic gadgets are in every household. The fast growth of technology gives digital devices like smartphones and laptops huge storage capacities, which lets people keep much of their information, such as contact lists, photos, videos and even personal information. When this information is no longer useful, users delete it. However, the growth of technology also lets people recover data that has been deleted. In this case, users do not realise that their deleted data can be recovered and then used by an unauthorized user: the deleted data is invisible but not gone. This is where file sanitization plays its role. File sanitization is the process of deleting the content of a file and overwriting it with different characters. In this research, the methods chosen to sanitize files are Write Zero, Write Zero Randomly and Write Zero Alternately. All of the techniques overwrite data with zeros. The best technique is chosen based on a comparison of the average entropy value of the files after they have been overwritten. Write Zero is the only technique provided by software such as WipeFile and BitKiller; no software provides the Write Zero Randomly technique except for sanitizing disks using dd. Therefore, Write Zero Randomly and the proposed technique, Write Zero Alternately, were developed using the C programming language in Dev-C++. In this research, sanitization with Write Zero has the lowest average entropy value for text document (TXT), Microsoft Word (DOCX) and image (JPG) files, with 100% of the data in the files having been zero-filled, compared to Write Zero Randomly and Write Zero Alternately. Next, Write Zero Alternately is more efficient in terms of average entropy, with 4.64 bpB, than its closest competitor, Write Zero Randomly, with 5.02 bpB. This shows that Write Zero is the best sanitization method. These file sanitization techniques are important for maintaining confidentiality against unauthorized users
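    The two ingredients being compared, overwriting a file with zeros and measuring the average entropy of the result in bits per byte (bpB), might look roughly as follows. The original techniques were implemented in C in Dev-C++, so this Python version is only an assumed illustration of the idea, with a made-up sample file.

    # Sketch: Write Zero overwrite and Shannon entropy in bits per byte (bpB).
    import math, os
    from collections import Counter

    def entropy_bits_per_byte(path):
        # Shannon entropy of the file contents; 0 bpB = one repeated byte value (e.g. all zeros).
        data = open(path, "rb").read()
        if not data:
            return 0.0
        counts = Counter(data)
        return -sum((c / len(data)) * math.log2(c / len(data)) for c in counts.values())

    def write_zero(path):
        # Write Zero: overwrite every byte of the file with 0x00 in place.
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())

    with open("sample.bin", "wb") as f:
        f.write(os.urandom(4096))                    # high-entropy content (~8 bpB)
    print("before:", entropy_bits_per_byte("sample.bin"))
    write_zero("sample.bin")
    print("after: ", entropy_bits_per_byte("sample.bin"))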

    Neural Network Based Models for Short-Term Traffic Flow Forecasting Using a Hybrid Exponential Smoothing and Levenberg–Marquardt Algorithm

    This paper proposes a novel neural network (NN) training method that employs the hybrid exponential smoothing method and the Levenberg–Marquardt (LM) algorithm, which aims to improve the generalization capabilities of previously used methods for training NNs for short-term traffic flow forecasting. The approach uses exponential smoothing to preprocess traffic flow data by removing the lumpiness from collected traffic flow data, before employing a variant of the LM algorithm to train the NN weights of an NN model. This approach aids NN training, as the preprocessed traffic flow data are smoother and more continuous than the original unprocessed traffic flow data. The proposed method was evaluated by forecasting short-term traffic flow conditions on the Mitchell Freeway in Western Australia. With regard to the generalization capabilities for short-term traffic flow forecasting, the NN models developed using the proposed approach outperform those that are developed based on the alternative tested algorithms, which are particularly designed either for short-term traffic flow forecasting or for enhancing generalization capabilities of NNs
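    The preprocessing step can be illustrated with simple exponential smoothing applied before network training. The smoothing constant, the toy traffic series and the use of scikit-learn's default optimizer instead of the Levenberg–Marquardt variant are assumptions made only for this sketch.

    # Sketch: smooth the traffic series, then train an NN on lagged smoothed values.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def exponential_smoothing(y, alpha=0.3):
        # s_t = alpha * y_t + (1 - alpha) * s_{t-1}: removes "lumpiness" from the series.
        s = np.empty_like(y, dtype=float)
        s[0] = y[0]
        for t in range(1, len(y)):
            s[t] = alpha * y[t] + (1 - alpha) * s[t - 1]
        return s

    rng = np.random.default_rng(3)
    flow = 500 + 200 * np.sin(np.linspace(0, 6 * np.pi, 288)) + rng.normal(0, 40, 288)
    smooth = exponential_smoothing(flow)

    X = np.column_stack([smooth[:-3], smooth[1:-2], smooth[2:-1]])   # three lagged inputs
    y = smooth[3:]
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, y)
    print(net.predict(smooth[-3:].reshape(1, -1)))   # next-interval forecast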

    Enhanced genetic algorithm-based back propagation neural network to diagnose conditions of multiple-bearing system

    Condition diagnosis of a critical system such as a multiple-bearing system is one of the most important maintenance activities in industry because it is essential that faults are detected early, before the performance of the whole system is affected. Currently, the most significant issues in condition diagnosis are how to improve accuracy and stability of accuracy, as well as how to lessen the complexity of the diagnosis, which would reduce processing time. Researchers have developed diagnosis techniques based on metaheuristics, specifically the Back Propagation Neural Network (BPNN), for single-bearing systems and small numbers of condition classes. However, these are not directly applicable or effective for multiple-bearing systems because the diagnosis accuracy achieved is unsatisfactory. Therefore, this research proposed hybrid techniques to improve the performance of BPNN in terms of accuracy and stability of accuracy by using an Adaptive Genetic Algorithm with a Back Propagation Neural Network (AGA-BPNN), and multiple BPNNs with AGA-BPNN (mBPNN-AGA-BPNN). These techniques are tested and validated on vibration signal data of a multiple-bearing system. Experimental results showed the proposed techniques outperformed the BPNN in condition diagnosis. However, the large number of features from a multiple-bearing system affected the complexity of AGA-BPNN and mBPNN-AGA-BPNN, and significantly increased the amount of required processing time. Thus, to investigate further whether the number of features required can be reduced without compromising the diagnosis accuracy and stability, Grey Relational Analysis (GRA) was applied to determine the most dominant features and reduce the complexity of the diagnosis techniques. The experimental results showed that the hybrid of GRA and mBPNN-AGA-BPNN achieved accuracies of 99% for training, 100% for validation and 100% for testing. Besides that, the accuracy of the proposed hybrid increased by 11.9%, 13.5% and 11.9% in training, validation and testing respectively when compared to the standard BPNN. The hybrid also lessened the complexity, reducing processing time by nearly 55.96%. Furthermore, it improved the stability of the accuracy, whereby the differences in accuracy between the maximum and minimum values were 0.2%, 0% and 0% for training, validation and testing respectively. Hence, it can be concluded that the proposed diagnosis techniques have improved the accuracy and stability of accuracy with minimum complexity and significantly reduced processing time
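    Of the building blocks above, the Grey Relational Analysis step is the easiest to sketch in isolation: each feature is graded by how closely it tracks a reference sequence, and the highest-graded features are kept as dominant. The stand-in condition indicator and noise levels below are assumptions, not the thesis's vibration features, and the AGA-BPNN training itself is omitted.

    # Sketch: Grey Relational Analysis for ranking features against a reference sequence.
    import numpy as np

    def grey_relational_grades(features, reference, zeta=0.5):
        # Higher grade = feature tracks the reference more closely (more dominant).
        def norm(v):
            return (v - v.min()) / (v.max() - v.min() + 1e-12)
        x0 = norm(reference)
        deltas = np.abs(np.column_stack([norm(features[:, j])
                                         for j in range(features.shape[1])]) - x0[:, None])
        d_min, d_max = deltas.min(), deltas.max()
        gamma = (d_min + zeta * d_max) / (deltas + zeta * d_max)   # grey relational coefficients
        return gamma.mean(axis=0)                                  # one grade per feature

    rng = np.random.default_rng(4)
    condition = rng.uniform(size=100)                              # stand-in condition indicator
    features = np.column_stack([condition + rng.normal(0, 0.05, 100),   # strongly related
                                condition + rng.normal(0, 0.5, 100),    # weakly related
                                rng.uniform(size=100)])                 # unrelated
    grades = grey_relational_grades(features, condition)
    print(np.argsort(grades)[::-1])   # feature indices, most dominant first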

    3D facial model analysis for clinical medicine

    Ph.D. (Doctor of Philosophy)

    Light curves and multidimensional reconstructions of photon observations

    This thesis focuses on the development and application of Bayesian inference methods to extract physically relevant information from noise-contaminated photon observations, and to classify observations of complex, stochastically evolving systems into different classes based on a few training samples of each class. To this latter end we develop the dynamic system classifier (DSC).
This is based on the fundamental assumption that many complex systems may be described in a simplified framework by stochastic differential equations (SDE) with time-dependent coefficients. These are used to abstract information from a class of similar but not identical simulated systems. The DSC is split into two phases. In the first learning phase the coefficients of the SDE are learned from a small training data set. Once these are obtained, they serve for an inexpensive comparison of data against each class. We develop, implement, and test both steps in a Bayesian inference framework for continuous quantities, namely information field theory. Astronomical imaging based on photon count data is a challenging task but absolutely necessary given today's availability of space-based X-ray and γ-ray telescopes. In this context we advance the existing D3PO algorithm into D4PO to denoise, deconvolve, and decompose multidimensional photon observations into morphologically different components. The decomposition is driven by a probabilistic hierarchical Bayesian parameter model, allowing us to reconstruct fields that are defined over the product space of multiple manifolds. Thereby D4PO decomposes the photon count data into a diffuse, a point-like, and a background component, while it simultaneously learns the correlation structure over each of their manifolds individually. The capabilities of the algorithm are demonstrated by applying it to a simulated high-energy photon count data set. Finally we apply D4PO to analyse the giant magnetar flare data of SGR 1806-20 and SGR 1900+14. The algorithm successfully reconstructs the logarithmic photon flux as well as its power spectrum. In contrast to previous findings we cannot confirm quasi-periodic oscillations (QPO) in the decaying tails of these events at frequencies ν > 17 Hz. They might not be real, as they fall into the noise-dominated regime of the spectrum. Nevertheless we find new candidates for oscillations at ν ≈ 9.2 Hz (SGR 1806-20) and ν ≈ 7.7 Hz (SGR 1900+14). In case these oscillations are real, state-of-the-art theoretical models of magnetars favour relatively weak magnetic fields in the range of B ≈ 6×10¹³ to 3×10¹⁴ G
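    Schematically, the denoising and decomposition problem described above rests on the Poisson statistics of photon counts: the observed counts d scatter around an expected rate λ obtained by applying the instrument response R to the sum of the flux components. The notation below is a generic sketch of that setup, not the exact D4PO parameterization or its hierarchical priors.

    P(d \mid \lambda) = \prod_i \frac{\lambda_i^{d_i}}{d_i!}\, e^{-\lambda_i},
    \qquad
    \lambda = R\!\left(\rho_{\text{diffuse}} + \rho_{\text{point}} + \rho_{\text{background}}\right)

    Inference then amounts to finding the component fields ρ that make the observed counts probable under additional priors encoding the correlation structure of each component.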