80 research outputs found

    Assessment of climate change and development of data based prediction models of sediment yields in Upper Indus Basin

    Get PDF
    High rates of sediment fluxes and their estimation in river catchments require the selection of efficient quantification approaches together with a better understanding of the dominant factors controlling the erosion process in time and space. A prior assessment of influencing factors such as discharge variation, climate, landscape, and flow processes is helpful for developing a suitable modelling approach to quantify sediment yields. One of the weakest aspects of sediment-load quantification is the use of traditional sediment rating curves (SRC) relating streamflow to suspended sediment concentration, which cannot account for hydro-meteorological variability or runoff-generation processes such as snow cover, snowmelt, and ice melt. In many cases the empirical Q-SSC relationship therefore yields inaccurate predictions. Today, data-based models using artificial intelligence can estimate sediment loads more precisely. Data-based models learn from the supplied datasets by establishing a suitable functional relationship between the output and its input variables for complex phenomena such as sediment transport. In this context, the data-based modelling algorithms of the present research work were developed at the Chair of Water and River Basin Management of the Karlsruhe Institute of Technology in Karlsruhe and used to predict sediment loads in the upper and lower sub-catchments of the Upper Indus Basin (UIB) of Pakistan.
The methodology underlying this work comprises four steps: (1) comparative assessment of the spatial variability and trends of discharges and sediment loads under the influence of climate change in the Upper Indus Basin; (2) application of soft-computing models with input vectors of snow-covered area, in addition to hydro-climatic data, to predict sediment loads; (3) prediction of sediment loads using hydro-climate and Normalized Difference Vegetation Index (NDVI) datasets with soft-computing models; (4) climate signals in suspended sediment yields from glacier- and snow-dominated sub-catchments of the Upper Indus Basin (UIB). This analysis carried out in the UIB has made it possible to better understand the dominant parameters, such as snow cover and hydrological processes, and to incorporate them into an improved prediction of sediment loads. The climate-change assessment of flows and sediments in the snow- and glacier-dominated UIB at 13 gauging stations shows that the annual flows and suspended sediments of the main Indus at Besham Qila, upstream of the Tarbela reservoir, are in a balanced state. However, the annual suspended sediment concentrations (SSC) decreased significantly, by 18.56% to 28.20% per decade, at Gilgit at Alam Bridge (a snow- and glacier-dominated basin), Indus at Kachura, and Brandu at Daggar (a basin dominated by low precipitation). During the summer period, SSC decreased significantly, by 18.63% to 27.79% per decade, together with the flows, in the Hindukush and western Karakoram regions owing to climate-change anomalies, and in the rain-fed lower sub-basin owing to reduced precipitation.
During the winter season, however, SSC increased significantly, by 20.08% to 40.72% per decade, owing to significant warming of the mean air temperature. The data-based modelling in the snow- and glacier-dominated Gilgit sub-basin was carried out using an artificial neural network (ANN), an adaptive neuro-fuzzy inference system with grid partitioning (ANFIS-GP), an adaptive neuro-fuzzy inference system with subtractive clustering (ANFIS-SC), an adaptive neuro-fuzzy inference system with fuzzy c-means clustering (ANFIS-FCM), multivariate adaptive regression splines (MARS), and sediment rating curves (SRC). The results of the machine learning algorithms show that the input combination of daily discharges (Qt), snow-covered area (SCAt), temperature (Tt-1), and evapotranspiration (Evapt-1) improved the performance of the sediment prediction models. Comparing the overall performance of the models, the ANN model outperformed the remaining models. In predicting sediment loads during peak periods, the predictions of the ANN, ANFIS-FCM, and MARS models were closer to the measured sediment loads. The ANFIS-FCM model, with a total absolute error of 81.31%, performed better in predicting peak sediments than ANN and MARS, with total absolute errors of 80.17% and 80.16%, respectively. The data-based modelling of sediment loads in the rain-dominated Brandu sub-basin was carried out using hydro-climate and biophysical input datasets consisting of flows, precipitation, mean air temperature, and the Normalized Difference Vegetation Index (NDVI).
The results of four artificial neural network (ANN) and three adaptive neuro-fuzzy inference system (ANFIS) algorithms for the Brandu sub-basin showed that the remotely sensed NDVI as a biophysical parameter, in addition to the hydro-climate parameters, did not improve the performance of the model. ANFIS-GP performed better than the other models in the testing phase with an input combination of discharge and precipitation. However, the ANN embedded with Levenberg-Marquardt (ANN-LM), for the period 1981-2010, performed best with input combinations of flows, precipitation, and mean air temperatures. The accuracy (R2) of the ANN-LM algorithm improved by up to 28% compared with the sediment rating curve (SRC). It was shown that for the lower part of the UIB rivers, precipitation and mean air temperature are the dominant factors for predicting sediment yields, while biophysical parameters (NDVI) play a subordinate role. The modelling to assess changes in SSC in the snow- and glacier-fed Gilgit and Astore sub-basins was carried out using a temperature-index degree-day model. The results of the Mann-Kendall trend test for the Gilgit and Astore rivers showed that the increase in SSC during the winter season is attributable to warming of the mean air temperature, increased winter precipitation, and increased winter snowmelt. During the spring season, the precipitation and snow-cover fractions increased in the Gilgit sub-basin, in contrast to their decrease in the Astore sub-basin. In the Gilgit sub-basin, summer SSC was significantly reduced owing to the combined effect of the Karakoram climate anomaly and the enlarged snow cover.
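The temperature-index degree-day model used for the Gilgit and Astore sub-basins reduces melt estimation to positive degree-days scaled by a melt factor; a minimal sketch, where the degree-day factor and threshold are hypothetical values rather than the study's calibrated parameters:

```python
# Sketch of a temperature-index (degree-day) melt model: daily melt is
# proportional to positive degree-days above a threshold temperature.
# The degree-day factor ddf (mm w.e. per day per degC) and the threshold
# t_thresh are illustrative, not calibrated values.
def degree_day_melt(temps_c, ddf=4.0, t_thresh=0.0):
    """Daily melt (mm water equivalent) from mean daily air temperature (degC)."""
    return [ddf * max(t - t_thresh, 0.0) for t in temps_c]

melt = degree_day_melt([-5.0, 0.0, 2.0, 6.5])  # -> [0.0, 0.0, 8.0, 26.0]
```

Days at or below the threshold contribute no melt; warmer days contribute linearly, which is why warming winter temperatures translate directly into higher modeled snowmelt.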
The reduction of summer SSC in the Gilgit River is attributable to the cooling of summer temperatures and the covering of the exposed proglacial landscape, resulting from increased snow, reduced debris flows, and reduced snowmelt from debris-covered glaciers. In contrast to the Gilgit River, summer SSC in the Astore River increased. The increase in SSC in the Astore sub-basin is attributable to the reduction of spring precipitation and snow cover, the warming of the mean summer air temperature, and the increase in effective precipitation. The results further indicate a shift in dominance from glacier melt to snowmelt in the Gilgit sub-basin, and from snow to rainfall in the Astore sub-basin, with respect to sediment loads in the UIB. The present research on the assessment of climate-driven changes in SSC and its prediction in both the upper and lower sub-basins of the UIB will be useful for better understanding the sediment transport process and, building on this improved process understanding, for deriving adapted sediment management and planning of future water infrastructure in the UIB
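For contrast with the data-based models, the traditional Q-SSC sediment rating curve criticized in this abstract is usually a power law, SSC = a·Q^b, fitted in log-log space; a minimal sketch on synthetic data (all values hypothetical, not UIB measurements):

```python
import numpy as np

# Hypothetical daily discharge Q (m^3/s) and suspended sediment
# concentration SSC (mg/L), generated from a known power law plus
# log-normal noise; real gauging data would replace this.
rng = np.random.default_rng(0)
Q = rng.uniform(50.0, 2000.0, size=200)
SSC = 0.5 * Q**1.4 * np.exp(rng.normal(0.0, 0.1, size=200))

# Fit log(SSC) = log(a) + b*log(Q) by ordinary least squares.
b, log_a = np.polyfit(np.log(Q), np.log(SSC), 1)
a = np.exp(log_a)

def src_predict(q):
    """Predict SSC from discharge using the fitted rating curve."""
    return a * q**b
```

Because the fit depends only on discharge, it cannot distinguish a snowmelt-driven flow from a rain-driven flow of the same magnitude, which is exactly the limitation the data-based models with snow-cover and temperature inputs address.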

    Classification of Acute Lymphocytic Leukemic Blood Cell Images using Hybrid CNN-Enhanced Ensemble SVM Models and Machine Learning Classifiers

    Get PDF
    Acute Lymphocytic Leukemia (ALL) is a dangerous malignant cancer caused by the overproduction of white blood cells. The white blood cells in our body are responsible for fighting infections; if the WBC count rises abnormally, immunity decreases, which can lead to serious health conditions. Malignant cancers such as ALL are life-threatening if the disease is not diagnosed at an early stage. If a person is suffering from ALL, the disease needs to be diagnosed before it starts spreading; once it spreads, the person's chances of survival are reduced. Hence the need for an accurate automated system to assist oncologists in diagnosing the disease as early as possible. This paper incorporates several algorithms enhanced to detect and classify ALL. To classify Acute Lymphocytic Leukemia, a hybrid model, termed Hybrid CNN-Enhanced Ensemble SVM, has been deployed to improve the accuracy of the diagnosis in the classification of malignancy. Machine learning classifiers are also used to design the system, which is then compared with the enhanced CNN on the performance metrics

    A survey on automated detection and classification of acute leukemia and WBCs in microscopic blood cells

    Full text link
    Leukemia (blood cancer) is an abnormal proliferation of White Blood Cells, or Leukocytes (WBCs), in the bone marrow and blood. Pathologists can diagnose leukemia by examining a person's blood sample under a microscope, identifying and categorizing it by counting various blood cells and assessing morphological features. This technique is time-consuming, and the pathologist's professional skills and experience may also affect the outcome. In computer vision, traditional machine learning and deep learning techniques are practical roadmaps that increase the accuracy and speed of diagnosing and classifying medical images such as microscopic blood cells. This paper provides a comprehensive analysis of the detection and classification of acute leukemia and WBCs in microscopic blood cells. First, we divide the previous works into six categories based on the output of the models. Then, we describe the various steps of detection and classification of acute leukemia and WBCs, including Data Augmentation, Preprocessing, Segmentation, Feature Extraction, Feature Selection (Reduction), and Classification, focusing on the classification step. Finally, we divide automated detection and classification of acute leukemia and WBCs into three categories, traditional, Deep Neural Network (DNN), and mixture (traditional and DNN) methods, based on the type of classifier used in the classification step, and analyze them. The results of this study show that in the diagnosis and classification of acute leukemia and WBCs, the Support Vector Machine (SVM) classifier among traditional machine learning models and the Convolutional Neural Network (CNN) classifier among deep learning models have been widely employed. The performance metrics of the models that use these classifiers are higher than those of the other models
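As a concrete illustration of the SVM classifier the survey finds most common among traditional methods, here is a minimal linear SVM trained by sub-gradient descent on the hinge loss; the two-dimensional synthetic features merely stand in for real morphological blood-cell descriptors:

```python
import numpy as np

# Minimal linear SVM via stochastic sub-gradient descent on the
# regularized hinge loss. Two synthetic Gaussian clusters stand in for
# feature vectors of normal cells (-1) and leukemic blasts (+1).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, (100, 2)),   # class -1
               rng.normal(+2.0, 1.0, (100, 2))])  # class +1
y = np.r_[-np.ones(100), np.ones(100)]

w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.01
for epoch in range(200):
    for i in rng.permutation(len(y)):
        margin = y[i] * (X[i] @ w + b)
        if margin < 1:                    # sample inside margin: hinge active
            w += lr * (y[i] * X[i] - lam * w)
            b += lr * y[i]
        else:                             # only the regularizer contributes
            w -= lr * lam * w

accuracy = float(np.mean(np.sign(X @ w + b) == y))
```

Production systems typically use a library implementation (and, in the hybrid methods surveyed, CNN-extracted features instead of hand-crafted ones), but the margin-maximizing update above is the core idea.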

    Application of fuzzy c-means clustering for analysis of chemical ionization mass spectra: insights into the gas-phase chemistry of NO3-initiated oxidation of isoprene

    Get PDF
    Oxidation of volatile organic compounds (VOCs) can lead to the formation of secondary organic aerosol (SOA), a significant component of atmospheric fine particles, which can affect air quality, human health, and climate. However, the current understanding of the formation mechanism of SOA is still incomplete, owing not only to the complexity of the chemistry but also to analytical challenges in SOA precursor detection and quantification. Recent instrumental advances, especially the development of high-resolution time-of-flight chemical ionization mass spectrometry (CIMS), have greatly enhanced the capability to detect low- and extremely low-volatility organic molecules (L/ELVOCs). Although the detection and characterization of low-volatility vapors have largely improved our understanding of SOA formation, analyzing and interpreting complex mass spectrometric data remains a challenging task. This necessitates the use of dimension-reduction techniques to simplify mass spectrometric data with the purpose of extracting chemical and kinetic information about the investigated system. Here we present an approach using fuzzy c-means clustering (FCM) to analyze CIMS data from chamber experiments aimed at investigating the gas-phase chemistry of nitrate-radical-initiated oxidation of isoprene. The performance of FCM was evaluated and validated. By applying FCM, various oxidation products were classified into different groups according to their chemical and kinetic properties, and the common patterns of their time series were identified, giving insights into the chemistry of the investigated system. The chemical properties are characterized by elemental ratios and average carbon oxidation state, and the kinetic behaviors are parameterized with a generation number and an effective rate coefficient (describing the average reactivity of a species) using the gamma kinetic parameterization model. 
In addition, the fuzziness of the FCM algorithm offers a way to separate isomers, or to distinguish the different chemical processes a species is involved in, which could be useful for mechanism development. Overall, FCM is a widely applicable technique for simplifying complex mass spectrometric data, and the chemical and kinetic properties derived from clustering can be utilized to understand the reaction system of interest
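A minimal sketch of the fuzzy c-means algorithm at the core of this approach: it alternates membership and center updates for a user-chosen fuzzifier m, and each sample ends with graded memberships across all clusters rather than a single hard label. The toy two-group data below stands in for real CIMS time-series features:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns (centers, U), where U is the
    n_samples x c membership matrix with rows summing to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U**m                                     # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))                   # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy demo: two well-separated groups of 2-D points stand in for the
# feature vectors of oxidation-product time series.
X = np.vstack([np.zeros((20, 2)), 10.0 * np.ones((20, 2))])
X += np.random.default_rng(2).normal(0.0, 0.1, X.shape)
centers, U = fuzzy_c_means(X, c=2)
hard_labels = U.argmax(axis=1)
```

It is precisely the soft membership matrix U, not the hard labels, that makes it possible to flag species whose signal is shared between two clusters, e.g. co-eluting isomers.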

    Data mining using intelligent systems : an optimized weighted fuzzy decision tree approach

    Get PDF
    Data mining aims to analyze observational datasets to find relationships and to present the data in ways that are both understandable and useful. In this thesis, some existing intelligent systems techniques, such as the Self-Organizing Map, Fuzzy C-Means, and decision trees, are used to analyze several datasets. The techniques provide flexible information-processing capability for handling real-life situations. This thesis is concerned with the design, implementation, testing, and application of these techniques to those datasets. The thesis also introduces a hybrid intelligent systems technique, the Optimized Weighted Fuzzy Decision Tree (OWFDT), with the aim of improving Fuzzy Decision Trees (FDT) and solving practical problems. The thesis first proposes an optimized weighted fuzzy decision tree, incorporating Fuzzy C-Means to fuzzify the input instances while keeping the expected labels crisp. This leads to a different output-layer activation function and weight connections in the neural network (NN) structure obtained by mapping the FDT to the NN. A momentum term was also introduced into the learning process to train the weight connections and avoid oscillation or divergence. A new reasoning mechanism was also proposed to combine the constructed tree with the weights optimized in the learning process. The thesis also compares the OWFDT with two benchmark algorithms, Fuzzy ID3 and weighted FDT. Six datasets ranging from materials science to medical and civil engineering were introduced as case-study applications. These datasets involve classification of composite material failure mechanisms, classification of electrocorticography (ECoG)/electroencephalogram (EEG) signals, eye bacteria prediction, and wave overtopping prediction. 
Different intelligent systems techniques were used to cluster the patterns and predict the classes, while OWFDT was used to design classifiers for all the datasets. In the material dataset, the Self-Organizing Map and Fuzzy C-Means were used to cluster the acoustic-event signals and assign those events to different failure mechanisms; after the classification, OWFDT was introduced to design a classifier for the acoustic-event signals. For the eye bacteria dataset, the bagging technique was used to improve the classification accuracy of Multilayer Perceptrons and Decision Trees. Bootstrap aggregating (bagging) applied to Decision Trees also helped to select the most important sensors (features) so that the dimensionality of the data could be reduced. The most important features were used to grow the OWFDT, addressing the curse-of-dimensionality problem. The last dataset, concerned with wave overtopping, was used to benchmark OWFDT against other intelligent systems techniques, such as the Adaptive Neuro-Fuzzy Inference System (ANFIS), Evolving Fuzzy Neural Network (EFuNN), Genetic Neural Mathematical Method (GNMM), and Fuzzy ARTMAP. Analyzing these datasets with these intelligent systems techniques has shown that patterns and classes can be found or classified by combining the techniques. OWFDT has also demonstrated its efficiency and effectiveness compared with a conventional fuzzy decision tree and a weighted fuzzy decision tree
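The momentum term mentioned for training the mapped network's weight connections follows the standard heavy-ball update v ← μv − η∇f, w ← w + v; a minimal sketch on a toy quadratic loss (the learning rate and momentum coefficient are illustrative, not the thesis's settings):

```python
import numpy as np

# Heavy-ball (momentum) update: v <- mu*v - lr*grad(w); w <- w + v.
# A simple quadratic loss f(w) = 0.5*||w||^2 (gradient = w) stands in
# for the fuzzy decision tree's actual training loss.
def train_with_momentum(w0, grad_fn, lr=0.1, mu=0.9, steps=200):
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = mu * v - lr * grad_fn(w)   # momentum smooths successive steps
        w = w + v
    return w

w_final = train_with_momentum([5.0, -3.0], lambda w: w)
```

Averaging past gradients into the velocity term damps the step-to-step oscillation that plain gradient descent exhibits on ill-conditioned losses, which is the divergence-avoidance role the thesis assigns to it.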

    Two and three dimensional segmentation of multimodal imagery

    Get PDF
    The role of segmentation in image understanding/analysis, computer vision, pattern recognition, remote sensing, and medical imaging has been significantly augmented in recent years by accelerated scientific advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for meaningful segregation of 2-D/3-D image data across multiple modalities (color, remote-sensing, and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting edges inherent in the data. To this effect, using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels that contain higher gradient densities are included through the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture, and intensity information, along with the aforementioned initial partition map, are used in a multivariate refinement procedure that fuses groups with similar characteristics to yield the final output segmentation. Experimental results obtained in comparison with published/state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery demonstrate the advantages of the proposed method. Furthermore, to achieve improved computational efficiency, we propose an extension of the aforementioned methodology in a multi-resolution framework, demonstrated on color images. 
Finally, this research also encompasses a 3-D extension of the aforementioned algorithm demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes
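A rough sketch of the edge-driven seeding step described above, under simplifying assumptions: a synthetic single-channel image, a scalar gradient magnitude in place of the paper's vector gradient detector, and a stdlib flood fill for labeling:

```python
import numpy as np

# Synthetic single-channel image with two flat regions and one vertical
# edge; real multimodal data would use a vector gradient across bands.
img = np.zeros((32, 32))
img[:, 16:] = 1.0

gy, gx = np.gradient(img)
grad_mag = np.hypot(gx, gy)
seeds = grad_mag < 0.25            # "pixels without edges" to be labeled

# Label connected seed components with a flood fill; high-gradient
# pixels are left unlabeled for the later refinement stage.
labels = np.zeros(img.shape, dtype=int)
n_regions = 0
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        if seeds[i, j] and labels[i, j] == 0:
            n_regions += 1
            stack = [(i, j)]
            while stack:
                a, b = stack.pop()
                if (0 <= a < img.shape[0] and 0 <= b < img.shape[1]
                        and seeds[a, b] and labels[a, b] == 0):
                    labels[a, b] = n_regions
                    stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
```

On this image the low-gradient pixels split into two connected components, one per flat region, mirroring the initial partition map that the later texture-based refinement then merges and completes.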

    Automatic online spike sorting with singular value decomposition and fuzzy C-mean clustering

    Get PDF
    Background: Understanding how neurons contribute to perception, motor functions, and cognition requires the reliable detection of the spiking activity of individual neurons during a number of different experimental conditions. An important problem in computational neuroscience is thus to develop algorithms to automatically detect and sort the spiking activity of individual neurons from extracellular recordings. While many algorithms for spike sorting exist, accurate and fast online sorting remains a challenging issue. Results: Here we present a novel software tool, called FSPS (Fuzzy SPike Sorting), which is designed to optimize: (i) fast and accurate detection, (ii) offline sorting, and (iii) online classification of neuronal spikes with very limited or no human intervention. The method is based on a combination of Singular Value Decomposition for fast and highly accurate pre-processing of spike shapes, unsupervised Fuzzy C-means clustering, high-resolution alignment of extracted spike waveforms, optimal selection of the number of features to retain, automatic identification of the number of clusters, and quantitative quality assessment of the resulting clusters independent of their size. After being trained on a short testing data stream, the method can reliably perform supervised online classification and monitoring of single-neuron activity. The generalized procedure has been implemented in our FSPS spike sorting software (available free for non-commercial academic applications at the address: http://www.spikesorting.com) using LabVIEW (National Instruments, USA). We evaluated the performance of our algorithm both on benchmark simulated datasets with different levels of background noise and on real extracellular recordings from the premotor cortex of Macaque monkeys. The results of these tests showed excellent accuracy in discriminating low-amplitude and overlapping spikes under strong background noise. 
The performance of our method is competitive with other robust spike sorting algorithms. Conclusions: This new software provides neuroscience laboratories with a new tool for fast and robust online classification of single-neuron activity. This feature could become crucial in situations where online spike detection from multiple electrodes is paramount, such as in human clinical recordings or in brain-computer interfaces
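A minimal sketch of the SVD pre-processing step: aligned spike waveforms are projected onto the leading right-singular directions to obtain a compact feature vector per spike for the subsequent fuzzy c-means clustering. The two synthetic waveform shapes below are stand-ins for real recordings:

```python
import numpy as np

# Two synthetic spike shapes (narrow positive, broad negative) plus
# noise stand in for aligned extracellular waveforms (48 samples each).
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 48)
shape_a = np.exp(-((t - 0.3) / 0.05) ** 2)
shape_b = -np.exp(-((t - 0.5) / 0.10) ** 2)
waves = np.vstack([shape_a + rng.normal(0, 0.05, 48) for _ in range(50)] +
                  [shape_b + rng.normal(0, 0.05, 48) for _ in range(50)])

waves = waves - waves.mean(axis=0)       # center before decomposition
U, S, Vt = np.linalg.svd(waves, full_matrices=False)

k = 2                                    # number of features to retain
features = waves @ Vt[:k].T              # k-dim feature vector per spike
explained = float((S[:k] ** 2).sum() / (S ** 2).sum())
```

The squared singular values give the variance captured per direction, which is one natural basis for the "optimal selection of the number of features to retain" mentioned in the abstract.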

    Fuzzy Logic

    Get PDF
    The capability of Fuzzy Logic in the development of emerging technologies is introduced in this book. The book consists of sixteen chapters showing various applications in the field of Bioinformatics, Health, Security, Communications, Transportations, Financial Management, Energy and Environment Systems. This book is a major reference source for all those concerned with applied intelligent systems. The intended readers are researchers, engineers, medical practitioners, and graduate students interested in fuzzy logic systems

    Investigating The Relationship Between Adverse Events And Infrastructure Development In An Active War Theater Using Soft Computing Techniques

    Get PDF
    The military has recently recognized the importance of taking sociocultural factors into consideration. Human Social Culture Behavior (HSCB) modeling has therefore been receiving much attention in current and future operational requirements as a means of understanding the effects of social and cultural factors on human behavior. Different kinds of modeling approaches are applied to the data used in this field, and so far none has been widely accepted. HSCB modeling needs the capability to represent complex, ill-defined, and imprecise concepts, and soft computing can deal with such concepts. There is currently no study on the use of any computational methodology for representing the relationship between adverse events and infrastructure development investments in an active war theater. This study investigates the relationship between adverse events and infrastructure development projects in an active war theater using soft computing techniques, including fuzzy inference systems (FIS), artificial neural networks (ANNs), and adaptive neuro-fuzzy inference systems (ANFIS), which benefit directly from their accuracy in prediction applications. Fourteen developmental and economic improvement project types were selected based on allocated budget values; the number of projects in different time periods, urban and rural population density, and the total number of adverse events in the previous month were selected as independent variables. A total of four outputs reflecting the adverse events in terms of the number of people killed, wounded, or hijacked, and the total number of adverse events, were estimated. For each model, the data was grouped for training and testing as follows: the years 2004 to 2009 for training and the year 2010 for testing. Ninety-six different models were developed and investigated for Afghanistan, and the country was divided into seven regions for analysis purposes. 
The performance of each model was investigated and compared with all other models using the calculated mean absolute error (MAE) values and the prediction accuracy within a ±1 error range (the difference between actual and predicted values). Furthermore, sensitivity analysis was performed to determine the effects of input values on the dependent variables and to rank the top ten input parameters in order of importance. According to the results obtained, it was concluded that ANNs, FIS, and ANFIS are useful modeling techniques for predicting the number of adverse events based on historical development or economic project data. When model accuracy was calculated based on the MAE for each of the models, the ANN generally had better predictive accuracy than the FIS and ANFIS models, as demonstrated by the experimental results. The percentage of predictions accurate to within the ±1 error range was around 90%. The sensitivity analysis results show that the importance of economic development projects varies based on the region, population density, and occurrence of adverse events in Afghanistan. For the purpose of allocating resources and developing regions, the results can be summarized as examining the relationship between adverse events and infrastructure development in an active war theater; the emphasis was on predicting the occurrence of events and assessing the potential impact of regional infrastructure development efforts on reducing the number of such events
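The two evaluation measures used to compare the models, MAE and the share of predictions within the ±1 error range, can be sketched as follows (the counts below are illustrative, not the study's data):

```python
import numpy as np

# Illustrative monthly counts of adverse events (hypothetical values).
actual    = np.array([3, 0, 5, 2, 7, 1, 0, 8])
predicted = np.array([2, 0, 6, 2, 9, 1, 1, 8])

abs_err = np.abs(actual - predicted)
mae = float(abs_err.mean())                    # mean absolute error
within_one = float((abs_err <= 1).mean())      # share inside the +/-1 range
```

MAE penalizes large misses proportionally, while the ±1 accuracy answers the operational question of how often a prediction is close enough to the observed count to act on.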