
    A Survey on Hybrid Techniques Using SVM

    Support Vector Machines (SVMs) with linear or nonlinear kernels have become one of the most promising learning algorithms for both classification and regression. Multilayer perceptron (MLP), radial basis function (RBF), and polynomial kernels also work efficiently with SVMs. The SVM is derived from statistical learning theory and is a very powerful statistical tool. Its basic principle is structural risk minimization, which is closely related to regularization theory. SVMs are a family of supervised learning methods used for classification or regression. This paper discusses the importance of Support Vector Machines in various areas and examines the efficiency of SVMs in combination with other classification techniques
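The kind of kernel SVM classifier the survey discusses can be sketched in a few lines with scikit-learn. This is a minimal illustration on a synthetic two-class dataset, not any experiment from the survey; the dataset, kernel choice, and hyper-parameters are illustrative assumptions.

```python
# Minimal kernel-SVM classification sketch (scikit-learn).
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic nonlinearly separable data stands in for a real dataset.
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The RBF kernel lets the SVM fit the curved class boundary that a
# linear kernel would miss; features are standardized first.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"test accuracy: {accuracy:.2f}")
```

Swapping `kernel="rbf"` for `"linear"` or `"poly"` changes the hypothesis class, which is the main tuning lever when combining SVMs with other techniques.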

    Subpixel Target Enhancement in Hyperspectral Images

    Hyperspectral images, due to their high spectral resolution, are increasingly being used for various remote sensing applications, including information extraction at the subpixel level. Typically, whenever an object is spectrally resolved but not spatially resolved, mixed pixels result in the images. Numerous man-made and/or natural disparate targets may thus occur inside such mixed pixels, giving rise to the subpixel target detection problem. Various spectral unmixing models, such as linear mixture modelling (LMM), are in vogue for recovering the components of a mixed pixel. Spectral unmixing outputs both the endmember spectra and their corresponding abundance fractions inside the pixel. It does not, however, provide the spatial distribution of these abundance fractions within a pixel, which limits the applicability of hyperspectral data for subpixel target detection. In this paper, a new inverse-Euclidean-distance-based super-resolution mapping method is presented, in which subpixel target detection is performed by adjusting the spatial distribution of abundance fractions within a pixel of a hyperspectral image. Results obtained at different resolutions indicate that super-resolution mapping may effectively be utilized to enhance target detection at the subpixel level.
    Defence Science Journal, 2013, 63(1), pp. 63-68. DOI: http://dx.doi.org/10.14429/dsj.63.376
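The linear mixture model step the abstract describes can be sketched as a constrained least-squares problem: given known endmember spectra, recover nonnegative abundance fractions that sum to one. The endmember matrix and fractions below are synthetic placeholders, not data from the paper; the sum-to-one constraint is enforced by a heavily weighted extra row, a common trick rather than the paper's specific solver.

```python
# LMM unmixing sketch: recover abundance fractions of known
# endmembers from a mixed-pixel spectrum.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
E = rng.random((8, 3))            # 8 spectral bands x 3 endmembers (synthetic)
f_true = np.array([0.5, 0.3, 0.2])
pixel = E @ f_true                # noise-free mixed-pixel spectrum

# Append a heavily weighted row of ones so the nonnegative
# least-squares solution also (approximately) sums to one.
w = 1e3
A = np.vstack([E, w * np.ones(3)])
b = np.append(pixel, w)
f_est, _ = nnls(A, b)
print(np.round(f_est, 3))
```

The super-resolution mapping contribution of the paper then distributes these fractions spatially within the pixel, which plain unmixing (as above) cannot do.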

    Remotely sensed data capacities to assess soil degradation

    This research takes advantage of two field-based methodologies to assess the capacity of remote sensing data for modelling soil degradation. According to our findings, the type of preprocessing analysis had no significant effect on model accuracy. Conversely, the type of indicators and indices in the field-based model had a large impact on accuracy. In addition, using remotely sensed indices such as the iron oxide index and the ferrous minerals index can help improve the modelling accuracy of some field indices of soil condition assessment. The results show that model capacity can be significantly improved by using time-series remotely sensed data rather than single-date data. Moreover, if artificial neural networks are used on single-date remotely sensed data instead of multivariate linear regression, model accuracy increases dramatically, because the network allows the model to take a nonlinear form. However, if time series of remotely sensed data are used, the accuracy of artificial neural network modelling differs little from that of the regression model. Contrary to what is commonly assumed, our results show that increasing the number of inputs to an artificial neural network model can in practice reduce the model's actual accuracy
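The abstract's claim that a neural network outperforms multivariate linear regression when the relation is nonlinear can be illustrated directly. The target below is a pure interaction term, which is invisible to a linear model but learnable by a small MLP; the data are synthetic stand-ins, not the study's soil indices.

```python
# Sketch: linear regression vs. a small neural network on a
# nonlinear (pure interaction) target.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
y = X[:, 0] * X[:, 1]   # interaction term: no linear signal in either input

lin = LinearRegression().fit(X, y)
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, y)
print(f"linear R^2: {lin.score(X, y):.2f}, MLP R^2: {mlp.score(X, y):.2f}")
```

With time-series inputs, the extra predictors already capture much of the nonlinearity, which is consistent with the paper's finding that the two models then perform similarly.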

    Development of soft computing and applications in agricultural and biological engineering

    Soft computing is a set of "inexact" computing techniques able to model and analyze very complex problems for which more conventional methods have not produced cost-effective, analytical, or complete solutions. Soft computing has been extensively studied and applied over the last three decades in scientific research and engineering computing. In agricultural and biological engineering, researchers and engineers have developed methods of fuzzy logic, artificial neural networks, genetic algorithms, decision trees, and support vector machines to study soil and water regimes related to crop growth, analyze the operation of food processing, and support decision-making in precision farming. This paper reviews the development of soft computing techniques. Building on these concepts and methods, applications of soft computing in agricultural and biological engineering are presented, especially in the soil and water context for crop management and decision support in precision agriculture. The future development and application of soft computing in agricultural and biological engineering is discussed

    Computational intelligence techniques for maritime and coastal remote sensing

    The aim of this thesis is to investigate the potential of computational intelligence techniques for some applications in the analysis of remotely sensed multi-spectral images. In particular, two problems are addressed. The first is the classification of oil spills at sea; the second is the estimation of sea bottom depth. In both cases, the exploitation of optical satellite data makes it possible to develop operational tools for easily accessing and monitoring large marine areas in an efficient and cost-effective way. Regarding the oil spill problem, public opinion today is certainly aware of the huge impact that oil tanker accidents and oil rig leaks have on the marine and coastal environment. It is less well known, however, that most of the oil released into our seas cannot be ascribed to accidental spills, but rather to illegal ballast water discharge and pollutant dumping at sea during routine oil tanker operations. For this reason, any effort to improve oil spill detection systems is of great importance. So far, Synthetic Aperture Radar (SAR) data have been preferred to multi-spectral data for oil spill detection applications because of their all-weather, all-day capabilities, whereas optical images require clear-sky, daylight conditions. On the other hand, many features make an optical approach desirable, such as lower cost and more frequent revisits. Moreover, unlike SAR data, optical data are not affected by sea state and can reduce the false alarm rate, since they do not suffer from the main false alarm source in SAR data, namely the presence of calm sea regions. In this thesis, the oil spill classification problem is tackled by applying different machine learning techniques to a significant dataset of regions of interest collected in multi-spectral satellite images acquired by the MODIS sensor.
These regions are then classified into one of two possible classes, oil spills and look-alikes, where look-alikes include any phenomena other than oil spills (e.g. algal blooms...). Results show that efficient and reliable oil spill classification systems based on optical data are feasible and could offer valuable support to the existing satellite-based monitoring systems. The estimation of sea bottom depth from high-resolution multi-spectral satellite images is the second major topic of this thesis. The motivation for addressing this problem arises from the need to limit expensive and time-consuming measurement campaigns. Since satellite data allow large areas to be analysed quickly, one solution is to employ intelligent techniques which, by exploiting a small set of depth measurements, can extend the bathymetry estimate to a much larger area covered by a multi-spectral satellite image. Once the training phase has been completed, such techniques achieve very accurate results and, thanks to their generalization capabilities, provide reliable bathymetric maps covering wide areas. A crucial element is the training dataset, which is built by coupling a number of depth measurements, located in a limited part of the image, with the corresponding radiances acquired by the satellite sensor. A successful estimate essentially depends on how well the training dataset resembles the rest of the scene. On the other hand, the result is not affected by model uncertainties and systematic errors, as results from model-based analytic approaches are. In this thesis, a neuro-fuzzy technique is applied to two case studies: two high-resolution multi-spectral images of the same area, acquired in different years and under different meteorological conditions.
Different situations of in-situ depth availability are considered in the study, and the effect of limited in-situ data availability on performance is evaluated. The effects of both meteorological conditions and training set size reduction on the overall performance are also taken into account. Results outperform previous studies on bathymetry estimation techniques and provide indications of the optimal paths that can be adopted when planning data collection at sea
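The two-class (oil spill vs. look-alike) classification step can be sketched with a generic supervised learner and cross-validation. The thesis compares several machine learning techniques on MODIS-derived regions of interest; here a random forest on synthetic region features is used purely as an illustrative stand-in, and every feature and parameter below is an assumption.

```python
# Hedged sketch of two-class oil-spill vs. look-alike classification
# on synthetic region features (stand-ins for MODIS-derived ones).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Label convention assumed here: 0 = look-alike, 1 = oil spill.
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")
```

In an operational setting, the false alarm rate on the look-alike class would matter at least as much as overall accuracy, which is why the thesis emphasises that optical data suppress the calm-sea false alarms typical of SAR.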

    Groundwater prediction using machine-learning tools

    Predicting groundwater availability is important to water sustainability and drought mitigation. Machine-learning tools have the potential to improve groundwater prediction, thus enabling resource planners to: (1) anticipate water quality in unsampled areas or depth zones; (2) design targeted monitoring programs; (3) inform groundwater protection strategies; and (4) evaluate the sustainability of groundwater sources of drinking water. This paper proposes a machine-learning approach to groundwater prediction with the following characteristics: (i) the use of a regression-based approach to predict full groundwater images based on sequences of monthly groundwater maps; (ii) strategic automatic feature selection (both local and global features) using extreme gradient boosting; and (iii) the use of a multiplicity of machine-learning techniques (extreme gradient boosting, multivariate linear regression, random forests, multilayer perceptron and support vector regression). Of these techniques, support vector regression consistently performed best in terms of minimizing root mean square error and mean absolute error. Furthermore, including a global feature obtained from a Gaussian Mixture Model produced models with lower error than the best obtainable with local geographical features alone
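The core regression setup, predicting the next monthly value from lagged observations and scoring with the paper's two error metrics (RMSE and MAE), can be sketched as follows. The seasonal groundwater-level series below is synthetic, and the three-lag feature design and SVR settings are illustrative assumptions, not the paper's configuration.

```python
# Support vector regression sketch on a synthetic monthly
# groundwater-level series, scored with RMSE and MAE.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.arange(120, dtype=float)                           # 120 monthly steps
level = 10 + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.2, t.size)

# Lag features: predict this month's level from the previous three.
X = np.column_stack([level[i:i - 3] for i in range(3)])
y = level[3:]
X_tr, X_te, y_tr, y_te = X[:100], X[100:], y[:100], y[100:]

model = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
mae = mean_absolute_error(y_te, pred)
print(f"RMSE: {rmse:.3f}  MAE: {mae:.3f}")
```

The paper's pipeline operates on full groundwater maps rather than a single series, with gradient boosting used for feature selection; this sketch only shows the per-location regression idea.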

    A review of machine learning applications in wildfire science and management

    Artificial intelligence has been applied in wildfire science and management since the 1990s, with early applications including neural networks and expert systems. Since then, the field has progressed rapidly, in step with the wide adoption of machine learning (ML) in the environmental sciences. Here, we present a scoping review of ML in wildfire science and management. Our objective is to improve awareness of ML among wildfire scientists and managers, as well as to illustrate the challenging range of problems in wildfire science available to data scientists. We first present an overview of popular ML approaches used in wildfire science to date, and then review their use within six problem domains: 1) fuels characterization, fire detection, and mapping; 2) fire weather and climate change; 3) fire occurrence, susceptibility, and risk; 4) fire behavior prediction; 5) fire effects; and 6) fire management. We also discuss the advantages and limitations of various ML approaches and identify opportunities for future advances in wildfire science and management within a data science context. We identified 298 relevant publications, in which the most frequently used ML methods included random forests, MaxEnt, artificial neural networks, decision trees, support vector machines, and genetic algorithms. Opportunities exist to apply more current ML methods (e.g., deep learning and agent-based learning) in wildfire science. However, despite the ability of ML models to learn on their own, expertise in wildfire science is necessary to ensure realistic modelling of fire processes across multiple scales, while the complexity of some ML methods requires sophisticated knowledge for their application. Finally, we stress that the wildfire research and management community should play an active role in providing relevant, high-quality data for use by practitioners of ML methods.
    Comment: 83 pages, 4 figures, 3 tables