
    Development of soft computing and applications in agricultural and biological engineering

    Soft computing is a set of “inexact” computing techniques able to model and analyze very complex problems for which conventional methods have not produced cost-effective, analytical, or complete solutions. Soft computing has been extensively studied and applied over the last three decades in scientific research and engineering computing. In agricultural and biological engineering, researchers and engineers have applied fuzzy logic, artificial neural networks, genetic algorithms, decision trees, and support vector machines to study soil and water regimes related to crop growth, analyze the operation of food processing, and support decision-making in precision farming. This paper reviews the development of soft computing techniques and presents their applications in agricultural and biological engineering, especially in the soil and water context for crop management and decision support in precision agriculture. The future development and application of soft computing in agricultural and biological engineering is also discussed.

    A novel approach for estimation of above-ground biomass of sugar beet based on wavelength selection and optimized support vector machine

    Timely diagnosis of sugar beet above-ground biomass (AGB) is critical for yield prediction and optimal precision crop management. This study established an optimal quantitative prediction model of sugar beet AGB using hyperspectral data. Three experimental campaigns in 2014, 2015 and 2018 collected ground-based hyperspectral data at three different growth stages, across different sites, cultivars and nitrogen (N) application rates. A competitive adaptive reweighted sampling (CARS) algorithm was applied to select the wavelengths most sensitive to AGB. A novel modified differential evolution grey wolf optimization algorithm (MDE-GWO) was then developed, introducing the differential evolution algorithm (DE) and a dynamic non-linear convergence factor into grey wolf optimization (GWO), to optimize the parameters C and gamma of a support vector machine (SVM) model for the prediction of AGB. The prediction performance of SVM models under the GWO, DE-GWO and MDE-GWO optimization methods was examined for both the CARS-selected wavelengths and the whole spectral data. Results showed that CARS reduced the number of wavelengths by 97.4% for the rapid growth stage of leaf cluster, 97.2% for the sugar growth stage and 97.4% for the sugar accumulation stage. Models built after CARS wavelength selection were more accurate than models developed using the entire spectral data. The best prediction accuracy was achieved after MDE-GWO optimization of the SVM parameters, independent of growth stage, year, site and cultivar. The best coefficient of determination (R²), root mean square error (RMSE) and residual prediction deviation (RPD) ranged, respectively, from 0.74 to 0.80, 46.17 to 65.68 g/m² and 1.42 to 1.97 for the rapid growth stage of leaf cluster; 0.78 to 0.80, 30.16 to 37.03 g/m² and 1.69 to 2.03 for the sugar growth stage; and 0.69 to 0.74, 40.17 to 104.08 g/m² and 1.61 to 1.95 for the sugar accumulation stage. It can be concluded that the proposed methodology can be implemented for the prediction of sugar beet AGB using proximal hyperspectral sensors under a wide range of environmental conditions.
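The SVM-tuning pipeline this abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' MDE-GWO: SciPy's standard differential evolution stands in for the grey wolf optimizer, and the data are synthetic; only the pipeline shape (search C and gamma by cross-validation, then report RMSE and RPD) follows the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic "spectral" predictors and a smooth target standing in for AGB.
X = rng.uniform(0, 1, (120, 6))
y = 50 * np.sin(3 * X[:, 0]) + 30 * X[:, 1] + rng.normal(0, 2, 120)

def neg_cv_score(params):
    C, gamma = params
    model = SVR(kernel="rbf", C=C, gamma=gamma)
    # Minimize negative 5-fold cross-validated R^2.
    return -cross_val_score(model, X, y, cv=5, scoring="r2").mean()

# Search bounds for C and gamma; DE plays the role of the (MDE-)GWO optimizer here.
result = differential_evolution(neg_cv_score, bounds=[(0.1, 100), (1e-3, 1.0)],
                                seed=0, maxiter=10, popsize=8, tol=1e-3)
C_opt, gamma_opt = result.x

# Refit with the optimized parameters and compute the paper's accuracy metrics.
model = SVR(kernel="rbf", C=C_opt, gamma=gamma_opt).fit(X, y)
pred = model.predict(X)
rmse = np.sqrt(np.mean((y - pred) ** 2))
rpd = np.std(y, ddof=1) / rmse  # residual prediction deviation
```

In practice the CARS-selected wavelengths would replace the synthetic columns of `X`, and the metrics would be computed on held-out data rather than the training set.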

    Liana canopy cover mapped throughout a tropical forest with high-fidelity imaging spectroscopy

    Increasing size and abundance of lianas relative to trees are pervasive changes in Neotropical forests that may lead to reduced forest carbon stocks. Yet the liana growth form is chronically understudied in large-scale tropical forest censuses, resulting in few data on the scale, cause, and impact of increasing lianas. Satellite and airborne remote sensing provide potential tools to map and monitor lianas at much larger spatial scales and much faster revisit times than are possible with plot-based forest censuses. We combined high-resolution airborne imaging spectroscopy with a ground-based tree canopy census to investigate whether tree canopies supporting lianas could be discriminated from tree canopies with no liana coverage. Using support vector machine algorithms, we achieved accuracies of nearly 90% in discriminating the presence–absence of lianas, and low error (15.7% RMSE) when predicting liana percent canopy cover. When applied to the full image of the study site, our model had a 4.1% false-positive error rate as validated against an independent plot-level dataset of liana canopy cover. Using the derived liana cover classification map, we show that 6.1%–10.2% of the 1823 ha study site has high-to-severe (50–100%) liana canopy cover. Given that levels of liana infestation are increasing in Neotropical forests and can result in high tree mortality, the extent of high-to-severe liana canopy cover across the landscape may have broad implications for ecosystem function and forest carbon storage. The ability to accurately map landscape-scale liana infestation is crucial to quantifying lianas' effects on forest function and uncovering the mechanisms underlying their increase.
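The presence–absence discrimination step described above can be sketched as a binary SVM on per-canopy spectral features. Everything here is an assumption for illustration: the three-band "reflectance" values, class means and scales, and SVM settings stand in for the study's imaging-spectroscopy data and tuned model.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 200
# Synthetic mean-canopy reflectance (3 bands): liana-free vs liana-covered crowns.
no_liana = rng.normal(loc=[0.45, 0.30, 0.05], scale=0.05, size=(n, 3))
liana = rng.normal(loc=[0.55, 0.35, 0.08], scale=0.05, size=(n, 3))
X = np.vstack([no_liana, liana])
y = np.array([0] * n + [1] * n)  # 0 = no liana, 1 = liana present

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=1, stratify=y)
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

With real hyperspectral data each sample would be a crown-level spectrum (hundreds of bands) labeled by the ground census, and accuracy would be reported on independent validation plots, as in the study.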

    Bayesian gravitation-based classification for hyperspectral images

    Integration of spectral and spatial information is extremely important for the classification of high-resolution hyperspectral images (HSIs). Gravitation describes the interaction among celestial bodies, and it can be applied to measure similarity between data for image classification. However, gravitation is hard to combine with spatial information and has rarely been applied in HSI classification. This paper proposes a Bayesian Gravitation based Classification (BGC) method to integrate the spectral and spatial information of local neighbors and training samples. In the BGC method, each testing pixel is first assumed to be a massive object with unit volume and a particular density, where the density is taken as the data mass. Specifically, the data mass is formulated, based on the Bayesian theorem, as an exponential function of the spectral distribution of the pixel's neighbors and the spatial prior distribution of its surrounding training samples. A joint data gravitation model is then developed as the classification measure, in which the data mass weighs the contribution of different neighbors in a local region. Four benchmark HSI datasets, i.e. Indian Pines, Pavia University, Salinas, and Grss_dfc_2014, are used to verify the BGC method. The experimental results are compared with those of several well-known HSI classification methods, including support vector machines, sparse representation, and eight other state-of-the-art HSI classifiers. BGC shows apparent superiority in the classification of high-resolution HSIs and also flexibility for HSIs with limited samples.
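The core idea of data gravitation can be sketched with a much simpler rule than the paper's Bayesian formulation: give every training sample unit mass, let the "force" on a test pixel fall off with squared spectral distance, and assign the class exerting the largest total force. This omits BGC's density/mass model and spatial prior entirely; it is only meant to show the gravitational similarity measure.

```python
import numpy as np

def gravitation_classify(X_train, y_train, x_test, eps=1e-6):
    """Assign x_test to the class exerting the largest total data gravitation.
    Each training sample has unit mass; force ~ 1 / distance^2 (eps avoids
    division by zero when a training sample coincides with the test pixel)."""
    forces = {}
    for c in np.unique(y_train):
        d2 = np.sum((X_train[y_train == c] - x_test) ** 2, axis=1)
        forces[c] = np.sum(1.0 / (d2 + eps))
    return max(forces, key=forces.get)

rng = np.random.default_rng(2)
# Two synthetic "spectral" classes in a 2-D feature space.
a = rng.normal([0.2, 0.6], 0.05, (50, 2))
b = rng.normal([0.7, 0.3], 0.05, (50, 2))
X = np.vstack([a, b])
y = np.array([0] * 50 + [1] * 50)

label = gravitation_classify(X, y, np.array([0.22, 0.58]))  # near class 0
```

The full BGC method replaces the unit masses with a Bayesian data-mass term built from each pixel's spectral neighbors and the spatial distribution of nearby training samples.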

    Coastal wetland mapping with Sentinel-2 MSI imagery based on gravitational optimized multilayer perceptron and morphological attribute profiles

    Coastal wetland mapping plays an essential role in monitoring climate change, the hydrological cycle, and water resources. In this study, a novel classification framework based on a gravitational optimized multilayer perceptron classifier and extended multi-attribute profiles (EMAPs) is presented for coastal wetland mapping using Sentinel-2 multispectral instrument (MSI) imagery. In the proposed method, morphological attribute profiles (APs) are first extracted from each band of the Sentinel-2 imagery using four attribute filters chosen for the characteristics of wetlands. These APs form a set of EMAPs that comprehensively represent irregular wetland objects at multiple scales and levels. The EMAPs and the original spectral features are then classified with a new multilayer perceptron (MLP) classifier whose parameters are optimized by a stability-constrained adaptive alpha for the gravitational search algorithm. The performance of the proposed method was investigated using Sentinel-2 MSI images of two coastal wetlands, i.e., Jiaozhou Bay and the Yellow River Delta in Shandong province of eastern China. Comparisons with four other classifiers through visual inspection and quantitative evaluation verified the superiority of the proposed method. Furthermore, the effectiveness of the different APs in the EMAPs was also validated. By combining the developed EMAP features and the novel MLP classifier, complicated wetland types with high within-class variability and low between-class disparity were effectively discriminated. The superior performance of the proposed framework makes it well suited to mapping complicated coastal wetlands using Sentinel-2 data and similar optical imagery.
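The profile-building step can be illustrated with ordinary morphological openings and closings at increasing structuring-element sizes. This is a simplified stand-in: true attribute profiles use connected attribute filters (area, diagonal, standard deviation, etc.) rather than fixed structuring elements, and the band here is random rather than a Sentinel-2 band.

```python
import numpy as np
from scipy import ndimage

def simple_profile(band, sizes=(3, 5, 7)):
    """Stack the original band with its morphological openings and closings
    at several structuring-element sizes -- a simplified stand-in for an
    attribute profile."""
    layers = [band]
    for s in sizes:
        layers.append(ndimage.grey_opening(band, size=s))  # removes bright detail
        layers.append(ndimage.grey_closing(band, size=s))  # removes dark detail
    return np.stack(layers)

rng = np.random.default_rng(3)
band = rng.random((32, 32))               # stand-in for one Sentinel-2 band
profile = simple_profile(band)            # (1 + 2 * 3) layers = 7 layers
# Per-pixel feature vectors: one row per pixel, one column per profile layer.
features = profile.reshape(profile.shape[0], -1).T
```

In the paper's framework, such per-band profiles are concatenated across bands and attribute types to form the EMAPs that feed the MLP classifier.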

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of things (IoT), machine-to-machine networks (M2M), and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.
    Comment: 46 pages, 22 figures

    Advances in Mathematical-Statistical Methods for Radar Calibration in Rainfall Estimation: A Review

    Ground-based radar is one of the most important systems for precipitation measurement at high spatial and temporal resolutions. Radar data are recorded digitally and are readily ingested into statistical analyses. These measurements are subjected to specific calibration to eliminate systematic errors and minimize random errors. Because statistical methods are grounded in mathematics, they offer precise results and straightforward interpretation with relatively little data detail; although their mathematical structure can make them challenging to interpret, the accuracy of the conclusions and the interpretation of the output are appropriate. This article reviews advanced methods for the calibration of ground-based radar for forecasting meteorological events, covering two aspects: statistical techniques and data mining. Statistical techniques refer to empirical analyses such as regression, while data mining includes Artificial Neural Networks (ANN), Kriging, Nearest Neighbour (NN), Decision Trees (DT) and fuzzy logic. The results show that Kriging is most applicable for interpolation, regression methods are simple to use, and data mining based on artificial intelligence is very precise. Thus, this review explores the characteristics of the statistical parameters in the field of radar applications and shows which parameters give the best results for undefined cases. DOI: 10.17762/ijritcc2321-8169.15012
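As a concrete instance of the regression-based calibration the review describes, a radar reflectivity–rainfall (Z–R) power law Z = a·R^b can be fitted by linear regression in log space. The data below are synthetic, and the "true" coefficients a = 200, b = 1.6 (a Marshall-Palmer-style choice) and the noise level are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic gauge rainfall R (mm/h) and radar reflectivity Z generated from
# an assumed power law Z = a * R**b (a=200, b=1.6) with multiplicative noise.
R = rng.uniform(0.5, 30, 100)
Z = 200 * R ** 1.6 * np.exp(rng.normal(0, 0.1, 100))

# Calibrate a and b by ordinary least squares: log Z = log a + b * log R.
b_hat, log_a_hat = np.polyfit(np.log(R), np.log(Z), 1)
a_hat = np.exp(log_a_hat)

# Invert the fitted relation to estimate rainfall from reflectivity alone.
R_est = (Z / a_hat) ** (1.0 / b_hat)
```

The data-mining alternatives the review surveys (ANN, Kriging, nearest neighbour, decision trees, fuzzy logic) would replace this fixed functional form with a learned or interpolated radar-to-gauge mapping.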