
    Comparison between random forests, artificial neural networks and gradient boosted machines methods of on-line vis-NIR spectroscopy measurements of soil total nitrogen and total carbon

    Accurate and detailed spatial soil information about within-field variability is essential for variable-rate applications of farm resources. Soil total nitrogen (TN) and total carbon (TC) are important fertility parameters that can be measured with on-line (mobile) visible and near infrared (vis-NIR) spectroscopy. This study compares the performance of local farm-scale calibrations with calibrations based on spiking selected local samples from both fields into a European dataset for TN and TC estimation, using three modelling techniques, namely gradient boosted machines (GBM), artificial neural networks (ANN) and random forests (RF). The on-line measurements were carried out with a mobile, fibre-type vis-NIR spectrophotometer (305-2200 nm) (AgroSpec from tec5, Germany), and soil spectra were recorded in diffuse reflectance mode from two fields in the UK. After spectra pre-processing, the entire datasets were divided into calibration (75%) and prediction (25%) sets, and calibration models for TN and TC were developed using GBM, ANN and RF with leave-one-out cross-validation. Cross-validation results showed that spiking local samples collected from a field into the European dataset, combined with RF, resulted in the highest coefficients of determination (R²) of 0.97 and 0.98, the lowest root mean square errors (RMSE) of 0.01% and 0.10%, and the highest residual prediction deviations (RPD) of 5.58 and 7.54, for TN and TC, respectively. Results for laboratory and on-line predictions generally followed the same trend as the cross-validation in one field, where the spiked European dataset-based RF calibration models outperformed the corresponding GBM and ANN models; in the second field, ANN replaced RF as the best-performing technique. However, the local field calibrations provided lower R² and RPD in most cases. Therefore, from a cost-effectiveness point of view, it is recommended to adopt the spiked European dataset-based RF/ANN calibration models for prediction of TN and TC under on-line measurement conditions.
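
    As an illustration only, the sketch below mimics the model comparison described above with scikit-learn: RF, GBM and ANN regressors evaluated by leave-one-out cross-validation and summarised by R², RMSE and RPD. The spectra and TN values are synthetic placeholders, not the study's data.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 200))                    # placeholder pre-processed spectra
    y = rng.normal(loc=0.15, scale=0.03, size=60)     # placeholder TN (%)

    models = {
        "RF": RandomForestRegressor(n_estimators=500, random_state=0),
        "GBM": GradientBoostingRegressor(random_state=0),
        "ANN": make_pipeline(StandardScaler(),
                             MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)),
    }

    for name, model in models.items():
        pred = cross_val_predict(model, X, y, cv=LeaveOneOut())   # leave-one-out cross-validation
        rmse = np.sqrt(mean_squared_error(y, pred))
        rpd = y.std() / rmse                                      # residual prediction deviation
        print(f"{name}: R2={r2_score(y, pred):.2f}  RMSE={rmse:.3f}  RPD={rpd:.2f}")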

    Detection algorithms for the Nano nose

    The Nano nose is an instrument with an array of nano-sized optical sensors that produces digital patterns when exposed to radiation passing through a gaseous mixture. The digital patterns correspond to the amount of photocurrent registered on each of the sensors. The problem is to identify the gas constituents in the mixture and estimate their concentrations. This thesis outlines an algorithm that combines a mixed-gas detector with a gas concentration predictor. The mixed-gas detector is an array of neural networks, one per gas. Two techniques are outlined for implementing the gas concentration predictor: partial least squares (PLS) regression and the Kalman filter. The output of the developed algorithm not only indicates the detection of the individual constituents in the gaseous mixture but also predicts their concentrations. The algorithm is entirely re-configurable, providing a high degree of flexibility, and it successfully detected the constituents of a three-gas mixture and predicted their concentrations.
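
    A minimal sketch of the two-stage idea, assuming synthetic sensor data: one small neural network per gas acts as a presence detector, and PLS regression (one of the two predictor options described) estimates concentrations. scikit-learn stands in for whatever implementation the thesis used.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    n_samples, n_sensors, n_gases = 200, 16, 3
    present = rng.integers(0, 2, size=(n_samples, n_gases))            # which gases are in each mixture
    conc = present * rng.uniform(0.1, 1.0, size=(n_samples, n_gases))  # their concentrations
    mixing = rng.uniform(0.2, 1.0, size=(n_gases, n_sensors))
    X = conc @ mixing + rng.normal(scale=0.02, size=(n_samples, n_sensors))  # photocurrent patterns

    # one small neural network per gas acts as a presence detector
    detectors = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, present[:, g])
                 for g in range(n_gases)]
    # PLS regression maps sensor patterns to concentrations
    pls = PLSRegression(n_components=n_gases).fit(X, conc)

    x_new = X[:1]
    flags = [int(d.predict(x_new)[0]) for d in detectors]
    print("detected gases:", flags, "estimated concentrations:", pls.predict(x_new).round(2))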

    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at establishing this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for maturing the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
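
    As a rough illustration of the simulator-independent PyNN description the workflow builds on, the sketch below defines a small network once; in principle the same script could target the neuromorphic hardware backend, whose module name is not given in the abstract, so the NEST software backend is assumed here.

    # A network described once in PyNN can be run on a software simulator or, in the
    # workflow above, be translated automatically into a hardware configuration.
    import pyNN.nest as sim   # assumption: NEST software backend for illustration

    sim.setup(timestep=0.1)
    stim = sim.Population(20, sim.SpikeSourcePoisson(rate=10.0), label="stimulus")
    neurons = sim.Population(100, sim.IF_cond_exp(tau_m=20.0), label="excitatory")
    sim.Projection(stim, neurons, sim.AllToAllConnector(),
                   synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0))
    neurons.record("spikes")

    sim.run(1000.0)                      # simulate 1 s of biological time
    spikes = neurons.get_data()          # simulator-independent result container
    sim.end()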

    Primary proton spectrum between 200 TeV and 1000 TeV observed with the Tibet burst detector and air shower array

    Since 1996, a hybrid experiment consisting of the emulsion chamber and burst detector array and the Tibet-II air-shower array has been operated at Yangbajing (4300 m above sea level, 606 g/cm^2) in Tibet. This experiment can detect air-shower cores, called burst events, accompanied by air showers in excess of about 100 TeV. We observed about 4300 burst events accompanied by air showers during 690 days of operation and selected 820 proton-induced events with primary energies above 200 TeV using a neural network method. Using this data set, we obtained the energy spectrum of primary protons in the energy range from 200 TeV to 1000 TeV. The differential energy spectrum in this region can be fitted by a power law with an index of -2.97 ± 0.06, which is steeper than that obtained by direct measurements at lower energies. We also obtained the energy spectrum of helium nuclei at particle energies around 1000 TeV.
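
    A minimal sketch of the spectral-index fit, assuming made-up flux values: a straight-line least-squares fit in log-log space recovers the power-law index of a differential spectrum of this kind.

    import numpy as np

    rng = np.random.default_rng(2)
    E = np.array([250.0, 350.0, 500.0, 700.0, 950.0])                       # bin centres in TeV (placeholder)
    flux = 1e-7 * (E / 250.0) ** -2.97 * (1 + rng.normal(0, 0.05, E.size))  # placeholder dN/dE

    # least-squares straight-line fit in log-log space gives the spectral index
    index, norm = np.polyfit(np.log10(E), np.log10(flux), 1)
    print(f"fitted spectral index: {index:.2f}")                            # close to -2.97 by construction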

    Modelling activated sludge wastewater treatment plants using artificial intelligence techniques (fuzzy logic and neural networks)

    The activated sludge process (ASP) is the most commonly used biological wastewater treatment system. Mathematical modelling of this process is important for improving its treatment efficiency and thus the quality of the effluent released into the receiving water body, because models can help the operator to predict the performance of the plant and take cost-effective, timely remedial actions that ensure consistent treatment efficiency and compliance with discharge consents. However, due to the highly complex and non-linear characteristics of this biological system, traditional mathematical modelling of the treatment process has remained a challenge. This thesis presents applications of Artificial Intelligence (AI) techniques for modelling the ASP: the Kohonen Self-Organising Map (KSOM), backpropagation artificial neural networks (BPANN), and the adaptive network-based fuzzy inference system (ANFIS). These techniques were compared, and hybrids between them were also investigated and tested. The study demonstrated that AI techniques offer a viable, flexible and effective modelling alternative for the activated sludge system. The KSOM was found to be an attractive tool for data preparation because it can easily accommodate missing data and outliers, and because of its power in extracting salient features from raw data; as a consequence of the latter, the KSOM is an excellent tool for visualising high-dimensional data. In addition, the KSOM was used to develop a software sensor to predict biological oxygen demand (BOD). This soft-sensor represents a significant advance in real-time BOD operational control by offering a very fast estimate of this important wastewater parameter compared to the traditional 5-day bioassay BOD test procedure. Furthermore, hybrids of KSOM-ANN and KSOM-ANFIS were shown to yield much better model performance than the respective modelling paradigms on their own.
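
    A minimal sketch of a KSOM-based soft sensor of the kind described, assuming the MiniSom package and synthetic data: the map is trained on routinely measured inputs, each map unit is labelled with the mean BOD of the training samples it wins, and prediction is a best-matching-unit lookup. This is an illustration, not the thesis implementation.

    import numpy as np
    from minisom import MiniSom   # assumption: MiniSom as a stand-in KSOM implementation

    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 5))                                              # placeholder on-line inputs
    bod = X @ rng.uniform(0.5, 2.0, size=5) + rng.normal(scale=0.3, size=300)  # placeholder BOD values

    som = MiniSom(10, 10, X.shape[1], sigma=1.5, learning_rate=0.5, random_seed=0)
    som.train_random(X, 5000)

    # label each map unit with the mean BOD of the training samples it wins
    unit_values = {}
    for xi, yi in zip(X, bod):
        unit_values.setdefault(som.winner(xi), []).append(yi)
    unit_bod = {unit: float(np.mean(v)) for unit, v in unit_values.items()}

    def estimate_bod(sample):
        """Soft-sensor estimate: BOD label of the sample's best-matching unit."""
        return unit_bod.get(som.winner(sample), float(np.mean(bod)))

    print(round(estimate_bod(X[0]), 2))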

    Theory, Design, and Implementation of Landmark Promotion Cooperative Simultaneous Localization and Mapping

    Simultaneous Localization and Mapping (SLAM) is a challenging problem in practice, and the use of multiple robots and inexpensive sensors places even more demands on the designer. Cooperative SLAM poses specific challenges in the areas of computational efficiency, software/network performance, and robustness to errors. New methods in image processing, recursive filtering, and SLAM have been developed to implement practical algorithms for cooperative SLAM on a set of inexpensive robots. The Consolidated Unscented Mixed Recursive Filter (CUMRF) is designed to handle non-linear systems with non-Gaussian noise; this is accomplished using the Unscented Transform combined with Gaussian Mixture Models. The Robust Kalman Filter is an extension of the Kalman Filter algorithm that improves the ability to remove erroneous observations using Principal Component Analysis (PCA) and the X84 outlier rejection rule. Forgetful SLAM is a local SLAM technique that runs in nearly constant time relative to the number of visible landmarks and improves poorly performing sensors through sensor fusion and outlier rejection; it correlates all measured observations but stops the state from growing over time. Hierarchical Active Ripple SLAM (HAR-SLAM) is a new SLAM architecture that breaks the traditional SLAM state space into a chain of smaller state spaces, allowing multiple robots, multiple sensors, and multiple updates to occur in linear time with linear storage with respect to the number of robots, landmarks, and robot poses. This dissertation presents explicit methods for closing the loop, joining multiple robots, and active updates. Landmark Promotion SLAM is a hierarchy of new SLAM methods built on the Robust Kalman Filter, Forgetful SLAM, and HAR-SLAM. Practical aspects of SLAM are a focus of this dissertation. LK-SURF is a new image processing technique that combines Lucas-Kanade feature tracking with Speeded-Up Robust Features to perform spatial and temporal tracking; typical stereo correspondence techniques fail at providing descriptors for features or at temporal tracking. Several calibration and modeling techniques are also covered, including calibrating stereo cameras, aligning stereo cameras to an inertial system, and building neural-network system models. These methods are important for improving the quality of the data and images acquired for the SLAM process.
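
    As an illustration of the X84 outlier rejection rule mentioned for the Robust Kalman Filter, the sketch below gates scalar Kalman-filter innovations by their deviation from the median, measured in median absolute deviations; it omits the PCA step and is not the dissertation's implementation.

    import numpy as np

    def x84_reject(innovation, recent, k=5.2):
        """X84 rule: reject if the innovation is more than k MADs from the median."""
        med = np.median(recent)
        mad = np.median(np.abs(recent - med))
        return abs(innovation - med) > k * max(mad, 1e-9)

    # minimal scalar Kalman filter with the X84 gate on innovations
    x, P, q, r = 0.0, 1.0, 1e-3, 0.1          # state, covariance, process and measurement noise
    history = []
    for z in np.random.default_rng(4).normal(0.0, 0.3, 200):
        P += q                                 # predict (static state model)
        nu = z - x                             # innovation
        recent = history[-20:]
        if recent and x84_reject(nu, np.array(recent)):
            continue                           # discard the measurement, keep the prediction
        K = P / (P + r)                        # Kalman gain
        x, P = x + K * nu, (1.0 - K) * P       # update
        history.append(nu)
    print("final state estimate:", round(x, 3))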

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
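
    A minimal sketch, with synthetic events, of one common way to consume an event stream: signed accumulation of per-pixel polarities into a brightness-change image that frame-based algorithms can use.

    import numpy as np

    H, W = 180, 240                                         # sensor resolution (placeholder)
    rng = np.random.default_rng(5)
    n = 10000
    t = np.sort(rng.uniform(0.0, 0.03, n))                  # timestamps in seconds
    x = rng.integers(0, W, n)                               # pixel x coordinates
    y = rng.integers(0, H, n)                               # pixel y coordinates
    p = rng.choice([-1, 1], n)                              # polarity of each brightness change

    frame = np.zeros((H, W))
    np.add.at(frame, (y, x), p)                             # signed accumulation of events per pixel
    print("pixels with net brightness change:", np.count_nonzero(frame))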

    Evaluating the performance of PC-ANN for the estimation of rice nitrogen concentration from canopy hyperspectral reflectance

    In this study, a wide range of leaf nitrogen concentration levels was established in field-grown rice with the application of three fertilizer levels. Hyperspectral reflectance data of the rice canopy were acquired over the 350 nm to 2500 nm range throughout the whole rice growth cycle. The prediction power of two statistical methods, a linear regression technique (LR) and an artificial neural network (ANN), for estimating rice N (nitrogen concentration, mg nitrogen g⁻¹ leaf dry weight) was compared using two different input variables (nitrogen-sensitive hyperspectral reflectance and principal component scores). The results indicated very good agreement between the observed and predicted N for all model methods, especially for the PC-ANN model (artificial neural network based on principal component scores), with an RMSE of 0.347 and an REP of 13.14%. Compared to the LR algorithm, the ANN increased accuracy by lowering the RMSE by 17.6% and 25.8% for models based on spectral reflectance and PCs, respectively.
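
    A minimal sketch of the PC-ANN idea, assuming synthetic reflectance data: principal component scores of the canopy spectra feed a small neural network regressor. scikit-learn is used here purely for illustration.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(6)
    X = rng.normal(size=(150, 2151))                      # placeholder reflectance, 350-2500 nm
    y = rng.normal(loc=25.0, scale=4.0, size=150)         # placeholder leaf N (mg g-1)

    pc_ann = make_pipeline(StandardScaler(),
                           PCA(n_components=10),          # principal component scores as ANN inputs
                           MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    pc_ann.fit(X_tr, y_tr)
    rmse = np.sqrt(mean_squared_error(y_te, pc_ann.predict(X_te)))
    print(f"PC-ANN test RMSE: {rmse:.3f}")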