
    Neuro-fuzzy resource forecast in site suitability assessment for wind and solar energy: a mini review

    Site suitability problems in renewable energy studies have taken a new turn since the advent of geographical information systems (GIS). GIS has been used for site suitability analysis for renewable energy because of its prowess in processing and analyzing attributes with geospatial components. Multi-criteria decision making (MCDM) tools are further used to rank criteria in order of their influence on the study. Once the most appropriate sites have been located, intelligent resource forecasting becomes necessary for strategic and operational planning if the viability of the investment is to be enhanced and resource variability better understood. One such intelligent model is the adaptive neuro-fuzzy inference system (ANFIS) and its variants. This study presents a mini-review of GIS-based MCDM facility location problems in wind and solar resource site suitability analysis and of resource forecasting using ANFIS-based models. We further present a framework for the integration of the two concepts in wind and solar energy studies. Various MCDM techniques for decision making are presented with their strengths and weaknesses. Country-specific studies that apply GIS-based methods to site suitability are presented together with the criteria considered. Similarly, country-specific studies on ANFIS-based resource forecasts for wind and solar energy are also presented. From our findings, there has been no technically valid range of values for spatial criteria, and the analytic hierarchy process (AHP) has been the most commonly used technique for criteria ranking, leaving other techniques less explored. Also, hybrid ANFIS models are more effective than standalone ANFIS models in resource forecasting, and ANFIS optimized with population-based models has been used most often. Finally, we present a roadmap for integrating GIS-MCDM site suitability studies with ANFIS-based modelling for improved strategic and operational planning.
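
As a concrete illustration of the AHP criteria-ranking step highlighted in this review, the following minimal Python sketch (entirely hypothetical criteria and pairwise judgements, using numpy) derives criterion weights from a Saaty-style pairwise-comparison matrix and checks its consistency ratio; other MCDM techniques would replace only this ranking step:

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three siting criteria
# (e.g. solar irradiance, slope, distance to grid); values follow Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The principal eigenvector of A gives the criterion weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights /= weights.sum()

# Consistency ratio (CR < 0.1 is conventionally acceptable); random index RI = 0.58 for n = 3.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58
print("weights:", weights.round(3), "CR:", round(cr, 3))
```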

    Cyber-Security Challenges with SMEs in Developing Economies: Issues of Confidentiality, Integrity & Availability (CIA)


    Simulated annealing coupled with a Naïve Bayes model and base flow separation for streamflow simulation in a snow dominated basin

    Streamflow simulation in a snow dominated basin is complex due to the presence of a high number of interrelated hydrological processes. This complexity is compounded by the delayed responses of the catchment to snow accumulation and snow melting processes. In this study, long short-term memory (LSTM) and artificial neural network (ANN) models were utilized for rainfall–runoff simulation in a snow dominated basin, the Carson River basin in the United States (US). The input structure of the models was determined using the simulated annealing algorithm with a naïve Bayes model from a high dimensional feature space to represent the long-term impacts of historical events (i.e. the hysteresis effect) on current observations. Further, to represent the different responses of the catchment in the model structure, a base flow separation method was included in the simulation framework. The obtained performance indices (root mean square error, percentage bias, Nash–Sutcliffe efficiency and Kling–Gupta efficiency) are 0.331 m³ s⁻¹, 13.00%, 0.848, and 0.852 for the ANN model and 0.235 m³ s⁻¹, −0.80%, 0.923, and 0.934 for the LSTM model, respectively. The proposed methodology was found to be promising for improving the streamflow simulation capability of LSTM and ANN models while considering only precipitation, temperature, and potential evapotranspiration as input variables. Analysing the flow duration curves indicated that the LSTM model is more efficient in representing different flow dynamics within the basin due to its embedded cell states. Further, uncertainty and reliability analyses were conducted using expanded uncertainty (U95), reliability, and resilience indices. The obtained U95, reliability and resilience indices are 1.78 and 1.72 m³ s⁻¹, 31.28% and 66.67%, and 11.58% and 38.27% for the ANN and LSTM models, respectively, showing that the LSTM model produced less uncertainty and is more reliable. Moreover, although the ANN model lacks a memory component, the proposed methodology significantly improves its simulation capability in rainfall–runoff modelling. The results of this study indicate that the proposed methodology can enhance the learning capabilities of machine learning models in rainfall–runoff simulation.
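
For reference, the Nash–Sutcliffe and Kling–Gupta efficiencies reported above can be computed as follows. This is a generic sketch on toy streamflow series, not the authors' code:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency from correlation (r), variability ratio (alpha) and bias ratio (beta)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Toy daily streamflow series (m^3/s), purely illustrative
obs = np.array([1.2, 1.5, 2.1, 3.4, 2.8, 2.0, 1.6])
sim = np.array([1.1, 1.6, 2.0, 3.1, 3.0, 2.2, 1.5])
print("NSE:", round(nse(obs, sim), 3), "KGE:", round(kge(obs, sim), 3))
```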

    Soil Erosion and Sustainable Land Management (SLM)

    This Special Issue titled “Soil Erosion and Sustainable Land Management” presents 13 chapters organized into four main parts. The first part deals with the assessment of soil erosion and covers historical sediment dating to understand past environmental impacts due to tillage; laboratory simulation to clarify the effect of soil surface microtopography; integrated field observation and the random forest machine learning algorithm for watershed-scale soil erosion assessment; and development of the sediment delivery distributed (SEDD) model for sub-watershed erosion risk prioritization. Part II addresses the factors controlling soil erosion and vegetation degradation as influenced by topographic position and climatic region, long-term land use change, and improper implementation of land management measures. Part III presents different land management technologies that can reduce soil erosion at various spatial scales, improve the productivity of marginal lands with soil microbes, and reclaim degraded farmland using dredged reservoir sediments. The final part relates livelihood diversification to climate vulnerability as well as coping strategies for the adverse impacts of soil erosion through the implementation of sustainable land management, which opens prospects for policy formulation. The studies cover regions of Africa, Europe, North America and Asia and were dominantly conducted within the framework of international scientific collaborations, employing a range of techniques and scales, from the laboratory to the watershed. We believe these unique features of the book will attract the interest of the wider scientific community worldwide.

    Computational Intelligence in Electromyography Analysis

    Electromyography (EMG) is a technique for evaluating and recording the electrical activity produced by skeletal muscles. EMG may be used clinically for the diagnosis of neuromuscular problems and for assessing biomechanical and motor control deficits and other functional disorders. Furthermore, it can be used as a control signal for interfacing with orthotic and/or prosthetic devices or other rehabilitation aids. This book presents an updated overview of signal processing applications and recent developments in EMG from a number of diverse aspects and various applications in clinical and experimental research. It provides readers with a detailed introduction to EMG signal processing techniques and applications, while presenting several new results and explanations of existing algorithms. The book is organized into 18 chapters, covering current theoretical and practical approaches to EMG research.

    AN ARTIFICIAL INTELLIGENCE APPROACH TO THE PROCESSING OF RADAR RETURN SIGNALS FOR TARGET DETECTION

    Most operating vessel traffic management systems experience problems, such as track loss and track swap, which may cause confusion to the traffic regulators and lead to potential hazards in harbour operation. The reason is mainly the limited adaptive capability of the algorithms used in the detection process. The decision on whether a target is present is usually based on the magnitude of the returning echoes. Such a method has a low efficiency in discriminating between target and clutter, especially when the signal to noise ratio is low. The performance of radar target detection depends on the features that can be used to discriminate between clutter and targets. To achieve a significant improvement in the detection of weak targets, more obvious discriminating features must be identified and extracted. This research investigates conventional Constant False Alarm Rate (CFAR) algorithms and introduces the approach of applying artificial intelligence methods to the target detection problem. Previous research has been undertaken to improve the detection capability of radar systems in heavy clutter environments, and many new CFAR algorithms, which are based on amplitude information only, have been developed. This research studies these algorithms and proposes that it is feasible to design and develop an advanced target detection system that is capable of discriminating targets from clutter by learning the different features extracted from radar returns. The approach adopted for this further work into target detection was the use of neural networks. Results presented show that such a network is able to learn particular features of specific radar return signals, e.g. rain clutter, sea clutter and targets, and to decide if a target is present in a finite window of data. The work includes a study of the characteristics of radar signals and identification of the features that can be used in the process of effective detection. The use of a general purpose marine radar has allowed the collection of live signals from Plymouth harbour for analysis, training and validation. The approach of using data from the real environment has exposed the developed detection system to real clutter conditions that cannot be obtained when using simulated data. The performance of the neural network detection system is evaluated with further recorded data, and the results obtained are compared with the conventional CFAR algorithms. It is shown that the neural system can learn the features of specific radar signals and provide superior performance in detecting targets in clutter. Areas for further research and development are presented; these include the use of a sophisticated recording system, high speed processors and the potential for target classification.
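
As background to the conventional CFAR detectors that the neural approach is compared against, the following is a minimal sketch of a one-dimensional cell-averaging CFAR on synthetic data (illustrative parameter values and data; not the thesis implementation):

```python
import numpy as np

def ca_cfar(power, num_train=16, num_guard=4, pfa=1e-4):
    """Cell-averaging CFAR over a 1-D power profile: the threshold scales the mean
    of the training cells on either side of the cell under test."""
    n = len(power)
    alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)  # scaling for the desired false-alarm rate
    detections = np.zeros(n, dtype=bool)
    half = num_train // 2 + num_guard
    for i in range(half, n - half):
        lead = power[i - half : i - num_guard]              # training cells before the guard band
        lag = power[i + num_guard + 1 : i + half + 1]       # training cells after the guard band
        noise = np.mean(np.concatenate([lead, lag]))
        detections[i] = power[i] > alpha * noise
    return detections

# Toy example: exponential clutter with one strong target injected at index 120
rng = np.random.default_rng(0)
profile = rng.exponential(scale=1.0, size=256)
profile[120] += 30.0
print(np.flatnonzero(ca_cfar(profile)))  # indices of detected cells
```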

    A data driven approach for diagnosis and management of yield variability attributed to soil constraints

    Australian agriculture does not value data to the level required for true precision management. Consequently, agronomic recommendations are frequently based on limited soil information and do not adequately address the spatial variance of the constraints presented. This leads to lost productivity. Due to the costs of soil analysis, land owners and practitioners are often reluctant to invest in soil sampling exercises, as the likely economic gain from this investment has not been adequately investigated. A value proposition is therefore required to realise the agronomic and economic benefits of increased site-specific data collection with the aim of ameliorating soil constraints. This study is principally concerned with identifying this value proposition by investigating the spatially variable nature of soil constraints and their interactions with crop yield at the sub-field scale. Agronomic and economic benefits are quantified against simulated ameliorant recommendations made on the basis of varied sampling approaches. In order to assess the effects of sampling density on agronomic recommendations, a 108 ha site was investigated, where 1200 direct soil measurements were obtained (300 sample locations at 4 depth increments) to form a benchmark dataset for the analysis used in this study. Random transect sampling (for field average estimates), zone management, regression kriging (SSPFe) and ordinary kriging approaches were first investigated at various sampling densities (N = 10, 20, 50, 100, 150, 200, 250 and 300) to observe their effects on lime and gypsum ameliorant recommendation advice. It was identified that the ordinary kriging method provided the most accurate spatial recommendation advice for gypsum and lime at all depth increments investigated (i.e. 0–10 cm, 10–20 cm, 20–40 cm and 40–60 cm), with the majority of the improvement in accuracy being achieved by 50 samples (≈0.5 samples/ha). The lack of correlation between the environmental covariates and target soil variables inhibited the ability of regression kriging to outperform ordinary kriging. To extend these findings in an attempt to identify the economically optimal sampling density for the investigation site, a yield prediction model was required to estimate the spatial yield response due to amelioration. Given the complex nonlinear relationships between soil properties and yield, this was achieved by applying four machine learning models (both linear and nonlinear): a mixed-linear regression, a regression tree (Cubist), an artificial neural network and a support vector machine. These were trained using the 1200 directly measured soil samples, each with 9 soil measurements describing structural features (i.e. soil pH, exchangeable sodium percentage, electrical conductivity, clay, silt, sand, bulk density, potassium and cation exchange capacity), to predict the spatial yield variability at the investigation site with four years of yield data. It was concluded that the Cubist regression tree model produced superior results in terms of improved generalization, whilst achieving an acceptable R² for training and validation (up to R² = 0.80 for training and R² = 0.78 for validation). The lack of temporal yield information constrained the ability to develop a temporally stable yield prediction model that accounts for the uncertainties of climate interactions associated with the spatial variability of yield; nonetheless, accurate predictive performance was achieved for single-season models.
Of the spatial prediction methods investigated, random transect sampling and ordinary kriging approaches were adopted to simulate ‘blanket-rate’ (BR) and ‘variable-rate’ (VR) gypsum applications, respectively, for the amelioration of sodicity at the investigated site. For each sampling density, the spatial yield response to a BR and a VR application of gypsum was estimated by applying the developed Cubist yield prediction model, calibrated for the investigation site. Accounting for the cost of sampling and the financial gains due to a yield response, the economically optimal sampling density for the investigation site was 0.2 cores/ha for 0–20 cm treatment and 0.5 cores/ha for 0–60 cm treatment taking a VR approach. Whilst this resulted in an increased soil data investment of $26.4/ha and $136/ha for 0–20 cm and 0–60 cm treatment respectively in comparison to a BR approach, the yield gains due to an improved spatial gypsum application were in excess of 6 t and 26 t per annum. Consequently, the net benefit of the increased data investment was estimated to be up to $104,000 after 20 years for 0–60 cm profile treatment. To identify the influence of qualitative data and management information on soil–yield interactions, a probabilistic approach was investigated to offer an alternative where empirical models fail. Using soil compaction as an example, a Bayesian Belief Network was developed to explore the interactions of machine loading, soil wetness and site characteristics with the potential yield declines due to compaction induced by agricultural traffic. The developed tool was subsequently able to broadly describe the agronomic impacts of decisions made in data-limited environments. This body of work presents a combined approach to improving both the diagnosis and management of soil constraints using a data driven approach. A detailed discussion is provided on how to further this work and improve upon the results obtained. By continuing this work it is possible to change the industry attitude to data collection and significantly improve the productivity, profitability and soil husbandry of agricultural systems.
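
The sampling-density comparison described above can be sketched generically as follows, using the third-party pykrige package and synthetic exchangeable-sodium data standing in for the benchmark dataset (illustrative only; not the study's code or data):

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # third-party package: pip install pykrige

rng = np.random.default_rng(1)

# Synthetic stand-in for a benchmark soil dataset: 300 georeferenced ESP (%) observations.
x, y = rng.uniform(0, 1200, 300), rng.uniform(0, 900, 300)
esp = 6 + 4 * np.sin(x / 300) + 3 * np.cos(y / 200) + rng.normal(0, 1, 300)

for n in (10, 20, 50, 100, 150):                 # candidate sampling densities
    idx = rng.choice(300, n, replace=False)
    ok = OrdinaryKriging(x[idx], y[idx], esp[idx], variogram_model="spherical")
    pred, _ = ok.execute("points", x, y)         # predict back at all benchmark locations
    rmse = float(np.sqrt(np.mean((np.asarray(pred) - esp) ** 2)))
    print(f"{n:4d} samples -> RMSE {rmse:.2f} % ESP")
```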

    Densification of spatially-sparse legacy soil data at a national scale: a digital mapping approach

    Digital soil mapping (DSM) is a viable approach to providing spatial soil information, but its adoption at the national scale, especially in sub-Saharan Africa, is limited by the sparseness of available data. Therefore, the focus of this thesis is on optimizing DSM techniques for the densification of sparse legacy soil data, using Nigeria as a case study. First, the robustness of a Random Forest model (RFM) was tested in predicting soil particle-size fractions as compositional data using the additive log-ratio technique. Results indicated good prediction accuracy with the RFM, while soils are largely coarse-textured, especially in the northern region. Second, soil organic carbon (SOC) and bulk density (BD) were predicted, from which SOC density and stock were calculated. These were overlaid with land use/land cover (LULC), agro-ecological zone (AEZ) and soil maps to quantify the carbon sequestration of soils and its variation across different AEZs. Results showed that 6.5 Pg C, with an average of 71.60 Mg C ha⁻¹, is stored in the top 1 m of soil. Furthermore, to improve the performance of BD and effective cation exchange capacity (ECEC) pedotransfer functions (PTFs), the inclusion of environmental data was explored using multiple linear regression (MLR) and the RFM. Results showed an increase in the performance of PTFs when both soil and environmental data were used. Finally, the application of the Choquet fuzzy integral (CI) technique to irrigation suitability assessment was evaluated through multi-criteria analysis of soil, climatic, landscape and socio-economic indices. Results showed that the CI is a better aggregation operator than the weighted mean technique. A total of 3.34 × 10⁶ ha is suitable for surface irrigation in Nigeria, while the major limitations are due to topographic and soil attributes. The research findings provide a quantitative basis for framing appropriate policies on sustainable food production and environmental management, especially in resource-poor countries of the world.
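
A minimal sketch of the additive log-ratio approach used here for particle-size fractions, with scikit-learn's random forest and synthetic covariates standing in for the legacy data, is shown below (all data and dimensions are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def alr(comp, eps=1e-6):
    """Additive log-ratio transform of sand/silt/clay fractions, using clay as the denominator."""
    comp = np.clip(comp, eps, None)
    return np.log(comp[:, :2] / comp[:, 2:3])

def alr_inverse(z):
    """Back-transform alr coordinates to fractions that sum to one."""
    expz = np.exp(np.column_stack([z, np.zeros(len(z))]))
    return expz / expz.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # stand-in environmental covariates
frac = rng.dirichlet(alpha=[4, 3, 2], size=200)  # stand-in sand/silt/clay fractions

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X, alr(frac))                          # fit on the two alr coordinates
pred = alr_inverse(model.predict(X))
print(pred[:3].round(3), pred[:3].sum(axis=1))   # each predicted composition sums to 1
```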

    Soft Computing approaches in ocean wave height prediction for marine energy applications

    The objective of this thesis is to investigate the use of Soft Computing (SC) techniques applied to wave energy. Among all the available marine energies, wave energy exhibits the greatest future potential because, in addition to being technically efficient, it does not cause significant environmental problems. Its practical importance rests on two facts: 1) it is roughly 1000 times denser than wind energy, and 2) there are many ocean regions with abundant wave resources close to populated areas that demand electricity. The drawback is that waves are harder to characterize than tides because of their stochastic nature. SC techniques achieve results similar to, and even better than, other statistical methods for short-term estimates (up to 24 h), with the additional advantage of requiring far less computational effort than numerical-physical methods. This is one of the reasons we decided to explore the use of SC techniques for wave energy. The other lies in the fact that its intermittency can affect how the generated electricity is integrated into the power grid. These two reasons have driven us to explore the feasibility of new SC approaches along two novel research lines. The first is a new approach that combines a genetic algorithm (GA) with an Extreme Learning Machine (ELM), applied to the problem of reconstructing the significant wave height (at a buoy whose data have been lost, for example because of a storm) using data from other nearby buoys. Our GA-ELM algorithm is able to select a reduced set of wave parameters that maximizes the reconstruction of the significant wave height at the buoy with missing data using data from neighbouring buoys. The method and the results of this research have been published in: Alexandre, E., Cuadra, L., Nieto-Borge, J. C., Candil-García, G., Del Pino, M., & Salcedo-Sanz, S. (2015). A hybrid genetic algorithm—extreme learning machine approach for accurate significant wave height reconstruction. Ocean Modelling, 92, 115-123. The second contribution combines concepts from SC, Smart Grids (SGs) and complex networks (CNs). It is motivated by two important, mutually interrelated aspects: 1) the way in which wave energy converters (WECs) are electrically interconnected to form a wave farm, and 2) how the farm is connected to the onshore power grid. Both are related to the random, intermittent character of the electricity produced by waves. To integrate it better without affecting grid stability, the Smart Wave Farm (SWF) concept should be adopted. Like an SG, an SWF uses sensors and algorithms to forecast the wave field and to control the production and/or storage of the generated electricity and how it is injected into the grid. In our approach, an SWF and its connection to the power grid can be viewed as an SG, which in turn can be modelled as a complex network.
With this framework, which can be generalized to any network formed by renewable generators and nodes that consume and/or store energy, we have proposed an evolutionary algorithm that optimizes the robustness of such an SG, modelled as a complex network, against random failures or abnormal operating conditions. The model and the results have been published in: Cuadra, L., Pino, M. D., Nieto-Borge, J. C., & Salcedo-Sanz, S. (2017). Optimizing the Structure of Distribution Smart Grids with Renewable Generation against Abnormal Conditions: A Complex Networks Approach with Evolutionary Algorithms. Energies, 10(8), 1097.
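
To illustrate the ELM component of the GA-ELM approach, the sketch below fits a minimal extreme learning machine on synthetic buoy data; in the cited paper a genetic algorithm additionally selects which wave parameters from the neighbouring buoys feed the ELM. All data and dimensions here are illustrative assumptions:

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Minimal extreme learning machine: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy reconstruction problem: estimate significant wave height at a "failed" buoy
# from wave parameters recorded at neighbouring buoys (synthetic data).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))                       # e.g. Hs, Tp, direction at nearby buoys
y = 0.8 * X[:, 0] + np.sin(X[:, 3]) + rng.normal(0, 0.1, 500)
W, b, beta = elm_fit(X[:400], y[:400])
pred = elm_predict(X[400:], W, b, beta)
print("test RMSE:", round(float(np.sqrt(np.mean((pred - y[400:]) ** 2))), 3))
```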

    An intelligent classification system for land use and land cover mapping using spaceborne remote sensing and GIS

    The objectives of this study were to experiment with and extend current methods of Synthetic Aperture Radar (SAR) image classification, and to design and implement a prototype intelligent remote sensing image processing and classification system for land use and land cover mapping under wet season conditions in Bangladesh, incorporating SAR images and other geodata. To meet these objectives, the problem of classifying spaceborne SAR images and integrating Geographic Information System (GIS) data and ground truth data was studied first. In this phase of the study, traditional techniques were extended by applying a Self-Organizing feature Map (SOM) to include GIS data with the remote sensing data during image segmentation. The experimental results were compared with those of traditional statistical classifiers, such as the Maximum Likelihood, Mahalanobis Distance, and Minimum Distance classifiers. The performances of the classifiers were evaluated in terms of classification accuracy with respect to the collected real-time ground truth data. The SOM neural network provided the highest overall accuracy when a GIS layer of land type classification (with respect to the period of inundation by regular flooding) was used in the network. Using this method, the overall accuracy was around 15% higher than that of the previously mentioned traditional classifiers. It also achieved higher accuracies for more classes in comparison to the other classifiers. However, it was also observed that different classifiers produced better accuracy for different classes. Therefore, the investigation was extended to consider Multiple Classifier Combination (MCC) techniques, a recently emerging research area in pattern recognition. The study tested some of these techniques to improve the classification accuracy by harnessing the strengths of the constituent classifiers. A Rule-based Contention Resolution method of combination was developed, which exhibited an improvement in the overall accuracy of about 2% in comparison to its best constituent (SOM) classifier. The next phase of the study involved the design of an architecture for an intelligent image processing and classification system (named ISRIPaC) that could integrate the extended methodologies mentioned above. Finally, the architecture was implemented in a prototype and its viability was evaluated using a set of real data. The originality of the ISRIPaC architecture lies in the realisation of the concept of a complete system that can intelligently cover all the steps of image processing and classification and utilise standardised metadata, in addition to a knowledge base, in determining the appropriate methods and course of action for a given task. The implemented prototype of the ISRIPaC architecture is a federated system that integrates the CLIPS expert system shell, the IDRISI Kilimanjaro image processing and GIS software, and the domain experts' knowledge via a control agent written in Visual C++. It starts with data assessment and pre-processing and ends with image classification and accuracy assessment. The system is designed to run automatically: the user merely provides the initial information regarding the intended task and the source of available data, and the system itself acquires the necessary information about the data from metadata files in order to make decisions and perform tasks.
The test and evaluation of the prototype demonstrates the viability of the proposed architecture and the possibility of extending the system to perform other image processing tasks and to use different sources of data. The system design presented in this study thus suggests some directions for the development of the next generation of remote sensing image processing and classification systems.
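
For readers unfamiliar with the SOM used in the segmentation stage, the following is a compact, generic self-organizing map sketch on synthetic per-pixel feature vectors. It is not the thesis implementation (which also fuses a GIS land-type layer); all data and parameters are illustrative:

```python
import numpy as np

def train_som(data, grid=(6, 6), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal self-organizing map: each grid node holds a weight vector that is pulled
    towards the samples for which it (or a nearby node) is the best-matching unit."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.normal(size=(rows, cols, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)      # best-matching unit on the grid
        lr = lr0 * np.exp(-t / iters)                      # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)                # shrinking neighbourhood radius
        dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]   # neighbourhood function
        weights += lr * h * (x - weights)
    return weights

# Toy stand-in for per-pixel feature vectors (e.g. SAR backscatter bands plus a GIS land-type value)
rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal(m, 0.3, size=(200, 3)) for m in (0.0, 1.5, 3.0)])
som = train_som(pixels)
print("codebook shape:", som.shape)  # (6, 6, 3) prototype vectors, to be labelled against ground truth
```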