495 research outputs found

    An Investigation of Weather Forecasting using Machine Learning Techniques

    Customarily, weather predictions are produced by large, complex physics-based models, which use many different atmospheric conditions observed over a long period of time. These conditions are often unstable because of perturbations in the weather system, causing the models to give inaccurate forecasts [1]. The models are generally run on many nodes in a large High Performance Computing (HPC) environment, which consumes a great deal of energy. In this paper, we study a weather prediction technique that uses historical data from multiple weather stations to train simple machine learning models, which can give usable forecasts of certain weather conditions for the near future within a short time. These models can be run on far less resource-intensive environments. We show that they are accurate enough to be compared with state-of-the-art techniques. Moreover, we show that it is beneficial to use weather station data from several neighbouring areas, rather than data from only the area for which forecasting is being performed.
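The approach described above, training a simple model on historical readings from several neighbouring stations, can be sketched as an ordinary least-squares fit. All data, station counts, and the prediction target below are assumed purely for illustration; the paper's actual models and features are not specified here.

```python
import numpy as np

# Hypothetical historical readings: each row is one day, columns are the
# temperatures recorded at three neighbouring weather stations (assumed data).
X = np.array([
    [20.1, 19.5, 21.0],
    [22.3, 21.8, 23.1],
    [18.7, 18.2, 19.4],
    [25.0, 24.1, 25.8],
    [21.5, 20.9, 22.2],
])
# Next-day temperature at the target station (assumed data).
y = np.array([20.8, 22.9, 19.1, 25.3, 22.0])

# Fit a simple linear model (least squares with an intercept) -- far cheaper
# than running a physics model across many HPC nodes.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(readings):
    """Forecast the target station's next-day temperature from today's
    readings at the neighbouring stations."""
    return float(np.dot(readings, coef[:-1]) + coef[-1])

forecast = predict([21.0, 20.4, 21.7])
```

Using several neighbouring stations as features, rather than only the target area's own history, is exactly the design choice the abstract argues for.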

    Establishment of Dynamic Evolving Neural-Fuzzy Inference System Model for Natural Air Temperature Prediction

    Air temperature (AT) prediction can play a significant role in studies related to climate change, radiation and heat flux estimation, and weather forecasting. This study applied and compared the outcomes of three advanced fuzzy inference models, i.e., the dynamic evolving neural-fuzzy inference system (DENFIS), the hybrid neural-fuzzy inference system (HyFIS), and the adaptive neuro-fuzzy inference system (ANFIS), for AT prediction. Modelling was done for three stations in North Dakota (ND), USA, i.e., Robinson, Ada, and Hillsboro. The results reveal that FIS-type models are well suited to handling highly variable data such as AT, which shows a high positive correlation with average daily dew point (DP) and total solar radiation (TSR), and a negative correlation with average wind speed (WS). At the Robinson station, DENFIS performed the best with a coefficient of determination (R²) of 0.96 and a modified index of agreement (md) of 0.92, followed by ANFIS with R² of 0.94 and md of 0.89, and HyFIS with R² of 0.90 and md of 0.84. A similar result was observed for the other two stations, Ada and Hillsboro, where DENFIS performed the best with R²: 0.953/0.960, md: 0.903/0.912, followed by ANFIS with R²: 0.943/0.942, md: 0.888/0.890, and HyFIS with R²: 0.908/0.905, md: 0.845/0.821, respectively. It can be concluded that all three models are capable of predicting AT with high efficiency using only DP, TSR, and WS as input variables. This makes the application of these models more reliable for a meteorological variable while requiring the fewest input variables. The study can be valuable for areas where climatological and seasonal variations are studied, and can provide excellent prediction results with the least error margin and without great expenditure.
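The two skill scores reported above can be computed directly. A minimal sketch, using assumed observed/predicted temperature pairs (not the study's data) and Willmott's absolute-error form of the modified index of agreement:

```python
# Hypothetical observed vs. predicted air temperatures (assumed data, used
# only to illustrate the two skill scores reported in the abstract).
obs = [1.2, 3.4, 5.6, 7.1, 9.0, 11.3]
pred = [1.0, 3.9, 5.1, 7.5, 8.6, 11.8]

def r_squared(o, p):
    """Coefficient of determination (R²) as squared Pearson correlation."""
    n = len(o)
    mo, mp = sum(o) / n, sum(p) / n
    cov = sum((a - mo) * (b - mp) for a, b in zip(o, p))
    vo = sum((a - mo) ** 2 for a in o)
    vp = sum((b - mp) ** 2 for b in p)
    return cov ** 2 / (vo * vp)

def modified_index_of_agreement(o, p):
    """Willmott's modified index of agreement (md), absolute-error form:
    md = 1 - sum|O-P| / sum(|P - mean(O)| + |O - mean(O)|)."""
    mo = sum(o) / len(o)
    num = sum(abs(a - b) for a, b in zip(o, p))
    den = sum(abs(b - mo) + abs(a - mo) for a, b in zip(o, p))
    return 1.0 - num / den

r2 = r_squared(obs, pred)
md = modified_index_of_agreement(obs, pred)
```

Unlike R², md penalises absolute deviations linearly, so it is less dominated by a few large errors, which is why studies like this one report both.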

    A Dynamic Neural Network Architecture with Immunology-Inspired Optimization for Weather Data Forecasting

    Recurrent neural networks are dynamical systems with memory capabilities for recalling past behaviour, which is necessary in the prediction of time series. In this paper, a novel neural network architecture inspired by the immune algorithm is presented and used for forecasting naturally occurring signals, including big weather data signals. Big data analysis is a major research frontier that attracts extensive attention from academia, industry and government, particularly in the context of handling issues related to complex dynamics arising from changing weather conditions. Recently, the extensive deployment of IoT devices, sensors, and ambient intelligence systems has led to an exponential growth of data in the climate domain. In this study, we concentrate on the analysis of big weather data using the Dynamic Self-Organized Neural Network Inspired by the Immune Algorithm. The learning strategy of the network focuses on the local properties of the signal using a self-organised hidden layer inspired by the immune algorithm, while the recurrent links of the network aim at recalling previously observed signal patterns. The proposed network exhibits improved performance compared to the feedforward multilayer neural network and state-of-the-art recurrent networks, e.g., the Elman and Jordan networks. Three non-linear and non-stationary weather signals are used in our experiments. First, the signals are transformed into stationary form, followed by five-steps-ahead prediction. Improvements in the prediction results are observed with respect to the root mean square (RMS) error and the signal-to-noise ratio (SNR), although at the expense of additional computational complexity due to the presence of recurrent links.
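The preprocessing pipeline described above, making a non-stationary signal stationary before issuing a multi-step forecast, can be sketched without the immune-inspired network itself. The trend data and the naive forecasting rule below are assumptions for illustration only:

```python
# Assumed non-stationary (trending) weather signal.
signal = [10.0, 10.8, 11.9, 13.1, 14.0, 15.2, 16.1, 17.3]

# First differencing removes the linear trend -- one common way to
# "transform the signal into stationary form", as the abstract puts it.
diffs = [b - a for a, b in zip(signal, signal[1:])]

# Naive model on the stationary series: assume the mean step repeats,
# then integrate back to obtain a five-steps-ahead forecast.
mean_step = sum(diffs) / len(diffs)
horizon = 5
forecast = signal[-1] + horizon * mean_step
```

A recurrent network would replace the mean-step rule with a learned mapping over the differenced series, but the differencing-then-forecasting structure is the same.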

    What representations and computations underpin the contribution of the hippocampus to generalization and inference?

    Empirical research and theoretical accounts have traditionally emphasized the function of the hippocampus in episodic memory. Here we draw attention to the importance of the hippocampus to generalization, and focus on the neural representations and computations that might underpin its role in tasks such as the paired associate inference (PAI) paradigm. We make a principal distinction between two different mechanisms by which the hippocampus may support generalization: an encoding-based mechanism that creates overlapping representations which capture higher-order relationships between different items [e.g., Temporal Context Model (TCM): Howard et al., 2005], and a retrieval-based model [Recurrence with Episodic Memory Results in Generalization (REMERGE): Kumaran and McClelland, in press] that effectively computes these relationships at the point of retrieval, through a recurrent mechanism that allows the dynamic interaction of multiple pattern-separated episodic codes. We also discuss what we refer to as transfer effects, a more abstract example of generalization that has also been linked to the function of the hippocampus. We consider how this phenomenon poses inherent challenges for models such as TCM and REMERGE, and outline the potential applicability of a separate class of models, hierarchical Bayesian models (HBMs), in this context. Our hope is that this article will provide a basic framework within which to consider the theoretical mechanisms underlying the role of the hippocampus in generalization, and at a minimum serve as a stimulus for future work addressing issues that go to the heart of the function of the hippocampus.

    IoT-enabled Flood Severity Prediction via Ensemble Machine Learning Models

    River flooding is a natural phenomenon that can have a devastating effect on human life and cause great economic losses. There have been various approaches to studying river flooding; however, insufficient understanding and limited knowledge of flooding conditions hinder the development of prevention and control measures for this natural phenomenon. This paper presents a new approach for predicting water level, and the associated flood severity, using an ensemble model. Our approach leverages the latest developments in the Internet of Things (IoT) and machine learning for the automated analysis of flood data, which may be useful in preventing natural disasters. Research outcomes indicate that ensemble learning provides a more reliable tool to predict flood severity levels. The experimental results indicate that an ensemble of the Long Short-Term Memory (LSTM) model and a random forest outperformed the individual models with a sensitivity, specificity and accuracy of 71.4%, 85.9%, and 81.13%, respectively.
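The ensemble idea and the three reported metrics can be illustrated with a small sketch. The labels and base-model outputs below are assumed stand-ins for the LSTM and random forest members; the paper's actual combination rule is not specified here:

```python
# Ground-truth flood severity labels (1 = severe) and predictions from two
# hypothetical base models (all data assumed for illustration).
truth   = [1, 0, 1, 1, 0, 0, 1, 0]
model_a = [1, 0, 0, 1, 0, 1, 1, 0]   # e.g. a sequence model's labels
model_b = [1, 0, 1, 1, 0, 0, 0, 0]   # e.g. a tree ensemble's labels

# Recall-oriented voting rule: flag a reading as severe if either base model
# does (a real system might average class probabilities instead).
ensemble = [max(a, b) for a, b in zip(model_a, model_b)]

tp = sum(1 for t, p in zip(truth, ensemble) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(truth, ensemble) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(truth, ensemble) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(truth, ensemble) if t == 1 and p == 0)

sensitivity = tp / (tp + fn)          # true positive rate
specificity = tn / (tn + fp)          # true negative rate
accuracy = (tp + tn) / len(truth)
```

For flood warning, a recall-oriented rule trades some specificity for sensitivity, since a missed severe flood is costlier than a false alarm.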

    From specific examples to general knowledge in language learning

    The extraction of general knowledge from individual episodes is critical if we are to learn new knowledge or abilities. Here we uncover some of the key cognitive mechanisms that characterise this process in the domain of language learning. In five experiments adult participants learned new morphological units embedded in fictitious words created by attaching new affixes (e.g., -afe) to familiar word stems (e.g., “sleepafe is a participant in a study about the effects of sleep”). Participants’ ability to generalise semantic knowledge about the affixes was tested using tasks requiring the comprehension and production of novel words containing a trained affix (e.g., sailafe). We manipulated the delay between training and test (Experiment 1), the number of unique exemplars provided for each affix during training (Experiment 2), and the consistency of the form-to-meaning mapping of the affixes (Experiments 3–5). In a task where speeded online language processing is required (semantic priming), generalisation was achieved only after a memory consolidation opportunity following training, and only if the training included a sufficient number of unique exemplars. Semantic inconsistency disrupted speeded generalisation unless consolidation was allowed to operate on one of the two affix-meanings before introducing inconsistencies. In contrast, in tasks that required slow, deliberate reasoning, generalisation could be achieved largely irrespective of the above constraints. These findings point to two different mechanisms of generalisation that have different cognitive demands and rely on different types of memory representations.

    AI Methods in Algorithmic Composition: A Comprehensive Survey

    Algorithmic composition is the partial or total automation of the process of music composition by using computers. Since the 1950s, different computational techniques related to Artificial Intelligence have been used for algorithmic composition, including grammatical representations, probabilistic methods, neural networks, symbolic rule-based systems, constraint programming and evolutionary algorithms. This survey aims to be a comprehensive account of research on algorithmic composition, presenting a thorough view of the field for researchers in Artificial Intelligence. This study was partially supported by a grant for the MELOMICS project (IPT-300000-2010-010) from the Spanish Ministerio de Ciencia e Innovación, and a grant for the CAUCE project (TSI-090302-2011-8) from the Spanish Ministerio de Industria, Turismo y Comercio. The first author was supported by a grant for the GENEX project (P09-TIC-5123) from the Consejería de Innovación y Ciencia de Andalucía.
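One of the technique families the survey covers, probabilistic methods, can be illustrated with a toy first-order Markov chain over pitches. The transition table below is assumed rather than learned from any real corpus:

```python
import random

# Assumed pitch-transition table: from each note, the equally likely
# successors (a real system would estimate probabilities from a corpus).
transitions = {
    "C": ["D", "E", "G"],
    "D": ["C", "E"],
    "E": ["C", "D", "G"],
    "G": ["C", "E"],
}

def compose(start="C", length=8, seed=42):
    """Generate a melody by walking the Markov chain from `start`."""
    rng = random.Random(seed)   # seeded for reproducibility
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

melody = compose()
```

Grammar-based or constraint-programming approaches, also surveyed, would instead filter or structure such sequences with explicit musical rules.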

    A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning

    Current deep learning research is dominated by benchmark evaluation. A method is regarded as favorable if it empirically performs well on the dedicated test set. This mentality is seamlessly reflected in the resurfacing area of continual learning, where consecutively arriving sets of benchmark data are investigated. The core challenge is framed as protecting previously acquired representations from being catastrophically forgotten due to the iterative parameter updates. However, comparison of individual methods is nevertheless treated in isolation from real world application and typically judged by monitoring accumulated test set performance. The closed world assumption remains predominant: it is assumed that during deployment a model is guaranteed to encounter data that stems from the same distribution as used for training. This poses a massive challenge, as neural networks are well known to provide overconfident false predictions on unknown instances and to break down in the face of corrupted data. In this work we argue that notable lessons from open set recognition, the identification of statistically deviating data outside of the observed dataset, and from the adjacent field of active learning, where data is incrementally queried such that the expected performance gain is maximized, are frequently overlooked in the deep learning era. Based on these forgotten lessons, we propose a consolidated view to bridge continual learning, active learning and open set recognition in deep neural networks. Our results show that this not only benefits each individual paradigm, but also highlights the natural synergies in a common framework. We empirically demonstrate improvements when alleviating catastrophic forgetting, querying data in active learning, and selecting task orders, while exhibiting robust open world application where previously proposed methods fail.
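Two of the ideas bridged above can be sketched together: rejecting inputs whose softmax confidence is too low (a basic open-set heuristic, not the paper's method) and ranking the remaining inputs by predictive entropy as an active-learning query priority. The logits and the rejection threshold below are assumptions for illustration:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy of a class distribution (in nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Assumed per-sample logits from some 3-class classifier.
samples = {
    "in_dist":  [4.0, 0.5, 0.2],   # confident: keep, low query priority
    "boundary": [1.5, 1.0, 0.8],   # uncertain: keep, high query priority
    "unknown":  [0.1, 0.1, 0.1],   # near-uniform: reject as open-set
}

THRESHOLD = 0.40   # assumed rejection cutoff on the max softmax probability
decisions = {}
for name, logits in samples.items():
    probs = softmax(logits)
    if max(probs) < THRESHOLD:
        decisions[name] = "reject"        # treated as outside known classes
    else:
        decisions[name] = entropy(probs)  # labelling priority if kept
```

The point the abstract makes is that these mechanisms are complementary: the same uncertainty signal that flags open-set inputs also tells a continual learner which retained inputs are worth querying next.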