    Crude Oil Cost Forecasting using Variants of Recurrent Neural Network

    Crude oil prices play a very important role in a country’s economic growth and have a close impact on national economic stability. For these reasons, an accurate oil price forecasting system is essential. Because many different factors influence it, oil price data is highly nonlinear and fluctuating, so prediction with data-driven approaches is a complex task that requires extensive preprocessing, and working with such non-stationary data is difficult. This research proposes recurrent neural network (RNN) based approaches: a simple RNN, a deep RNN, and an RNN with LSTM. To compare the performance of the RNN variants, this research also implemented a naive forecast and a sequential ANN. The performance of all these models is evaluated using root mean square error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE). The experimental results show that the RNN with LSTM is more accurate than all the other models, with an accuracy above 96% on the U.S. Energy Information Administration dataset covering March 1983 to June 2022. On the basis of these results, we conclude that the RNN with LSTM is best suited to highly nonlinear time series data.
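
    The sketch below illustrates, under stated assumptions, the general shape of such a windowed LSTM forecaster together with the three reported metrics. It is a minimal Keras sketch, not the paper's implementation: the window size, layer sizes, training budget, and the synthetic stand-in for the oil price series are all illustrative.

    import numpy as np
    from tensorflow import keras

    def make_windows(series, window=12):
        """Slice a 1-D series into (input window, next value) pairs."""
        X = np.array([series[i:i + window] for i in range(len(series) - window)])
        y = series[window:]
        return X[..., None], y  # add a feature axis for the LSTM

    rng = np.random.default_rng(0)
    prices = np.cumsum(rng.normal(0, 1, 500)) + 50.0  # stand-in for oil prices
    X, y = make_windows(prices)

    model = keras.Sequential([
        keras.layers.Input(shape=(12, 1)),
        keras.layers.LSTM(32),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X[:-100], y[:-100], epochs=5, verbose=0)  # hold out the last 100 points

    pred = model.predict(X[-100:], verbose=0).ravel()
    actual = y[-100:]
    rmse = np.sqrt(np.mean((actual - pred) ** 2))            # root mean square error
    mae = np.mean(np.abs(actual - pred))                     # mean absolute error
    mape = np.mean(np.abs((actual - pred) / actual)) * 100   # assumes no zero values
    print(f"RMSE={rmse:.3f}  MAE={mae:.3f}  MAPE={mape:.2f}%")

    For comparison, the naive-forecast baseline the abstract mentions amounts to repeating the last observed value of each window, i.e. pred = X[-100:, -1, 0].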

    Large Area Crop Inventory Experiment (LACIE). User requirements

    There are no author-identified significant results in this report.

    Prediction of energy consumption in campus buildings using long short-term memory

    In this paper, Long Short-Term Memory (LSTM) was proposed to predict the energy consumption of an institutional building. A novel prediction method for day-ahead daily energy consumption was demonstrated, using weather forecast data from a local meteorological organization, the Malaysian Meteorological Department (MET). The predictive model was trained by considering the dependencies between energy usage and weather data, and its performance was compared with Support Vector Regression (SVR) and Gaussian Process Regression (GPR). In experiments on a dataset from a building at Multimedia University, Malacca Campus, covering January 2018 to July 2021, the proposed model outperformed SVR and GPR, achieving the best RMSE scores (561.692–592.319) against SVR (3135.590–3472.765) and GPR (1243.307–1334.919). Through experimentation, the dropout method was found to reduce overfitting significantly. Furthermore, a feature analysis with SHapley Additive exPlanations (SHAP) identified the most important weather variables: temperature, wind speed, and rainfall duration and amount had a positive effect on the model. Thus, the proposed approach could aid in the implementation of energy policies, because accurate predictions of energy consumption can serve as system fault detection and diagnosis for buildings.
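
    A minimal sketch of the recipe above, assuming Keras and synthetic stand-ins for the MET weather features; the feature set, window length, dropout rate, and network size are illustrative assumptions rather than the paper's actual configuration.

    import numpy as np
    from tensorflow import keras

    n_days, window, n_features = 400, 7, 4  # temp, wind, rain duration, rain amount
    rng = np.random.default_rng(1)
    weather = rng.normal(size=(n_days, n_features))  # stand-in for MET forecasts
    energy = weather @ np.array([3.0, 1.0, 0.5, 0.5]) + rng.normal(size=n_days)

    # Map each week of weather to the next day's energy use (day-ahead setup).
    X = np.stack([weather[i:i + window] for i in range(n_days - window)])
    y = energy[window:]

    model = keras.Sequential([
        keras.layers.Input(shape=(window, n_features)),
        keras.layers.LSTM(64),
        keras.layers.Dropout(0.2),  # the regularisation the abstract credits
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=10, validation_split=0.2, verbose=0)

    A SHAP analysis of the trained model would then rank the weather inputs, as the abstract describes.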

    Demand Side Management in the Smart Grid


    A framework for evaluating the impact of communication on performance in large-scale distributed urban simulations

    A primary motivation for employing distributed simulation is to enable the execution of large-scale simulation workloads that cannot be handled by the resources of a single stand-alone computing node. To make execution possible, the workload is distributed among multiple computing nodes connected to one another via a communication network. The execution of a distributed simulation involves alternating phases of computation and communication to coordinate the co-operating nodes and ensure correctness of the resulting simulation outputs. Reliably estimating the execution performance of a distributed simulation can be difficult because of the non-deterministic execution paths involved in alternating computation and communication operations. However, performance estimates are useful as a guide to the simulation time that can be expected on a given set of computing resources, and they can support decisions to commit time and resources to running distributed simulations, especially where significant funds or computing resources are necessary. Various performance estimation approaches are employed in the distributed computing literature, including the influential Bulk Synchronous Parallel (BSP) and LogP models. Different approaches make different assumptions that render them more suitable for some applications than for others, and actual performance depends on characteristics inherent to each distributed simulation application. An important aspect of these characteristics is the dynamic relationship between the communication and computation phases of the application. This work develops a framework for estimating the performance of distributed simulation applications, focusing mainly on aspects relevant to that dynamic relationship during execution. The framework proposes a meta-simulation approach based on the Multi-Agent Simulation (MAS) paradigm. Using this approach, meta-simulations can be developed to investigate the performance of specific distributed simulation applications and to compare various what-if scenarios, which is useful for assessing the effects of parameters and strategies such as the number of computing nodes, the communication strategy, and the workload-distribution strategy. The proposed meta-simulation approach can also aid the search for optimal parameters and strategies for specific distributed simulation applications. The framework is demonstrated by implementing a meta-simulation based on case studies from the Urban Simulation domain.
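
    As a concrete anchor for the estimation approaches mentioned above, the BSP model charges each superstep the slowest node's computation w, plus a per-word communication cost g times the largest message volume h, plus the barrier synchronisation latency L. A toy cost calculator, with made-up numbers:

    # Hedged sketch of the BSP cost model cited in the abstract; the node
    # counts, work figures, and g/L parameters below are invented.
    def bsp_cost(supersteps, g, L):
        """supersteps: list of supersteps, each a list of (work, messages) per node."""
        total = 0.0
        for step in supersteps:
            w = max(work for work, _ in step)   # slowest node's computation
            h = max(msgs for _, msgs in step)   # largest communication volume
            total += w + g * h + L              # per-superstep BSP charge
        return total

    # Two supersteps on three nodes: (compute units, words sent/received).
    steps = [[(100, 10), (120, 8), (90, 12)],
             [(80, 20), (85, 18), (95, 15)]]
    print(bsp_cost(steps, g=2.0, L=50.0))  # estimated total execution cost

    Closed-form models like this bake their assumptions in up front; the framework's meta-simulation approach instead simulates the dynamic interplay of the computation and communication phases for a specific application.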

    Transactions and data management in NoSQL cloud databases

    NoSQL databases have become the preferred option for storing and processing data in cloud computing, as they are capable of providing high data availability, scalability and efficiency. But in order to achieve these attributes, NoSQL databases make certain trade-offs. First, NoSQL databases cannot guarantee strong consistency of data; they guarantee only a weaker consistency, based on the eventual consistency model. Second, NoSQL databases adopt a simple data model, which makes it easy for data to be scaled across multiple nodes. Third, NoSQL databases do not support table joins and referential integrity, which, by implication, means they cannot implement complex queries. The combination of these factors implies that NoSQL databases cannot support transactions. Motivated by these crucial issues, this thesis investigates transactions and data management in NoSQL databases. It presents a novel approach that implements transactional support for NoSQL databases in order to ensure stronger data consistency and provide an appropriate level of performance. The novelty lies in the design of a Multi-Key transaction model that guarantees the standard properties of transactions in order to ensure stronger consistency and integrity of data. The model is implemented in a novel loosely-coupled architecture that separates the implementation of transactional logic from the underlying data, thus ensuring transparency and abstraction in cloud and NoSQL databases. The proposed approach is validated through the development of a prototype system on a real MongoDB system. An extended version of the standard Yahoo! Cloud Serving Benchmark (YCSB) has been used to test and evaluate the proposed approach. Various experiments have been conducted and sets of results generated. The results show that the proposed approach meets the research objectives: it maintains stronger consistency of cloud data as well as an appropriate level of reliability and performance.
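
    For illustration only: the sketch below shows the behaviour a Multi-Key transaction must provide (all-or-nothing updates across several keys), expressed here with pymongo's native session/transaction API (MongoDB 4.0+ on a replica set) rather than the thesis's own loosely-coupled transaction layer; the connection URI and document schema are assumptions.

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # assumed URI
    accounts = client.bank.accounts  # hypothetical collection of account documents

    def transfer(src, dst, amount):
        """Debit one key and credit another atomically, or roll both back."""
        with client.start_session() as session:
            with session.start_transaction():
                accounts.update_one({"_id": src}, {"$inc": {"balance": -amount}},
                                    session=session)
                accounts.update_one({"_id": dst}, {"$inc": {"balance": amount}},
                                    session=session)
            # Leaving the transaction block commits; an exception aborts it.

    transfer("alice", "bob", 25)

    When the thesis was written, this kind of multi-key atomicity was exactly what eventually consistent NoSQL stores lacked, which is what its separate transaction layer supplies.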

    Evaluation of cloud computing modelling tools: simulators and predictive models

    Experimenting with novel algorithms and configurations for the automatic management of Cloud Computing infrastructures is expensive and time consuming on real systems. Cloud computing delivers the benefits of virtualisation techniques to data centres, replacing physical servers for customers; however, it remains costly and complex for researchers to run and repeat experiments on a real data centre. To avoid this difficulty, researchers use a range of modelling tools, including simulators, emulators, mathematical models, statistical models and benchmarks, yet it is still difficult to choose the best tool for evaluating a given piece of research. This research investigates the level of accuracy of well-known simulators in the field of cloud computing. Simulation tools are generally developed for particular experiments, so there is little assurance that using them with different workloads will be reliable. Moreover, because the simulators studied lack sufficient accuracy, a predictive model based on a data set from a realistic data centre is delivered as an alternative. This work therefore addresses the problem of investigating the accuracy of different modelling tools by developing and validating a procedure based on the performance of a target micro data centre. The key insights and contributions are: evaluating three alternative models of real Cloud Computing infrastructure, showing the level of accuracy of the selected simulation tools; developing and validating a predictive model based on a Raspberry Pi small-scale data centre; and showing that predictive models based on Linear Regression and Artificial Neural Networks, trained on a data set drawn from the Raspberry Pi Cloud infrastructure, provide better accuracy.
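
    A minimal sketch of the kind of comparison the thesis describes, assuming scikit-learn and a synthetic stand-in for the Raspberry Pi measurements; the features and target below are invented for illustration.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(2)
    X = rng.uniform(0, 1, size=(500, 2))  # e.g. request rate, active node count
    y = 3 * X[:, 0] + np.sin(4 * X[:, 1]) + rng.normal(0, 0.1, 500)  # response time

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for name, model in [("Linear Regression", LinearRegression()),
                        ("Neural Network", MLPRegressor(hidden_layer_sizes=(32,),
                                                        max_iter=2000,
                                                        random_state=0))]:
        model.fit(X_tr, y_tr)
        rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
        print(f"{name}: RMSE={rmse:.3f}")  # lower is a more accurate predictor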