Multi-objective particle swarm optimization algorithm for multi-step electric load forecasting
As energy saving becomes increasingly important, electric load forecasting has played a more and more crucial role in power management systems in recent years. Because of the real-time characteristic of electricity and the uncertain variation of an electric load, achieving both accuracy and stability in electric load forecasting is a challenging task. Many predecessors have obtained the expected forecasting results by various methods. Considering the stability of time series prediction, a novel combined electric load forecasting model, based on the extreme learning machine (ELM), a recurrent neural network (RNN), and support vector machines (SVMs), was proposed. The combined model first uses the three base models to forecast the electric load data separately; then, because any single model has inevitable disadvantages, it applies the multi-objective particle swarm optimization algorithm (MOPSO) to optimize the combination parameters. To verify the capacity of the proposed combined model, 1-step, 2-step, and 3-step forecasts are produced for the electric load data of three Australian states: New South Wales, Queensland, and Victoria. The experimental results indicate that, on all three datasets, the combined model outperforms the three individual models used for comparison, demonstrating its superior accuracy and stability.
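The combination idea in this abstract can be sketched in a few lines: each base model produces its own forecast, and the combined forecast is a convex weighted average whose weights trade off accuracy against stability. The sketch below is a simplification under stated assumptions: the forecasts are made-up numbers (the paper's base models are ELM, RNN, and SVM), and a coarse grid search over a scalarised objective stands in for MOPSO, which would instead return a Pareto front over the two objectives.

```python
# Hedged sketch: convex combination of three base forecasts, with weights
# chosen to balance accuracy (MAE) and stability (error variance).
# All numbers are illustrative, not from the paper.

def combine(forecasts, weights):
    """Convex combination of the base models' forecasts at each step."""
    return [sum(w * f[t] for w, f in zip(weights, forecasts))
            for t in range(len(forecasts[0]))]

def mae(pred, actual):
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

def variance(errors):
    m = sum(errors) / len(errors)
    return sum((e - m) ** 2 for e in errors) / len(errors)

# Hypothetical 1-step-ahead forecasts from the three base models.
elm = [101.0, 98.0, 105.0, 110.0]
rnn = [ 99.0, 97.5, 106.0, 108.0]
svm = [102.0, 99.0, 104.0, 111.0]
actual = [100.0, 98.0, 105.5, 109.0]

# Grid search over convex weights; MOPSO would explore this space
# without scalarising the two objectives into one score.
best = None
steps = 10
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        w = (i / steps, j / steps, (steps - i - j) / steps)
        pred = combine((elm, rnn, svm), w)
        errs = [p - a for p, a in zip(pred, actual)]
        score = mae(pred, actual) + variance(errs)
        if best is None or score < best[0]:
            best = (score, w)

print("best weights:", best[1])
```

The 2-step and 3-step experiments in the abstract would repeat the same weighting over longer forecast horizons.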
Effect of Schedule Compression on Project Effort
Schedule pressure is often faced by project managers and software developers who want to quickly deploy
information systems. Typical strategies to compress project time scales might include adding more
staff/personnel, investing in development tools, improving hardware, or improving development methods. The
tradeoff between cost, schedule, and performance is one of the most important analyses performed during the
planning stages of software development projects. In order to adequately compare the effects of these three
constraints on the project it is essential to understand their individual influence on the project’s outcome.
In this paper, we present an investigation into the effect of schedule compression on software project
development effort and cost and show that people are generally optimistic when estimating the amount of
schedule compression. This paper is divided into three sections. First, we apply the Ideal Effort
Multiplier (IEM) analysis to the SCED cost driver of the COCOMO II model. Second, we compare the actual
schedule compression ratios exhibited by 161 industry projects with the ratios represented by the SCED
cost driver. Finally, based on the above analysis, we introduce a set of newly proposed SCED driver
ratings for COCOMO II, which improve the model's estimation accuracy by 6%.
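The mechanism the abstract studies can be made concrete: in COCOMO II, schedule compression enters the effort estimate through the SCED (Required Development Schedule) effort multiplier. The sketch below uses the published COCOMO II.2000 calibration constants and SCED multipliers for illustration; the scale-factor sum is a hypothetical value, and the paper's newly proposed SCED ratings are not reproduced here.

```python
# Hedged sketch of how the SCED cost driver enters the COCOMO II
# post-architecture effort equation: PM = A * Size^E * product(EM_i).
# Constants are the published COCOMO II.2000 values; the scale-factor
# sum below is a made-up illustrative input.

A = 2.94                  # multiplicative calibration constant
B = 0.91                  # exponent base
scale_factor_sum = 16.0   # hypothetical sum of the five scale factors

# Published SCED effort multipliers (schedule compression / stretch-out).
SCED = {"very_low": 1.43, "low": 1.14, "nominal": 1.00,
        "high": 1.00, "very_high": 1.00}

def effort_pm(ksloc, sced_rating, other_em=1.0):
    """Estimated effort in person-months for a project of `ksloc` KSLOC."""
    E = B + 0.01 * scale_factor_sum
    return A * ksloc ** E * SCED[sced_rating] * other_em

nominal = effort_pm(100, "nominal")
compressed = effort_pm(100, "very_low")   # heavily compressed schedule
print(f"nominal: {nominal:.1f} PM, compressed: {compressed:.1f} PM "
      f"(+{100 * (compressed / nominal - 1):.0f}%)")
```

The 43% effort penalty at the "very low" rating is exactly the kind of tradeoff the IEM analysis in the paper re-examines against the 161 industry projects.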
Neighborhood VAR: Efficient estimation of multivariate timeseries with neighborhood information
In data science, vector autoregression (VAR) models are popular in modeling
multivariate time series in the environmental sciences and other applications.
However, these models are computationally complex with the number of parameters
scaling quadratically with the number of time series.
In this work, we propose a so-called neighborhood vector autoregression
(NVAR) model to efficiently analyze large-dimensional multivariate time series.
We assume that the time series have underlying neighborhood relationships
among them, e.g., spatial or network, arising from the inherent setting of
the problem. When this neighborhood information is available or can be summarized
using a distance matrix, we demonstrate that our proposed NVAR method provides
a computationally efficient and theoretically sound estimation of model
parameters. The performance of the proposed method is compared with
existing approaches in both simulation studies and a real application to a
stream nitrogen study.
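The computational saving described above can be illustrated directly: in a VAR(1) model x_t = A x_{t-1} + e_t, the full coefficient matrix A has p^2 entries, but if a distance matrix identifies each series' neighbors, entries outside each neighborhood can be fixed at zero and each row estimated from its neighbors only. The distance matrix and radius below are made-up illustrative values, not from the paper.

```python
# Hedged sketch of the neighborhood-VAR idea: restrict the VAR(1)
# coefficient matrix to a neighborhood support set derived from a
# distance matrix, shrinking the parameter count from p^2 to the
# total neighborhood size. All inputs are illustrative.

p = 6  # number of time series (e.g., monitoring stations)

# Hypothetical pairwise distances between series.
dist = [[abs(i - j) for j in range(p)] for i in range(p)]
radius = 1  # series within this distance count as neighbors (incl. self)

# Support set: for each equation i, the lags that may enter it.
support = [[j for j in range(p) if dist[i][j] <= radius] for i in range(p)]

full_params = p * p
nvar_params = sum(len(s) for s in support)
print(f"full VAR(1): {full_params} parameters, NVAR: {nvar_params}")
# prints: full VAR(1): 36 parameters, NVAR: 16
```

With a bounded neighborhood size, the parameter count grows linearly in p rather than quadratically, which is what makes large-dimensional estimation tractable.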
Monadic Deep Learning
The Java and Scala community has built a very successful big data ecosystem.
However, most neural networks running on it are modeled in dynamically typed
programming languages. These dynamically typed deep learning frameworks treat
neural networks as differentiable expressions that contain many trainable
variables, and perform automatic differentiation on those expressions when
training them.
Until 2019, none of the deep learning frameworks in statically typed languages
provided the expressive power of traditional frameworks. Their users could not
implement custom algorithms without writing plenty of boilerplate code for
hard-coded back-propagation.
We solved this problem in DeepLearning.scala 2. Our contributions are:
1. We discovered a novel approach to perform automatic differentiation in
reverse mode for statically typed functions that contain multiple trainable
variables, and that can interoperate freely with the metalanguage.
2. We designed a set of monads and monad transformers, which allow users to
create monadic expressions that represent dynamic neural networks.
3. Along with these monads, we provide some applicative functors, to perform
multiple calculations in parallel.
With these features, users of DeepLearning.scala were able to create complex
neural networks in an intuitive and concise way, while still maintaining type
safety.
Comment: 27 pages, 7 figures, 3 tables
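The core mechanism in contribution 1, reverse-mode automatic differentiation over an expression with multiple trainable variables, can be illustrated with a minimal tape. DeepLearning.scala wraps this in statically typed monadic expressions in Scala; the untyped Python sketch below shows only the underlying differentiation mechanism, not the paper's monadic design.

```python
# Hedged sketch: reverse-mode automatic differentiation over an
# expression tree with multiple trainable variables. Each operation
# records its parents and local gradients; the backward pass
# accumulates gradients by the chain rule. (The naive recursion can
# revisit shared subexpressions; a real implementation topologically
# sorts the tape.)

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # (parent, local_gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate the incoming adjoint, then propagate to parents.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# One expression, two trainable variables: y = w1 * x + w1 * w2
w1, w2, x = Var(3.0), Var(4.0), Var(2.0)
y = w1 * x + w1 * w2
y.backward()
print(y.value, w1.grad, w2.grad)  # 18.0, dy/dw1 = x + w2 = 6.0, dy/dw2 = 3.0
```

Note how w1's gradient correctly sums contributions from both terms; the paper's monads make the same tape construction type-safe and composable with ordinary Scala control flow.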