Forecasting bus passenger flows by using a clustering-based support vector regression approach
As a significant component of the intelligent transportation system, forecasting bus passenger
flows plays a key role in resource allocation, network planning, and frequency setting. However, it remains
challenging to recognize high fluctuations, nonlinearity, and periodicity of bus passenger flows due to
varied destinations and departure times. For this reason, a novel forecasting model named affinity
propagation-based support vector regression (AP-SVR) is proposed based on clustering and nonlinear
simulation. In the proposed approach, a clustering algorithm is first used to generate clustering-based
intervals. A support vector regression (SVR) is then exploited to forecast the passenger flow for each
cluster, with particle swarm optimization (PSO) used to obtain the optimized parameters. Finally,
the prediction results of the SVR are rearranged in chronological order. The proposed model
is tested using real bus passenger data from a bus line over four months. Experimental results demonstrate
that the proposed model performs better than other peer models in terms of absolute percentage error and
mean absolute percentage error. The results indicate that a deterministic clustering technique with stable
cluster results (AP) can improve the forecasting performance significantly.
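The cluster-then-regress pipeline in the abstract can be sketched as below. This is a hedged illustration on a synthetic series, not the authors' implementation: the paper tunes SVR parameters with PSO and uses real bus passenger counts, whereas here fixed SVR parameters and a toy periodic signal stand in for both.

```python
# Hedged sketch of the AP-SVR pipeline: cluster intervals, fit one SVR
# per cluster, then reassemble predictions in chronological order.
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Toy passenger-flow series: a daily periodic pattern plus noise.
t = np.arange(200)
flow = 50 + 30 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

# Step 1: affinity propagation groups time intervals with similar flow levels.
labels = AffinityPropagation(random_state=0).fit_predict(flow.reshape(-1, 1))

# Step 2: fit one SVR per cluster (fixed C and gamma instead of a PSO search).
preds = np.empty_like(flow)
for c in np.unique(labels):
    idx = np.where(labels == c)[0]
    model = SVR(C=10.0, gamma="scale").fit(idx.reshape(-1, 1), flow[idx])
    # Step 3: write each cluster's fit back at its original time indices,
    # i.e. the chronological rearrangement of the per-cluster outputs.
    preds[idx] = model.predict(idx.reshape(-1, 1))

mape = np.mean(np.abs((flow - preds) / flow)) * 100
print(f"in-sample MAPE: {mape:.2f}%")
```

The chronological rearrangement falls out naturally here because each cluster's predictions are written back at the time indices its members came from.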
A linear approach for sparse coding by a two-layer neural network
Many approaches that transform classification problems from non-linear to
linear by feature transformation have recently been presented in the
literature. These notably include sparse coding methods and deep neural
networks. However, many of these approaches require the repeated application of
a learning process upon the presentation of unseen data input vectors, or else
involve the use of large numbers of parameters and hyper-parameters, which must
be chosen through cross-validation, thus increasing running time dramatically.
In this paper, we propose and experimentally investigate a new approach for the
purpose of overcoming limitations of both kinds. The proposed approach makes
use of a linear auto-associative network (called SCNN) with just one hidden
layer. The combination of this architecture with a specific error function to
be minimized enables one to learn a linear encoder computing a sparse code
which turns out to be as similar as possible to the sparse coding that one
obtains by re-training the neural network. Importantly, the linearity of SCNN
and the choice of the error function allow one to achieve reduced running time
in the learning phase. The proposed architecture is evaluated on the basis of
two standard machine learning tasks. Its performance is compared with that
of recently proposed non-linear auto-associative neural networks. The overall
results suggest that linear encoders can be profitably used to obtain sparse
data representations in the context of machine learning problems, provided that
an appropriate error function is used during the learning phase.
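The idea of a linear single-hidden-layer auto-associator trained with a sparsity-inducing error function can be sketched as follows. The abstract does not give SCNN's actual loss, so an L1 penalty on the code is used here as an assumed stand-in, and the data are synthetic.

```python
# Hedged sketch: a linear encoder/decoder pair (one hidden layer, no
# activation) trained by gradient descent on reconstruction error plus
# an L1 sparsity penalty on the code. The L1 term is an assumption, not
# the paper's specific error function.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 16))          # toy data: 256 samples, 16 features
n_hidden, lam, lr = 32, 0.05, 0.01

W_enc = rng.normal(scale=0.1, size=(16, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, 16))

for _ in range(500):
    H = X @ W_enc                       # linear code (no nonlinearity)
    R = H @ W_dec                       # linear reconstruction
    err = R - X
    # Gradients of mean(0.5*||R - X||^2 + lam*||H||_1) over the batch.
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T + lam * np.sign(H)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

code = X @ W_enc
sparsity = np.mean(np.abs(code) < 1e-2)
print(f"fraction of near-zero code entries: {sparsity:.2f}")
```

Because the encoder is a single matrix multiply, computing the code for unseen inputs is one linear map, which is the running-time advantage the abstract attributes to linearity.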
Evolving spiking neural networks for temporal pattern recognition in the presence of noise
Nervous systems of biological organisms use temporal patterns of spikes to encode sensory input, but the mechanisms that underlie the recognition of such patterns are unclear. In the present work, we explore how networks of spiking neurons can be evolved to recognize temporal input patterns without being able to adjust signal conduction delays. We evolve the networks with GReaNs, an artificial life platform that encodes the topology of the network (and the weights of connections) in a fashion inspired by the encoding of gene regulatory networks in biological genomes. The number of computational nodes or connections is not limited in GReaNs, but here we limit the size of the networks to analyze the functioning of the networks and the effect of network size on the evolvability of robustness to noise. Our results show that even very small networks of spiking neurons can perform temporal pattern recognition in the presence of input noise.
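GReaNs itself is not reproduced here, but the kind of unit such evolved networks are built from can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron: temporal structure matters because closely spaced input spikes summate across the leak, while the same number of spikes spread out in time does not reach threshold. All parameter values below are illustrative assumptions.

```python
# Minimal LIF neuron sketch: membrane potential leaks toward zero and is
# driven by weighted input spikes; crossing threshold emits an output spike.
import numpy as np

def lif_run(input_spikes, tau=10.0, v_thresh=1.0, w=0.45, dt=1.0):
    """Simulate one LIF neuron; returns output spike times (step indices)."""
    v, out = 0.0, []
    for t, s in enumerate(input_spikes):
        v += dt * (-v / tau) + w * s    # leak plus weighted input spike
        if v >= v_thresh:               # threshold crossing emits a spike
            out.append(t)
            v = 0.0                     # reset after spiking
    return out

# Three closely spaced spikes summate and fire the neuron; the same three
# spikes spread out in time leak away before reaching threshold.
burst = np.zeros(50); burst[[10, 12, 14]] = 1
sparse = np.zeros(50); sparse[[10, 25, 40]] = 1
print(lif_run(burst), lif_run(sparse))   # → [14] []
```

This temporal-summation sensitivity is what lets a network of such units discriminate input spike patterns without adjustable conduction delays.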