Online Learning Algorithm for Time Series Forecasting Suitable for Low Cost Wireless Sensor Networks Nodes
Time series forecasting is an important predictive methodology that can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits better use of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus improved energy efficiency. To this end, the paper describes how to implement an Artificial Neural Network (ANN) algorithm on a low-cost system-on-chip to develop an autonomous, intelligent wireless sensor network. The present paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An on-line learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time-series learning. It updates the model with each new datum as it arrives, without storing enormous quantities of data in a historical database as is usual, i.e., without prior knowledge. To validate the approach, a simulation study with a Bayesian baseline model was run against a database from a real application to assess performance and accuracy. The core of the paper is a new algorithm, based on BP, which is described in detail; the challenge was to implement a computationally demanding algorithm on a simple architecture with very few hardware resources.
Comment: 28 pages. Published 21 April 2015 in MDPI's journal "Sensors".
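The sample-by-sample training scheme the abstract describes can be sketched as a minimal on-line back-propagation loop: each new observation triggers one weight update and is then discarded, so no historical database is kept. The network size, learning rate, scaling, and synthetic temperature signal below are illustrative assumptions, not the paper's actual parameters.

```python
import math
import random

# Minimal sketch of on-line BP for one-step-ahead time-series forecasting:
# a tiny one-hidden-layer network is updated from each arriving sample.
# N_IN, N_HID, and LR are assumed values for illustration only.
random.seed(0)
N_IN, N_HID = 3, 4          # sliding-window length and hidden units (assumed)
LR = 0.1                    # learning rate (assumed)

w1 = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
w2 = [random.uniform(-0.5, 0.5) for _ in range(N_HID)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(window):
    hidden = [sigmoid(sum(w * x for w, x in zip(row, window))) for row in w1]
    return hidden, sum(w * h for w, h in zip(w2, hidden))

def online_step(window, target):
    """One BP update from a single new observation (then it is discarded)."""
    hidden, y = forward(window)
    err = y - target
    for j, h in enumerate(hidden):
        dh = err * w2[j] * h * (1.0 - h)   # gradient uses pre-update weight
        w2[j] -= LR * err * h
        for i, x in enumerate(window):
            w1[j][i] -= LR * dh * x
    return err

# Feed a synthetic, temperature-like signal one sample at a time.
series = [20.0 + 2.0 * math.sin(0.3 * t) for t in range(200)]
series = [(v - 18.0) / 6.0 for v in series]   # crude scaling into ~[0, 1]
errs = []
for t in range(N_IN, len(series)):
    errs.append(abs(online_step(series[t - N_IN:t], series[t])))
```

With this setup the prediction error shrinks as samples stream in, which is the behaviour the paper relies on to avoid storing a training set.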
Daytime precipitation estimation using bispectral cloud classification system
Two previously developed Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) algorithms, one incorporating a cloud classification system (PERSIANN-CCS) and one multispectral analysis (PERSIANN-MSA), are integrated and employed to analyze the role of cloud albedo from the Geostationary Operational Environmental Satellite-12 (GOES-12) visible (0.65 μm) channel in supplementing infrared (10.7 μm) data. The integrated technique derives fine-scale (0.04° × 0.04° latitude-longitude every 30 min) rain rates for each grid box through four major steps: 1) segmenting clouds into a number of cloud patches using infrared or albedo images; 2) classifying cloud patches into a number of cloud types using radiative, geometrical, and textural features of each individual cloud patch; 3) dividing each cloud type into a number of subclasses and assigning rain rates to each subclass using a multidimensional histogram-matching method; and 4) associating satellite grid-box information with the appropriate cloud type and subclass to estimate the rain rate at grid scale. The technique was applied over a study region that includes the U.S. landmass east of 115°W. One reference infrared-only and three different bispectral (visible and infrared) rain-estimation scenarios were compared to investigate the technique's ability to address two major drawbacks of infrared-only methods: 1) underestimation of warm rainfall and 2) the inability to screen out no-rain thin cirrus clouds. Radar estimates were used to evaluate the scenarios at a range of temporal (3- and 6-hourly) and spatial (0.04°, 0.08°, 0.12°, and 0.24° latitude-longitude) scales. Overall, the results using daytime data during June-August 2006 indicate that a significant gain over the infrared-only technique is obtained once albedo is used for cloud segmentation followed by bispectral cloud classification and rainfall estimation.
At 3-h, 0.04° resolution, the observed improvement using bispectral information was about 66% for the equitable threat score and 26% for the correlation coefficient. At the coarser 0.24° resolution, the gains were 34% and 32% for the two performance measures, respectively. © 2010 American Meteorological Society
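Step 2 of the four-step chain above, assigning each cloud patch to a cloud type from its extracted features, can be illustrated with a toy nearest-centroid classifier. The cloud types, feature values, centroids, and rain rates below are invented placeholders, not PERSIANN-CCS training data.

```python
import math

# Toy illustration of cloud-patch classification (step 2 above): assign each
# patch to the cloud type whose feature centroid is nearest. Features here are
# (mean brightness temperature in K, patch area, a texture index); all values
# are invented for illustration.
CLOUD_TYPES = {
    "deep_convective": (205.0, 900.0, 0.8),
    "warm_stratiform": (265.0, 400.0, 0.3),
    "thin_cirrus":     (230.0, 150.0, 0.1),
}

def classify_patch(features, scale=(50.0, 500.0, 1.0)):
    """Return the cloud type whose (scaled) centroid is nearest."""
    def dist(centroid):
        return math.sqrt(sum(((f - c) / s) ** 2
                             for f, c, s in zip(features, centroid, scale)))
    return min(CLOUD_TYPES, key=lambda t: dist(CLOUD_TYPES[t]))

# In the real technique each type is further split into subclasses with rain
# rates assigned by histogram matching (step 3); a flat per-type rate stands
# in for that here.
RAIN_RATE = {"deep_convective": 12.0, "warm_stratiform": 2.5, "thin_cirrus": 0.0}

patch = (210.0, 850.0, 0.7)   # a cold, large, highly textured patch
ctype = classify_patch(patch)
print(ctype, RAIN_RATE[ctype])   # a convective type maps to a high rain rate
```

Feature scaling matters here: without dividing by per-feature scales, the patch-area axis would dominate the distance and the temperature and texture features would be ignored.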
Depth Superresolution using Motion Adaptive Regularization
Spatial resolution of depth sensors is often significantly lower than that of conventional optical cameras. Recent work has explored the idea of improving the resolution of depth using higher-resolution intensity images as side information. In this paper, we demonstrate that further incorporating temporal information from videos can significantly improve the results. In particular, we propose a novel approach that improves depth resolution by exploiting the space-time redundancy in the depth and intensity data using motion-adaptive low-rank regularization. Experiments confirm that the proposed approach substantially improves the quality of the estimated high-resolution depth. Our approach can serve as a first component in vision systems that rely on high-resolution depth information.
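The low-rank regularization idea above can be caricatured as singular-value soft thresholding applied to a matrix whose columns are (already motion-aligned) depth patches: shrinking the small singular values suppresses noise while keeping the structure the patches share. The synthetic patch data and the threshold `tau` are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Caricature of low-rank regularization on a group of motion-aligned depth
# patches: stack patches as columns, then soft-threshold the singular values.
def low_rank_shrink(patch_matrix, tau):
    """Singular-value soft thresholding of a patch group."""
    u, s, vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s = np.maximum(s - tau, 0.0)        # shrink; small (noise) values vanish
    return (u * s) @ vt

rng = np.random.default_rng(0)
# A rank-1 "depth" patch group (identical ramps) corrupted by sensor noise.
clean = np.outer(np.linspace(1.0, 2.0, 64), np.ones(8))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = low_rank_shrink(noisy, tau=0.5)

err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(denoised - clean)
```

Because the aligned patches are nearly copies of one another, the signal concentrates in the leading singular value while the noise spreads across all of them, which is why the shrinkage reduces the error.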
Learning-based Image Enhancement for Visual Odometry in Challenging HDR Environments
One of the main open challenges in visual odometry (VO) is robustness to difficult illumination conditions or high dynamic range (HDR) environments. The main difficulties in these situations come both from the limitations of the sensors and from the inability to successfully track interest points, owing to the bold assumptions made in VO, such as brightness constancy. We address this problem from a deep learning perspective: we first fine-tune a Deep Neural Network (DNN) with the purpose of obtaining enhanced representations of the sequences for VO. Then, we demonstrate how inserting Long Short-Term Memory (LSTM) units allows us to obtain temporally consistent sequences, as the estimation depends on previous states. However, very deep networks cannot be inserted into a real-time VO framework; we therefore also propose a Convolutional Neural Network (CNN) of reduced size capable of running faster. Finally, we validate the enhanced representations by evaluating the sequences produced by the two architectures with several state-of-the-art VO algorithms, such as ORB-SLAM and DSO.
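The brightness-constancy assumption mentioned above can be made concrete with a toy patch-matching example: raw intensity differences (SSD) misidentify the true correspondence after a global exposure change, while a normalized score (ZNCC) does not, which is why enhancing the images before tracking helps. The patch values are synthetic and this is not the paper's method, only an illustration of the failure mode it addresses.

```python
# Toy illustration of why brightness constancy breaks interest-point tracking
# in HDR scenes. SSD compares raw intensities; ZNCC normalizes out gain and
# offset. All patch values below are invented.
def ssd(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def zncc(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den

patch = [10.0, 40.0, 80.0, 40.0, 10.0]            # interest-point neighbourhood
same_scene_hdr = [2.0 * x + 15.0 for x in patch]  # same point, exposure change
other = [80.0, 40.0, 10.0, 40.0, 80.0]            # different structure

# SSD scores the wrong patch lower once exposure changes...
print(ssd(patch, same_scene_hdr), ssd(patch, other))
# ...while ZNCC still ranks the true correspondence highest.
print(zncc(patch, same_scene_hdr), zncc(patch, other))
```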