Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications
Wireless sensor networks monitor dynamic environments that change rapidly
over time. This dynamic behavior is either caused by external factors or
initiated by the system designers themselves. To adapt to such conditions,
sensor networks often adopt machine learning techniques to avoid
unnecessary redesign. Machine learning also inspires many practical
solutions that maximize resource utilization and prolong the lifespan of the
network. In this paper, we present an extensive literature review over the
period 2002-2013 of machine learning methods that were used to address common
issues in wireless sensor networks (WSNs). The advantages and disadvantages of
each proposed algorithm are evaluated against the corresponding problem. We
also provide a comparative guide to aid WSN designers in developing suitable
machine learning solutions for their specific application challenges.
Comment: Accepted for publication in IEEE Communications Surveys and Tutorials.
ATDN vSLAM: An all-through Deep Learning-Based Solution for Visual Simultaneous Localization and Mapping
In this paper, a novel solution is introduced for visual Simultaneous
Localization and Mapping (vSLAM) that is built up of Deep Learning components.
The proposed architecture is a highly modular framework in which each component
offers state-of-the-art results in its respective field of vision-based deep
learning. The paper shows that, with the synergistic integration of these
individual building blocks, a functioning and efficient all-through deep neural
(ATDN) vSLAM system can be created. The Embedding Distance Loss function is
introduced, and the ATDN architecture is trained with it. The resulting system
achieved a 4.4% translation error and a 0.0176 deg/m rotational error on a
subset of the KITTI dataset. The proposed architecture can be used for
efficient, low-latency database creation to aid autonomous driving (AD), as
well as a basis for autonomous vehicle (AV) control.
Comment: Published in Periodica Polytechnica Electrical Engineering, 11 pages.
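Odometry errors of the kind reported above (translation in %, rotation in deg/m) are typically derived from relative pose differences between the estimated and ground-truth trajectories. The following is a minimal sketch of such a relative pose error, not the authors' code; the official KITTI metric additionally averages these quantities over subsequences of 100 to 800 m. Poses are assumed to be 4x4 homogeneous matrices mapping camera to world coordinates.

```python
import numpy as np

def relative_pose_error(T_gt_a, T_gt_b, T_est_a, T_est_b):
    """Translation (metres) and rotation (radians) error of the motion
    between frames a and b, comparing estimate against ground truth."""
    dT_gt = np.linalg.inv(T_gt_a) @ T_gt_b      # true relative motion
    dT_est = np.linalg.inv(T_est_a) @ T_est_b   # estimated relative motion
    E = np.linalg.inv(dT_gt) @ dT_est           # residual transform
    t_err = np.linalg.norm(E[:3, 3])            # translation magnitude
    cos_a = (np.trace(E[:3, :3]) - 1.0) / 2.0   # rotation angle from trace
    r_err = np.arccos(np.clip(cos_a, -1.0, 1.0))
    return t_err, r_err
```

Dividing these per-pair errors by the path length travelled between the two frames yields the percentage and deg/m figures quoted in such benchmarks.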
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
PL-SLAM: a stereo SLAM system through the combination of points and line segments
Traditional approaches to stereo visual simultaneous localization and mapping (SLAM) rely on point features to estimate the camera trajectory and build a map of the environment. In low-textured environments, however, it is often difficult to find a sufficient number of reliable point features and, as a consequence, the performance of such algorithms degrades. This paper proposes PL-SLAM, a stereo visual SLAM system that combines both points and line segments to work robustly in a wider variety of scenarios, particularly those where point features are scarce or poorly distributed in the image. PL-SLAM leverages both points and line segments at every stage of the process: visual odometry, keyframe selection, bundle adjustment, etc. We also contribute a loop-closure procedure based on a novel bag-of-words approach that exploits the combined descriptive power of the two kinds of features. Additionally, the resulting map is richer and more diverse in three-dimensional elements, which can be exploited to infer valuable high-level scene structures such as planes, empty spaces, and the ground plane (not addressed in this paper). Our proposal has been tested on several popular datasets (such as EuRoC and KITTI) and compared with state-of-the-art methods such as ORB-SLAM2, revealing more robust performance in most of the experiments while still running in real time. An open-source version of the PL-SLAM C++ code has been released for the benefit of the community.
Funding: Ministerio de Economía y Competitividad (DPI2017-84827-R, BES-2015-071606), Junta de Andalucía (TEP2012-530).
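The joint use of points and line segments in the optimization can be illustrated with the two residual types such systems typically minimize: the reprojection error of a 3D point, and the point-to-line distances of a 3D segment's projected endpoints against the detected 2D line. Below is a minimal sketch with hypothetical helper names, not PL-SLAM's actual implementation, assuming a pinhole intrinsic matrix K and a 4x4 pose T mapping world to camera coordinates, and a detected line given in normalized homogeneous form (a, b, c) with a^2 + b^2 = 1.

```python
import numpy as np

def project(K, T, Xw):
    """Project a 3D world point through pose T and intrinsics K."""
    Xc = T[:3, :3] @ Xw + T[:3, 3]   # world -> camera frame
    return (K @ Xc)[:2] / Xc[2]      # pinhole projection to pixels

def point_residual(K, T, Xw, uv_obs):
    """2D reprojection error of a point landmark."""
    return project(K, T, Xw) - uv_obs

def line_residual(K, T, Pw, Qw, line_obs):
    """Signed distances of the projected segment endpoints (Pw, Qw)
    to the observed 2D line (a, b, c), normalized so a^2 + b^2 = 1."""
    return np.array([
        line_obs[:2] @ project(K, T, Xw) + line_obs[2]
        for Xw in (Pw, Qw)
    ])
```

In a bundle-adjustment setting, both residual types would be stacked (with suitable robust weighting) into one least-squares problem over poses and landmarks, which is what allows line features to constrain the solution where points are scarce.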