Localisation of mobile nodes in wireless networks with correlated in time measurement noise
Wireless sensor networks are an inherent part of decision-making, object-tracking and location-awareness systems. This work focuses on the simultaneous localisation of mobile nodes based on received signal strength indicators (RSSIs) with time-correlated measurement noise. Two approaches to dealing with the correlated measurement noise are proposed within the framework of auxiliary particle filtering: the first augments the state vector with the noise, and the second performs noise decorrelation. The performance of the two proposed multi-model auxiliary particle filters (MM AUX-PFs) is validated over simulated and real RSSIs, and high localisation accuracy is demonstrated.
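The noise-decorrelation idea mentioned in the abstract can be illustrated with a short sketch. Assuming the measurement noise follows a first-order autoregressive model v[k] = rho * v[k-1] + w[k] with white w[k] (the AR(1) model and the value of `rho` are illustrative assumptions, not details taken from the abstract), subtracting rho times the previous measurement yields a pseudo-measurement whose noise is white:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed AR(1) measurement noise: v[k] = rho * v[k-1] + w[k], w white.
rho, n = 0.9, 20000
w = rng.normal(0.0, 1.0, n)
v = np.empty(n)
v[0] = w[0]
for k in range(1, n):
    v[k] = rho * v[k - 1] + w[k]

# True RSSI measurement function h(x_k); a constant stands in for it here.
h = -60.0
y = h + v  # raw measurements with time-correlated noise

# Decorrelated pseudo-measurement: z[k] = y[k] - rho * y[k-1]
# carries noise w[k] = v[k] - rho * v[k-1], which is white.
z = y[1:] - rho * y[:-1]
z_noise = z - (1.0 - rho) * h  # strip the deterministic part

def lag1_autocorr(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

print(lag1_autocorr(v))        # close to rho
print(lag1_autocorr(z_noise))  # close to zero
```

In a particle filter the decorrelated z[k] simply replaces y[k] in the likelihood, so standard white-noise machinery applies unchanged.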
Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications
Wireless sensor networks monitor dynamic environments that change rapidly
over time. This dynamic behavior is either caused by external factors or
initiated by the system designers themselves. To adapt to such conditions,
sensor networks often adopt machine learning techniques to eliminate the need
for unnecessary redesign. Machine learning also inspires many practical
solutions that maximize resource utilization and prolong the lifespan of the
network. In this paper, we present an extensive literature review over the
period 2002-2013 of machine learning methods that were used to address common
issues in wireless sensor networks (WSNs). The advantages and disadvantages of
each proposed algorithm are evaluated against the corresponding problem. We
also provide a comparative guide to aid WSN designers in developing suitable
machine learning solutions for their specific application challenges.
Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
ARBEx: Attentive Feature Extraction with Reliability Balancing for Robust Facial Expression Learning
In this paper, we introduce ARBEx, a novel attentive feature extraction
framework driven by a Vision Transformer with reliability balancing to
cope against poor class distributions, bias, and uncertainty in the facial
expression learning (FEL) task. We reinforce several data pre-processing and
refinement methods along with a window-based cross-attention ViT to squeeze the
best of the data. We also employ learnable anchor points in the embedding space
with label distributions and multi-head self-attention mechanism to optimize
performance against weak predictions with reliability balancing, which is a
strategy that leverages anchor points, attention scores, and confidence values
to enhance the resilience of label predictions. To ensure correct label
classification and improve the model's discriminative power, we introduce
anchor loss, which encourages large margins between anchor points.
Additionally, the multi-head self-attention mechanism, which is also trainable,
plays an integral role in identifying accurate labels. This approach provides
critical elements for improving the reliability of predictions and has a
substantial positive effect on final prediction capabilities. Our adaptive
model can be integrated with any deep neural network to forestall challenges in
various recognition tasks. Our strategy outperforms current state-of-the-art
methodologies, according to extensive experiments conducted in a variety of
contexts.
Comment: 10 pages, 7 figures. Code: https://github.com/takihasan/ARBE
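The abstract describes the anchor loss only as encouraging large margins between anchor points in the embedding space. One plausible form, a hinge on pairwise anchor distances (the exact formulation and the `margin` value are assumptions, not the authors' published loss), can be sketched as:

```python
import numpy as np

def anchor_loss(anchors: np.ndarray, margin: float = 1.0) -> float:
    """Penalise pairs of class anchors closer than `margin`.

    anchors: (num_classes, dim) learnable anchor points in the embedding
    space. This hinge form is one plausible reading of "encourages large
    margins between anchor points", not the paper's exact definition.
    """
    diff = anchors[:, None, :] - anchors[None, :, :]  # (C, C, dim)
    dist = np.linalg.norm(diff, axis=-1)              # pairwise distances
    c = anchors.shape[0]
    iu = np.triu_indices(c, k=1)                      # unique pairs i < j
    return float(np.maximum(0.0, margin - dist[iu]).mean())

# Well-separated anchors incur no penalty; coincident anchors incur
# the full margin.
far = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
near = np.zeros((3, 2))
print(anchor_loss(far), anchor_loss(near))
```

In training, such a term would be added to the classification loss so gradients push the learnable anchors apart while embeddings cluster around them.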
Synthetic Aperture Radar (SAR) Meets Deep Learning
This reprint focuses on applications that combine synthetic aperture radar with deep learning technology, and aims to further promote the development of intelligent SAR image interpretation. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day, all-weather imaging capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in remote sensing, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to tackle these significant challenges and present innovative, cutting-edge results on applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews and technical reports.
Shifting Perspective to See Difference: A Novel Multi-View Method for Skeleton based Action Recognition
Skeleton-based human action recognition is a longstanding challenge due to
its complex dynamics. Some fine-grained details of the dynamics play a vital
role in classification. The existing work largely focuses on designing
incremental neural networks with more complicated adjacency matrices to
capture the details of joint relationships. However, they still have difficulties distinguishing
actions that have broadly similar motion patterns but belong to different
categories. Interestingly, we found that the subtle differences in motion
patterns can be significantly amplified and become easy for an audience to
distinguish through specific view directions, a property that has not been
fully explored before. Drastically different from previous work, we boost the
performance by proposing a conceptually simple yet effective Multi-view
strategy that recognizes actions from a collection of dynamic view features.
Specifically, we design a novel Skeleton-Anchor Proposal (SAP) module which
contains a Multi-head structure to learn a set of views. For feature learning
of different views, we introduce a novel Angle Representation to transform the
actions under different views and feed the transformations into the baseline
model. Our module can work seamlessly with the existing action classification
model. Incorporated with baseline models, our SAP module exhibits clear
performance gains on many challenging benchmarks. Moreover, comprehensive
experiments show that our model consistently outperforms the state of the art
and remains effective and robust, especially when dealing with corrupted data.
Related code will be available at https://github.com/ideal-idea/SAP
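The multi-view idea, re-expressing the same skeleton under several view directions before feeding it to a baseline classifier, can be sketched as a rigid rotation of joint coordinates. The z-axis rotation and the fixed angles below are illustrative assumptions; in the paper the views would be learned by the SAP module's multi-head structure, and the exact Angle Representation is not reproduced here.

```python
import numpy as np

def rotation_z(theta: float) -> np.ndarray:
    """Rotation about the z-axis; a stand-in for one view direction."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def multi_view(skeleton: np.ndarray, angles) -> np.ndarray:
    """Render one skeleton of shape (joints, 3) under several view angles.

    Returns (views, joints, 3). Here the angles are fixed; in the paper's
    setting they would be learned jointly with the classifier.
    """
    return np.stack([skeleton @ rotation_z(a).T for a in angles])

# A toy 3-joint skeleton viewed from three directions.
skel = np.array([[0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.5, 1.5, 0.0]])
views = multi_view(skel, [0.0, np.pi / 4, np.pi / 2])

# Rigid rotations change coordinates but preserve bone lengths, so each
# view is the same action seen from a different direction.
bone = lambda s: np.linalg.norm(s[1] - s[0])
print([round(bone(v), 6) for v in views])
```

Each rotated copy can then be passed through the same baseline model and the per-view features aggregated, which is the essence of recognising actions "from a collection of dynamic view features".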