Beam Training and Tracking with Limited Sampling Sets: Exploiting Environment Priors
Beam training and tracking (BTT) are key technologies for millimeter wave communications. However, since the effectiveness of BTT methods heavily depends on the wireless environment, the complexity and randomness of practical environments severely limit the application scope of many BTT algorithms and can even invalidate them. To tackle this issue, in this paper we propose, from the stochastic process (SP) perspective, to model beam directions as an SP and to address the BTT problem via process inference. The benefit of the SP design methodology is that environment priors and uncertainties can be naturally taken into account (e.g., by encoding them into the SP distribution) to improve prediction performance (e.g., accuracy and robustness). We take the Gaussian process (GP) as an example to elaborate on the design methodology and propose novel learning methods to optimize the prediction models. In particular, the beam training subset is optimized based on the derived posterior distribution. The GP-based SP methodology enjoys two advantages. First, good performance can be achieved even with small amounts of data, which is very appealing in dynamic communication scenarios. Second, in contrast to most BTT algorithms that predict only a single beam, our algorithms output an optimizable beam subset, which enables a flexible tradeoff between training overhead and desired performance. Simulation results show the superiority of our approach.
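To make the idea concrete, the following minimal sketch (an illustration only, not the paper's algorithm) fits a Gaussian process to past beam-angle estimates, computes the posterior at the next time slot, and keeps only the codebook beams inside a posterior confidence band as the training subset. The kernel, hyperparameters, codebook spacing, and subset rule are all assumptions made for the example.

```python
# Illustrative sketch: GP prediction of the optimal beam angle over time,
# used to pick a small beam-training subset.  Kernel, noise level, and the
# subset rule are assumptions, not the paper's exact design.
import numpy as np

def rbf_kernel(t1, t2, length=5.0, var=1.0):
    """Squared-exponential kernel over time indices."""
    d = t1[:, None] - t2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(t_train, y_train, t_test, noise=1e-2):
    """Posterior mean/std of the beam angle at future slots given past estimates."""
    K = rbf_kernel(t_train, t_train) + noise * np.eye(len(t_train))
    K_s = rbf_kernel(t_train, t_test)
    K_ss = rbf_kernel(t_test, t_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Past slots and estimated beam angles (degrees), e.g. from earlier beam sweeps.
t_past = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
angle_past = np.array([10.0, 11.5, 13.2, 14.8, 16.1])

mean, std = gp_posterior(t_past, angle_past, np.array([5.0]))

# Train only the beams within one codebook step plus a 2-sigma uncertainty margin;
# widening or narrowing this band trades training overhead against robustness.
codebook = np.arange(0.0, 90.0, 3.0)          # candidate beam directions (deg)
width = 3.0 + 2.0 * std[0]
subset = codebook[np.abs(codebook - mean[0]) <= width]
print("posterior mean %.1f deg, std %.2f deg" % (mean[0], std[0]))
print("beams to train:", subset)
```

The width of the confidence band plays the role of the optimizable beam subset described in the abstract: a larger band means more training overhead but more robustness to prediction error.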
CSI-Free Geometric Symbol Detection via Semi-supervised Learning and Ensemble Learning
Symbol detection (SD) plays an important role in digital communication systems. However, most SD algorithms require channel state information (CSI), which is often difficult to estimate accurately. As a consequence, it is challenging for these SD algorithms to approach the performance of the maximum likelihood detection (MLD) algorithm. To address this issue, in this paper we employ both semi-supervised learning and ensemble learning to design a flexible, parallelizable approach. First, we prove theoretically that the proposed algorithms can arbitrarily approach the performance of the MLD algorithm with perfect CSI. Second, to enable parallel implementation and enhance design flexibility, we further propose a parallelizable approach for multi-output systems. Finally, comprehensive simulation results are provided to demonstrate the effectiveness and superiority of the designed algorithms. In particular, the proposed algorithms approach the performance of the MLD algorithm with perfect CSI and outperform it when the CSI is imperfect. Interestingly, a detector constructed with received signals from only two receiving antennas (fewer than the full receiving antenna array) can also provide good detection performance.
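As a rough illustration of how semi-supervised and ensemble ideas can be combined for CSI-free detection (the constellation, self-training rule, and ensemble size below are assumptions, not the paper's construction), the sketch trains several nearest-centroid detectors on bootstrapped pilot blocks, pseudo-labels the unlabelled payload, refits, and takes a majority vote.

```python
# Illustrative sketch: pilot-labelled samples train an ensemble of nearest-centroid
# detectors (no channel estimate is ever formed); unlabelled samples are
# pseudo-labelled and fed back in one self-training round.
import numpy as np

rng = np.random.default_rng(0)
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # QPSK

# Unknown channel (never estimated by the detector) and received samples.
h = 0.8 * np.exp(1j * 0.6)
tx = rng.integers(0, 4, size=600)
rx = h * constellation[tx] + 0.05 * (rng.normal(size=600) + 1j * rng.normal(size=600))

pilots, labels = rx[:40], tx[:40]          # labelled pilot block
data = rx[40:]                             # unlabelled payload

def centroids(x, y):
    """Per-symbol centroids; fall back to the origin if a class is absent."""
    return np.array([x[y == k].mean() if np.any(y == k) else 0j for k in range(4)])

def detect(x, c):
    """Nearest-centroid decision for each received sample."""
    return np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)

# Ensemble: bootstrap the pilot block, each member does one self-training round.
votes = []
for _ in range(5):
    idx = rng.integers(0, len(pilots), len(pilots))
    c = centroids(pilots[idx], labels[idx])
    pseudo = detect(data, c)               # pseudo-labels for unlabelled samples
    c = centroids(np.concatenate([pilots[idx], data]),
                  np.concatenate([labels[idx], pseudo]))
    votes.append(detect(data, c))

decision = np.array([np.bincount(v, minlength=4).argmax() for v in np.array(votes).T])
print("symbol error rate:", np.mean(decision != tx[40:]))
```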
FedSR: A Semi-Decentralized Federated Learning Algorithm for Non-IIDness in IoT System
In the Industrial Internet of Things (IoT), a large amount of data is generated every day. Due to privacy and security concerns, it is difficult to collect all these data in one place to train deep learning models, so federated learning, a distributed machine learning paradigm that protects data privacy, has been widely used in IoT. However, in practical federated learning, the data distributions usually differ greatly across devices, and this data heterogeneity deteriorates model performance. Moreover, federated learning in IoT usually involves a large number of devices in training, and the limited communication resources of cloud servers become a bottleneck. To address these issues, in this paper we combine centralized federated learning with decentralized federated learning to design a semi-decentralized cloud-edge-device hierarchical federated learning framework, which can mitigate the impact of data heterogeneity and can be deployed at large scale in IoT. To address the effect of data heterogeneity, we use an incremental subgradient optimization algorithm in each ring cluster to improve the generalization ability of the ring cluster models. Our extensive experiments show that our approach effectively mitigates the impact of data heterogeneity and alleviates the communication bottleneck at cloud servers.
Comment: 11 pages, 10 figures
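A minimal sketch of the semi-decentralized idea, assuming a toy linear-regression task (the task, step size, data shifts, and cluster layout are illustrative, not the FedSR configuration): each ring cluster passes the model device to device and applies one incremental subgradient step per device, edge servers return their ring results, and the cloud averages them.

```python
# Illustrative sketch of a cloud-edge-device hierarchy with incremental
# subgradient updates inside each ring cluster.
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0, 0.5])

def local_data(n, shift):
    """Non-IID device data: feature distribution shifted per device."""
    X = rng.normal(loc=shift, size=(n, 3))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

def ring_pass(w, devices, lr=0.02):
    """One incremental-subgradient cycle around a ring cluster."""
    for X, y in devices:
        grad = 2.0 * X.T @ (X @ w - y) / len(y)   # subgradient of the local MSE
        w = w - lr * grad
    return w

# Two edge servers, each managing a ring of three devices with different shifts.
clusters = [[local_data(50, s) for s in (0.0, 1.0, 2.0)],
            [local_data(50, s) for s in (-1.0, 0.5, 3.0)]]

w_cloud = np.zeros(3)
for _ in range(30):
    edge_models = [ring_pass(w_cloud.copy(), ring) for ring in clusters]
    w_cloud = np.mean(edge_models, axis=0)         # cloud-level aggregation

print("learned:", np.round(w_cloud, 3), "true:", w_true)
```

Only the edge-level results travel to the cloud each round, which is where the communication relief over fully centralized federated learning comes from.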
An Iterative Co-Saliency Framework for RGBD Images
As a newly emerging and significant topic in the computer vision community, co-saliency detection aims at discovering the common salient objects in multiple related images. Existing methods often generate the co-saliency map through a direct forward pipeline based on designed cues or initialization, but lack a refinement-cycle scheme. Moreover, they mainly focus on RGB images and ignore the depth information available in RGBD images. In this paper, we propose an iterative RGBD co-saliency framework, which uses existing single-image saliency maps as the initialization and generates the final RGBD co-saliency map via a refinement-cycle model. Three schemes are employed in the proposed RGBD co-saliency framework: an addition scheme, a deletion scheme, and an iteration scheme. The addition scheme highlights the salient regions based on intra-image depth propagation and saliency propagation, while the deletion scheme filters the saliency regions and removes the non-common salient regions based on an inter-image constraint. The iteration scheme is proposed to obtain a more homogeneous and consistent co-saliency map. Furthermore, a novel descriptor, named depth shape prior, is introduced in the addition scheme to bring in depth information and enhance the identification of co-salient objects. The proposed method can effectively exploit any existing 2D saliency model to work well in RGBD co-saliency scenarios. Experiments on two RGBD co-saliency datasets demonstrate the effectiveness of our proposed framework.
Comment: 13 pages, 13 figures, Accepted by IEEE Transactions on Cybernetics 2017. Project URL: https://rmcong.github.io/proj_RGBD_cosal_tcyb.htm
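The refinement cycle can be pictured with a small sketch (the thresholds, depth-agreement test, and inter-image consensus rule below are assumptions standing in for the paper's depth shape prior and constraints): starting from per-image saliency maps, an addition step boosts pixels whose depth matches already-salient pixels, a deletion step suppresses saliency that is not shared across the image group, and the two alternate for a few iterations.

```python
# Illustrative sketch of an addition/deletion/iteration refinement cycle on
# toy saliency and depth maps.
import numpy as np

def addition_step(sal, depth, tol=0.1):
    """Boost pixels whose depth is close to the mean depth of salient pixels."""
    fg_depth = depth[sal > 0.5].mean() if np.any(sal > 0.5) else depth.mean()
    boost = (np.abs(depth - fg_depth) < tol).astype(float)
    return np.clip(0.7 * sal + 0.3 * boost, 0.0, 1.0)

def deletion_step(sal_maps):
    """Suppress saliency that is weak on average across the image group."""
    consensus = np.mean(sal_maps, axis=0)           # inter-image constraint
    return [s * (consensus > 0.3) for s in sal_maps]

def co_saliency(sal_maps, depth_maps, iters=5):
    for _ in range(iters):                          # iteration scheme
        sal_maps = [addition_step(s, d) for s, d in zip(sal_maps, depth_maps)]
        sal_maps = deletion_step(sal_maps)
    return sal_maps

# Toy 8x8 "images": two initial saliency maps and matching depth maps.
rng = np.random.default_rng(2)
sal = [rng.random((8, 8)) for _ in range(2)]
depth = [rng.random((8, 8)) for _ in range(2)]
refined = co_saliency(sal, depth)
print(np.round(refined[0], 2))
```

Any existing single-image saliency model can supply the initial maps, which is what lets a 2D saliency model be reused in the RGBD co-saliency setting.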