Mixture of Bilateral-Projection Two-dimensional Probabilistic Principal Component Analysis
Probabilistic principal component analysis (PPCA) is built upon a global
linear mapping, which is insufficient to model complex data variation.
This paper proposes a mixture of bilateral-projection two-dimensional
probabilistic principal component analysis (mixB2DPPCA) model for 2D data.
With multiple components in the mixture, the model can be viewed as a soft
clustering algorithm and is capable of modeling data with complex structures.
A Bayesian inference scheme based on the variational EM
(Expectation-Maximization) approach is proposed for learning the model
parameters. Experiments on several publicly available databases show that
mixB2DPPCA achieves lower reconstruction errors and higher recognition rates
than existing PCA-based algorithms.
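The classical single-component PPCA that mixB2DPPCA generalizes can itself be fit with a short EM loop. The sketch below implements the standard closed-form EM updates for ordinary (vector-valued, single-component) PPCA following Tipping and Bishop; it only illustrates the underlying model family, not the paper's bilateral 2D mixture or its variational inference scheme, and the function name `ppca_em` is ours.

```python
import numpy as np

def ppca_em(X, q, n_iter=100, seed=0):
    """EM for probabilistic PCA (Tipping & Bishop closed-form updates).
    X: (n, d) data matrix, q: latent dimension.
    Returns mean mu, loading matrix W (d, q), noise variance sigma2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu
    S = Xc.T @ Xc / n                      # sample covariance (d, d)
    W = rng.standard_normal((d, q))
    sigma2 = 1.0
    for _ in range(n_iter):
        # E-step posterior precision of latents: M = W^T W + sigma2 I
        M_inv = np.linalg.inv(W.T @ W + sigma2 * np.eye(q))
        # M-step: W_new = S W (sigma2 I + M^{-1} W^T S W)^{-1}
        W_new = S @ W @ np.linalg.inv(
            sigma2 * np.eye(q) + M_inv @ W.T @ S @ W)
        # sigma2_new = (1/d) tr(S - S W M^{-1} W_new^T)
        sigma2 = np.trace(S - S @ W @ M_inv @ W_new.T) / d
        W = W_new
    return mu, W, sigma2
```

After convergence, the model covariance W W^T + sigma2 I approximates the sample covariance, which is a quick self-check on the fit.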
CSD: Discriminance with Conic Section for Improving Reverse k Nearest Neighbors Queries
The reverse nearest neighbor (RNN) query finds all points that have
the query point as one of their nearest neighbors (NN), where the NN
query finds the points closest to its query point. Based on the
characteristics of conic sections, we propose a discriminance, named CSD
(Conic Section Discriminance), to determine whether points belong to the
RNN set without issuing any queries of non-constant computational
complexity. Using CSD, we also implement an efficient RNN algorithm,
CSD-RNN. Comparative experiments are conducted between CSD-RNN and two
state-of-the-art RkNN algorithms, SLICE and VR-RNN. The experimental
results indicate that CSD-RNN is significantly more efficient than its
competitors.
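As a reference point for what an RNN query computes, here is a brute-force baseline: a point belongs to the RNN set of a query q exactly when q is closer to it than any other data point. This naive quadratic check is ours for illustration only and is unrelated to the paper's conic-section discriminance.

```python
import numpy as np

def rnn_query(points, q):
    """Brute-force reverse nearest neighbor query.
    Returns indices of points whose nearest neighbor, among the
    query q and all other data points, is the query point q."""
    pts = np.asarray(points, dtype=float)
    q = np.asarray(q, dtype=float)
    result = []
    for i, p in enumerate(pts):
        d_q = np.linalg.norm(p - q)            # distance to the query
        d_others = np.linalg.norm(pts - p, axis=1)
        d_others[i] = np.inf                   # exclude the point itself
        if d_q < d_others.min():               # q is p's nearest neighbor
            result.append(i)
    return result
```

Note the asymmetry this illustrates: a point near the query may still fail the test if another data point is even closer to it.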
A Survey of Coverage Problems in Wireless Sensor Networks
The coverage problem is an important issue in wireless sensor networks and has a great impact on network performance. Given a sensor network, the coverage problem is to determine how well the sensing field is monitored or tracked by the sensors. In this paper, we classify the coverage problem into three categories: area coverage, target coverage, and barrier coverage, and give detailed descriptions of algorithms belonging to each category. Moreover, we discuss the advantages and disadvantages of the existing classic algorithms, which can provide a useful direction for research in this area.
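For the area-coverage category under the common disk sensing model, coverage can be estimated by sampling the field on a grid and testing each sample against every sensor. This is a generic illustrative sketch (the function name and defaults are ours), not any specific algorithm from the survey.

```python
import numpy as np

def coverage_fraction(sensors, radius, field=(1.0, 1.0), grid=200):
    """Estimate the fraction of a rectangular field covered by
    disk-model sensors: each sensor covers all points within `radius`."""
    xs = np.linspace(0.0, field[0], grid)
    ys = np.linspace(0.0, field[1], grid)
    gx, gy = np.meshgrid(xs, ys)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)  # grid sample points
    covered = np.zeros(len(pts), dtype=bool)
    for sx, sy in sensors:
        # mark samples inside this sensor's sensing disk
        covered |= (pts[:, 0] - sx) ** 2 + (pts[:, 1] - sy) ** 2 <= radius ** 2
    return covered.mean()
```

A single sensor of radius 0.5 at the center of a unit square, for instance, covers roughly pi/4 of the field, which the grid estimate reproduces.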
Study on vibration characteristics and tooth profile modification of a plus planetary gear set
The governing vibration differential equation of a plus planetary gear set is derived using the Lagrange method. Three often neglected components are considered: (1) the meshing damping, (2) the elastic bearing support of the sun wheel, and (3) the angles between the movement direction of the planet carrier and the gear meshing line. A simulation model of a plus planetary gear set is built, and the influence of these key components on the vibration characteristics is analyzed. The model is validated by comparing the theoretical, simulated, and measured natural frequencies. To reduce vibration and noise, a comprehensive finite element model of the plus planetary gear set is built, which provides useful information on its dynamic transmission errors. The tooth profile modification is optimized using a genetic algorithm, and the optimal modification is validated experimentally.
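The natural frequencies used in the validation step come, in a lumped-parameter setting, from the generalized eigenproblem K v = omega^2 M v implied by the Lagrange equations for undamped free vibration. A minimal sketch with toy mass and stiffness matrices (not the paper's gear-set matrices):

```python
import numpy as np

def natural_frequencies(M, K):
    """Undamped natural frequencies (rad/s) of M x'' + K x = 0,
    i.e. the square roots of the eigenvalues of M^{-1} K."""
    lam = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(np.real(lam)))
```

For a single mass-spring pair the result reduces to the textbook sqrt(k/m); for chains of masses, the same call yields the full spectrum that a measured-frequency comparison would be checked against.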
Accelerated and Deep Expectation Maximization for One-Bit MIMO-OFDM Detection
In this paper we study the expectation maximization (EM) technique for
one-bit MIMO-OFDM detection (OMOD). Arising from the recent interest in massive
MIMO with one-bit analog-to-digital converters, OMOD is a massive-scale
problem. EM is an iterative method that can exploit the OFDM structure to
process the problem in a per-iteration efficient fashion. In this study we
analyze the convergence rate of EM for a class of approximate
maximum-likelihood OMOD formulations, or, in a broader sense, a class of
problems involving regression from quantized data. We show how the SNR and
channel conditions can have an impact on the convergence rate. We do so by
making a connection between the EM and the proximal gradient methods in the
context of OMOD. This connection also gives us insight into building new
accelerated and/or inexact EM schemes. The accelerated scheme has faster
convergence in theory, and the inexact scheme provides the flexibility to
implement EM more efficiently, with a convergence guarantee. Furthermore, we
develop a deep EM algorithm, wherein we take the structure of our inexact EM
algorithm and apply deep unfolding to train an efficient structured deep net.
Simulation results show that our accelerated exact/inexact EM algorithms run
much faster than their standard EM counterparts, and that the deep EM
algorithm gives promising detection and runtime performance.
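The proximal gradient connection mentioned above underlies accelerated schemes of the FISTA type: a gradient step on the smooth part, a proximal step on the nonsmooth part, and a Nesterov momentum extrapolation. A generic sketch, assuming user-supplied `grad_f` and `prox_g` callables; this is the standard FISTA template, not the paper's specific accelerated EM scheme:

```python
import numpy as np

def fista(grad_f, prox_g, x0, step, n_iter=200):
    """Accelerated proximal gradient (FISTA) for min_x f(x) + g(x).
    grad_f: gradient of the smooth term f
    prox_g: prox_g(v, s) = argmin_u g(u) + ||u - v||^2 / (2 s)
    step:   step size (e.g. 1 / Lipschitz constant of grad_f)."""
    x = x0.copy()
    y = x0.copy()   # extrapolated point
    t = 1.0         # momentum parameter
    for _ in range(n_iter):
        x_new = prox_g(y - step * grad_f(y), step)   # forward-backward step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + (t - 1.0) / t_new * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```

With g = 0 (identity prox) the scheme reduces to accelerated gradient descent, which is a convenient sanity check.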
FedGST: Federated Graph Spatio-Temporal Framework for Brain Functional Disease Prediction
Currently, most medical institutions face the challenge of training a unified model from fragmented and isolated data to address disease prediction problems. Although federated learning has become the recognized paradigm for privacy-preserving model training, how to integrate federated learning with fMRI temporal characteristics to enhance predictive performance remains an open question for functional disease prediction. To address this challenge, we propose a novel Federated Graph Spatio-Temporal (FedGST) framework for brain functional disease prediction. Specifically, anchor sampling is used to process variable-length time series on local clients, and dynamic functional connectivity graphs are then generated via sliding windows and Pearson correlation coefficients. Next, we propose an InceptionTime model to extract temporal information from the dynamic functional connectivity graphs on the local clients. Finally, the hidden activation variables are sent to a global server, where a proposed UniteGCN model receives and processes them. The global server then returns gradient information to the clients for backpropagation and model parameter updating. Client model parameters are aggregated locally and distributed to the clients for the next round of training. We demonstrate that FedGST outperforms other federated learning methods and baselines on the ABIDE-1 and ADHD200 datasets.
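The sliding-window Pearson-correlation step for building dynamic functional connectivity graphs can be sketched as follows, assuming a (time x regions) array per subject; the window length and stride below are illustrative choices, not values from the paper:

```python
import numpy as np

def dynamic_connectivity(ts, win, stride):
    """Dynamic functional connectivity from an fMRI time series.
    ts: (T, R) array of T time points over R brain regions.
    Returns one (R, R) Pearson correlation matrix per sliding window."""
    T, R = ts.shape
    graphs = []
    for start in range(0, T - win + 1, stride):
        window = ts[start:start + win]                 # (win, R) slice
        graphs.append(np.corrcoef(window, rowvar=False))
    return graphs
```

Each correlation matrix can then serve as the weighted adjacency of one graph snapshot, which is the form a downstream graph model would consume.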