23 research outputs found
Bayesian Neural Networks: A Min-Max Game Framework
Bayesian neural networks describe the network with random variables rather
than deterministic weights, and they are mostly trained by variational
inference, which updates the mean and variance parameters simultaneously.
Here, we formulate Bayesian neural networks as a min-max game problem. In
experiments on the MNIST data set, our preliminary results are comparable to
those of the existing closed-loop transcription neural network. Finally, we
reveal the connections between Bayesian neural networks and closed-loop
transcription neural networks, show that our framework is practical, and
provide another view of Bayesian neural networks.
Comment: 3 pages, 2 figures
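The abstract does not spell out the paper's min-max objective, but the general flavor of such a formulation, one player minimizing while the other maximizes a shared objective, can be sketched with plain gradient descent-ascent on a toy saddle function. Everything below is illustrative, not the paper's method:

```python
# Gradient descent-ascent on the toy saddle function f(x, y) = x**2 - y**2:
# x plays the minimizing role, y the maximizing role; (0, 0) is the saddle point.
x, y, lr = 1.0, -1.0, 0.1
for _ in range(200):
    gx, gy = 2 * x, -2 * y      # partial derivatives of f
    x -= lr * gx                # descent step for the min player
    y += lr * gy                # ascent step for the max player
# both players contract toward the saddle point at the origin
```

For this quadratic saddle, each step multiplies both coordinates by 0.8, so the iterates converge geometrically to (0, 0).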
Online Signal Estimation on the Graph Edges via Line Graph Transformation
Processing signals on graph edges is challenging because Graph Signal
Processing (GSP) techniques are defined only on graph nodes. Leveraging the
line graph to transform a graph edge signal onto the nodes of its
edge-to-vertex dual, we propose the Line Graph Least Mean Square (LGLMS)
algorithm for online time-varying graph edge signal prediction. By setting up
an ℓ2-norm optimization problem, LGLMS forms an adaptive algorithm that is the
graph-edge analogue of the classical adaptive LMS algorithm. Additionally,
LGLMS inherits all the GSP concepts and techniques previously deployed on
graph nodes, without the need to redefine them on the graph edges. In
experiments on transportation and meteorological graphs, with noisy and
missing signal observations, we confirmed that LGLMS is suitable for the
online prediction of time-varying edge signals.
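The edge-to-vertex dual and the LMS-style update can be illustrated with a minimal sketch. The graph, step size, noise level, and update form below are our assumptions for illustration; the paper's LGLMS update may differ:

```python
import numpy as np

# Edges of a 5-cycle; in the line graph, each edge becomes a node, and two
# nodes are adjacent when the corresponding edges share a vertex.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
line_adj = {i: [j for j, f in enumerate(edges) if j != i and set(e) & set(f)]
            for i, e in enumerate(edges)}

# LMS-style adaptive estimate of the edge signal, now living on the dual's
# nodes: x_hat <- x_hat + mu * mask * (y - x_hat), where the mask zeroes
# missing samples at each time step.
rng = np.random.default_rng(0)
x_true = np.sin(np.arange(len(edges)))                  # edge signal to track
x_hat, mu = np.zeros(len(edges)), 0.5
for _ in range(300):
    mask = (rng.random(len(edges)) < 0.7).astype(float)  # partial observations
    y = x_true + 0.05 * rng.standard_normal(len(edges))  # noisy observations
    x_hat += mu * mask * (y - x_hat)
```

Despite roughly 30% of the samples being missing at every step, the per-edge estimates settle close to the underlying signal.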
Graph Signal Processing For Cancer Gene Co-Expression Network Analysis
Cancer heterogeneity arises from complex molecular interactions. Elucidating
systems-level properties of gene interaction networks distinguishing cancer
from normal cells is critical for understanding disease mechanisms and
developing targeted therapies. Previous works focused only on identifying
differences in network structures. In this study, we used graph frequency
analysis of cancer genetic signals defined on a co-expression network to
describe the spectral properties of underlying cancer systems. We demonstrated
that cancer cells exhibit distinctive signatures in the graph frequency content
of their gene expression signals. Applying graph frequency filtering, graph
Fourier transform, and its inverse to gene expression from different cancer
stages resulted in significant improvements in the average F-statistics of the
genes compared to using their unfiltered expression levels. We propose the
graph spectral properties of cancer genetic signals defined on gene
co-expression networks as cancer hallmarks, with potential application to
differential co-expression analysis.
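The pipeline the abstract describes, graph Fourier transform, frequency filtering, and inverse transform, follows the standard GSP recipe, sketched below on a toy graph. The graph, signal, and cutoff are our illustrative choices, not the paper's data:

```python
import numpy as np

# Combinatorial Laplacian of a small 4-node graph; its eigenvalues act as
# graph frequencies and its eigenvectors as the graph Fourier basis.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)               # ascending graph frequencies

x = np.array([1.0, 1.2, 0.9, -2.0])      # toy "expression" signal on the nodes
x_hat = U.T @ x                          # graph Fourier transform (GFT)
h = (lam < lam[2]).astype(float)         # ideal low-pass: keep low frequencies
x_filt = U @ (h * x_hat)                 # filter, then inverse GFT
```

Because U is orthonormal, the unfiltered inverse transform reconstructs x exactly, and low-pass filtering can only remove signal energy, never add it.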
Sequential Monte Carlo Graph Convolutional Network for Dynamic Brain Connectivity
Functional connectivity analysis, which regards connections as statistical
codependencies between the signals of different brain regions, is an
increasingly important modality for analyzing brain function. Graph-based
analysis of brain connectivity provides a new way of exploring the association
between brain functional deficits and the structural disruption related to
brain disorders, but current implementations have limited capability because
they assume noise-free data and a stationary graph topology. We propose a new
methodology based on the particle filtering algorithm, which has proven
success in tracking problems: it estimates the hidden states of a dynamic
graph from only partial and noisy observations, without assuming stationary
connectivity. We enrich the particle filtering state equation with a graph
neural network, yielding the Sequential Monte Carlo Graph Convolutional
Network (SMC-GCN), whose nonlinear regression capability limits spurious
connections in the graph. Experimental studies demonstrate that SMC-GCN
achieves superior performance over several methods in brain disorder
classification.
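The particle-filter backbone the method builds on can be sketched in its simplest bootstrap form for a scalar hidden state. The state-space model, noise levels, and particle count below are our illustrative assumptions, a stand-in for the dynamic-graph state in SMC-GCN:

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, T = 1000, 60
a, q, r = 0.9, 0.3, 0.2                  # transition, process noise, obs noise

x_true = 0.0
particles = rng.standard_normal(n_particles)
errors = []
for t in range(T):
    x_true = a * x_true + q * rng.standard_normal()           # hidden state moves
    y = x_true + r * rng.standard_normal()                    # noisy observation
    particles = a * particles + q * rng.standard_normal(n_particles)  # propagate
    w = np.exp(-0.5 * ((y - particles) / r) ** 2)             # likelihood weights
    w /= w.sum()
    errors.append(abs(np.dot(w, particles) - x_true))         # posterior-mean error
    particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
```

The weighted posterior mean tracks the hidden state closely even though each observation is noisy; SMC-GCN replaces this linear state equation with a graph-neural-network transition.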
Trustworthy Personalized Bayesian Federated Learning via Posterior Fine-Tune
Performance degradation owing to data heterogeneity and low output
interpretability are the most significant challenges faced by federated
learning in practical applications. Personalized federated learning diverges
from traditional approaches, as it no longer seeks to train a single model, but
instead tailors a unique personalized model for each client. However, previous
work has focused only on personalization from the perspective of neural network
parameters and has lacked robustness and interpretability. In this work, we
establish a novel framework for personalized federated learning, incorporating
Bayesian methodology which enhances the algorithm's ability to quantify
uncertainty. Furthermore, we introduce normalizing flow to achieve
personalization from the parameter posterior perspective and theoretically
analyze the impact of normalizing flow on out-of-distribution (OOD) detection
for Bayesian neural networks. Finally, we evaluated our approach on
heterogeneous datasets, and the experimental results indicate that the new
algorithm not only improves accuracy but also significantly outperforms the
baseline in OOD detection, owing to the reliable outputs of the Bayesian
approach.
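A normalizing flow reshapes a simple base posterior through an invertible map while tracking densities via the change-of-variables formula. A one-dimensional affine flow, our toy stand-in and not the paper's flow architecture, shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(50_000)          # samples from the base posterior N(0, 1)

a, b = 2.0, 1.0                          # flow parameters (illustrative)
x = a * z + b                            # invertible map x = f(z)

# Change of variables: log p_x(x) = log p_z(f^{-1}(x)) - log|df/dz|
log_pz = -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)
log_px = log_pz - np.log(abs(a))         # log-density of the transformed posterior
```

Here the pushed-forward samples follow N(1, 4), and the tracked log-density matches that Gaussian exactly; richer flows stack many such invertible layers, summing the log-determinant terms.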
Design of finite-state machines for quantization using simulated annealing
In this paper, the combinatorial optimization algorithm known as simulated annealing is used to optimize the trellis structure, or next-state map, of the decoder finite-state machine in trellis waveform coding. The generalized Lloyd algorithm, which finds the optimum codebook, is incorporated into simulated annealing. Comparison of simulation results with previous work in the literature shows that this combined method yields coding systems with good performance.
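The generalized Lloyd step that the annealing wraps around alternates nearest-codeword assignment with centroid updates. A scalar sketch on assumed Gaussian training data (our toy setup, not the paper's trellis decoder) looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)            # assumed Gaussian training samples

codebook = np.array([-2.0, -0.5, 0.5, 2.0])   # initial 4-level quantizer
for _ in range(50):
    idx = np.abs(x[:, None] - codebook).argmin(axis=1)   # nearest-codeword rule
    for k in range(len(codebook)):
        if np.any(idx == k):
            codebook[k] = x[idx == k].mean()             # centroid update

idx = np.abs(x[:, None] - codebook).argmin(axis=1)
mse = np.mean((x - codebook[idx]) ** 2)                  # quantization distortion
```

In the paper's scheme, simulated annealing perturbs the decoder's next-state map around an inner codebook optimization like this one; the sketch covers only the codebook step.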
HGR Correlation Pooling Fusion Framework for Recognition and Classification in Multimodal Remote Sensing Data
This paper investigates remote sensing data recognition and classification with multimodal data fusion. To address the low recognition and classification accuracy and the difficulty of integrating multimodal features in existing methods, a multimodal remote sensing data recognition and classification model based on a heatmap and Hirschfeld–Gebelein–Rényi (HGR) correlation pooling fusion operation is proposed. A novel HGR correlation pooling fusion algorithm is developed by combining a feature fusion method with an HGR maximum correlation algorithm. This method enables restoration of the original signal, without changing the value of the transmitted information, by performing reverse operations on the sample data. It enhances feature learning for images and improves performance on specific interpretation tasks by efficiently using multimodal information with varying degrees of relevance. Ship recognition experiments conducted on the QXS-SROPT dataset demonstrate that the proposed method surpasses existing remote sensing data recognition methods. Furthermore, land cover classification experiments conducted on the Houston 2013 and MUUFL datasets confirm the generalizability of the proposed method. The experimental results validate the effectiveness and superiority of the proposed method in the recognition and classification of multimodal remote sensing data.
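For finite alphabets, the HGR maximal correlation underlying the fusion operation has a classical closed form (due to Witsenhausen): it equals the second-largest singular value of the matrix Q[i, j] = P(x=i, y=j) / sqrt(P(x=i) P(y=j)). The joint distribution below is our toy example, not data from the paper:

```python
import numpy as np

P = np.array([[0.3, 0.1],
              [0.1, 0.5]])               # toy joint pmf of a binary pair (x, y)
px, py = P.sum(axis=1), P.sum(axis=0)    # marginals
Q = P / np.sqrt(np.outer(px, py))        # normalized joint-distribution matrix
s = np.linalg.svd(Q, compute_uv=False)   # top singular value is always 1
hgr = s[1]                               # HGR maximal correlation
```

For binary variables this reduces to the absolute Pearson correlation, here 7/12; for larger alphabets it captures the strongest nonlinear dependence between the two modalities.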
Neural Network Structure Optimization by Simulated Annealing
A critical problem in large neural networks is over-parameterization: a large number of weight parameters limits their use on edge devices due to prohibitive computational power and memory/storage requirements. To make neural networks more practical on edge devices and in real-time industrial applications, they need to be compressed in advance. Since edge devices cannot train or access trained networks when internet resources are scarce, preloading smaller networks is essential. Various works in the literature have shown that redundant branches can be pruned strategically in a fully connected network without significantly sacrificing performance. However, the majority of these methodologies need high computational resources because they integrate weight training via the back-propagation algorithm during network compression. In this work, we draw attention to optimizing the network structure to preserve performance despite aggressive pruning. The structure optimization is performed using the simulated annealing algorithm alone, without back-propagation for branch weight training. As a heuristic, non-convex optimization method, simulated annealing provides a near-globally-optimal solution to this NP-hard problem for a given percentage of branch pruning. Our simulation results show that simulated annealing can significantly reduce the complexity of a fully connected network while maintaining performance, without the help of back-propagation.
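The back-propagation-free structure search can be illustrated with simulated annealing over a binary keep/prune mask. The scoring function and move set below are our illustrative assumptions, not the paper's network objective:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 12, 4                             # 12 candidate branches, keep 4
score = np.abs(rng.standard_normal(d))   # stand-in for per-branch usefulness
loss = lambda m: -np.dot(score, m)       # minimized by keeping the top-k scores

mask = np.zeros(d)
mask[rng.choice(d, k, replace=False)] = 1
best_mask, best_loss, T = mask.copy(), loss(mask), 1.0
for step in range(3000):
    cand = mask.copy()
    i = rng.choice(np.flatnonzero(cand == 1))
    j = rng.choice(np.flatnonzero(cand == 0))
    cand[i], cand[j] = 0, 1              # swap one kept and one pruned branch
    delta = loss(cand) - loss(mask)
    if delta < 0 or rng.random() < np.exp(-delta / T):   # Metropolis acceptance
        mask = cand
    if loss(mask) < best_loss:
        best_mask, best_loss = mask.copy(), loss(mask)
    T *= 0.999                           # geometric cooling schedule
```

Early on, the high temperature lets the search accept worsening swaps and escape local minima; as T cools, it settles into a mask close to the best achievable under the pruning budget.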
Sparse neural network optimization by Simulated Annealing
The over-parameterization of neural networks and the local optimality of the backpropagation algorithm have been two major problems associated with deep learning. To reduce the redundancy of neural network parameters, the conventional approach has been to prune branches with small weights. However, this only addresses parameter redundancy and provides no global optimality guarantees. In this paper, we set aside back-propagation and combine the sparse-network topology optimization problem and the network weight optimization problem using a non-convex optimization method, namely Simulated Annealing. This method can complete network training while controlling the number of parameters. Unlike simply updating network parameters with gradient descent, our method simultaneously optimizes the topology of the sparse network. Owing to Simulated Annealing's ability to approach globally optimal solutions, the sparse network optimized by our method outperforms the one trained by backpropagation alone.
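The distinguishing idea here, annealing over weights and topology jointly, can be sketched on a tiny sparse linear model. The data, move probabilities, and sparsity budget are our toy assumptions, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
y = X[:, 0] * 2.0 - X[:, 3] * 1.0        # only features 0 and 3 matter

def loss(w, m):
    # squared error of the masked (sparse) linear model
    return np.mean((X @ (w * m) - y) ** 2)

w = np.zeros(6)
m = np.zeros(6); m[:2] = 1               # sparsity budget: keep 2 connections
best_w, best_m, best_l, T = w.copy(), m.copy(), loss(w, m), 1.0
for step in range(4000):
    w2, m2 = w.copy(), m.copy()
    if rng.random() < 0.5:
        w2[rng.integers(6)] += 0.2 * rng.standard_normal()     # weight move
    else:
        i = rng.choice(np.flatnonzero(m2 == 1))
        j = rng.choice(np.flatnonzero(m2 == 0))
        m2[i], m2[j] = 0, 1                                    # topology move
    delta = loss(w2, m2) - loss(w, m)
    if delta < 0 or rng.random() < np.exp(-delta / T):         # Metropolis rule
        w, m = w2, m2
    if loss(w, m) < best_l:
        best_w, best_m, best_l = w.copy(), m.copy(), loss(w, m)
    T *= 0.999
```

A single move set thus trains the weights and searches the sparse topology at the same time, with no gradient computation anywhere in the loop.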