Neural Network Gradient Hamiltonian Monte Carlo
Hamiltonian Monte Carlo is a widely used algorithm for sampling from
posterior distributions of complex Bayesian models. It can efficiently explore
high-dimensional parameter spaces guided by simulated Hamiltonian flows.
However, the algorithm requires repeated gradient calculations, and these
computations become increasingly burdensome as data sets scale. We present a
method that substantially reduces the computational burden by using a neural
network to approximate the gradient. First, we prove that the proposed method
still converges to the true distribution even though the approximated gradient
no longer comes from a Hamiltonian system. Second, we validate the proposed
method with experiments on synthetic examples and real data sets.
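To make the idea concrete, here is a minimal sketch, not the paper's algorithm, of an HMC transition in which the leapfrog integrator is driven by a cheap surrogate gradient, such as a trained neural network's prediction, while the accept/reject step still evaluates the exact log posterior (function names and step-size settings are hypothetical):

```python
# Minimal sketch: HMC with a surrogate gradient inside the leapfrog integrator.
# surrogate_grad approximates the gradient of log_post; the Metropolis step
# below still uses the exact log posterior, so the intended target is preserved.
import numpy as np

def hmc_step(theta, log_post, surrogate_grad, step_size=0.05, n_leapfrog=20,
             rng=np.random.default_rng()):
    """One HMC transition for a 1-D parameter vector theta."""
    p = rng.standard_normal(theta.shape)              # resample momentum
    theta_new, p_new = theta.copy(), p.copy()

    # Leapfrog integration driven by the (approximate) gradient.
    p_new += 0.5 * step_size * surrogate_grad(theta_new)
    for _ in range(n_leapfrog - 1):
        theta_new += step_size * p_new
        p_new += step_size * surrogate_grad(theta_new)
    theta_new += step_size * p_new
    p_new += 0.5 * step_size * surrogate_grad(theta_new)

    # Metropolis correction with the exact log posterior.
    current_H = -log_post(theta) + 0.5 * p @ p
    proposed_H = -log_post(theta_new) + 0.5 * p_new @ p_new
    if np.log(rng.uniform()) < current_H - proposed_H:
        return theta_new, True
    return theta, False
```

In this sketch the Metropolis correction evaluates the exact log posterior, so an imperfect surrogate gradient affects only the acceptance rate, not the stationary distribution, which is consistent with the convergence result stated above.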
Modeling Dynamic Functional Connectivity with Latent Factor Gaussian Processes
Dynamic functional connectivity, as measured by the time-varying covariance
of neurological signals, is believed to play an important role in many aspects
of cognition. While many methods have been proposed, reliably establishing the
presence and characteristics of brain connectivity is challenging due to the
high dimensionality and noisiness of neuroimaging data. We present a latent
factor Gaussian process model which addresses these challenges by learning a
parsimonious representation of connectivity dynamics. The proposed model
naturally allows for inference and visualization of time-varying connectivity.
As an illustration of the scientific utility of the model, application to a
data set of rat local field potential activity recorded during a complex
non-spatial memory task provides evidence of stimulus differentiation.
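For intuition, the following sketch (an illustrative construction with a squared-exponential kernel and arbitrary simulation settings, not the paper's implementation) shows how factor loadings that evolve as Gaussian processes over time induce a smoothly time-varying covariance matrix across the observed channels:

```python
# Illustrative sketch: latent factor loadings drawn from a Gaussian process
# over time induce a time-varying covariance Sigma_t = L_t @ L_t.T + noise.
import numpy as np

def se_kernel(t, length_scale=1.0, var=1.0):
    """Squared-exponential covariance over the time grid t."""
    d = t[:, None] - t[None, :]
    return var * np.exp(-0.5 * (d / length_scale) ** 2)

def simulate_dynamic_cov(T=100, n_channels=8, n_factors=2, noise_sd=0.1, seed=0):
    rng = np.random.default_rng(seed)
    times = np.linspace(0.0, 10.0, T)
    K = se_kernel(times) + 1e-6 * np.eye(T)           # GP prior over time
    chol = np.linalg.cholesky(K)

    # Each loading entry L[t, i, j] is an independent GP draw across time.
    z = rng.standard_normal((T, n_channels, n_factors))
    loadings = np.einsum("ts,sij->tij", chol, z)

    # Low-rank, smoothly varying covariance plus observation noise.
    sigmas = np.einsum("tij,tkj->tik", loadings, loadings)
    sigmas += (noise_sd ** 2) * np.eye(n_channels)
    return times, sigmas

times, sigmas = simulate_dynamic_cov()
print(sigmas.shape)   # (100, 8, 8): one channel-by-channel covariance per time point
```

The parsimony comes from the factor structure: with a handful of latent factors, the model tracks a full channel-by-channel covariance through time using only a few smooth loading trajectories.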
Bayesian Neural Decoding Using A Diversity-Encouraging Latent Representation Learning Method
It is well established that temporal organization is critical to memory, and
that the ability to temporally organize information is fundamental to many
perceptual, cognitive, and motor processes. While our understanding of how the
brain processes the spatial context of memories has advanced considerably, our
understanding of their temporal organization lags far behind. In this paper, we
propose a new approach for elucidating the neural basis of complex behaviors
and temporal organization of memories. More specifically, we focus on neural
decoding - the prediction of behavioral or experimental conditions based on
observed neural data. In general, this is a challenging classification problem,
which is of immense interest in neuroscience. Our goal is to develop a new
framework that not only improves the overall accuracy of decoding, but also
provides a clear latent representation of the decoding process. To accomplish
this, our approach uses a Variational Auto-encoder (VAE) model with a
diversity-encouraging prior based on determinantal point processes (DPP) to
improve latent representation learning by avoiding redundancy in the latent
space. We apply our method to data collected from a novel rat experiment that
involves presenting repeated sequences of odors at a single port and testing
the rats' ability to identify each odor. We show that our method leads to
substantially higher accuracy in neural decoding and, by providing a clear
latent representation of the decoding process, makes it possible to discover
novel biological phenomena.
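As a rough illustration of the diversity-encouraging idea (a PyTorch sketch with an RBF similarity kernel and hypothetical names, not the paper's exact objective), a DPP-style log-determinant term can be added to the VAE loss so that latent prototypes for different experimental conditions are pushed apart rather than collapsing onto one another:

```python
# Sketch: a determinantal-point-process-style diversity bonus for a VAE.
# The log-determinant of a similarity matrix is large when latent prototypes
# are spread out and small when they are redundant.
import torch

def dpp_diversity(prototypes, length_scale=1.0, jitter=1e-4):
    """log-determinant of an RBF similarity matrix over latent prototypes."""
    d2 = torch.cdist(prototypes, prototypes) ** 2
    K = torch.exp(-0.5 * d2 / length_scale ** 2)
    K = K + jitter * torch.eye(prototypes.shape[0])   # numerical stability
    return torch.logdet(K)

def vae_loss(recon_loss, kl_div, prototypes, diversity_weight=0.1):
    # Standard ELBO terms plus a penalty that rewards diverse prototypes.
    return recon_loss + kl_div - diversity_weight * dpp_diversity(prototypes)

# Example: 5 odor-class prototypes in a 16-dimensional latent space.
protos = torch.randn(5, 16, requires_grad=True)
loss = vae_loss(torch.tensor(1.0), torch.tensor(0.2), protos)
loss.backward()
```

Penalizing redundancy this way keeps the latent codes for different odors well separated, which is what makes the learned representation easy to read off for decoding.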
Pulmonary function test-related prognostic models in non-small cell lung cancer patients receiving neoadjuvant chemoimmunotherapy
Background: This study aimed to establish a comprehensive clinical prognostic risk model based on pulmonary function tests. This model was intended to guide the evaluation and predictive management of patients with resectable stage I-III non-small cell lung cancer (NSCLC) receiving neoadjuvant chemoimmunotherapy.
Methods: Clinical pathological characteristics and prognostic survival data for 175 patients were collected. Univariate and multivariate Cox regression analyses, and least absolute shrinkage and selection operator (LASSO) regression analysis were employed to identify variables and construct corresponding models. These variables were integrated to develop a ridge regression model. The models’ discrimination and calibration were evaluated, and the optimal model was chosen following internal validation. Comparative analyses between the risk scores or groups of the optimal model and clinical factors were conducted to explore the potential clinical application value.
Results: Univariate regression analysis identified smoking, complete pathologic response (CPR), and major pathologic response (MPR) as protective factors. Conversely, T staging, D-dimer/white blood cell ratio (DWBCR), D-dimer/fibrinogen ratio (DFR), and D-dimer/minute ventilation volume actual ratio (DMVAR) emerged as risk factors. Evaluation of the models confirmed their capability to accurately predict patient prognosis, exhibiting ideal discrimination and calibration, with the ridge regression model being optimal. Survival analysis demonstrated that disease-free survival (DFS) in the high-risk group (HRG) was significantly shorter than in the low-risk group (LRG) (P = 2.57×10⁻¹³). The time-dependent receiver operating characteristic (ROC) curve indicated that the area under the curve (AUC) values at 1 year, 2 years, and 3 years were 0.74, 0.81, and 0.79, respectively. Clinical correlation analysis revealed that men with lung squamous cell carcinoma or comorbid chronic obstructive pulmonary disease (COPD) were predominantly in the LRG, suggesting a better prognosis and potentially identifying a beneficiary population for this treatment combination.
Conclusion: The prognostic model developed in this study effectively predicts the prognosis of patients with NSCLC receiving neoadjuvant chemoimmunotherapy. It offers valuable predictive insights for clinicians, aiding in developing treatment plans and monitoring disease progression.
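As an illustrative outline only (using the lifelines library, with hypothetical column names and penalty settings rather than the study's actual data or code), the kind of analysis described above can be sketched as a penalized Cox fit, a derived risk score, and a log-rank comparison of the resulting risk groups:

```python
# Workflow sketch: LASSO-penalized Cox regression, a prognostic risk score,
# and a high- vs. low-risk survival comparison. Column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("nsclc_cohort.csv")                  # hypothetical cohort table
covariates = ["smoking", "CPR", "MPR", "T_stage", "DWBCR", "DFR", "DMVAR"]

# L1-penalized Cox regression (penalizer and l1_ratio control the LASSO strength).
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df[covariates + ["dfs_months", "event"]],
        duration_col="dfs_months", event_col="event")

# Use the linear predictor as a risk score and split at the median.
df["risk_score"] = cph.predict_log_partial_hazard(df[covariates])
high = df["risk_score"] >= df["risk_score"].median()

# Compare disease-free survival between the high- and low-risk groups.
result = logrank_test(df.loc[high, "dfs_months"], df.loc[~high, "dfs_months"],
                      df.loc[high, "event"], df.loc[~high, "event"])
print(result.p_value)
```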
Improving Statistical Inference through Flexible Approximations
In the statistics and machine learning communities, there exists a perceived dichotomy between statistical inference and out-of-sample prediction. Statistical inference is often done with models that are carefully specified a priori, while out-of-sample prediction is often done with “black-box” models that have greater flexibility. The former is more concerned with a model's theoretical properties as data become infinite; the latter focuses more on algorithms that scale to larger data sets. To a scientist outside of these communities, the distinction between inference and prediction might not seem so clear. With technological advancements, scientists can now collect overwhelming amounts of data in various formats, and their objective is to make sense of those data. To this end, we propose a synergy of statistical inference with the prediction workhorses of machine learning, namely neural networks and Gaussian processes. Despite hardware improvements under Moore’s law, ever bigger data and more complex models pose computational challenges for statistical inference. To address these computational challenges, we approximate functional forms of the data to effectively reduce the burden of model evaluation. In addition, we present a case study in which we use flexible models to learn scientifically interesting representations of rat memories from experimental data for a better understanding of the brain.