Exposing meaningful and interpretable neural interactions is critical to
understanding neural circuits. Neural interactions inferred from neural
signals primarily reflect functional interactions. In a long experiment,
subject
animals may experience different stages defined by the experiment, stimuli, or
behavioral states, and hence functional interactions can change over time. To
model dynamically changing functional interactions, prior work employs
state-switching generalized linear models with hidden Markov models (i.e.,
HMM-GLMs). However, we argue they lack biological plausibility, as functional
interactions are shaped and confined by the underlying anatomical connectome.
Here, we propose a novel prior-informed state-switching GLM. We introduce both
a Gaussian prior and a one-hot prior over the GLM in each state. The priors are
learnable. We show that the learned prior captures the state-constant
interaction, shedding light on the underlying anatomical connectome and
revealing more likely physical neuron interactions. The
state-dependent interactions modeled by each GLM offer traceability,
capturing functional variations across multiple brain states. Our methods
effectively recover true interaction structures in simulated data, achieve
the highest predictive likelihood on real neural datasets, and render
interaction structures and hidden states more interpretable when applied to
real neural data.
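
As a concrete illustration of the modeling idea, the following is a minimal,
hypothetical NumPy sketch, not the authors' implementation: each state's GLM
coupling weights W[k] are penalized toward a shared, learnable Gaussian prior
mean M, which plays the role of the state-constant interaction. The one-hot
prior variant and the HMM inference over hidden states are omitted, so the
state sequence is taken as given here.

    # Minimal sketch (assumed setup, not the paper's code): per-state Poisson
    # GLMs with a shared learnable Gaussian prior mean over coupling weights.
    import numpy as np

    rng = np.random.default_rng(0)
    N, T, K = 5, 400, 2          # neurons, time bins, hidden states

    # Toy binned spike counts and a fixed state sequence (the full model
    # infers the latent states with an HMM; here they are given for brevity).
    spikes = rng.poisson(0.3, size=(T, N)).astype(float)
    states = (np.arange(T) // (T // K)).clip(max=K - 1)

    def nll_poisson(W, b, X, Y):
        """Negative log-likelihood of a Poisson GLM: rate = exp(X @ W + b)."""
        eta = X @ W + b
        return np.sum(np.exp(eta) - Y * eta)

    # Parameters: per-state couplings W, biases b, shared Gaussian prior mean M.
    W = rng.normal(0, 0.01, size=(K, N, N))
    b = np.zeros((K, N))
    M = np.zeros((N, N))         # learnable prior mean (state-constant part)
    sigma2 = 0.1                 # prior variance (hyperparameter)
    lr = 1e-3

    X, Y = spikes[:-1], spikes[1:]   # one-step-ahead coupling
    z = states[1:]

    for step in range(200):
        grad_M = np.zeros_like(M)
        for k in range(K):
            Xk, Yk = X[z == k], Y[z == k]
            eta = Xk @ W[k] + b[k]
            resid = np.exp(eta) - Yk                     # d(nll)/d(eta)
            grad_W = Xk.T @ resid + (W[k] - M) / sigma2  # likelihood + prior
            grad_b = resid.sum(axis=0)
            W[k] -= lr * grad_W
            b[k] -= lr * grad_b
            grad_M += (M - W[k]) / sigma2    # prior pulls M toward the W[k]'s
        M -= lr * grad_M
        if step % 50 == 0:
            loss = sum(nll_poisson(W[k], b[k], X[z == k], Y[z == k])
                       for k in range(K))
            loss += np.sum((W - M) ** 2) / (2 * sigma2)
            print(f"step {step}: penalized NLL = {loss:.1f}")

    # M estimates the state-constant interaction; W[k] - M carries the
    # state-specific functional deviations.
    print("shared prior mean M:\n", np.round(M, 3))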