Keynes and the logic of econometric method
This paper analyzes the controversy between Keynes and Tinbergen on econometric testing of business cycle theories. In his writings, Keynes repeatedly emphasizes a logical objection to Tinbergen's work. It is clarified what exactly this logical objection is, and why it matters for a statistical analysis of investment. Keynes's arguments can be traced back to his Treatise on Probability, where the principle of limited independent variety is introduced as the basic requirement for probabilistic inference. This requirement is not satisfied in the case of investment, where expectations are complex determinants. Multiple correlation, sometimes thought to take care of required ceteris paribus clauses, does not help to counter Keynes's critique.
Keywords: Business Cycles; Econometrics; Testing; business cycle
The Search for Invariance: Repeated Positive Testing Serves the Goals of Causal Learning
Positive testing is characteristic of exploratory behavior, yet it seems to be at odds with the aim of information seeking. After all, repeated demonstrations of one’s current hypothesis often produce the same evidence and fail to distinguish it from potential alternatives. Research on the development of scientific reasoning and adult rule learning has both documented and attempted to explain this behavior. The current chapter reviews this prior work and introduces a novel theoretical account—the Search for Invariance (SI) hypothesis—which suggests that producing multiple positive examples serves the goals of causal learning. This hypothesis draws on the interventionist framework of causal reasoning, which suggests that causal learners are concerned with the invariance of candidate hypotheses. In a probabilistic and interdependent causal world, our primary goal is to determine whether, and in what contexts, our causal hypotheses provide accurate foundations for inference and intervention—not to disconfirm their alternatives. By recognizing the central role of invariance in causal learning, the phenomenon of positive testing may be reinterpreted as a rational information-seeking strategy.
Probabilistic Inference of Transcription Factor Binding from Multiple Data Sources
An important problem in molecular biology is to build a complete understanding of transcriptional regulatory processes in the cell. We have developed a flexible, probabilistic framework to predict TF binding from multiple data sources that differs from the standard hypothesis testing (scanning) methods in several ways. Our probabilistic modeling framework estimates the probability of binding and, thus, naturally reflects our degree of belief in binding. Probabilistic modeling also allows for easy and systematic integration of our binding predictions into other probabilistic modeling methods, such as expression-based gene network inference. The method answers the question of whether the whole analyzed promoter has a binding site, but can also be extended to estimate the binding probability at each nucleotide position. Further, we introduce an extension to model combinatorial regulation by several TFs. Most importantly, the proposed methods can make principled probabilistic inference from multiple evidence sources, such as multiple statistical models (motifs) of the TFs, evolutionary conservation, regulatory potential, CpG islands, nucleosome positioning, DNase hypersensitive sites, ChIP-chip binding segments, and other (prior) sequence-based biological knowledge. We developed both a likelihood and a Bayesian method, where the latter is implemented with a Markov chain Monte Carlo algorithm. Results on a carefully constructed test set from the mouse genome demonstrate that principled data fusion can significantly improve the performance of TF binding prediction methods. We also applied the probabilistic modeling framework to all promoters in the mouse genome and the results indicate a sparse connectivity between transcriptional regulators and their target promoters. To facilitate analysis of other sequences and additional data, we have developed an on-line web tool, ProbTF, which implements our probabilistic TF binding prediction method using multiple data sources.
A test data set, a web tool, source code, and supplementary data are available at: http://www.probtf.org
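The fusion idea in this abstract can be illustrated with a minimal sketch: if several evidence sources (motif score, conservation, DNase hypersensitivity, ...) are treated as conditionally independent given the binding state, each contributes a likelihood ratio to the posterior log-odds of binding. This is a simplified naive-Bayes illustration, not the ProbTF model itself; the function name and the example ratios are hypothetical.

```python
import math

def posterior_binding_prob(prior, likelihood_ratios):
    """Combine independent evidence sources into a posterior binding probability.

    prior: prior probability that the promoter contains a binding site.
    likelihood_ratios: P(evidence | bound) / P(evidence | not bound) for each
    source, assumed conditionally independent given the binding state.
    """
    log_odds = math.log(prior / (1.0 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    # Convert posterior log-odds back to a probability
    return 1.0 / (1.0 + math.exp(-log_odds))

# Three sources favouring binding 4:1, 2:1 and 3:1, from a prior of 0.1
p = posterior_binding_prob(0.1, [4.0, 2.0, 3.0])
```

With no evidence, the posterior equals the prior; each source with a ratio above 1 pushes the probability of binding upward.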
Symbolic Execution for Randomized Programs
We propose a symbolic execution method for programs that can draw random
samples. In contrast to existing work, our method can verify randomized
programs with unknown inputs and can prove probabilistic properties that
universally quantify over all possible inputs. Our technique augments standard
symbolic execution with a new class of \emph{probabilistic symbolic variables},
which represent the results of random draws, and computes symbolic expressions
representing the probability of taking individual paths. We implement our
method on top of the \textsc{KLEE} symbolic execution engine alongside multiple
optimizations and use it to prove properties about probabilities and expected
values for a range of challenging case studies written in C++, including
Freivalds' algorithm, randomized quicksort, and a randomized property-testing
algorithm for monotonicity. We evaluate our method against \textsc{Psi}, an
exact probabilistic symbolic inference engine, and \textsc{Storm}, a
probabilistic model checker, and show that our method significantly outperforms
both tools.
Comment: 47 pages, 9 figures, to appear at OOPSLA 202
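Freivalds' algorithm, one of the case studies named above, is a classic randomized program whose correctness guarantee is probabilistic: it checks a claimed matrix product A·B = C in O(n²) time per trial, accepting a wrong C with probability at most 2^-trials. A plain-Python sketch (the abstract's case studies are in C++; this is an illustration of the algorithm itself, not the paper's artifact):

```python
import random

def freivalds(A, B, C, trials=20):
    """Probabilistically verify that A @ B == C (square matrices as lists of rows).

    Each trial draws a random 0/1 vector r and checks A(Br) == Cr, which costs
    three matrix-vector products instead of one matrix-matrix product.
    """
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # witness found: definitely A @ B != C
    return True  # A @ B == C with probability >= 1 - 2**-trials

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]  # the true product A @ B
```

A "True" answer is only probabilistically correct, which is exactly the kind of property (a bound on the error probability, for all inputs) that the symbolic execution method above aims to prove.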
A Novel Predictive-Coding-Inspired Variational RNN Model for Online Prediction and Recognition
This study introduces PV-RNN, a novel variational RNN inspired by the
predictive-coding ideas. The model learns to extract the probabilistic
structures hidden in fluctuating temporal patterns by dynamically changing the
stochasticity of its latent states. Its architecture attempts to address two
major concerns of variational Bayes RNNs: how can latent variables learn
meaningful representations and how can the inference model transfer future
observations to the latent variables. PV-RNN does both by introducing adaptive
vectors mirroring the training data, whose values can then be adapted
differently during evaluation. Moreover, prediction errors during
backpropagation, rather than external inputs during the forward computation,
are used to convey information to the network about the external data. For
testing, we introduce error regression for predicting unseen sequences as
inspired by predictive coding that leverages those mechanisms. The model
introduces a weighting parameter, the meta-prior, to balance the optimization
pressure placed on two terms of a lower bound on the marginal likelihood of the
sequential data. We test the model on two datasets with probabilistic
structures and show that with high values of the meta-prior the network
develops deterministic chaos through which the data's randomness is imitated.
For low values, the model behaves as a random process. The network performs
best on intermediate values, and is able to capture the latent probabilistic
structure with good generalization. Analyzing the meta-prior's impact on the
network allows us to study precisely the theoretical value and practical benefits
of incorporating stochastic dynamics in our model. We demonstrate better
prediction performance on a robot imitation task with our model using error
regression compared to a standard variational Bayes model lacking such a
procedure.
Comment: The paper is accepted in Neural Computation
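The meta-prior described above reweights the two terms of the variational lower bound: the expected reconstruction log-likelihood and the KL divergence between the approximate posterior and the prior over the latent state. A minimal sketch of such a weighted bound for diagonal Gaussians (illustrative only; the function name is hypothetical and this is not the PV-RNN implementation):

```python
import math

def weighted_elbo(recon_log_lik, mu_q, logvar_q, mu_p, logvar_p, meta_prior):
    """Lower bound on the data log-likelihood with a meta-prior weight.

    q = N(mu_q, exp(logvar_q)) is the approximate posterior and
    p = N(mu_p, exp(logvar_p)) the prior, both diagonal Gaussians.
    meta_prior rescales the KL term, trading reconstruction accuracy
    against regularisation of the latent state.
    """
    kl = 0.0
    for mq, lq, mp, lp in zip(mu_q, logvar_q, mu_p, logvar_p):
        # Closed-form KL divergence between two univariate Gaussians
        kl += 0.5 * (lp - lq + (math.exp(lq) + (mq - mp) ** 2) / math.exp(lp) - 1.0)
    return recon_log_lik - meta_prior * kl
```

A large meta-prior penalises latent stochasticity heavily (pushing the model toward the deterministic regime the abstract describes), while a small one lets the latent states absorb the data's randomness.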