238 research outputs found

    Stochasticity from function -- why the Bayesian brain may need no noise

    An increasing body of evidence suggests that the trial-to-trial variability of spiking activity in the brain is not mere noise, but rather the reflection of a sampling-based encoding scheme for probabilistic computing. Since the precise statistical properties of neural activity are important in this context, many models assume an ad hoc source of well-behaved, explicit noise, either on the input or on the output side of single-neuron dynamics, most often an independent Poisson process in either case. However, these assumptions are somewhat problematic: neighboring neurons tend to share receptive fields, rendering both their input and their output correlated; at the same time, neurons are known to behave largely deterministically, as a function of their membrane potential and conductance. We suggest that spiking neural networks may, in fact, have no need for noise to perform sampling-based Bayesian inference. We study analytically the effect of auto- and cross-correlations in functionally Bayesian spiking networks and demonstrate how their effect translates to synaptic interaction strengths, rendering them controllable through synaptic plasticity. This allows even small ensembles of interconnected deterministic spiking networks to simultaneously and co-dependently shape their output activity through learning, enabling them to perform complex Bayesian computation without any need for noise, which we demonstrate in silico, both in classical simulation and in neuromorphic emulation. These results close a gap between the abstract models and the biology of functionally Bayesian spiking networks, effectively reducing the architectural constraints imposed on physical neural substrates required to perform probabilistic computing, be they biological or artificial.
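
    To make the sampling picture concrete, the sketch below implements the abstract target model that sampling-based schemes of this kind typically assume: binary units whose joint activity follows a Boltzmann distribution, sampled here with explicit Gibbs/Glauber updates. Note that this reference model uses exactly the explicit randomness the abstract argues real networks can do without; all names and parameters are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sample_boltzmann(W, b, n_steps=10_000):
    """Glauber/Gibbs dynamics over binary units z in {0, 1}.

    Draws samples from p(z) ∝ exp(z·b + z·W·z / 2), the Boltzmann
    distribution that sampling-based spiking models typically target.
    """
    n = len(b)
    z = rng.integers(0, 2, size=n)
    samples = np.empty((n_steps, n), dtype=int)
    for t in range(n_steps):
        k = rng.integers(n)                 # pick one unit to update
        u = b[k] + W[k] @ z                 # local "membrane potential"
        z[k] = rng.random() < 1.0 / (1.0 + np.exp(-u))
        samples[t] = z
    return samples

# Two units with an excitatory coupling: their joint activity becomes
# correlated, which the sample statistics should reflect.
W = np.array([[0.0, 1.5],
              [1.5, 0.0]])
b = np.array([-1.0, -1.0])
s = gibbs_sample_boltzmann(W, b)
print("empirical p(z1=1, z2=1):", np.mean(s[:, 0] & s[:, 1]))
```

    The paper's contribution, in these terms, is showing that ensembles of deterministic spiking networks can approximate such sampling dynamics themselves, with the residual auto- and cross-correlations absorbed into the learned weights W.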

    Inference by Believers in the Law of Small Numbers

    Many people believe in the "Law of Small Numbers," exaggerating the degree to which a small sample resembles the population from which it is drawn. To model this, I assume that a person exaggerates the likelihood that a short sequence of i.i.d. signals resembles the long-run rate at which those signals are generated. Such a person believes in the "gambler's fallacy," thinking that early draws of one signal increase the odds that subsequent draws will be of other signals. When uncertain about the rate, the person over-infers from short sequences of signals and is prone to think the rate is more extreme than it is. When the person makes inferences about the frequency at which rates are generated by different sources -- such as the distribution of talent among financial analysts -- based on few observations from each source, he tends to exaggerate how much variance there is in the rates. Hence, the model predicts that people may pay for financial advice from "experts" whose expertise is entirely illusory. Other economic applications are discussed.
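
    A hedged sketch of the kind of model the abstract describes: the believer treats i.i.d. signals with rate theta as draws *without replacement* from a hypothetical urn of size N, which yields both the gambler's fallacy and over-inference from short sequences. The urn size N = 4 and the three-point prior below are illustrative assumptions of this sketch, not parameters from the paper.

```python
from fractions import Fraction

def urn_predictive(theta, N, history):
    """P(next signal is 'a') for a believer who models i.i.d. signals
    with rate theta as draws without replacement from an urn holding
    N*theta 'a's (the urn size N is a free parameter of the sketch)."""
    a_left = N * theta - history.count('a')
    return a_left / Fraction(N - len(history))

# Gambler's fallacy: after one 'a', a rate-1/2 believer with N = 4
# assigns probability 1/3 (not 1/2) to the next draw being 'a'.
print(urn_predictive(Fraction(1, 2), 4, ['a']))     # 1/3

def posterior_after_aa(N=None):
    """Posterior over rates after observing 'aa', with a uniform prior
    on {1/4, 1/2, 3/4}; N=None gives the correct i.i.d. Bayesian."""
    thetas = [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]
    if N is None:
        liks = [t * t for t in thetas]                  # theta^2
    else:
        # Without-replacement likelihood; note that for theta = 1/4 the
        # urn holds a single 'a', so the believer deems 'aa' impossible.
        liks = [t * (N * t - 1) / (N - 1) for t in thetas]
    z = sum(liks)
    return [l / z for l in liks]

# Over-inference: the believer puts mass 3/4 on the extreme rate 3/4,
# versus 9/14 ≈ 0.64 for the correct Bayesian.
print(posterior_after_aa())        # correct posterior
print(posterior_after_aa(N=4))     # believer's posterior
```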

    Time series segmentation procedures to detect, locate and estimate change-points

    This thesis deals with the problem of modeling a univariate nonstationary time series by a set of approximately stationary processes. The observed period is segmented into intervals, also called partitions, blocks or segments, in which the time series behaves as approximately stationary. Thus, by segmenting a time series, we aim to obtain the periods of stability and homogeneity in the behavior of the process; identify the moments of change, called change-points; represent the regularities and features of each piece or block; and use this information to determine the pattern in the nonstationary time series. When the time series exhibits multiple change-points, a more intricate and difficult issue is to use an efficient procedure to detect, locate and estimate them. Thus, the main goal of the thesis consists in describing, studying comparatively with simulated data, and applying to real data a number of segmentation and/or change-point detection procedures, which involve both different types of statistics indicating when the data exhibit a potential break and search algorithms to locate multiple pattern variations. The thesis is structured in five chapters. Chapter 1 introduces the main concepts involved in the segmentation problem in the context of time series. First, a summary of the main statistics to detect a single change-point is presented. Second, we point out the multiple change-point search algorithms presented in the literature and the linear models for representing time series, in both the parametric and the non-parametric approach. Third, we introduce locally stationary and piecewise stationary processes. Finally, we show examples of piecewise and locally stationary simulated and real time series where the detection of change-points and segmentation seem to be important. Chapter 2 deals with the problem of detecting, locating and estimating a single change or multiple changes in the parameters of a stationary process. We consider changes in the marginal mean, the marginal variance, and both the mean and the variance, for both uncorrelated and serially correlated processes. The main contributions of this chapter are: a) introducing a modification of the theoretical model proposed by Al Ibrahim et al. (2003) that is useful to look for changes in the mean and the autoregressive coefficients of piecewise autoregressive processes, using a procedure based on the Bayesian information criterion (BIC), where we also allow for changes in the variance of the perturbation term; b) comparing this procedure with several procedures available in the literature based on cusum methods (Inclán and Tiao (1994), Lee et al. (2003)), the minimum description length principle (Davis et al. (2006)), the time-varying spectrum (Ombao et al. (2002)) and the likelihood ratio test (Killick et al. (2012)), computing their empirical size and power properties in several scenarios; and c) applying them to neurology and speech recognition datasets. Chapter 3 studies processes with constant conditional mean and dynamic behavior in the conditional variance, which are also affected by structural changes. Thus, the goal is to explore, analyse and apply the change-point detection and estimation methods to the situation where the conditional variance of a univariate process is heteroskedastic and exhibits change-points. Procedures based on an informational approach, cusum statistics, the minimum description length and the spectrum assuming a heteroskedastic time series are presented. We propose a method to detect and locate change-points using the BIC as an extension of its application in linear models. We analyse comparatively the size and power properties of the procedures presented for single and multiple change-point scenarios and illustrate their performance with the S&P 500 returns. Chapter 4 analyses the problem of detecting and estimating smooth change-points in the data, where the Linear Trend change-point (LTCP) model is considered to represent a smooth change. We propose a procedure based on the Bayesian information criterion to distinguish a smooth from an abrupt change-point. The likelihood function of the LTCP model is obtained, as well as the conditional maximum likelihood estimator of the parameters of the model. The proposed procedure is compared with outlier analysis techniques (Fox (1972), Chang (1982), Chen and Liu (1993), Kaiser (1999), among others) in simulation experiments. We also present an iterative procedure to detect multiple smooth and abrupt change-points, illustrated with the number of deaths in traffic accidents on Spanish motorways. Finally, Chapter 5 summarizes the main results of the thesis and proposes some extensions for future research.
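
    As a concrete illustration of the BIC-based segmentation idea running through the thesis, the sketch below scans a Gaussian series for a single change in mean and variance and accepts the split only if it lowers the BIC. This is a generic single change-point version under assumptions of my own (i.i.d. Gaussian segments; counting the change-point location as a parameter is one common convention), not the thesis's exact procedure.

```python
import numpy as np

def bic_single_changepoint(x, min_seg=5):
    """Compare a one-segment Gaussian model against the best
    two-segment model (each segment with its own mean and variance);
    model choice by BIC = -2*logL + p*log(n)."""
    n = len(x)

    def seg_cost(s):
        # -2 * maximized Gaussian log-likelihood, up to constants that
        # cancel because total sample size is the same for all models.
        return len(s) * np.log(np.var(s) + 1e-12)

    bic0 = seg_cost(x) + 2 * np.log(n)          # params: mean, variance
    best = (None, bic0)
    for k in range(min_seg, n - min_seg):
        # 5 params: two means, two variances, one change-point location.
        bic1 = seg_cost(x[:k]) + seg_cost(x[k:]) + 5 * np.log(n)
        if bic1 < best[1]:
            best = (k, bic1)
    return best  # (change-point index, BIC), or (None, BIC) if no change

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
print(bic_single_changepoint(x))  # expect a split near index 100
```

    Multiple change-points are then typically found by applying such a single-change test recursively to the resulting segments (e.g., binary segmentation), which is the role of the search algorithms surveyed in Chapter 1.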

    On the robustness of Bayesian phylogenetic gene tree estimation


    Financial Applications of Human Perception of Fractal Time Series

    The purpose of this thesis is to explore the interaction between people’s financial behaviour and the market’s fractal characteristics. In particular, I have been interested in the Hurst exponent, a measure of a series’ fractal dimension and autocorrelation. In Chapter 2 I show that people exhibit a high level of sensitivity to the Hurst exponent of visually presented graphs representing price series. I explain this sensitivity using two types of cues: the illuminance of the graphs, and the characteristics of the price change series. I further show that people can learn to identify the Hurst exponents of fractal graphs when feedback about the correct values of the Hurst exponent is given. In Chapter 3 I investigate the relationship between risk perception and the Hurst exponent. I show that people assess the risk of investment in an asset according to the Hurst exponent of its price graph if it is presented along with its price change series. Analysis reveals that buy/sell decisions also depend on the Hurst exponent of the graphs. In Chapter 4 I study forecasts from financial graphs. I show that, to produce forecasts, people imitate the perceived noise and signals of the data series. People’s forecasts depend on certain personality traits and dispositions. Similar results were obtained for experts. In Chapter 5 I explore the way people integrate visually presented price series with news. I find that people’s financial decisions are influenced by news more than by the average trend of the graphs. In the case of a positive trend, there is a correlation between financial forecasts and decisions. Finally, in Chapter 6 I show that the way people perceive fractal time series is correlated with the Hurst exponent of the graphs. I use the findings of the thesis to describe a possible mechanism which preserves the fractal nature of price series.
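
    For readers unfamiliar with the statistic, the Hurst exponent of a price-change series can be estimated with classical rescaled-range (R/S) analysis, in which E[R(n)/S(n)] grows like c·n^H, so H is the slope of log(R/S) against log(n). A minimal sketch (the function name and window-size choices are mine):

```python
import numpy as np

def hurst_rs(x, window_sizes=None):
    """Estimate the Hurst exponent of an increment series x via
    rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    if window_sizes is None:
        window_sizes = np.unique(
            np.logspace(3, np.log2(len(x) // 4), num=12, base=2).astype(int))
    rs = []
    for n in window_sizes:
        vals = []
        for start in range(0, len(x) - n + 1, n):   # non-overlapping windows
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())             # mean-adjusted partial sums
            r = z.max() - z.min()                   # range R
            s = w.std()                             # scale S
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(2)
# White noise: H ≈ 0.5 (plain R/S has a mild small-sample upward bias).
print(hurst_rs(rng.normal(size=10_000)))
```

    As a reading aid for the chapters above: H ≈ 0.5 indicates uncorrelated increments, H > 0.5 persistent (trending) series, and H < 0.5 anti-persistent (mean-reverting) series.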

    Understanding movement processes underlying camera-trap data for reliable population inference

    Understanding how animal movement drives spatial distribution patterns and population dynamics is critically important, as the consequences of ignoring movement in population models are increasingly recognised. While the gap between the two fields is narrowing, the extent to which knowledge of animal movement is required for accurate population-level inferences remains unclear. This thesis harnesses advancements in animal movement modelling to evaluate the reliability of inferences obtained from population abundance (or density) estimation methods for both marked (i.e., individually distinguishable) and unmarked (i.e., not individually distinguishable) animals. The primary aim of my thesis was to explore the impact of animal movement (and other processes, e.g., imperfect detection) on population-level (i) distribution and space-use patterns, and (ii) abundance estimation within the framework of camera-trap (CT) applications. Population-level patterns are fundamentally emergent from movement decisions made at the individual animal level. The first piece of research (Chapter 2) explores this mechanistic link with a multi-individual movement simulation model, in which an individual’s movement decisions are driven by two important biological processes: its memory of the resource landscape (i.e., resource memory) and the conspecific scent environment (i.e., territorial exclusion). I used the framework to investigate how each process affects emergent space-use patterns. Both mechanisms together led to the formation of exclusive, resource- and population density-dependent utilisation distributions (UDs), which were responsive to perturbations in the conspecific environment (i.e., removing individuals). Model application to a population of feral cats demonstrates that general space-use measures (e.g., median UD area, resident and transient dynamics) can be approximated through simulation, though finer-scale space-use patterns (e.g., cumulative UD area) are poorly matched. Despite this, the model’s process-based approach to simulating movement can be applied in studies that aim to evaluate the consequences of not accounting for more complex, fine-scale movement behaviours in ecological models. In the second data chapter (Chapter 3), the movement simulation model (developed in Chapter 2) was applied to the evaluation of abundance estimation methods for marked populations, namely spatial capture-recapture (SCR) models. Individual detection data were generated from simulated movement trajectories and then fitted with basic, resource-selection and transience SCR models, as well as their variants accounting for resource-driven heterogeneity in density and detectability. In general, results demonstrated that population-level inferences of abundance, and of the resource effect on spatial variation in density, were robust when the level of individual heterogeneity induced by the underlying movement process is low. However, my evaluation framework exposed weaknesses in current SCR models’ ability to accurately reveal finer-scale patterns of ecological and movement processes (e.g., resource effects on detectability and space use), and we recommend further integration of complex movement into the SCR framework to address this shortcoming. The third data chapter (Chapter 4) examines the performance of CT-based abundance estimation methods for unmarked populations. While methods of abundance estimation for marked animals are well established, the equivalent methods for unmarked animals are relatively new, at a stage where the robustness of abundance estimates is in question. Hence, the complexity of movement simulated in Chapters 2 and 3 was not applied here, and a simpler approach representing a breadth of population scenarios (e.g., discrete-time biased correlated random walks, state-switching behaviour in group-living populations) was used. I evaluated abundance estimator performance for three unmarked methods (REM: random encounter model; REST: random encounter and staying time; CTDS: camera-trap distance sampling) under a wide range of scenarios varying in population density and movement characteristics (i.e., speed and home-range size). Results imply that CTDS and REST can reliably estimate abundance for both solitary and group-living animals if population densities are not low and trap effort is substantial. REM is potentially reliable under the same circumstances, but its model parameters are prone to bias without careful consideration of the measurement and parameterisation process. Unmarked density estimation currently requires a substantial investment of effort, so improving the cost-effectiveness of these methods should be a priority for future research. Finally, I synthesise the findings of my thesis and discuss the future of research at the interface of movement and population ecology, and of abundance estimation in CT applications (Chapter 5: General Discussion). In my view, the integration of animal movement and population dynamics models has great potential for improved inferences at the interface of population, movement and landscape ecology. As for CT-based abundance estimation, several key challenges to practical use remain, namely: (i) the substantial labour required in data collection; (ii) inconsistency in analytical approaches for unmarked animals; (iii) the considerable level of statistical knowledge and computational resources required for model implementation; and (iv) limitations in ecological inference. However, I believe that many of the solutions to these problems will be available in the near future, especially in light of rapid technological advances in data automation workflows and cloud-platform solutions.
    Thesis (Ph.D.) -- University of Adelaide, School of Biological Sciences, 202
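
    To make one of the unmarked estimators concrete: the random encounter model (REM) converts a camera's trap rate into density using the animals' day range and the camera's detection zone, via the ideal-gas-model formula of Rowcliffe et al. (2008). The formula is standard; the numbers in the sketch below are purely illustrative.

```python
import math

def rem_density(encounters, camera_days, speed, radius, angle):
    """Random encounter model (REM) point estimate of density:

        D = (y / t) * pi / (v * r * (2 + theta))

    with y encounters over t camera-days, day range v (km/day),
    detection radius r (km) and detection arc theta (radians).
    Returns animals per km^2."""
    trap_rate = encounters / camera_days
    return trap_rate * math.pi / (speed * radius * (2 + angle))

# Illustrative numbers only: 50 detections over 2000 camera-days,
# day range 4 km/day, detection radius 10 m, detection arc 0.7 rad.
print(rem_density(50, 2000, 4.0, 0.010, 0.7))  # ≈ 0.73 animals / km^2
```

    The chapter's point about parameter bias is visible in the formula itself: density scales inversely with speed, radius and arc, so mis-measuring any of them propagates directly into the estimate.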

    Harnessing function from form: towards bio-inspired artificial intelligence in neuronal substrates

    Despite the recent success of deep learning, the mammalian brain is still unrivaled when it comes to interpreting complex, high-dimensional data streams like visual, auditory and somatosensory stimuli. However, the underlying computational principles that allow the brain to deal with unreliable, high-dimensional and often incomplete data while consuming power on the order of a few watts are still mostly unknown. In this work, we investigate how specific functionalities emerge from simple structures observed in the mammalian cortex, and how these might be utilized in non-von-Neumann devices like “neuromorphic hardware”. Firstly, we show that an ensemble of deterministic, spiking neural networks can be shaped by a simple, local learning rule to perform sampling-based Bayesian inference. This suggests a coding scheme in which spikes (or “action potentials”) represent samples of a posterior distribution, constrained by sensory input, without the need for any source of stochasticity. Secondly, we introduce a top-down framework in which neuronal and synaptic dynamics are derived using a least-action principle and gradient-based minimization. Combined, the neuro-synaptic dynamics approximate real-time error backpropagation and are mappable to mechanistic components of cortical networks, whose dynamics can again be described within the proposed framework. The presented models narrow the gap between well-defined, functional algorithms and their biophysical implementation, improving our understanding of the computational principles the brain might employ. Furthermore, such models translate naturally to hardware that mimics the vastly parallel neural structure of the brain, promising a strongly accelerated and energy-efficient implementation of powerful learning and inference algorithms, which we demonstrate for the physical model system “BrainScaleS-1”.
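
    The flavour of such top-down derivations can be conveyed with a generic energy-based sketch: both neuronal relaxation and synaptic plasticity descend the gradient of a single layer-wise mismatch energy, so that near equilibrium the weight updates approximate backpropagated errors. The energy, architecture and constants below are illustrative assumptions in the spirit of predictive-coding models, not the thesis's specific least-action framework.

```python
import numpy as np

rng = np.random.default_rng(3)
phi = np.tanh
dphi = lambda u: 1 - np.tanh(u) ** 2

# Toy network 2 -> 3 -> 1 with mismatch energy
#   E = sum_l ||u_l - W_l phi(u_{l-1})||^2 / 2  (+ output nudging).
W1, W2 = rng.normal(0, 0.5, (3, 2)), rng.normal(0, 0.5, (1, 3))
x, y = np.array([1.0, -0.5]), np.array([0.8])   # input and target
u1, u2 = np.zeros(3), np.zeros(1)               # "membrane potentials"
dt, eta, beta = 0.1, 0.05, 1.0

for step in range(2000):
    e1 = u1 - W1 @ phi(x)                # layer-wise prediction errors
    e2 = u2 - W2 @ phi(u1)
    # Neuronal dynamics: gradient flow du/dt = -dE/du; top-down errors
    # reach u1 through the transpose pathway W2.T.
    du1 = -e1 + dphi(u1) * (W2.T @ e2)
    du2 = -e2 + beta * (y - u2)          # output nudged toward target
    u1, u2 = u1 + dt * du1, u2 + dt * du2
    # Synaptic dynamics: local gradient descent on the same energy.
    W1 += eta * np.outer(e1, phi(x))
    W2 += eta * np.outer(e2, phi(u1))

print("output:", u2, "target:", y)       # output should approach 0.8
```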

    Econometrics: A Bird’s Eye View

    As a unified discipline, econometrics is still relatively young and has been transforming and expanding very rapidly over the past few decades. Major advances have taken place in the analysis of cross-sectional data by means of semi-parametric and non-parametric techniques. Heterogeneity of economic relations across individuals, firms and industries is increasingly acknowledged, and attempts have been made to take it into account either by integrating out its effects or by modeling the sources of heterogeneity when suitable panel data exist. The counterfactual considerations that underlie policy analysis and treatment evaluation have been given a more satisfactory foundation. New time series econometric techniques have been developed and employed extensively in the areas of macroeconometrics and finance. Non-linear econometric techniques are used increasingly in the analysis of cross-section and time series observations. Applications of Bayesian techniques to econometric problems have been given new impetus, largely thanks to advances in computer power and computational techniques. The use of Bayesian techniques has in turn provided investigators with a unifying framework in which the tasks of forecasting, decision making, model evaluation and learning can be considered as parts of the same interactive and iterative process, thus paving the way for establishing the foundation of “real-time econometrics”. This paper attempts to provide an overview of some of these developments.
    Keywords: history of econometrics, microeconometrics, macroeconometrics, Bayesian econometrics, nonparametric and semi-parametric analysis