Behavior quantification as the missing link between fields: Tools for digital psychiatry and their role in the future of neurobiology
The great behavioral heterogeneity observed between individuals with the same
psychiatric disorder and even within one individual over time complicates both
clinical practice and biomedical research. However, modern technologies present an
exciting opportunity to improve behavioral characterization. Data from existing
psychiatric methods that are qualitative or unscalable, such as patient surveys
and clinical interviews, can now be collected at greater capacity and analyzed
to produce new quantitative measures. Furthermore, recent capabilities for
continuous collection of passive sensor streams, such as phone GPS or
smartwatch accelerometry, open avenues of inquiry that were previously
entirely unrealistic. Their temporally dense nature enables a
cohesive study of real-time neural and behavioral signals.
To develop comprehensive neurobiological models of psychiatric disease, it
will be critical to first develop strong methods for behavioral quantification.
There is huge potential in what can theoretically be captured by current
technologies, but this in itself presents a large computational challenge --
one that will necessitate new data processing tools, new machine learning
techniques, and ultimately a shift in how interdisciplinary work is conducted.
In my thesis, I detail research projects that take different perspectives on
digital psychiatry, subsequently tying ideas together with a concluding
discussion on the future of the field. I also provide software infrastructure
where relevant, with extensive documentation.
Major contributions include scientific arguments and proof-of-concept results
for daily free-form audio journals as an underappreciated psychiatric research
data type, as well as novel stability theorems and pilot empirical success for a
proposed multi-area recurrent neural network architecture.
Proceedings of the 19th Sound and Music Computing Conference
Proceedings of the 19th Sound and Music Computing Conference - June 5-12, 2022 - Saint-Étienne (France).
https://smc22.grame.f
Integration of Leaky-Integrate-and-Fire-Neurons in Deep Learning Architectures
Up to now, modern Machine Learning has mainly been based on fitting high-dimensional
functions to enormous data sets, taking advantage of huge hardware
resources. We show that biologically inspired neuron models such as the
Leaky-Integrate-and-Fire (LIF) neuron provide novel and efficient ways of
encoding information. They can be integrated into Machine Learning models and
are a potential means of improving Machine Learning performance.
To this end, we derive simple update rules for the LIF units from their differential
equations, which are easy to integrate numerically. We apply a novel approach
to train the LIF units in a supervised fashion via backpropagation, assigning a
constant value to the derivative of the neuron activation function exclusively
for the backpropagation step. This simple mathematical trick distributes
the error among the neurons of the preceding layer. We apply
our method to the IRIS blossoms image data set and show that this training
technique can be used to train LIF neurons on image classification tasks.
Furthermore, we show how to integrate our method into the Keras (TensorFlow)
framework and run it efficiently on GPUs. To foster a deeper understanding of
the mechanisms at work during training, we developed interactive illustrations,
which we provide online.
With this study, we want to contribute to current efforts to enhance
Machine Intelligence by integrating principles from biology.
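
To make the constant-derivative trick concrete, here is a minimal sketch (an assumed reconstruction, not the authors' released code) of how such a surrogate gradient can be wired into Keras/TensorFlow: the spike nonlinearity fires on the forward pass, while its derivative is replaced by a constant during backpropagation. The Euler update, the layer interface, and all parameter values (tau, threshold, step count) are illustrative assumptions.

```python
import tensorflow as tf

@tf.custom_gradient
def spike(v):
    """Heaviside spike on the forward pass; constant derivative backward."""
    out = tf.cast(v > 0.0, tf.float32)   # emit a spike where the threshold is crossed
    def grad(dy):
        return dy * 1.0                  # a constant (here 1.0) stands in for dSpike/dv
    return out, grad

class LIFLayer(tf.keras.layers.Layer):
    """Euler-integrated leaky-integrate-and-fire units, rate-coded over `steps`."""
    def __init__(self, units, tau=20.0, v_th=1.0, dt=1.0, steps=50, **kwargs):
        super().__init__(**kwargs)
        self.units, self.tau, self.v_th = units, tau, v_th
        self.dt, self.steps = dt, steps

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer="glorot_uniform", trainable=True)

    def call(self, x):
        i_in = tf.matmul(x, self.w)      # constant input current per sample
        v = tf.zeros_like(i_in)          # membrane potential
        rate = tf.zeros_like(i_in)       # accumulated spike count
        for _ in range(self.steps):
            v = v + self.dt * (-v + i_in) / self.tau   # Euler step of dv/dt = (-v + i)/tau
            s = spike(v - self.v_th)                   # surrogate-gradient spike function
            v = v * (1.0 - s)                          # reset membrane where a spike occurred
            rate = rate + s
        return rate / self.steps         # mean firing rate as the layer output

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),          # e.g. four input features per sample
    LIFLayer(32),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Because the constant derivative is used only in the backward pass, the forward computation remains a genuine spiking process while the error is still distributed across the preceding layer.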
Dynamical Systems in Spiking Neuromorphic Hardware
Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe for formulating models of such systems as coupled sets of nonlinear differential equations and compiling them onto recurrently connected spiking neural networks, akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to far more effectively leverage a wide variety of dynamics in digital hardware, and to exploit the device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks, a state-of-the-art deep recurrent architecture, in accuracy and training time on a continuous-time memory task and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy efficiency of the human brain in the former case and the precision of conventional computation in the latter.
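
As a rough non-spiking illustration of the Delay Network idea (a sketch following the published LDN state-space construction, not the implementation from the thesis), the snippet below builds the linear time-invariant system whose d-dimensional state approximates a sliding window of length theta, simulates it with Euler integration, and reads out the delayed input with fixed Legendre weights. The input signal, dimensionality, and step size are arbitrary choices.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def ldn_system(d, theta):
    """(A, B) of a d-dimensional system approximating a theta-second delay."""
    A = np.zeros((d, d))
    B = np.zeros(d)
    for i in range(d):
        B[i] = (2 * i + 1) * (-1) ** i / theta
        for j in range(d):
            A[i, j] = (2 * i + 1) / theta * (-1 if i < j else (-1) ** (i - j + 1))
    return A, B

dt, theta, d = 1e-3, 0.5, 6
A, B = ldn_system(d, theta)
t = np.arange(0.0, 2.0, dt)
u = np.sin(2 * np.pi * 1.5 * t)          # arbitrary test input
m = np.zeros(d)
states = []
for uk in u:
    m = m + dt * (A @ m + B * uk)        # Euler step of dm/dt = A m + B u
    states.append(m.copy())

# Decode u(t - r*theta) as a fixed linear readout of the state: the weights
# are shifted Legendre polynomials evaluated at the delay fraction r
# (assumed convention; r = 1.0 reads out the full theta-second delay).
r = 1.0
w = np.array([Legendre.basis(i)(2 * r - 1) for i in range(d)])
u_delayed = np.asarray(states) @ w
```

In the thesis itself, this state is represented by recurrently connected spiking neurons rather than integrated directly, which is what makes the DN suitable for neuromorphic hardware.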
Gating sensory noise in a spiking subtractive LSTM
Spiking neural networks are being investigated both as biologically plausible models of neural computation and as a potentially more efficient type of neural network. Recurrent neural networks in the form of networks of gating memory cells have been central to state-of-the-art solutions in problem domains that involve sequence recognition or generation. Here, we design an analog Long Short-Term Memory (LSTM) cell whose neurons can be substituted with efficient spiking neurons, using subtractive gating (following the subLSTM in [1]) instead of multiplicative gating. Subtractive gating allows for a less sensitive gating mechanism, which is critical when using spiking neurons. By using fast-adapting spiking neurons with a smoothed Rectified Linear Unit (ReLU)-like effective activation function, we show that an accurate conversion from an analog subLSTM to a continuous-time spiking subLSTM is possible. This architecture results in memory networks that compute very efficiently, with low average firing rates comparable to those of biological neurons, while operating in continuous time.
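
For readers unfamiliar with subtractive gating, the following is a minimal NumPy sketch of a single subLSTM step in the style of [1]; it is an assumed reconstruction for illustration, not the spiking implementation described above. Note how the input and output gates are subtracted from bounded sigmoidal quantities rather than multiplied with them.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SubLSTMCell:
    """One step of a subtractively gated LSTM; shapes and init are illustrative."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # one weight matrix and bias per gate/candidate, acting on [x; h]
        self.W = {k: rng.normal(0.0, 0.1, (n_in + n_hidden, n_hidden))
                  for k in ("f", "i", "o", "z")}
        self.b = {k: np.zeros(n_hidden) for k in ("f", "i", "o", "z")}

    def step(self, x, h, c):
        xh = np.concatenate([x, h])
        f = sigmoid(xh @ self.W["f"] + self.b["f"])  # forget gate (multiplicative)
        i = sigmoid(xh @ self.W["i"] + self.b["i"])  # input gate (subtractive)
        o = sigmoid(xh @ self.W["o"] + self.b["o"])  # output gate (subtractive)
        z = sigmoid(xh @ self.W["z"] + self.b["z"])  # bounded candidate input
        c = f * c + z - i    # subtraction replaces the usual i * z product
        h = sigmoid(c) - o   # subtraction replaces the usual o * tanh(c) product
        return h, c
```

Because every gated quantity stays in a bounded range and gating errors enter additively rather than multiplicatively, small rate-coding errors from spiking neurons perturb the cell state only gently, which is the intuition behind the less sensitive gating mechanism mentioned above.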
An Empirical Model of Area MT: Investigating the Link between Representation Properties and Function
The middle temporal area (MT) is one of the visual areas of the primate brain where neurons have highly specialized representations of motion and binocular disparity. Other stimulus features, such as contrast, size, and pattern, can also modulate MT activity. Since MT has been studied intensively for decades, there is a rich literature on its response characteristics. Here, I present an empirical model that incorporates some of this literature into a statistical model of the population response. Specifically, the parameters of the model are drawn from distributions that I have estimated from data in the electrophysiology literature. The model accepts arbitrary stereo video as input and uses computer-vision methods to calculate dense flow, disparity, and contrast fields. The activity is then predicted using a combination of tuning functions that have previously been used to describe data in a variety of experiments. The empirical model approximates a number of MT phenomena more closely than previous models, as well as reproducing three phenomena not addressed by past models. I present three applications of the model. First, I use it to examine the relationships between MT tuning features and behaviour in an ethologically relevant task. Second, I employ it to study the functional role of MT surrounds in motion-related tasks. Third, I use it to guide the internal activity of a deep convolutional network towards a more physiologically realistic representation.
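
As a toy illustration of what a "combination of tuning functions" can look like (with made-up parameter values, not the distributions estimated in the thesis), the sketch below combines three forms that are common in the MT literature: von Mises direction tuning, log-Gaussian speed tuning, and a Gabor over binocular disparity.

```python
import numpy as np

def mt_rate(direction, speed, disparity,
            pref_dir=90.0, kappa=2.0,          # von Mises direction tuning (deg)
            pref_speed=8.0, speed_sigma=1.2,   # log-Gaussian speed tuning (deg/s)
            disp_center=0.0, disp_sigma=0.4,   # Gabor disparity tuning (deg)
            disp_freq=1.5, disp_phase=0.0,
            r_max=60.0, r_base=5.0):           # peak and baseline rates (spikes/s)
    """Toy MT neuron: all parameters here are illustrative assumptions."""
    d = np.exp(kappa * (np.cos(np.deg2rad(direction - pref_dir)) - 1.0))
    s = np.exp(-np.log2(speed / pref_speed) ** 2 / (2 * speed_sigma ** 2))  # speed > 0
    g = np.exp(-(disparity - disp_center) ** 2 / (2 * disp_sigma ** 2)) \
        * np.cos(2 * np.pi * disp_freq * (disparity - disp_center) + disp_phase)
    # combine multiplicatively; rescale the Gabor term into [0, 1] so rates stay non-negative
    return r_base + r_max * d * s * (1.0 + g) / 2.0

rate = mt_rate(direction=100.0, speed=10.0, disparity=0.1)
```

A population model in this spirit would draw each neuron's preferred direction, speed, and disparity parameters from literature-derived distributions and evaluate such tuning functions on the flow, disparity, and contrast fields extracted from the stereo video.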