Towards a Real-Time Data Driven Wildland Fire Model
A wildland fire model based on semi-empirical relations for the spread rate
of a surface fire and post-frontal heat release is coupled with the Weather
Research and Forecasting atmospheric model (WRF). The propagation of the fire
front is implemented by a level set method. Data is assimilated by a morphing
ensemble Kalman filter, which provides amplitude as well as position
corrections. Thermal images of a fire will provide the observations and will be
compared to a synthetic image from the model state.
Comment: 5 pages, 4 figures
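The abstract above mentions propagating the fire front with a level set method. As a minimal, illustrative sketch (not the paper's WRF-coupled implementation), the front can be tracked as the zero contour of a field phi evolved by an upwind (Godunov) discretization of phi_t + R|grad phi| = 0, where R stands in for the semi-empirical spread rate; all grid sizes and rates here are made up for demonstration:

```python
import numpy as np

def level_set_step(phi, rate, dx, dt):
    """One upwind (Godunov) step of phi_t + rate * |grad phi| = 0.

    The fire front is the zero contour of phi; rate > 0 expands it outward.
    """
    # One-sided differences (periodic boundaries via np.roll)
    dxm = (phi - np.roll(phi, 1, axis=0)) / dx   # backward difference, x
    dxp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward difference, x
    dym = (phi - np.roll(phi, 1, axis=1)) / dx   # backward difference, y
    dyp = (np.roll(phi, -1, axis=1) - phi) / dx  # forward difference, y
    # Godunov approximation of |grad phi| for a front moving outward
    grad = np.sqrt(np.maximum(dxm, 0.0)**2 + np.minimum(dxp, 0.0)**2
                   + np.maximum(dym, 0.0)**2 + np.minimum(dyp, 0.0)**2)
    return phi - dt * rate * grad

# Circular ignition of radius 2 on a 64x64 grid, phi = signed distance
n, dx = 64, 0.25
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 2.0

for _ in range(20):                     # 2.0 time units at spread rate 0.5
    phi = level_set_step(phi, rate=0.5, dx=dx, dt=0.1)

burned_cells = int((phi < 0).sum())     # burned area = region where phi < 0
print("burned cells:", burned_cells)
```

The burned region (phi < 0) grows from radius 2 toward radius 3, and the time step satisfies the CFL condition dt * rate / dx < 1. In the paper this evolution is driven by the modeled spread rate and corrected by the morphing ensemble Kalman filter.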
A statistical model for in vivo neuronal dynamics
Single neuron models have a long tradition in computational neuroscience.
Detailed biophysical models such as the Hodgkin-Huxley model as well as
simplified neuron models such as the class of integrate-and-fire models relate
the input current to the membrane potential of the neuron. Those types of
models have been extensively fitted to in vitro data where the input current is
controlled. Those models are however of little use when it comes to
characterize intracellular in vivo recordings since the input to the neuron is
not known. Here we propose a novel single neuron model that characterizes the
statistical properties of in vivo recordings. More specifically, we propose a
stochastic process where the subthreshold membrane potential follows a Gaussian
process and the spike emission intensity depends nonlinearly on the membrane
potential as well as the spiking history. We first show that the model has a
rich dynamical repertoire since it can capture arbitrary subthreshold
autocovariance functions, firing-rate adaptations as well as arbitrary shapes
of the action potential. We then show that this model can be efficiently fitted
to data without overfitting. Finally, we show that this model can be used to
characterize and therefore precisely compare various intracellular in vivo
recordings from different animals and experimental conditions.
Comment: 31 pages, 10 figures
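The model described above combines a Gaussian-process subthreshold potential with a spike intensity that depends nonlinearly on the potential and the spiking history. A minimal simulation sketch, using an Ornstein-Uhlenbeck process as a simple Gaussian process and illustrative (not fitted) parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 5.0                      # 1 ms steps, 5 s of simulated recording
n = int(T / dt)

# Subthreshold membrane potential: Ornstein-Uhlenbeck process
# (a stationary Gaussian process with exponential autocovariance)
tau, mu, sigma = 0.02, -65.0, 2.0      # time constant (s), mean (mV), sd (mV)
V = np.empty(n)
V[0] = mu
for t in range(1, n):
    V[t] = (V[t-1] + dt * (mu - V[t-1]) / tau
            + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal())

# Spike intensity: exponential in V, suppressed by recent spiking history
beta, V_thr, lam0 = 0.8, -63.0, 20.0   # gain (1/mV), soft threshold (mV), base rate (Hz)
history = 0.0                          # decaying spike-history term (refractoriness)
spikes = np.zeros(n, dtype=bool)
for t in range(n):
    lam = lam0 * np.exp(beta * (V[t] - V_thr) - history)
    spikes[t] = rng.random() < lam * dt
    history = history * np.exp(-dt / 0.01) + (5.0 if spikes[t] else 0.0)

print(f"{int(spikes.sum())} spikes in {T:.0f} s")
```

The history term makes the intensity drop sharply after each spike, which is what lets this class of model capture firing-rate adaptation; the actual paper fits the autocovariance, nonlinearity, and history kernel to data rather than fixing them as above.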
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles.

In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolutional network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior.

In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g. "the car slows down because the road is wet". The attention maps of the controller and the explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment: strong and weak alignment. These explainable systems represent an externalization of tacit knowledge. The network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes them brittle and potentially unsafe in situations that do not match training data.
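The causal filtering step described above can be illustrated in miniature: mask each attended region, re-run the controller, and keep only the regions whose removal actually changes the output. This toy sketch uses a stand-in linear "controller" over hypothetical region features rather than the dissertation's trained attention network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a trained controller: steering is a weighted sum of
# region features, and only the first 3 of 10 regions truly matter.
true_w = np.zeros(10)
true_w[:3] = [0.5, -0.3, 0.2]

def controller(features):
    return float(features @ true_w)

features = rng.normal(size=10)                  # hypothetical region features
attention = np.abs(rng.normal(size=10)) + 0.1   # attention over all regions
attended = np.argsort(attention)[-8:]           # top-8 attended regions

# Causal filtering: mask each attended region in turn and keep it only
# if removing it changes the controller's output.
baseline = controller(features)
causal = []
for r in attended:
    masked = features.copy()
    masked[r] = 0.0
    if abs(controller(masked) - baseline) > 1e-6:
        causal.append(int(r))

print("attended:", sorted(int(r) for r in attended), "causal:", sorted(causal))
```

Here only the truly influential regions survive the filter, mirroring how the filtering step prunes spurious attention to yield more succinct visual explanations.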
In Chapter 5, we propose to address this issue by augmenting training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, where we train an end-to-end vehicle controller that accepts advice. The controller adapts the way it attends to the scene (visual attention) and the control (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
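One simple way to make a controller "accept advice", as Chapter 5 describes, is to embed the advice text and fuse it with the visual features before the control head. The sketch below is a deliberately tiny stand-in with made-up shapes and a bag-of-words encoder; the actual work uses CNN features, learned text embeddings, and attention conditioning:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mini-vocabulary and random word embeddings
vocab = {"stop": 0, "slow": 1, "near": 2, "pedestrian": 3, "crossing": 4}
embed = rng.normal(size=(len(vocab), 4))

def encode_advice(words):
    # Mean of word embeddings: a minimal bag-of-words advice encoder
    return embed[[vocab[w] for w in words]].mean(axis=0)

# Control head maps [visual features ; advice embedding] -> (steering, speed)
W = rng.normal(size=(2, 16 + 4))

def control(visual_feat, advice_words):
    x = np.concatenate([visual_feat, encode_advice(advice_words)])
    return W @ x

visual = rng.normal(size=16)                    # stand-in for CNN features
out = control(visual, ["slow", "pedestrian", "crossing"])
print("steering=%.3f speed=%.3f" % (out[0], out[1]))
```

Because the advice embedding enters before the control head, different advice for the same scene yields different control outputs, which is the basic mechanism an advice-taking controller needs.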