332 research outputs found
Effect of calf-starter protein solubility on calf performance
Three starters containing differently processed protein supplements were fed
to Holstein heifer calves, using an early weaning program. One starter contained
soybean meal. The other starters contained soybean grits processed through an
extrusion cooker to reduce the protein solubility to an intermediate (PDI > 50%) or
low (PDI < 15%) level. Calf performance was similar on all three starters.
Noblesse Oblige
Commencement address given by James Lewis Morrill, Vice President of the University, to the Winter 1937 graduating class of The Ohio State University, University Hall Chapel, Columbus, Ohio, March 19, 1937
Generalised Interpretable Shapelets for Irregular Time Series
The shapelet transform is a form of feature extraction for time series, in
which a time series is described by its similarity to each of a collection of
'shapelets'. However, it has previously suffered from a number of limitations,
such as being limited to regularly-spaced fully-observed time series, and
having to choose between efficient training and interpretability. Here, we
extend the method to continuous time, and in doing so handle the general case
of irregularly-sampled partially-observed multivariate time series.
Furthermore, we show that a simple regularisation penalty may be used to train
efficiently without sacrificing interpretability. The continuous-time
formulation additionally allows for learning the length of each shapelet
(previously a discrete object) in a differentiable manner. Finally, we
demonstrate that the measure of similarity between time series may be
generalised to a learnt pseudometric. We validate our method by demonstrating
its performance and interpretability on several datasets; for example we
discover (purely from data) that the digits 5 and 6 may be distinguished by the
chirality of their bottom loop, and that a kind of spectral gap exists in
spoken audio classification.
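To make the transform concrete, here is a minimal sketch of the classical (regularly-spaced, fully-observed) shapelet transform that the abstract generalises: each feature is the minimum sliding-window distance between a series and a shapelet. The function names are illustrative, not the authors' implementation.

```python
import numpy as np

def shapelet_feature(series, shapelet):
    """Minimum Euclidean distance between `shapelet` and any
    equal-length window of `series` (the classical shapelet feature)."""
    m = len(shapelet)
    windows = np.lib.stride_tricks.sliding_window_view(series, m)
    return np.linalg.norm(windows - shapelet, axis=1).min()

def shapelet_transform(series, shapelets):
    """Describe a series by its similarity to each of a collection of shapelets."""
    return np.array([shapelet_feature(series, s) for s in shapelets])

series = np.array([0., 0., 1., 2., 1., 0., 0.])
shapelets = [np.array([1., 2., 1.]), np.array([0., 0., 0.])]
print(shapelet_transform(series, shapelets))  # first shapelet matches exactly -> 0.0
```

The paper's contribution replaces this discrete sliding-window minimum with a continuous-time formulation, so that irregularly-sampled and partially-observed series, learnable shapelet lengths, and a learnt pseudometric all fit in the same framework.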
Neural rough differential equations for long time series
Neural controlled differential equations (CDEs) are the continuous-time analogue of recurrent neural networks, as Neural ODEs are to residual networks, and offer a memory-efficient continuous-time way to model functions of potentially irregular time series. Existing methods for computing the forward pass of a Neural CDE involve embedding the incoming time series into path space, often via interpolation, and using evaluations of this path to drive the hidden state. Here, we use rough path theory to extend this formulation. Instead of directly embedding into path space, we represent the input signal over small time intervals through its log-signature, a collection of statistics describing how the signal drives a CDE. This is the approach used to solve rough differential equations (RDEs), and correspondingly we describe our main contribution as the introduction of Neural RDEs. This extension has a purpose: by generalising the Neural CDE approach to a broader class of driving signals, we demonstrate particular advantages for tackling long time series. In this regime, we demonstrate efficacy on problems of length up to 17k observations and observe significant training speed-ups, improvements in model performance, and reduced memory requirements compared to existing approaches.
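The log-signature summaries mentioned in the abstract can be illustrated at depth 2, where for a piecewise-linear path they consist of the total increment together with the antisymmetric Lévy-area matrix. This is only a minimal numpy sketch of that standard construction (libraries such as those used in practice compute arbitrary depths); the function name is illustrative.

```python
import numpy as np

def logsignature_depth2(path):
    """Depth-2 log-signature of a piecewise-linear path in R^d,
    given as an (N+1, d) array of points: the total increment plus
    the antisymmetric Levy-area matrix."""
    inc = np.diff(path, axis=0)                # (N, d) segment increments
    a = inc.sum(axis=0)                        # level 1: total increment
    # level-2 signature for linear segments via Chen's relation:
    # sum over k < l of inc_k (x) inc_l, plus 1/2 inc_k (x) inc_k per segment
    prefix = np.vstack([np.zeros_like(a), np.cumsum(inc, axis=0)[:-1]])
    S2 = prefix.T @ inc + 0.5 * inc.T @ inc
    levy = 0.5 * (S2 - S2.T)                   # log-signature level 2 (Levy area)
    return a, levy

path = np.array([[0., 0.], [1., 0.], [1., 1.]])  # an "L"-shaped path in R^2
a, levy = logsignature_depth2(path)
print(a)     # total increment [1. 1.]
print(levy)  # the L-path encloses signed area 1/2
```

In the Neural RDE setting, such summaries are computed over each small time interval and used to drive the hidden-state update, replacing pointwise evaluations of an interpolated path.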
Neural Controlled Differential Equations for Online Prediction Tasks
Neural controlled differential equations (Neural CDEs) are a continuous-time
extension of recurrent neural networks (RNNs), achieving state-of-the-art
(SOTA) performance at modelling functions of irregular time series. In order to
interpret discrete data in continuous time, current implementations rely on
non-causal interpolations of the data. This is fine when the whole time series
is observed in advance, but means that Neural CDEs are not suitable for use in
online prediction tasks, where predictions need to be made in
real-time: a major use case for recurrent networks. Here, we show how this
limitation may be rectified. First, we identify several theoretical conditions
that interpolation schemes for Neural CDEs should satisfy, such as boundedness
and uniqueness. Second, we use these to motivate the introduction of new
schemes that address these conditions, offering in particular measurability
(for online prediction), and smoothness (for speed). Third, we empirically
benchmark our online Neural CDE model on three continuous monitoring tasks from
the MIMIC-IV medical database: we demonstrate improved performance on all tasks
against ODE benchmarks, and on two of the three tasks against SOTA non-ODE
benchmarks.
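The causality issue can be made concrete with a rectilinear embedding, one of the standard causal choices for driving a Neural CDE online: between observations the path first advances the time channel while holding the last value, then updates the value channel. This is a minimal sketch under that assumption; the helper name is illustrative, not the paper's code.

```python
import numpy as np

def rectilinear_knots(times, values):
    """Causal rectilinear embedding of scalar observations (t_k, x_k).
    The resulting piecewise-linear path at time t depends only on
    observations with t_k <= t, so it is usable for online prediction,
    unlike a non-causal spline fitted through all observations."""
    knots = []
    for k in range(len(times)):
        if k > 0:
            knots.append((times[k], values[k - 1]))  # advance time, hold old value
        knots.append((times[k], values[k]))          # then update the value
    return np.array(knots)  # (2N - 1, 2) knots in (time, value) space

obs_t = np.array([0.0, 1.0, 2.5])
obs_x = np.array([3.0, 5.0, 4.0])
print(rectilinear_knots(obs_t, obs_x))
```

A non-causal cubic spline through the same observations would let the path before t = 1.0 depend on the later observations, which is exactly what rules such schemes out for real-time use.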
New directions in the applications of rough path theory
This article provides a concise overview of some of the recent advances in the application of rough path theory to machine learning. Controlled differential equations (CDEs) are discussed as the key mathematical model to describe the interaction of a stream with a physical control system. A collection of iterated integrals known as the signature naturally arises in the description of the response produced by such interactions. The signature comes equipped with a variety of powerful properties rendering it an ideal feature map for streamed data. We summarise recent advances in the symbiosis between deep learning and CDEs, studying the link with RNNs and culminating with the Neural CDE model. We conclude with a discussion of signature kernel methods.