Assessment scales in stroke: clinimetric and clinical considerations
As stroke care has developed, there has been a need to robustly assess the efficacy of interventions both at the level of the individual stroke survivor and in the context of clinical trials. To describe stroke-survivor recovery meaningfully, more sophisticated measures are required than simple dichotomous end points, such as mortality or stroke recurrence. As stroke is an exemplar disabling long-term condition, measures of function are well suited to outcome assessment. In this review, we describe functional assessment scales in stroke, concentrating on three of the more commonly used tools: the National Institutes of Health Stroke Scale, the modified Rankin Scale, and the Barthel Index. We discuss the strengths, limitations, and application of these scales and use them to highlight important properties that are relevant to all assessment tools. We frame much of this discussion in the context of "clinimetric" analysis. As they are increasingly used to inform stroke-survivor assessments, we also discuss some of the commonly used quality-of-life measures. A recurring theme when considering functional assessment is that no tool suits all situations. Clinicians and researchers should choose their assessment tool based on the question of interest and the evidence base around its clinimetric properties.
A kernel method for non-linear systems identification - infinite degree Volterra series estimation
Volterra series expansions are widely used in analysing and solving the problems of non-linear dynamical systems. However, the number of terms to be determined increases exponentially with the order of the expansion, which restricts its practical application. In practice, Volterra series expansions are truncated severely, so that they may not give accurate representations of the original system. To address this problem, kernel methods merit exploration. In this report, we make use of an existing result from the theory of approximation in reproducing kernel Hilbert space (RKHS) that has not yet been exploited in the systems identification field. An exponential kernel method, based on an RKHS called a generalized Fock space, is introduced to model non-linear dynamical systems and to specify the corresponding Volterra series expansion. In this way a non-linear dynamical system can be modelled using a finite memory length, infinite degree Volterra series expansion, thus reducing the source of approximation error solely to truncation in time. We can also, in principle, recover any coefficient in the Volterra series.
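As a rough illustration of the idea (not taken from the report itself): an exponential kernel k(u, v) = exp(rho u·v) has a Taylor expansion containing every polynomial degree, so kernel regression on finite-length delay vectors implicitly fits an infinite-degree Volterra series with finite memory. The system, kernel scale, memory length, and regularisation below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 3         # assumed finite memory length
rho = 0.5     # assumed exponential-kernel scale
lam = 1e-6    # small ridge term for numerical stability

# Toy non-linear system with memory: y_t = u_t - 0.5 * u_{t-1} * u_{t-2}
u = rng.standard_normal(200)
y = u[2:] - 0.5 * u[1:-1] * u[:-2]

# Embed the input into delay vectors x_t = (u_t, u_{t-1}, u_{t-2})
X = np.column_stack([u[2 - j:len(u) - j] for j in range(m)])

def kernel(A, B, rho):
    # Exponential kernel: its series expansion spans all Volterra degrees
    return np.exp(rho * A @ B.T)

K = kernel(X, X, rho)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y)

# Model output on the training inputs: close to y despite the
# quadratic term, because no polynomial-degree truncation occurs
y_hat = K @ alpha
```

Because the target is a low-degree polynomial of the delay vector, it lies in the exponential-kernel RKHS and the fitted residual is driven only by the small regularisation term, not by a truncated expansion order.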
Some lemmas on reproducing kernel Hilbert spaces
Reproducing kernel Hilbert spaces (RKHS) provide a framework for approximation from finite data using the idea of bounded linear functionals. The approximation problem in this case can be viewed as the inverse problem of finding the optimum operator from the Euclidean space of observations to some subspace of the RKHS. In constructing the appropriate inverse operator, use is made of both adjoint operators in RKHS and various norms. In this report a number of lemmas are given with respect to such adjoint operators and norms.
Iterative sparse interpolation in reproducing kernel Hilbert spaces
The problem of interpolating data in reproducing kernel Hilbert spaces is well known to be ill-conditioned. In the presence of noise, regularisation can be applied to find a good solution. In the noise-free case, regularisation has the effect of over-smoothing the function, and few data points are interpolated. In this paper an alternative framework, based on sparsity, is proposed for interpolation of noise-free data. Iterative construction of a sparse sequence of interpolants is shown to be well defined and produces good results.
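One way such an iterative sparse construction can be sketched (a hypothetical greedy variant, not necessarily the paper's exact algorithm): at each step, add the data point with the largest current residual to an active set and exactly interpolate on that set only. The Gaussian kernel, its width, and the iteration budget are illustrative assumptions.

```python
import numpy as np

x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x)          # noise-free target samples

def k(a, b, width=0.1):
    # Gaussian kernel matrix between point sets a and b
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * width ** 2))

active = []                         # indices of selected kernel centres
f = np.zeros_like(y)                # current sparse interpolant at x
for _ in range(10):
    i = int(np.argmax(np.abs(y - f)))   # worst-fit data point
    if i not in active:
        active.append(i)
    # Exact interpolation restricted to the small active set:
    # a well-conditioned subproblem of the full ill-conditioned one
    K = k(x[active], x[active])
    alpha = np.linalg.solve(K, y[active])
    f = k(x, x[active]) @ alpha
```

Each iterate interpolates only the active points, so the linear systems stay small and well conditioned while the residual over the full data set shrinks.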
Steepest descent for a linear operator equation of the second kind with application to Tikhonov regularisation
Let H1 and H2 be Hilbert spaces, and T a bounded linear operator from H1 into H2 such that the range of T, R(T), is closed. Let T* denote the adjoint of T. In this paper, we review the generalised solution and the method of steepest descent for the linear operator equation Tx = b, b ∈ H2. Further, we establish the convergence of the method of steepest descent to the unique solution of the normal equation T*Tx = T*b.
Reduction of kernel models
Kernel models can be expensive to compute and in a non-stationary environment can become unmanageably large. Here we present several previously reported techniques for reducing the complexity of these models in a common framework. This reformulation leads to the development of further related reduction techniques and clarifies the relationships between these and the existing techniques.
Multiple-model approach to non-linear kernel-based adaptive filtering
Kernel methods now provide standard tools for the solution of function approximation and pattern classification problems. However, it is typically assumed that all data are available for training. More recently, various approaches have been proposed for extending kernel methods to sequential problems, whereby the model is updated as each new data point arrives. Whilst these approaches have proven successful in estimating the basic parameters, the problem of estimating the hyperparameters, which determine the overall model behaviour, remains essentially unsolved. In this paper a novel approach to the hyperparameters is presented based on a multiple-model framework. An ensemble of models with different hyperparameters is trained in parallel, and their outputs are subsequently combined based on a predictive performance measure. This new approach is successfully demonstrated on a standard benchmark time-series problem.
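A minimal sketch of the multiple-model idea (illustrative assumptions throughout: the Nadaraya-Watson kernel predictor, the candidate widths, and the discounted squared-error performance measure are stand-ins, not the paper's choices): several models with different hyperparameters run in parallel, and their one-step predictions are combined with weights driven by recent predictive performance.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(300)
y = np.sin(0.1 * t) + 0.05 * rng.standard_normal(300)   # toy time series

widths = [0.5, 2.0, 8.0]            # candidate kernel hyperparameters
window = 5                          # delay-embedding length

def nw_predict(hist, width):
    # Nadaraya-Watson one-step-ahead prediction from delay vectors
    X = np.array([hist[i:i + window] for i in range(len(hist) - window)])
    targets = hist[window:]
    q = hist[-window:]              # most recent delay vector
    w = np.exp(-np.sum((X - q) ** 2, axis=1) / (2 * width ** 2))
    return float(w @ targets / (w.sum() + 1e-12))

errs = np.zeros(len(widths))        # discounted squared error per model
combined = []
for n in range(100, 200):
    preds = np.array([nw_predict(y[:n], s) for s in widths])
    weights = np.exp(-errs)         # favour recently accurate models
    weights /= weights.sum()
    combined.append(weights @ preds)
    errs = 0.9 * errs + (preds - y[n]) ** 2   # update performance measure

mse = np.mean((np.array(combined) - y[100:200]) ** 2)
```

The discount factor in the error update lets the weighting track non-stationarity: a model whose hyperparameter suits the current regime regains weight quickly when older errors decay.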