Regularization and Bayesian Learning in Dynamical Systems: Past, Present and Future
Regularization and Bayesian methods for system identification have regained
popularity in recent years, and have proved competitive with classical
parametric approaches. In this paper we shall make an attempt to
illustrate how the use of regularization in system identification has evolved
over the years, starting from the early contributions both in the Automatic
Control as well as Econometrics and Statistics literature. In particular we
shall discuss some fundamental issues such as compound estimation problems and
exchangeability, which play an important role in regularization and Bayesian
approaches, as also illustrated in early publications in Statistics. The
historical and foundational issues will be given more emphasis (and space), at
the expense of the more recent developments which are only briefly discussed.
The main reason for such a choice is that, while the recent literature is
readily available, and surveys have already been published on the subject, in
the author's opinion a clear link with past work had not been completely
clarified.

Comment: Plenary presentation at IFAC SYSID 2015. Submitted to Annual Reviews in Control.
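The Bayesian reading of regularization that the survey traces back to early statistics can be seen in its simplest textbook form: for a scalar linear model, the ridge (l2-regularized) estimate coincides with the MAP estimate under a zero-mean Gaussian prior. A minimal sketch of this standard fact (the data and the regularization weight below are illustrative, not taken from the paper):

```python
# Scalar linear model y_i = theta * x_i + e_i.
# Ridge estimate = Bayesian MAP estimate with prior theta ~ N(0, sigma^2 / lam).
x = [1.0, 2.0, 3.0, 4.0]
y = [1.1, 1.9, 3.2, 3.9]          # roughly y = x + small noise (toy data)
lam = 0.5                          # regularization weight (prior precision)

sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))

theta_ls = sxy / sxx               # ordinary least squares
theta_ridge = sxy / (sxx + lam)    # ridge: shrinks the estimate toward zero

print(theta_ls, theta_ridge)
```

The l2 penalty term `lam * theta**2` added to the squared-error cost is exactly the negative log of the Gaussian prior, which is the compound-estimation link the paper elaborates.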
NeuroPrime: a Pythonic framework for the priming of brain states in self-regulation protocols
Due to the recent pandemic and a general boom
in technology, we are facing more and more threats of isolation,
depression, fear, and information overload, among others. In
turn, these affect our Self, psychologically and physically.
Therefore, new tools are required to assist the regulation of this
unregulated Self to a more personalized, optimal and healthy
Self. As such, we developed a Pythonic open-source human-computer
framework for assisted priming of subjects to
“optimally” self-regulate their Neurofeedback (NF) with
external stimulation, such as guided mindfulness. To this end, we
conducted a three-part study in which: 1) we defined the foundations of
the framework and its design for priming subjects to self-regulate their
NF, 2) we developed an open-source version of the framework in Python,
NeuroPrime, for utility, expandability and reusability, and 3) we tested
the framework in neurofeedback priming versus no-priming conditions.
NeuroPrime is a
research toolbox developed for the simple and fast integration
of advanced online closed-loop applications. More specifically,
it was validated and tuned for the research of priming brain
states in an EEG neurofeedback setup. In this paper, we will
explain the key aspects of the priming framework, the
NeuroPrime software developed, the design decisions and
demonstrate/validate the use of our toolbox by presenting use
cases of priming brain states during a neurofeedback setup.

MIT - Massachusetts Institute of Technology (PD/BD/114033/2015)
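The closed-loop structure the abstract describes (acquire a signal, extract a feature, map it to feedback, optionally preceded by a priming phase) can be sketched in Python. This is a hypothetical illustration only: the EEG stream is simulated, the priming effect is modeled as a simple baseline shift, and none of the class or method names below belong to the actual NeuroPrime API.

```python
import math
import random

class ClosedLoopSession:
    """Toy closed-loop neurofeedback session (simulated, illustrative only)."""

    def __init__(self, primed=False, seed=0):
        self.rng = random.Random(seed)
        self.primed = primed

    def acquire_sample(self, t):
        # Simulated alpha-band amplitude; priming is modeled here as a small
        # baseline shift (an assumption for illustration, not a measured effect).
        baseline = 1.2 if self.primed else 1.0
        return baseline + 0.3 * math.sin(0.1 * t) + 0.05 * self.rng.gauss(0.0, 1.0)

    def feedback(self, sample, target=1.1):
        # Map the feature to a bounded feedback value in [0, 1].
        return 1.0 / (1.0 + math.exp(-(sample - target)))

    def run(self, n_samples=200):
        # Mean feedback over the session serves as a crude self-regulation score.
        values = [self.feedback(self.acquire_sample(t)) for t in range(n_samples)]
        return sum(values) / len(values)

primed_score = ClosedLoopSession(primed=True).run()
control_score = ClosedLoopSession(primed=False).run()
print(primed_score > control_score)   # → True (by construction of the toy model)
```

In the real framework the `acquire_sample` step would read from an EEG device and the priming phase would deliver external stimulation such as guided mindfulness; the loop structure, however, is the same.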
Stochastic gradient descent on Riemannian manifolds
Stochastic gradient descent is a simple approach to find the local minima of
a cost function whose evaluations are corrupted by noise. In this paper, we
develop a procedure extending stochastic gradient descent algorithms to the
case where the function is defined on a Riemannian manifold. We prove that, as
in the Euclidean case, the gradient descent algorithm converges to a critical
point of the cost function. The algorithm has numerous potential applications,
and is illustrated here by four examples. In particular, a novel gossip
algorithm on the set of covariance matrices is derived and tested numerically.

Comment: A slightly shorter version has been published in IEEE Transactions on Automatic Control.
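The general recipe behind Riemannian stochastic gradient descent is: take a noisy Euclidean gradient, project it onto the tangent space at the current point, step, then map back onto the manifold. A minimal sketch on the unit sphere (the cost function, step sizes, and noise model below are illustrative choices, not the paper's examples):

```python
import math
import random

def normalize(x):
    """Retraction onto the unit sphere by normalization."""
    n = math.sqrt(sum(xi * xi for xi in x))
    return [xi / n for xi in x]

def riemannian_sgd(v, steps=5000, seed=0):
    """Minimize f(x) = -<v, x>^2 on the sphere from noisy gradients."""
    rng = random.Random(seed)
    x = normalize([1.0, 0.3, -0.5])                 # initial point on the sphere
    for t in range(1, steps + 1):
        # Noisy Euclidean gradient of f: grad f(x) = -2 <v, x> v, plus noise.
        c = sum(vi * xi for vi, xi in zip(v, x))
        g = [-2.0 * c * vi + 0.1 * rng.gauss(0.0, 1.0) for vi in v]
        # Project onto the tangent space at x: g_tan = g - <g, x> x.
        gx = sum(gi * xi for gi, xi in zip(g, x))
        g_tan = [gi - gx * xi for gi, xi in zip(g, x)]
        # Diminishing (Robbins-Monro) step size, then retract to the sphere.
        step = 0.5 / t
        x = normalize([xi - step * gi for xi, gi in zip(x, g_tan)])
    return x

v = normalize([2.0, -1.0, 1.0])
x_star = riemannian_sgd(v)
alignment = abs(sum(vi * xi for vi, xi in zip(v, x_star)))
print(alignment)   # approaches 1: x converged to a minimizer +/- v
```

The projection step and the retraction are what distinguish this from plain SGD; for general manifolds the paper replaces the normalization above with a retraction or the exponential map of the manifold at hand.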