Some contributions to model selection and statistical inference in Markovian models
The general theme of this thesis is providing and studying a new understanding of statistical models and computational methods based on Markov processes and chains. Sections 1-4 review the literature, both for completeness and for a better understanding of Sections 5-7, which contain our original studies. Section 1 is devoted to Markov processes, since both the continuous and discrete types are the hinges of the thesis. In particular, we study basic and advanced results on Markov chains and Itô diffusions; ergodic properties of these processes are also documented. In Section 2 we first study the Metropolis-Hastings algorithm, since it is the basis of other MCMC methods. We then study more advanced methods such as reversible-jump MCMC, the Metropolis-adjusted Langevin algorithm, pseudo-marginal MCMC and Hamiltonian Monte Carlo. These MCMC methods appear in Sections 3, 4 and 7. In Section 3 we consider another type of Monte Carlo method, sequential Monte Carlo (SMC). Unlike MCMC methods, SMC methods often give on-line ways to approximate intractable objects, which makes them particularly useful when one needs to work with models under scalable computational budgets. Some mathematical analysis of SMC can also be found. These SMC methods appear in Sections 4, 5, 6 and 7. In Section 4 we first discuss hidden Markov models (HMMs), since all statistical models considered in the thesis can be treated as HMMs or generalisations thereof. Since, in general, HMMs involve intractable objects, we then study SMC-based approximation methods for them; statistical inference for HMMs is also considered. These topics appear in Sections 5, 6 and 7. Section 5 is largely based on a submitted paper titled Asymptotic Analysis of Model Selection Criteria for General Hidden Markov Models with Alexandros Beskos and Sumeetpal Sidhu Singh, https://arxiv.org/abs/1811.11834v3.
In this section, we study the asymptotic behaviour of some information criteria in the context of hidden Markov models, or state space models. In particular, we prove the strong consistency of BIC and of the evidence for general HMMs. Section 6 is largely based on a submitted paper titled Online Smoothing for Diffusion Processes Observed with Noise with Alexandros Beskos, https://arxiv.org/abs/2003.12247. In this section, we develop sequential Monte Carlo methods to estimate parameters of (jump) diffusion models. Section 7 is largely based on an ongoing paper titled Adaptive Bayesian Model Selection for Diffusion Models with Alexandros Beskos. In this section, we develop adaptive computational methods, based on sequential Monte Carlo samplers and Hamiltonian Monte Carlo on a function space, for Bayesian model selection.
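As a concrete reference point for the MCMC methods reviewed in Section 2, the random-walk Metropolis-Hastings step can be sketched as follows; the Gaussian target and the proposal scale are illustrative assumptions, not the models from the thesis:

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_target(prop)
        # Accept with probability min(1, pi(prop)/pi(x)); the proposal is
        # symmetric, so the Hastings correction cancels.
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# Toy example: sample from a standard normal target.
chain = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_steps=5000)
```

The same accept/reject skeleton underlies the more advanced variants (MALA, pseudo-marginal MCMC, HMC); they differ only in how the proposal is generated.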
Nonlinearity and noise modeling of operational transconductance amplifiers for continuous time analog filters
A general framework for performance optimization of continuous-time OTA-C
(Operational Transconductance Amplifier-Capacitor) filters is proposed. Efficient
procedures for evaluating nonlinear distortion and noise valid for any filter of arbitrary
order are developed based on the matrix description of a general OTA-C filter model.
Since these procedures use OTA macromodels, they can be used to obtain the results
significantly faster than transistor-level simulation. In the case of transient analysis, the
speed-up may be as much as three orders of magnitude with almost no loss of
accuracy. This makes it possible to carry out direct numerical optimization of OTA-C
filters with respect to important characteristics such as noise performance, THD, IM3,
DR or SNR. On the other hand, the general OTA-C filter model allows us to apply
matrix transforms that manipulate (rescale) filter element values and/or change topology
without changing its transfer function. The above features are a basis to build automated
optimization procedures for OTA-C filters. In particular, a systematic optimization
procedure using equivalence transformations is proposed. The research also proposes
suitable software implementations of the optimization process. The first part of the
research proposes a general performance optimization procedure; to verify the
process, two types of application examples are considered. An application example of the
proposed approach to optimal block sequencing and gain distribution of 8th order
cascade Butterworth filter (for two variants of OTA topologies) is given. Secondly, the
modeling tool is used to select the best suitable topology for a 5th order Bessel Low Pass
Filter. Theoretical results are verified by comparison to transistor-level simulation with CADENCE. For the purpose of verification, the filters have also been fabricated in a
standard 0.5 µm CMOS process.
The second part of the research proposes a new linearization technique to
improve the linearity of an OTA using an Active Error Feedforward technique, since most
present-day applications require highly linear circuits combined with low noise and
low power consumption. An OTA-based biquad filter has also been fabricated in a 0.35 µm
CMOS process, and the measurement results for the filter and the stand-alone OTA are
discussed.
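The equivalence transformations mentioned above, which rescale filter element values while leaving the transfer function unchanged, can be illustrated on a generic state-space model; the matrices and the test frequency below are arbitrary stand-ins for the OTA-C filter matrices, not the thesis's model:

```python
import numpy as np

# Generic second-order state-space filter: H(s) = C (sI - A)^{-1} B + D.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def transfer(A, B, C, D, s):
    """Evaluate the transfer function at complex frequency s."""
    return (C @ np.linalg.inv(s * np.eye(A.shape[0]) - A) @ B + D).item()

# Equivalence (similarity) transformation: rescale the internal states with T.
# The state-space matrices change, but the input-output behaviour does not.
T = np.diag([2.0, 0.5])
Ti = np.linalg.inv(T)
At, Bt, Ct, Dt = T @ A @ Ti, T @ B, C @ Ti, D

s = 1j * 1.5  # an arbitrary test frequency
print(np.isclose(transfer(A, B, C, D, s), transfer(At, Bt, Ct, Dt, s)))  # True
```

In the OTA-C setting, such transformations correspond to rescaling transconductance and capacitor values or changing topology, which is what makes automated optimization over equivalent realizations possible.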
Design of adaptive analog filters for magnetic front-end read channels
This thesis studies the design and behavior of continuous-time very-high-frequency
filters. The motivation of this work was the search for filtering solutions for the read channel
in recording and reproduction of data on magnetic media systems, with costs and
consumption (total size less than 1 mm² and consumption under 1 mW/pole), lower than
the available circuits. Accordingly, the rapid development of microelectronics technology
has prompted very significant efforts worldwide to investigate new techniques for
implementing such filters in monolithic integrated circuits, especially in CMOS
(Complementary Metal Oxide Semiconductor) technology. We present
a comparative study on different hierarchical levels of the project, which led to the realization
and characterization of solutions with the desired characteristics.
At the first level, this study addresses the conceptual question of signal recording and
transmission, the choice of good mathematical models for processing the information,
and the minimization of the error inherent in the approximations, in accordance with
the physical principles of the characterized devices.
The main work of this thesis is focused on the hierarchical levels of the architecture
of the read channel and the integrated circuit implementation of its main block - the filtering
block. At the architecture level of the read channel, this work presents a comprehensive
study on existing methodologies for signal adaptation and data recovery on
magnetic media. This arises in the context of the proposed solution: a low-cost,
low-consumption, low-voltage, low-complexity system, using digital CMOS technology, for
the realization of a DFE (Decision Feedback Equalization) scheme based on equalization of
the signal using continuous-time integrated analog filters.
At the implementation level of the filtering block and of the techniques for implementing
filters and their building blocks, it was concluded that the technique based
on transconductance circuits and capacitors, also known as gm-C filters, is the most appropriate
for the implementation of very-high-frequency adaptive filters. At this
lower level we defined two sub-levels of study for this thesis, namely: the research and analysis
of optimal structures for the design of state-space filters; and the study of techniques for realizing transconductance cells in digital CMOS for the implementation
of continuous-time integrated analog filters.
Following this study, we present and compare two state-space filter structures,
corresponding to two alternative realizations of an
adaptive equalizer based on a continuous-time third-order allpass filter, as part of a
read channel for magnetic media devices.
As a constituent part of these filters, we present a technique for the realization of
transconductance circuits, and for the implementation of linear capacitors using arrays of
MOSFET transistors, for very-high-frequency signal processing in integrated circuits using
sub-micrometre digital CMOS technology. We present automatic tuning methods
capable of compensating for deviations from the nominal component values caused by
fabrication-process tolerances, together with the simulation and experimental
measurement results obtained.
Also resulting from this study is a circuit that provides a solution
for controlling the head positioning in recording/playback systems for data on magnetic
media. The proposed block is a first-order adaptive filter, based on the same transconductance
circuits and equalization techniques proposed and used in the implementation
of the adaptive equalization filter of the read channel.
This filter was designed and included in an integrated circuit (Jaguar) used to control
the positioning of the read head, produced for the ATMEL company in Colorado Springs, and
became part of a commercial product for removable hard drives fabricated in partnership with a Scottish company.
Novel sampling techniques for reservoir history matching optimisation and uncertainty quantification in flow prediction
Modern reservoir management has an increasing focus on accurately predicting the likely range of field recoveries. A variety of assisted history matching techniques has been developed across the research community concerned with this topic. These techniques are based on obtaining multiple models that closely reproduce the historical flow behaviour of a reservoir. The set of resulting history-matched models is then used to quantify uncertainty in predicting the future performance of the reservoir and to provide economic evaluations for different field development strategies. The key step in this workflow is to employ algorithms that sample the parameter space in an efficient but appropriate manner. The algorithm choice has an impact on how fast a model is obtained and how well the model fits the production data. The sampling techniques that have been developed to date include, among others, gradient-based methods, evolutionary algorithms, and the ensemble Kalman filter (EnKF).
This thesis has investigated and further developed the following sampling and inference techniques: Particle Swarm Optimisation (PSO), Hamiltonian Monte Carlo, and Population Markov Chain Monte Carlo. The investigated techniques have the capability of navigating the parameter space and producing history-matched models that can be used to quantify the uncertainty in the forecasts in a faster and more reliable way. The analysis of these techniques, compared with the Neighbourhood Algorithm (NA), has shown how the different techniques affect the predicted recovery from petroleum systems and the benefits of the developed methods over the NA.
The history matching problem is multi-objective in nature, with the production data possibly consisting of multiple types, coming from different wells, and collected at different times. Multiple objectives can be constructed from these data and explicitly be
optimised in the multi-objective scheme. The thesis has extended PSO to handle multi-objective history matching problems in which a number of possibly conflicting objectives must be satisfied simultaneously. The benefits and efficiency of the innovative multi-objective particle swarm scheme (MOPSO) are demonstrated for synthetic reservoirs. It is demonstrated that the MOPSO procedure can provide a substantial improvement in finding a diverse set of good-fitting models with fewer of the very costly forward simulation runs than the standard single-objective case, depending on how the objectives are constructed.
The thesis has also shown how to tackle a large number of unknown parameters through the coupling of high-performance global optimisation algorithms, such as PSO, with model reduction techniques such as kernel principal component analysis (PCA), for parameterising spatially correlated random fields. The results of the PSO-PCA coupling applied to a recent SPE benchmark history matching problem have demonstrated that the approach is indeed applicable to practical problems. A comparison of PSO with the EnKF data assimilation method has been carried out and concluded that both methods obtain comparable results on the example case. This point reinforces the need for using a range of assisted history matching algorithms for more confidence in predictions.
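As an illustration of the particle swarm approach described above, a minimal global-best PSO can be sketched as follows; the quadratic toy objective stands in for the far more expensive history-matching misfit, and the swarm parameters are conventional defaults, not the thesis's tuned values:

```python
import numpy as np

def pso(objective, dim, n_particles=30, n_iters=200, seed=0):
    """Minimal global-best particle swarm optimiser (minimisation)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                # particle velocities
    pbest = x.copy()                                    # per-particle best positions
    pbest_f = np.apply_along_axis(objective, 1, x)      # per-particle best misfits
    gbest = pbest[pbest_f.argmin()].copy()              # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5                           # inertia, cognitive, social
    for _ in range(n_iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy misfit: a quadratic bowl standing in for the history-match objective.
best, best_f = pso(lambda p: float(np.sum((p - 1.0) ** 2)), dim=3)
```

In a real workflow, each objective evaluation would be a full reservoir simulation, which is why reducing the number of forward runs matters so much.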
Application of multilevel concepts for uncertainty quantification in reservoir simulation
Uncertainty quantification is an important task in reservoir simulation and is an
active area of research. The main idea of uncertainty quantification is to compute
the distribution of a quantity of interest, for example the oil rate. That uncertainty
then feeds into the decision-making process.
A statistically valid way of quantifying the uncertainty is a Markov Chain Monte
Carlo (MCMC) method, such as Random Walk Metropolis (RWM). MCMC is a
robust technique for estimating the distribution of the quantity of interest. RWM
can, however, be prohibitively expensive, due to the need to run a huge number of realizations:
45%-70% of these may be rejected and, even for a simple reservoir model,
each realization may take 15 minutes. Hamiltonian Monte Carlo accelerates the
convergence of RWM but may lead to a large increase in computational cost because
it requires the gradient.
In this thesis, we show how to use the multilevel concept to accelerate convergence
for RWM. The thesis discusses how to apply Multilevel Markov Chain Monte
Carlo (MLMCMC) to uncertainty quantification. It proposes two new techniques,
one for improving the proxy based on the multilevel idea, called the multilevel
proxy (MLproxy), and the second for accelerating the convergence of Hamiltonian
Monte Carlo, called Multilevel Hamiltonian Monte Carlo (MLHMC).
The idea behind the multilevel concept is a simple telescoping sum, which represents
the expensive solution (e.g., estimating the distribution of the oil rate on the finest
grid) in terms of a cheap solution (e.g., estimating the distribution of the oil rate on a
coarse grid) plus 'correction terms', the differences between the high-resolution
solution and a low-resolution solution. A small fraction of realizations is then
run on the finer grids to compute the correction terms. This reduces the computational
cost and simulation errors significantly.
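The telescoping sum described above is the standard multilevel identity; here P_l denotes the quantity of interest (e.g., the oil rate) computed on grid level l, with l = L the finest grid:

```latex
\mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{\ell=1}^{L} \mathbb{E}\left[ P_\ell - P_{\ell-1} \right]
```

Most samples are spent estimating E[P_0] on the cheap coarse grid; only a small fraction of realizations is needed for the correction terms E[P_l - P_{l-1}], whose variance shrinks as adjacent grids become more similar.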
MLMCMC is a combination of RWM and the multilevel concept; it greatly reduces
the computational cost compared to RWM for uncertainty quantification,
and makes Monte Carlo estimation a feasible technique for uncertainty quantification
in reservoir simulation applications. In this thesis, MLMCMC has been implemented
on two reservoir models based on real fields in the central Gulf of Mexico and in
the North Sea.
MLproxy is another way of decreasing the computational cost, based on constructing
an emulator and then improving it by adding the correction term between
the proxy and the simulated results.
MLHMC is a combination of the Multilevel Monte Carlo method with the Hamiltonian
Monte Carlo algorithm; it accelerates Hamiltonian Monte Carlo (HMC), running faster
than plain HMC. In the thesis, it has been implemented on a real field called Teal South
to assess the uncertainty.
Exciting with quantum light
Unpublished doctoral thesis, read at the Universidad Autónoma de Madrid, Facultad de Ciencias, Departamento de Física Teórica de la Materia Condensada. Date of defence: 22-11-2019.
A two-level system—the idealization of an atom with only two energy levels—is the most
fundamental quantum object. As such, it has long been at the forefront of the research in
Quantum Optics: its emission spectrum is simply a Lorentzian distribution, and the light it
produces is the most quantum that can be. The temporal distribution of the photon emission
displays a perfect antibunching, meaning that such a system will never emit two (or more)
photons simultaneously, which is consistent with the intuition that the two-level system can
only sustain a single excitation at any given time. Although these two properties have been
known for decades, it was not until the advent of the Theory of Frequency-filtered and Time-resolved
Correlations that it was observed that the perfect antibunching is not the end of the story: the
correlations between photons possess an underlying structure, which is unveiled when one
retains the information about the color of the photons. This is a consequence of the Heisenberg
uncertainty principle: measuring perfect antibunching implies an absolute knowledge about
the time at which the photons have been emitted, which in turn implies an absolute uncertainty
on their energy. Thus, keeping some information about the frequency of the emitted photons
affects the correlations between them. This means that a two-level system can be turned into
a versatile source of quantum light, providing light with a large breadth of correlation types
well beyond simply antibunching. Furthermore, when the two-level system is driven coherently
in the so-called Mollow regime (in which the two-level system becomes dressed by the laser
and the emission line is split into three), the correlations blossom: one can find every type of
statistics—from antibunching to super-bunching—provided that one measures the photons
emitted at the adequate frequency window of the triplet. In fact, the process of filtering the
emission at the frequencies corresponding to N-photon transitions is the idea behind the
Bundler, a source of light whose emission is always in bundles of exactly N photons.
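The intensity correlations discussed above can be illustrated numerically: the sketch below estimates g²(0) from binned photon counts, with a Poissonian stream standing in for coherent (laser-like) light and a zero-or-one count stream for an ideal antibunched emitter. Both are toy models, not the thesis's frequency-filtered correlations:

```python
import numpy as np

rng = np.random.default_rng(1)

def g2_zero(counts):
    """Estimate g^(2)(0) ~ <n(n-1)> / <n>^2 from photon counts per time bin."""
    n = counts.astype(float)
    return (n * (n - 1)).mean() / n.mean() ** 2

# Coherent light: Poissonian counts, so g2(0) ~ 1.
coherent = rng.poisson(0.2, size=200_000)

# Ideal two-level emitter: at most one photon per bin, so g2(0) = 0 (antibunching).
single = (rng.uniform(size=200_000) < 0.2).astype(int)

print(g2_zero(coherent), g2_zero(single))
```

The binary stream gives exactly zero because n(n-1) vanishes whenever at most one photon arrives per bin, which is the statistical signature of a two-level system sustaining a single excitation at a time.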
The versatility of the correlations decking the emitted light motivates the topic of this
Dissertation, in which I focus on the theoretical study of the behaviour that arises when
physical systems are driven with quantum light, i.e., with light that cannot be described through
the classical theory of electromagnetism. As the canon of excitation used in the literature is
restricted to classical sources, namely lasers and thermal reservoirs, our description starts
with the most fundamental objects that can be considered as the optical targets: a harmonic
oscillator (which represents the field for non-interacting bosonic particles) and a two-level
system (which in turn represents the field for fermionic particles). We describe which regions
of the harmonic oscillator’s Hilbert space can be accessed by driving the harmonic oscillator
with the light emitted by a two-level system, i.e., which quantum steady states can be realized.
Analogously, we find that the quality of the single-photon emission from a two-level system
can be enhanced when it is driven by quantum light. Once the advantages of using quantum,
rather than classical, sources of light are demonstrated with the fundamental optical targets, we
turn to the quantum excitation of more involved systems, such as the strong coupling between
a harmonic oscillator and either a two-level system—whose description is made through the
Jaynes-Cummings model—or a nonlinear harmonic oscillator—which can be realized in systems
of, e.g., exciton-polaritons. Here we find that the statistical versatility of the light emitted by
the Mollow triplet allows one to perform Quantum Spectroscopy on these systems, thus gaining
knowledge of their internal structure and dynamics, and in particular to probe their interactions
with the least possible amount of particles: two. In the process of exciting with quantum light,
we are called to further examine the source itself. In fact, there is even the need to revisit the
concept of a single-photon source, for which we propose a more robust criterion than g(2). We also
turn to toy-models of the Bundler so as to use it effectively as an optical source. We can then
study the advantages that one gets and the shortcomings that one faces when using this source of
light to drive all the systems considered on excitation with the emission of a two-level system.
Finally, we go from the continuous to the pulsed regime of excitation, which is of higher interest
for applications and comes with its own set of fundamental questions.
Fast MCMC algorithms, Stability and DeepTune
Drawing samples from a known distribution is a core computational challenge common to many disciplines, with applications in statistics, probability, operations research, and other areas involving stochastic models. In statistics, sampling methods are useful for both estimation and inference, including problems such as estimating expectations of desired quantities, computing probabilities of rare events, gauging volumes of particular sets, exploring posterior distributions and obtaining credible intervals. Facing massive high-dimensional data, both computational efficiency and good statistical guarantees are increasingly important in modern statistical and machine learning applications. In this thesis, centered around sampling algorithms, we consider fundamental questions about their computational and statistical guarantees: How does one design a fast sampling algorithm, and how long should it be run? What are the statistical learning guarantees of these algorithms? Are there trade-offs between computation and learning? To answer these questions, we first establish non-asymptotic convergence guarantees for popular MCMC sampling algorithms from the Bayesian literature: Metropolized Random Walk, the Metropolis-adjusted Langevin algorithm and Hamiltonian Monte Carlo. To address a number of technical challenges that arise en route, we develop results based on the conductance profile in order to prove quantitative convergence guarantees for general continuous state space Markov chains. Second, to confront a large class of constrained sampling problems, we introduce two new algorithms, the Vaidya and John walks, to sample from polytope-constrained distributions with convergence guarantees. Third, we prove fundamental trade-off results between the statistical learning performance and the convergence rate of any iterative learning algorithm, including sampling algorithms. The trade-off results allow us to show that a too-stable algorithm cannot converge too fast, and vice versa.
Finally, to help neuroscientists analyze their massive amounts of brain data, we develop DeepTune, a stability-driven visualization and interpretation framework, via optimization and sampling, for neural-network-based models of neurons in the visual cortex.
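A minimal sketch of the Hamiltonian Monte Carlo transition analysed above, using a leapfrog integrator on a standard Gaussian target; the target, step size and trajectory length are illustrative choices, not the thesis's settings:

```python
import numpy as np

def hmc_step(log_p, grad_log_p, x, rng, step=0.1, n_leapfrog=20):
    """One HMC transition: leapfrog integration plus Metropolis accept/reject."""
    p = rng.standard_normal(x.shape)                        # resample momentum
    x_new = x.copy()
    p_new = p + 0.5 * step * grad_log_p(x)                  # initial half momentum step
    for i in range(n_leapfrog):
        x_new = x_new + step * p_new                        # full position step
        if i < n_leapfrog - 1:
            p_new = p_new + step * grad_log_p(x_new)        # full momentum step
    p_new = p_new + 0.5 * step * grad_log_p(x_new)          # final half momentum step
    # Accept based on the change in total energy (potential + kinetic).
    h_old = -log_p(x) + 0.5 * p @ p
    h_new = -log_p(x_new) + 0.5 * p_new @ p_new
    return x_new if np.log(rng.uniform()) < h_old - h_new else x

# Sample a 2-D standard normal target.
rng = np.random.default_rng(0)
x = np.zeros(2)
draws = np.empty((2000, 2))
for t in range(2000):
    x = hmc_step(lambda z: -0.5 * z @ z, lambda z: -z, x, rng)
    draws[t] = x
```

The near-exact energy conservation of the leapfrog integrator keeps the rejection rate low, which is the source of HMC's speed advantage over random-walk proposals.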
Proactive Quality Control based on Ensemble Forecast Sensitivity to Observations
Despite recent major improvements in numerical weather prediction (NWP) systems, operational NWP forecasts occasionally suffer from an abrupt drop in forecast skill, a phenomenon called "forecast skill dropout." Recent studies have shown that the "dropouts" occur not because of the model's deficiencies but by the use of flawed observations that the operational quality control (QC) system failed to filter out. Thus, to minimize the occurrences of forecast skill dropouts, we need to detect and remove such flawed observations.
A diagnostic technique called Ensemble Forecast Sensitivity to Observations (EFSO) enables us to quantify how much each observation has improved or degraded the forecast. A recent study (Ota et al., 2013) has shown that it is possible to detect flawed observations that caused regional forecast skill dropouts by using EFSO with 24-hour lead-time and that the forecast can be improved by not assimilating the detected observations.
Inspired by their success, in the first part of this study, we propose a new QC method, which we call Proactive QC (PQC), in which flawed observations are detected 6 hours after the analysis by EFSO and then the analysis and forecast are repeated without using the detected observations. This new QC technique is implemented and tested on a lower-resolution version of NCEP's operational global NWP system. The results we obtained are extremely promising; we have found that we can detect regional forecast skill dropouts and the flawed observations after only 6 hours from the analysis and that the rejection of the identified flawed observations indeed improves 24-hour forecasts.
In the second part, we show that the same approximation used in the derivation of EFSO can be used to formulate the forecast sensitivity to observation error covariance matrix R, which we call EFSR. We implement the EFSR diagnostics in both an idealized system and the quasi-operational NWP system and show that it can be used to tune the R matrix so that the utility of observations is improved.
We also point out that EFSO and EFSR can be used for the optimal assimilation of new observing systems.
Scalable Bayesian Time Series Modelling for Streaming Data
Ph.D. thesis.
Ubiquitous cheap processing power and reduced storage costs have led to increased deployment
of connected devices used to collect and store information about their surroundings.
Examples include environmental sensors used to measure pollution levels and temperature,
or vibration sensors deployed on machinery to detect faults. This data is often streamed
in real time to cloud services and used to make decisions, such as when to perform maintenance
on critical machinery, and to monitor systems, such as how interventions to reduce
pollution are performing.
The data recorded at these sensors is unbounded, heterogeneous and often inaccurate,
recorded with different sampling frequencies, and often on irregular time grids. Connection
problems or hardware faults can cause information to be missing for days at a time.
Additionally, multiple co-located sensors can report different readings for the same process.
A flexible class of dynamic models can be used to ameliorate these issues and to
smooth and interpolate the data.
Irregularly observed time series can be conveniently modelled using state space models
with a continuous-time latent state represented by diffusion processes. In order to
model the wide array of different environmental sensors, the observation distributions of
these dynamic models are flexible; in all cases particle filtering methods can be used for
inference, and in some cases the exact Kalman filter can be used. The models, along with
a binary composition operator form a semigroup, making model composition and reuse
straightforward. Heteroskedastic time series are accounted for by using a factor structure
to model a full-rank, time-dependent system noise matrix for the dynamic models, which
can account for changes in variance and in the correlation structure between the time series
in a multivariate model. Finally, to model multiple nearby sensors, a dynamic model
captures a time-dependent common mean, while a time-invariant Gaussian process
accounts for the spatial variation between the sensors.
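A bootstrap particle filter of the kind referred to above can be sketched for a toy linear-Gaussian state space model; the model and noise scales are illustrative assumptions (the thesis's models use diffusion latent states and flexible observation distributions):

```python
import numpy as np

def bootstrap_filter(y, n_particles, rng):
    """Bootstrap particle filter for the toy model
    x_t = 0.9 x_{t-1} + w_t,  y_t = x_t + v_t,  w_t, v_t ~ N(0, 1)."""
    x = rng.standard_normal(n_particles)                # particles from the prior
    means = np.empty(len(y))
    for t, obs in enumerate(y):
        x = 0.9 * x + rng.standard_normal(n_particles)  # propagate through dynamics
        logw = -0.5 * (obs - x) ** 2                    # Gaussian observation density
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * x)                        # filtered posterior mean
        x = rng.choice(x, size=n_particles, p=w)        # multinomial resampling
    return means

# Simulate data from the same model, then filter it.
rng = np.random.default_rng(0)
truth = np.empty(100)
x_t = 0.0
for t in range(100):
    x_t = 0.9 * x_t + rng.standard_normal()
    truth[t] = x_t
obs = truth + rng.standard_normal(100)
filtered = bootstrap_filter(obs, n_particles=500, rng=rng)
```

Because each observation is processed once and then discarded, this kind of filter fits naturally into the streaming, unbounded-data setting the thesis targets.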
Functional programming in Scala is used to implement these time series models. Functional
programming provides a unified, principled API (application programming interface)
for interacting with different collection types using higher-order functions. This, combined
with the type-class pattern, makes it possible to write inference algorithms once and deploy
them locally using serial collections, and later on unbounded time series data using
libraries such as Akka Streams, applying techniques from functional reactive programming.