Duration and Interval Hidden Markov Model for Sequential Data Analysis
Analysis of sequential event data is recognized as an essential tool in data modeling and analysis. In this paper, after examining the technical requirements and issues involved in modeling complex but practical situations, we propose a new sequential data model, dubbed the Duration and Interval Hidden Markov Model (DI-HMM), that efficiently represents the "state duration" and "state interval" of data events. These two quantities play an important role in representing practical time-series sequential data, and modeling them explicitly enables efficient and flexible sequential data retrieval. Numerical experiments on synthetic and real data demonstrate the efficiency and accuracy of the proposed DI-HMM.
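The abstract does not spell out the DI-HMM's parameterization, but the core idea, attaching both a "duration" (how long an event persists) and an "interval" (the gap before the next event) to each hidden state, can be illustrated with a toy generative sampler. All state names and distributions below are illustrative assumptions, not the paper's model.

```python
import random

# Toy generative sketch of a duration-and-interval state process.
# Each hidden state emits an event with an explicit duration, then waits
# an explicit interval before transitioning. All numbers are made up.
STATES = ["A", "B"]
TRANS = {"A": {"A": 0.3, "B": 0.7}, "B": {"A": 0.6, "B": 0.4}}
DURATIONS = {"A": [1, 2, 3], "B": [2, 4]}   # candidate event durations
INTERVALS = {"A": [0, 1], "B": [1, 2]}      # candidate gaps before next event

def sample(n_events, seed=0):
    """Sample (state, start, end) event segments on a discrete time axis."""
    rng = random.Random(seed)
    state, t, events = rng.choice(STATES), 0, []
    for _ in range(n_events):
        d = rng.choice(DURATIONS[state])       # explicit state duration
        events.append((state, t, t + d))
        t += d + rng.choice(INTERVALS[state])  # explicit inter-event interval
        r, acc = rng.random(), 0.0             # Markov transition
        for s, p in TRANS[state].items():
            acc += p
            if r < acc:
                state = s
                break
    return events

print(sample(5))
```

In a full DI-HMM the duration and interval laws would be learned distributions and inference would marginalize over them; the sampler only shows what the two extra variables represent.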
Statistical identification with hidden Markov models of large order splitting strategies in an equity market
Large trades in a financial market are usually split into smaller parts and
traded incrementally over extended periods of time. We address these large
trades as hidden orders. In order to identify and characterize hidden orders we
fit hidden Markov models to the time series of the sign of the tick by tick
inventory variation of market members of the Spanish Stock Exchange. Our
methodology probabilistically detects trading sequences, which are
characterized by a net majority of buy or sell transactions. We interpret these
patches of sequential buying or selling transactions as proxies of the traded
hidden orders. We find that the distributions of duration, volume, and number of
transactions of these patches are fat-tailed. Long patches are characterized
by a high fraction of market orders and a low participation rate, while short
patches have a large fraction of limit orders and a high participation rate. We
observe the existence of a buy-sell asymmetry in the number, average length,
average fraction of market orders and average participation rate of the
detected patches. The detected asymmetry clearly depends on the local market trend. We also compare the hidden Markov model patches with those obtained with the segmentation method used in Vaglica {\it et al.} (2008), and we conclude that the former can be interpreted as a partition of the latter.
Duration modeling with expanded HMM applied to speech recognition
The occupancy of the HMM states is modeled by means of a Markov chain, and a linear estimator is introduced to compute the probabilities of that chain. The resulting distribution function (DF) accurately represents the observed data, and representing the DF as a Markov chain allows the use of standard HMM recognizers. The increase in complexity is negligible in training and strongly limited during recognition. Experiments on acoustic-phonetic decoding show that the phone recognition rate increases from 60.6% to 61.1%. Furthermore, on a database-inquiry task, where phones are used as subword units, the correct word rate increases from 88.2% to 88.4%.
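The benefit of expanding a state can be seen by comparing duration laws. A plain HMM state with self-loop probability a has a geometric duration, which always peaks at one frame; a left-to-right chain of tied sub-states gives a negative-binomial duration that can peak later, which is more realistic for phone durations. The parameters below are illustrative.

```python
from math import comb

def geometric_duration(a, max_d):
    """P(d) for a single state with self-loop probability a (plain HMM)."""
    return [a ** (d - 1) * (1 - a) for d in range(1, max_d + 1)]

def expanded_duration(a, n_sub, max_d):
    """P(d) for a left-to-right chain of n_sub tied sub-states, each with
    self-loop probability a: a negative binomial distribution."""
    out = []
    for d in range(1, max_d + 1):
        if d < n_sub:
            out.append(0.0)
        else:
            # choose which d - n_sub of the d steps are self-loops
            out.append(comb(d - 1, n_sub - 1) * a ** (d - n_sub) * (1 - a) ** n_sub)
    return out

geo = geometric_duration(0.8, 50)
exp2 = expanded_duration(0.8, 2, 50)
print(geo.index(max(geo)) + 1)    # mode of the geometric law: always d = 1
print(exp2.index(max(exp2)) + 1)  # mode of the expanded law: d > 1
```

The expanded chain adds states but no new observation distributions, which is why the abstract reports a negligible increase in training complexity.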
Implementation of hidden semi-Markov models
One of the most frequently used concepts in engineering and scientific studies in recent years is the Hidden Markov Model (HMM). The Hidden semi-Markov model (HsMM) is constructed so that it makes no assumption of a constant or geometric distribution for state durations; in other words, it allows the underlying stochastic process to be a semi-Markov chain. Each state can emit a sequence of observations, and the duration of each state is a random variable. This flexibility allows the HsMM to be used across a wide range of applications; some of the most prominent work is in speech recognition, gene prediction, and character recognition.
This thesis deals with the general structure and modeling of Hidden semi-Markov models and their implementation. It further presents the details of evaluation, decoding, and training with a running example.
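For the evaluation step, a minimal explicit-duration forward pass can be sketched as follows. This is a simplification of the general HsMM recursion: it assumes the last segment ends exactly at the final observation (no censoring) and works in plain probabilities rather than log-space, so it is only suitable for short toy sequences.

```python
def hsmm_likelihood(obs, pi, A, dur, emit):
    """P(obs) under an explicit-duration HMM.
    alpha[t][j]: probability of obs[:t+1] with a segment in state j ending
    exactly at time t. dur[j][d-1] is P(state j lasts d steps)."""
    T, S = len(obs), len(pi)
    alpha = [[0.0] * S for _ in range(T)]
    for t in range(T):
        for j in range(S):
            for d, pd in enumerate(dur[j], start=1):
                start = t - d + 1
                if start < 0 or pd == 0.0:
                    continue
                seg = 1.0                      # emission prob of the segment
                for s in range(start, t + 1):
                    seg *= emit[j][obs[s]]
                if start == 0:                 # segment opens the sequence
                    inflow = pi[j]
                else:                          # arrive from any previous state
                    inflow = sum(alpha[start - 1][i] * A[i][j] for i in range(S))
                alpha[t][j] += inflow * pd * seg
    return sum(alpha[T - 1])

# Toy model: state 0 lasts 1-2 steps, state 1 exactly 1 step (numbers invented).
pi = [0.5, 0.5]
A = [[0.0, 1.0], [1.0, 0.0]]
dur = [[0.2, 0.8], [1.0]]
emit = [{"a": 0.9, "b": 0.1}, {"a": 0.1, "b": 0.9}]
print(hsmm_likelihood(list("aab"), pi, A, dur, emit))
```

Decoding replaces the sums over predecessor states and durations with maxima (a segmental Viterbi), and training embeds this recursion in EM, as the thesis details.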
A Unified Multilingual Handwriting Recognition System using multigrams sub-lexical units
We address the design of a unified multilingual system for handwriting recognition. Most multilingual systems rest on specialized models, each trained on a single language, one of which is selected at test time. While some recognition systems are based on a unified optical model, a unified language model remains a major issue, as traditional language models are generally trained on corpora composed of large word lexicons per language. Here, we bring a solution by considering language models based on sub-lexical units, called multigrams. Using multigrams strongly reduces the lexicon size and thus decreases the language model complexity. This makes possible the design of an end-to-end unified multilingual recognition system in which both a single optical model and a single language model are trained on all the languages. We discuss the impact of language unification on each model and show that our system reaches the performance of state-of-the-art methods with a strong reduction in complexity.
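The multigram idea can be sketched as a dynamic-programming segmentation: given a small inventory of variable-length sub-lexical units with probabilities, find the most likely split of a character string. The unit inventory and probabilities below are invented for illustration; real multigram models learn them with EM over the training corpus.

```python
import math

# Toy multigram segmentation: Viterbi over cut points of a character string,
# scoring each candidate split by the product of its units' probabilities.
# This made-up inventory stands in for a learned multigram model.
UNITS = {"re": 0.08, "cog": 0.02, "ni": 0.05, "tion": 0.10,
         "c": 0.01, "o": 0.04, "g": 0.01, "n": 0.03, "i": 0.04, "t": 0.03}

def segment(word, max_len=4):
    """Best segmentation of `word` into inventory units (None if impossible)."""
    n = len(word)
    best = [(-math.inf, None)] * (n + 1)   # (log-prob, backpointer)
    best[0] = (0.0, None)
    for end in range(1, n + 1):
        for start in range(max(0, end - max_len), end):
            p = UNITS.get(word[start:end])
            if p is None or best[start][0] == -math.inf:
                continue
            cand = best[start][0] + math.log(p)
            if cand > best[end][0]:
                best[end] = (cand, start)
    if best[n][0] == -math.inf:
        return None
    units, pos = [], n
    while pos > 0:
        start = best[pos][1]
        units.append(word[start:pos])
        pos = start
    return units[::-1]

print(segment("recognition"))
```

Because a few hundred such units can cover the vocabulary of several languages, a single multigram language model can replace the large per-language word lexicons mentioned above.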