Summed Parallel Infinite Impulse Response (SPIIR) Filters For Low-Latency Gravitational Wave Detection
With the upgrade of current gravitational wave detectors, the first detection
of gravitational wave signals is expected to occur in the next decade.
Low-latency gravitational wave triggers will be necessary to make fast
follow-up electromagnetic observations of events related to their source, e.g.,
prompt optical emission associated with short gamma-ray bursts. In this paper
we present a new time-domain low-latency algorithm for identifying the presence
of gravitational waves produced by compact binary coalescence events in noisy
detector data. Our method calculates the signal-to-noise ratio from the
summation of a bank of parallel infinite impulse response (IIR) filters. We
show that our summed parallel infinite impulse response (SPIIR) method can
recover more than 99% of the signal-to-noise ratio produced by the optimal
matched filter. We emphasise the benefits of the SPIIR method for advanced
detectors, which will require larger template banks.
Comment: 9 pages, 6 figures, for PR
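A minimal NumPy sketch of the core SPIIR idea described above: run a bank of first-order recursive (IIR) filters in parallel on the data and take the magnitude of their summed output as the SNR time series. The coefficients, delays, and function name are illustrative placeholders, not values or code from the paper.

```python
import numpy as np

def spiir_snr(strain, a1, b0, delays):
    """Sketch of SPIIR filtering: sum a bank of first-order complex IIR filters.

    strain : whitened detector time series
    a1, b0 : illustrative per-filter feedback / feed-forward coefficients
    delays : illustrative per-filter input delays (in samples)
    Returns |sum_k y_k[n]|, proportional to the SNR time series.
    """
    n_samples = len(strain)
    state = np.zeros(len(a1), dtype=complex)   # one recursive state per filter
    snr = np.zeros(n_samples)
    for n in range(n_samples):
        for k in range(len(a1)):
            x = strain[n - delays[k]] if n >= delays[k] else 0.0
            state[k] = a1[k] * state[k] + b0[k] * x   # y_k[n] = a1_k*y_k[n-1] + b0_k*x[n-d_k]
        snr[n] = abs(state.sum())                     # magnitude of the summed parallel outputs
    return snr
```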
Towards low-latency real-time detection of gravitational waves from compact binary coalescences in the era of advanced detectors
Electromagnetic (EM) follow-up observations of gravitational wave (GW) events
will help shed light on the nature of the sources, and more can be learned if
the EM follow-ups can start as soon as the GW event becomes observable. In this
paper, we propose a computationally efficient time-domain algorithm capable of
detecting gravitational waves (GWs) from coalescing binaries of compact objects
with nearly zero time delay. In cases where the signal is strong enough, our
algorithm also has the flexibility to trigger EM observations before the merger.
The key to the efficiency of our algorithm arises from the use of chains of
so-called Infinite Impulse Response (IIR) filters, which filter time-series
data recursively. Computational cost is further reduced by a template
interpolation technique that requires filtering to be done only for a much
coarser template bank than otherwise required to sufficiently recover optimal
signal-to-noise ratio. Towards future detectors with sensitivity extending to
lower frequencies, our algorithm's computational cost is shown to increase
far less steeply than that of the conventional time-domain correlation
method. Moreover, at latencies of less than hundreds to thousands of seconds,
this method is expected to be computationally more efficient than the
straightforward frequency-domain method.
Comment: 19 pages, 6 figures, for PR
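The near-zero latency comes from processing each strain sample as it arrives, since a recursive IIR filter needs only O(1) work per sample. Below is a hedged streaming sketch with an early trigger; the class, coefficients, and threshold are illustrative and the paper's template-interpolation step is not reproduced.

```python
import numpy as np

class FirstOrderIIR:
    """One recursive (IIR) filter updated once per incoming strain sample."""
    def __init__(self, a1, b0):
        self.a1, self.b0, self.y = a1, b0, 0.0 + 0.0j

    def push(self, x):
        self.y = self.a1 * self.y + self.b0 * x   # O(1) recursive update per sample
        return self.y

def stream_triggers(samples, bank, threshold=8.0):
    """Yield (sample index, SNR) as soon as the summed filter output crosses
    `threshold`, i.e. potentially before the waveform (and merger) has ended."""
    for n, x in enumerate(samples):
        snr = abs(sum(f.push(x) for f in bank))
        if snr > threshold:
            yield n, snr

# toy usage with made-up coefficients on simulated noise
bank = [FirstOrderIIR(0.99 * np.exp(2j * np.pi * f / 4096), 1e-3) for f in (50, 60, 70)]
first = next(stream_triggers(np.random.default_rng(1).standard_normal(4096), bank), None)
```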
HMM based scenario generation for an investment optimisation problem
The Geometric Brownian motion (GBM) is a standard method for modelling financial time series. An important criticism of this method is that the parameters of the GBM are assumed to be constants; due to this, important features of the time series, such as extreme behaviour or volatility clustering, cannot be captured. We propose an approach by which the parameters of the GBM are able to switch between regimes; more precisely, they are governed by a hidden Markov chain. Thus, we model the financial time series via a hidden Markov model (HMM) with a GBM in each state. Using this approach, we generate scenarios for a financial portfolio optimisation problem in which the portfolio CVaR is minimised. Numerical results are presented. This study was funded by NET ACE at OptiRisk Systems.
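A small sketch of the scenario-generation step, assuming NumPy: a hidden Markov chain switches the GBM drift and volatility between regimes at every step. All parameter values and names are illustrative, not taken from the paper.

```python
import numpy as np

def regime_switching_gbm(s0, mu, sigma, P, n_steps, dt=1 / 252, rng=None):
    """Generate one price scenario from a GBM whose (mu, sigma) are selected by a
    hidden Markov chain with transition matrix P (all parameters illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    n_states = len(mu)
    state = rng.integers(n_states)                     # initial hidden regime
    prices = [s0]
    for _ in range(n_steps):
        state = rng.choice(n_states, p=P[state])       # hidden regime switch
        z = rng.standard_normal()
        # exact GBM step within the current regime
        prices.append(prices[-1] * np.exp((mu[state] - 0.5 * sigma[state] ** 2) * dt
                                          + sigma[state] * np.sqrt(dt) * z))
    return np.array(prices)

# e.g. 1000 scenarios with two regimes (calm vs. volatile) for a CVaR optimiser
P = np.array([[0.95, 0.05], [0.10, 0.90]])
scenarios = [regime_switching_gbm(100.0, [0.08, -0.05], [0.15, 0.45], P, 252)
             for _ in range(1000)]
```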
Inducing Probabilistic Grammars by Bayesian Model Merging
We describe a framework for inducing probabilistic grammars from corpora of
positive samples. First, samples are {\em incorporated} by adding ad-hoc rules
to a working grammar; subsequently, elements of the model (such as states or
nonterminals) are {\em merged} to achieve generalization and a more compact
representation. The choice of what to merge and when to stop is governed by the
Bayesian posterior probability of the grammar given the data, which formalizes
a trade-off between a close fit to the data and a default preference for
simpler models (`Occam's Razor'). The general scheme is illustrated using three
types of probabilistic grammars: Hidden Markov models, class-based n-grams,
and stochastic context-free grammars.
Comment: To appear in Grammatical Inference and Applications, Second International Colloquium on Grammatical Inference; Springer Verlag, 1994. 13 pages
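A rough NumPy sketch of the Occam trade-off for the HMM case: pool the parameters of two states, renormalize, and keep the merge only if the size-penalised log posterior improves. The pooling heuristic and the simple size penalty are stand-ins for illustration, not the paper's exact formulation.

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a discrete-emission HMM via the scaled forward pass."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

def merge_states(pi, A, B, i, j):
    """Merge state j into state i: pool probability mass, then renormalize rows."""
    keep = [k for k in range(len(pi)) if k != j]
    pi2 = pi.copy(); pi2[i] += pi2[j]; pi2 = pi2[keep]
    A2 = A.copy(); A2[i] += A2[j]; A2[:, i] += A2[:, j]
    A2 = A2[np.ix_(keep, keep)]; A2 /= A2.sum(axis=1, keepdims=True)
    B2 = B.copy(); B2[i] += B2[j]; B2 = B2[keep]; B2 /= B2.sum(axis=1, keepdims=True)
    return pi2, A2, B2

def log_posterior(pi, A, B, obs, size_penalty=2.0):
    """Occam trade-off: data fit minus a prior that prefers fewer states."""
    return forward_loglik(pi, A, B, obs) - size_penalty * len(pi)

# greedy use: accept a merge only if it raises the size-penalised log posterior, e.g.
# log_posterior(*merge_states(pi, A, B, 0, 1), obs) > log_posterior(pi, A, B, obs)
```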
Implementing EM and Viterbi algorithms for Hidden Markov Model in linear memory
Background: The Baum-Welch learning procedure for Hidden Markov Models (HMMs) provides a powerful tool for tailoring HMM topologies to data for use in knowledge discovery and clustering. A linear memory procedure recently proposed by Miklós, I. and Meyer, I.M. describes a memory-sparse version of the Baum-Welch algorithm with modifications to the original probabilistic table topologies to make memory use independent of sequence length (and linearly dependent on state number). The original description of the technique has some errors that we amend. We then compare the corrected implementation on a variety of data sets with conventional and checkpointing implementations.
Results: We provide a correct recurrence relation for the emission parameter estimate and extend it to parameter estimates of the Normal distribution. To accelerate estimation of the prior state probabilities, and decrease memory use, we reverse the originally proposed forward sweep. We describe different scaling strategies necessary in all real implementations of the algorithm to prevent underflow. In this paper we also describe our approach to a linear memory implementation of the Viterbi decoding algorithm (with linearity in the sequence length, while memory use is approximately independent of state number). We demonstrate the use of the linear memory implementation on an extended Duration Hidden Markov Model (DHMM) and on an HMM with a spike detection topology. Comparing the various implementations of the Baum-Welch procedure, we find that the checkpointing algorithm produces the best overall trade-off between memory use and speed. In cases where sequence length is very large (for Baum-Welch), or state number is very large (for Viterbi), the linear memory methods outlined may offer some utility.
Conclusion: Our performance-optimized Java implementations of the Baum-Welch algorithm are available at http://logos.cs.uno.edu/~achurban. The described method and implementations will aid sequence alignment, gene structure prediction, HMM profile training, nanopore ionic flow blockade analysis and many other domains that require efficient HMM training with EM.
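One of the scaling strategies mentioned above, underflow prevention, is commonly handled by working in log space. The sketch below shows a standard log-space Viterbi pass in NumPy; note that it still stores the full O(T*N) backpointer table, which is precisely the memory cost the paper's linear-memory variant is designed to avoid.

```python
import numpy as np

def viterbi_log(log_pi, log_A, log_B, obs):
    """Standard Viterbi decoding in log space (avoids numerical underflow).

    log_pi : (N,) log initial probabilities
    log_A  : (N, N) log transition matrix
    log_B  : (N, M) log emission matrix, obs is a sequence of symbol indices.
    """
    T, N = len(obs), len(log_pi)
    delta = log_pi + log_B[:, obs[0]]
    back = np.zeros((T, N), dtype=int)            # full backpointer table: O(T*N) memory
    for t in range(1, T):
        scores = delta[:, None] + log_A           # scores[i, j]: best path into j via i
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):                 # backtrack the most likely state path
        path[t - 1] = back[t, path[t]]
    return path
```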
FIBS: A Generic Framework for Classifying Interval-based Temporal Sequences
We study the problem of classifying interval-based temporal sequences
(IBTSs). Since common classification algorithms cannot be directly applied to
IBTSs, the main challenge is to define a set of features that effectively
represents the data such that classifiers can be applied. Most prior work
utilizes frequent pattern mining to define a feature set based on discovered
patterns. However, frequent pattern mining is computationally expensive and
often discovers many irrelevant patterns. To address this shortcoming, we
propose the FIBS framework for classifying IBTSs. FIBS extracts features
relevant to classification from IBTSs based on relative frequency and temporal
relations. To avoid selecting irrelevant features, a filter-based selection
strategy is incorporated into FIBS. Our empirical evaluation on eight
real-world datasets demonstrates the effectiveness of our methods in practice.
The results provide evidence that FIBS effectively represents IBTSs for
classification algorithms, which contributes to similar or significantly better
accuracy compared to state-of-the-art competitors. It also suggests that the
feature selection strategy is beneficial to FIBS's performance.
Comment: In: Big Data Analytics and Knowledge Discovery. DaWaK 2020. Springer, Cha
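A toy sketch of the relative-frequency part of such a feature map (the temporal-relation features that FIBS also extracts are omitted); the interval encoding and function name are assumptions for illustration.

```python
from collections import Counter
from typing import List, Tuple

Interval = Tuple[str, float, float]   # (event label, start time, end time)

def relative_frequency_features(seq: List[Interval], vocabulary: List[str]) -> List[float]:
    """Hypothetical feature map: each label's share of the sequence's intervals."""
    counts = Counter(label for label, _, _ in seq)
    total = sum(counts.values()) or 1
    return [counts[label] / total for label in vocabulary]

# toy usage: two short interval-based temporal sequences over the same vocabulary
vocab = ["A", "B", "C"]
s1 = [("A", 0, 4), ("B", 2, 6), ("A", 7, 9)]
s2 = [("C", 0, 3), ("B", 1, 5)]
X = [relative_frequency_features(s, vocab) for s in (s1, s2)]  # feed X to any classifier
```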
Computational identification of adaptive mutants using the VERT system
Background: Evolutionary dynamics of microbial organisms can now be visualized using the Visualizing Evolution in Real Time (VERT) system, in which several isogenic strains expressing different fluorescent proteins compete during adaptive evolution and are tracked using fluorescent cell sorting to construct a population history over time. Mutations conferring enhanced growth rates can be detected by observing changes in the fluorescent population proportions.
Results: Using data obtained from several VERT experiments, we construct a hidden Markov-derived model to detect these adaptive events in VERT experiments without external intervention beyond initial training. Analysis of annotated data revealed that the model achieves consensus with human annotation for 85-93% of the data points when detecting adaptive events. A method to determine the optimal time point to isolate adaptive mutants is also introduced.
Conclusions: The developed model offers a new way to monitor adaptive evolution experiments without the need for external intervention, thereby simplifying adaptive evolution efforts relying on population tracking. Future efforts to construct a fully automated system to isolate adaptive mutants may find the algorithm a useful tool.
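As a rough illustration only (a simple run-length heuristic, not the paper's hidden Markov-derived model), candidate adaptive events could be flagged as sustained rises in a strain's population proportion; thresholds and names are made up.

```python
def candidate_adaptive_events(proportions, min_rise=0.02, min_run=3):
    """Flag time points where a strain's proportion rises by at least `min_rise`
    per sampling interval for `min_run` consecutive intervals (stand-in heuristic)."""
    events, run = [], 0
    for t in range(1, len(proportions)):
        run = run + 1 if proportions[t] - proportions[t - 1] >= min_rise else 0
        if run >= min_run:
            events.append(t)   # candidate adaptive sweep detected at time index t
            run = 0
    return events

# toy usage: a strain whose proportion starts rising at index 5; detection fires
# once three consecutive rises have accumulated
print(candidate_adaptive_events([0.33, 0.33, 0.32, 0.33, 0.33, 0.36, 0.40, 0.45, 0.50]))
```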
Duration learning for analysis of nanopore ionic current blockades
Background: Ionic current blockade signal processing, for use in nanopore detection, offers a promising new way to analyze single-molecule properties, with potential implications for DNA sequencing. The alpha-Hemolysin transmembrane channel interacts with a translocating molecule in a nontrivial way, frequently evidenced by a complex ionic flow blockade pattern. Typically, recorded current blockade signals have several levels of blockade, with various durations, all obeying a fixed statistical profile for a given molecule. Hidden Markov Model (HMM) based duration learning experiments on artificial two-level Gaussian blockade signals helped us to identify a proper modeling framework. We then apply our framework to the real multi-level DNA hairpin blockade signal.
Results: The identified upper-level blockade state is observed with durations that are geometrically distributed (consistent with a physical decay process for remaining in any given state). We show that a mixture of convolution chains of geometrically distributed states is better suited to representing multimodal, long-tailed duration phenomena. Based on learned HMM profiles, we are able to classify 9 base-pair DNA hairpins with accuracy up to 99.5% on signals from same-day experiments.
Conclusion: We have demonstrated several implementations for de novo estimation of the duration distribution probability density function within an HMM framework and applied our model topology to the real data. The proposed design could be handy in molecular analysis based on nanopore current blockade signals.
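A small simulation sketch of the duration idea above, assuming NumPy: a chain of sub-states with geometric dwell times has a negative-binomial (convolved-geometric) total duration, and mixing chains with different lengths and self-loop probabilities yields multimodal, long-tailed durations. Chain lengths and probabilities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def geometric_chain_duration(n_substates, stay_prob, rng):
    """Total dwell time of a chain of sub-states, each with self-loop probability
    `stay_prob`: the convolution of independent geometric durations."""
    return sum(int(rng.geometric(1.0 - stay_prob)) for _ in range(n_substates))

def mixture_duration(components, weights, rng):
    """Draw a duration from a mixture of geometric convolution chains."""
    n_sub, p_stay = components[rng.choice(len(components), p=weights)]
    return geometric_chain_duration(n_sub, p_stay, rng)

# e.g. a short-dwell chain mixed with a long-tailed one -> multimodal duration histogram
samples = [mixture_duration([(2, 0.80), (5, 0.98)], [0.6, 0.4], rng) for _ in range(10_000)]
```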
A Method of Hidden Markov Model Optimization for Use with Geophysical Data Sets
Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues. Our method improves on standard HMM methods and is based on the systematic analysis of structural local maxima of the HMM objective function. Preliminary results of the method as applied to geodetic and seismic records are presented.
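A crude sketch of one way to probe local maxima of the HMM objective function: refit from many random initialisations and group the converged log-likelihoods. It assumes the third-party hmmlearn package and is only a proxy for the paper's systematic structural analysis.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumed third-party dependency

def distinct_local_maxima(X, n_states=3, n_restarts=20, tol=1.0):
    """Fit a Gaussian HMM from many random initialisations and keep one
    representative log-likelihood per plateau of converged solutions."""
    scores = []
    for seed in range(n_restarts):
        model = GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=200, random_state=seed)
        model.fit(X)                      # X: (n_samples, n_features) geophysical record
        scores.append(model.score(X))     # converged log-likelihood for this restart
    maxima = []
    for s in sorted(scores, reverse=True):
        if all(abs(s - m) > tol for m in maxima):
            maxima.append(s)              # treat well-separated values as distinct maxima
    return maxima
```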