Modal density function and number of propagating modes in ducts
The question of how many propagating modes fall within a small range of mode cut-off ratio was raised. The population density of modes was shown to be greatest near cut-off and least for the well-propagating modes. Modes of nearly the same cut-off ratio were shown to behave nearly the same in a sound-absorbing duct, as well as in the way they propagate to the far field. Rather than handling all of the propagating modes individually, they can be grouped into several cut-off ratio ranges. Knowing the modal density function is important for estimating the acoustic power distribution.
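As an illustration of grouping propagating modes by cut-off ratio, the sketch below assumes a rigid-walled rectangular duct, for which the (m, n) mode cuts off at f_c = (c/2)·sqrt((m/a)² + (n/b)²). The duct geometry, drive frequency, and number of bins are hypothetical choices (the abstract does not fix a duct shape), but the grouping step is the one the abstract describes.

```python
import math

def cutoff_frequency(m, n, a, b, c=343.0):
    """Cut-off frequency (Hz) of mode (m, n) in a rigid-walled
    rectangular duct of cross-section a x b metres."""
    return (c / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2)

def group_modes_by_cutoff_ratio(f, a, b, n_bins=5, max_order=50):
    """Group all modes propagating at drive frequency f into bins of
    cut-off ratio xi = f_c / f; xi < 1 means the mode propagates."""
    bins = [[] for _ in range(n_bins)]
    for m in range(max_order):
        for n in range(max_order):
            xi = cutoff_frequency(m, n, a, b) / f
            if xi < 1.0:
                bins[min(int(xi * n_bins), n_bins - 1)].append((m, n))
    return bins

# A 0.3 m x 0.2 m duct driven at 5 kHz: the high-xi bins (modes near
# cut-off) tend to be the most densely populated, as the abstract notes.
bins = group_modes_by_cutoff_ratio(f=5000.0, a=0.3, b=0.2)
for i, modes in enumerate(bins):
    print(f"xi in [{i / 5:.1f}, {(i + 1) / 5:.1f}): {len(modes)} modes")
```

Acoustic power estimates can then treat each bin as a single group instead of tracking every mode individually.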
Optimal client recommendation for market makers in illiquid financial products
The process of liquidity provision in financial markets can result in
prolonged exposure to illiquid instruments for market makers. In this case,
where a proprietary position is not desired, pro-actively targeting the right
client who is likely to be interested can be an effective means to offset this
position, rather than relying on commensurate interest arising through natural
demand. In this paper, we consider the inference of a client profile for the
purpose of corporate bond recommendation, based on typical recorded information
available to the market maker. Given a historical record of corporate bond
transactions and bond meta-data, we use a topic-modelling analogy to develop a
probabilistic technique for compiling a curated list of client recommendations
for a particular bond that needs to be traded, ranked by probability of
interest. We show that a model based on Latent Dirichlet Allocation offers
promising performance to deliver relevant recommendations for sales traders.
Comment: 12 pages, 3 figures, 1 table
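As a rough sketch of the topic-modelling analogy (clients as documents, bonds traded as words), the snippet below fits scikit-learn's LatentDirichletAllocation to a hypothetical client-bond transaction count matrix and ranks clients for a given bond by p(bond | client) = Σ_k θ_ck φ_kb. The toy data, dimensions, and scoring rule are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical toy data: rows = clients, columns = bonds,
# entries = historical transaction counts.
rng = np.random.default_rng(0)
counts = rng.poisson(0.5, size=(20, 30))

lda = LatentDirichletAllocation(n_components=5, random_state=0)
theta = lda.fit_transform(counts)  # client-topic mixtures, rows sum to 1
# Normalise pseudo-counts into topic-bond distributions.
phi = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

def recommend_clients(bond_idx, top_n=5):
    """Rank clients by probability of interest in the given bond:
    p(bond | client) = sum_k theta[client, k] * phi[k, bond]."""
    scores = theta @ phi[:, bond_idx]
    return np.argsort(scores)[::-1][:top_n]

print(recommend_clients(bond_idx=7))  # curated client list for bond 7
```

In practice the count matrix would come from the recorded transaction history and bond meta-data the abstract mentions.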
Part of Speech Based Term Weighting for Information Retrieval
Automatic language processing tools typically assign to terms so-called
weights corresponding to the contribution of terms to information content.
Traditionally, term weights are computed from lexical statistics, e.g., term
frequencies. We propose a new type of term weight that is computed from part of
speech (POS) n-gram statistics. The proposed POS-based term weight represents
how informative a term is in general, based on the POS contexts in which it
generally occurs in language. We suggest five different computations of
POS-based term weights by extending existing statistical approximations of term
information measures. We apply these POS-based term weights to information
retrieval, by integrating them into the model that matches documents to
queries. Experiments with two TREC collections and 300 queries, using TF-IDF &
BM25 as baselines, show that integrating our POS-based term weights to
retrieval always leads to gains (up to +33.7% from the baseline). Additional
experiments with a different retrieval model as baseline (Language Model with
Dirichlet priors smoothing) and our best performing POS-based term weight, show
consistent retrieval gains across the whole smoothing range of the
baseline.
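The abstract does not reproduce the five weighting formulae, but the general idea of scoring a term by the POS n-gram contexts it occurs in can be sketched as follows. The inverse-context-frequency heuristic, the toy tagged corpus, and the function names are assumptions for illustration only, not the paper's exact computations.

```python
import math
from collections import Counter

def pos_term_weights(tagged_sentences, n=3):
    """Illustrative POS-based term weight: a term is scored by the
    average log inverse frequency of the POS n-gram contexts it occurs
    in, so terms appearing in rarer POS contexts weigh more."""
    ngram_counts = Counter()
    term_contexts = {}
    for sent in tagged_sentences:
        tags = [tag for _, tag in sent]
        for i, (word, _) in enumerate(sent):
            lo = max(0, i - n // 2)
            ctx = tuple(tags[lo:lo + n])  # POS n-gram around the term
            ngram_counts[ctx] += 1
            term_contexts.setdefault(word, []).append(ctx)
    total = sum(ngram_counts.values())
    return {
        word: sum(math.log(total / ngram_counts[c]) for c in ctxs) / len(ctxs)
        for word, ctxs in term_contexts.items()
    }

# Toy POS-tagged corpus; in practice the statistics come from a large
# tagged collection (e.g. tagged with nltk.pos_tag).
sents = [[("the", "DT"), ("cat", "NN"), ("sat", "VBD")],
         [("a", "DT"), ("dog", "NN"), ("ran", "VBD")],
         [("the", "DT"), ("the", "DT"), ("end", "NN")]]
weights = pos_term_weights(sents)
# Multiply these weights into TF-IDF or BM25 term scores at matching time.
```

Integrating such a weight into retrieval amounts to multiplying it into the per-term score of the baseline matching model, as the abstract describes.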
Probabilistic Archetypal Analysis
Archetypal analysis represents a set of observations as convex combinations
of pure patterns, or archetypes. The original geometric formulation of finding
archetypes by approximating the convex hull of the observations assumes them to
be real valued. This, unfortunately, is not compatible with many practical
situations. In this paper we revisit archetypal analysis from the basic
principles, and propose a probabilistic framework that accommodates other
observation types such as integers, binary, and probability vectors. We
corroborate the proposed methodology with convincing real-world applications on
finding archetypal winter tourists based on binary survey data, archetypal
disaster-affected countries based on disaster count data, and document
archetypes based on term-frequency data. We also present an appropriate
visualization tool to summarize archetypal analysis solutions better.
Comment: 24 pages; added literature review and visualization
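The paper's probabilistic framework replaces the geometric step with observation-specific likelihoods (e.g. for counts or binary data); the sketch below illustrates only the classical convex-combination constraint that both formulations share. The `convex_coefficients` helper and its penalty-row trick for enforcing sum-to-one are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def convex_coefficients(x, Z, penalty=100.0):
    """Solve min ||x - a @ Z|| s.t. a >= 0, sum(a) = 1, i.e. project x
    onto the convex hull of the archetypes (rows of Z), using NNLS with
    a heavily weighted extra row enforcing the sum-to-one constraint."""
    k = Z.shape[0]
    A = np.vstack([Z.T, penalty * np.ones((1, k))])
    b = np.concatenate([x, [penalty]])
    a, _ = nnls(A, b)
    return a

# Three archetypes at the corners of a triangle; an interior point is
# recovered exactly as a convex combination of them.
Z = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
a = convex_coefficients(np.array([0.25, 0.25]), Z)
print(a)  # approximately [0.5, 0.25, 0.25]
```

For integer or binary observations, the probabilistic version keeps this simplex constraint on the coefficients but swaps the squared-error objective for the matching likelihood.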
Cursive script recognition using wildcards and multiple experts
Variability in handwriting styles suggests that many letter recognition engines cannot correctly identify some hand-written letters of poor quality at reasonable computational cost. Methods that are capable of searching the resulting sparse graph of letter candidates are therefore required. The method presented here employs “wildcards” to represent missing letter candidates. Multiple experts are used to represent different aspects of handwriting. Each expert evaluates closeness of match and indicates its confidence. Explanation experts determine the degree to which the word alternative under consideration explains extraneous letter candidates. Schemata for normalisation and combination of scores are investigated and their performance compared. Hill climbing yields near-optimal combination weights that outperform comparable methods on identical dynamic handwriting data.
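The hill-climbing search for combination weights can be sketched as below: perturb one weight at a time and keep the change when recognition accuracy does not degrade. The toy expert scores, data layout, and acceptance rule are illustrative assumptions, not the paper's trained experts or scoring schemata.

```python
import random

def combine(scores, weights):
    """Weighted sum of normalised expert scores for one alternative."""
    return sum(w * s for w, s in zip(weights, scores))

def hill_climb_weights(items, labels, n_experts, iters=200, step=0.05, seed=0):
    """Hill climbing over combination weights: perturb one weight at a
    time, keeping the change if recognition accuracy does not drop."""
    rng = random.Random(seed)
    weights = [1.0 / n_experts] * n_experts

    def accuracy(ws):
        correct = 0
        for candidates, true_idx in zip(items, labels):
            best = max(range(len(candidates)),
                       key=lambda i: combine(candidates[i], ws))
            correct += (best == true_idx)
        return correct / len(labels)

    best_acc = accuracy(weights)
    for _ in range(iters):
        j = rng.randrange(n_experts)
        trial = list(weights)
        trial[j] = max(0.0, trial[j] + rng.choice([-step, step]))
        trial_acc = accuracy(trial)
        if trial_acc >= best_acc:
            weights, best_acc = trial, trial_acc
    return weights, best_acc

# Toy data: each item lists candidate word alternatives, each scored by
# two experts; labels give the index of the correct alternative.
items = [[(0.9, 0.2), (0.3, 0.8)], [(0.1, 0.9), (0.7, 0.4)]]
labels = [0, 1]
w, acc = hill_climb_weights(items, labels, n_experts=2)
```

With real handwriting data, the accuracy function would score word alternatives drawn from the sparse letter-candidate graph, wildcards included.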