98,294 research outputs found
An Algorithm for Matching Heterogeneous Financial Databases: a Case Study for COMPUSTAT/CRSP and I/B/E/S Databases
Rigorous and proper linking of financial databases is a necessary step to test trading strategies incorporating multimodal sources of information. This paper proposes a machine learning solution to match companies in heterogeneous financial databases. Our method, named Financial Attribute Selection Distance (FASD), has two stages, each corresponding to one of the two interrelated tasks commonly involved in heterogeneous database matching: schema matching and entity matching. FASD's schema matching procedure is based on the Kullback-Leibler divergence of string and numeric attributes. FASD's entity matching solution relies on learning a company distance flexible enough to handle the numeric and string attribute links found by the schema matching algorithm and to incorporate different string matching approaches, such as edit-based and token-based metrics. The parameters of the distance are optimized using the F-score as the cost function. FASD matches the joint Compustat/CRSP and Institutional Brokers' Estimate System (I/B/E/S) databases with an F-score over 0.94 using only about a hundred manually labeled company links.
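As a rough illustration of the entity-matching stage, the sketch below combines an edit-based metric (Levenshtein) with a token-based one (Jaccard over words) into a single weighted company distance; in FASD the parameters would be tuned to maximize F-score on labeled links. The weighting scheme, names, and the fixed weight below are illustrative assumptions, not the paper's actual implementation.

```python
# Toy sketch of a learned company distance in the spirit of FASD:
# an edit-based metric (Levenshtein) plus a token-based one (Jaccard
# over words), combined by a weight that would normally be optimized
# against the F-score on manually labeled links. Hypothetical, not
# the paper's code.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def edit_sim(a: str, b: str) -> float:
    """Edit-based similarity normalized to [0, 1]."""
    m = max(len(a), len(b))
    return 1.0 if m == 0 else 1.0 - levenshtein(a, b) / m

def token_sim(a: str, b: str) -> float:
    """Token-based (Jaccard) similarity over whitespace tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def company_distance(a: str, b: str, w: float = 0.5) -> float:
    """Convex combination of the two metrics; w is the learnable weight."""
    return 1.0 - (w * edit_sim(a, b) + (1 - w) * token_sim(a, b))

d_match = company_distance("INTL BUSINESS MACHINES", "INTL BUSINESS MACHINES CORP")
d_nonmatch = company_distance("INTL BUSINESS MACHINES", "GENERAL MOTORS CO")
assert d_match < d_nonmatch
```

A learned distance of this shape can absorb both spelling noise (edit-based term) and word reordering or appended suffixes like "CORP" (token-based term).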
Improved bounds for testing Dyck languages
In this paper we consider the problem of deciding membership in Dyck
languages, a fundamental family of context-free languages, comprised of
well-balanced strings of parentheses. In this problem we are given a string of
length n over an alphabet of parentheses with k types, and must decide whether it is
well-balanced. We consider this problem in the property testing setting, where
one would like to make the decision while querying as few characters of the
input as possible.
Property testing of strings for Dyck language membership for k = 1, with a
number of queries independent of the input size n, was provided in [Alon,
Krivelevich, Newman and Szegedy, SICOMP 2001]. Property testing of strings for
Dyck language membership for k >= 2 was first investigated in [Parnas, Ron
and Rubinfeld, RSA 2003]. They showed an upper bound and a lower bound for
distinguishing strings belonging to the language from strings that are far (in
terms of the Hamming distance) from the language, which are respectively (up to
polylogarithmic factors) the 2/3 power and the 1/11 power of the input size
n.
Here we improve the power of n in both bounds. For the upper bound, we
introduce a recursion technique that, together with a refinement of the methods
in the original work, provides a test whose query complexity is a strictly
smaller power of n.
For the lower bound, we introduce a new problem called Truestring Equivalence,
which is easily reducible to the Dyck language property testing problem for
k >= 2. For this new problem, we show a lower bound that is a larger power of
n than the previous one.
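Exact membership in a Dyck language is easy to decide with a single stack-based pass over the whole input; the difficulty addressed above is deciding it while reading only a small sample of positions. A minimal sketch of the exact (non-testing) check, with k = 3 bracket types as an illustration:

```python
# Exact membership test for a Dyck language with k parenthesis types,
# via the standard stack algorithm. A property tester, in contrast,
# queries only a few positions and decides probabilistically; this
# sketch only pins down the language being tested.

def is_dyck(s: str, pairs=None) -> bool:
    """Return True iff s is a well-balanced string over the given pairs."""
    pairs = pairs or {")": "(", "]": "[", "}": "{"}  # k = 3 types here
    opens = set(pairs.values())
    stack = []
    for ch in s:
        if ch in opens:
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
        else:
            return False  # symbol outside the parenthesis alphabet
    return not stack       # every opener must have been closed

assert is_dyck("([]){}")
assert not is_dyck("([)]")  # balanced per type, but wrongly interleaved
```

The rejected example "([)]" shows why k >= 2 is harder than k = 1: each type is balanced in isolation, so a tester must detect bad interleavings, not just symbol counts and prefix sums.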
Automated census record linking: a machine learning approach
Thanks to the availability of new historical census sources and advances in record linking technology, economic historians are becoming big data genealogists. Linking individuals over time and between databases has opened up new avenues for research into intergenerational mobility, assimilation, discrimination, and the returns to education. To take advantage of these new research opportunities, scholars need to be able to accurately and efficiently match historical records and produce an unbiased dataset of links for downstream analysis. I detail a standard and transparent census matching technique for constructing linked samples that can be replicated across a variety of cases. The procedure applies insights from machine learning classification and text comparison to the well-known problem of record linkage, with a focus on the costs and benefits specific to working with historical data. I begin by extracting a subset of possible matches for each record, and then use training data to tune a matching algorithm that attempts to minimize both false positives and false negatives, taking into account the inherent noise in historical records. To make the procedure precise, I trace its application to an example from my own work, linking children from the 1915 Iowa State Census to their adult selves in the 1940 Federal Census. In addition, I provide guidance on a number of practical questions, including how large the training data needs to be relative to the sample. This research has been
supported by the NSF-IGERT Multidisciplinary Program in Inequality & Social Policy at Harvard
University (Grant No. 0333403)
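The two-stage pipeline described above (extract candidate matches, then score and threshold them) can be sketched as follows. The field names, blocking rule, similarity measure, and threshold are hypothetical stand-ins; in the actual procedure the scoring and thresholds are tuned on training data to trade off false positives against false negatives.

```python
# Hypothetical sketch of the two-stage record-linking pipeline:
# (1) blocking extracts a small candidate set per record,
# (2) candidates are scored and a link is kept only above a threshold
#     that training data would be used to tune. Illustrative only.

from difflib import SequenceMatcher

def candidates(rec, census, year_gap=25, tol=2):
    """Blocking: same surname initial and a plausible implied age gap."""
    return [c for c in census
            if c["last"][:1] == rec["last"][:1]
            and abs((c["age"] - rec["age"]) - year_gap) <= tol]

def score(rec, cand):
    """Average string similarity of first and last names."""
    sim = lambda a, b: SequenceMatcher(None, a.lower(), b.lower()).ratio()
    return (sim(rec["first"], cand["first"]) + sim(rec["last"], cand["last"])) / 2

def link(rec, census, threshold=0.85):
    """Best-scoring candidate above threshold, else None (avoid false links)."""
    pool = candidates(rec, census)
    if not pool:
        return None
    best = max(pool, key=lambda c: score(rec, c))
    return best if score(rec, best) >= threshold else None

# Toy records in the spirit of the 1915 Iowa -> 1940 Federal example.
iowa_1915 = {"first": "John", "last": "Svensen", "age": 10}
federal_1940 = [
    {"first": "John", "last": "Svenson", "age": 35},   # noisy spelling
    {"first": "James", "last": "Smith", "age": 35},
]
assert link(iowa_1915, federal_1940) == federal_1940[0]
```

Returning None when no candidate clears the threshold reflects the asymmetry discussed in the abstract: a missed link costs sample size, but a false link biases downstream analysis.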
Critical couplings and string tensions via lattice matching of RG decimations
We calculate critical couplings and string tensions in SU(2) and SU(3) pure
lattice gauge theory by a simple and inexpensive technique of two-lattice
matching of RG block transformations. The transformations are potential-moving
decimations generating plaquette actions with a large number of group characters,
and exhibit rapid approach to a unique renormalized trajectory. Fixing the
critical coupling at one value of the temporal lattice length N_t
by MC simulation, the critical couplings for any other value of N_t
are then obtained by lattice matching of the block decimations. We
obtain values over a wide range of N_t and find
agreement with MC simulation results to within a few percent in all cases. A
similar procedure allows the calculation of string tensions, with similarly good
agreement with MC data.
Comment: 12 pages, Latex, 1 figure
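The core idea of two-lattice matching is to fix the coarse-lattice coupling by demanding that observables agree between the decimated fine lattice and a coarse lattice simulated directly. A toy analogue in the exactly solvable 1D Ising model (not lattice gauge theory; the decimation relation tanh K' = tanh(K)^2 and the enumeration below are standard textbook material, not the paper's method):

```python
# Toy analogue of two-lattice matching, NOT lattice gauge theory:
# decimating every other spin of a 1D Ising chain maps coupling K to
# K' with tanh(K') = tanh(K)**2. We check the "matching" condition by
# requiring a longer-distance observable on the fine chain to agree
# with a nearest-neighbor observable on the coarse chain.

import math
from itertools import product

def corr(n, K, i, j):
    """<s_i s_j> for an open 1D Ising chain of n spins, by exact enumeration."""
    num = den = 0.0
    for spins in product((-1, 1), repeat=n):
        w = math.exp(K * sum(a * b for a, b in zip(spins, spins[1:])))
        num += spins[i] * spins[j] * w
        den += w
    return num / den

K = 0.7
K_matched = math.atanh(math.tanh(K) ** 2)   # exact decimation relation

# Distance-2 correlation on the fine chain matches the nearest-neighbor
# correlation on the coarse chain at the matched coupling.
fine = corr(7, K, 0, 2)
coarse = corr(4, K_matched, 0, 1)
assert abs(fine - coarse) < 1e-9
```

In the paper the same logic runs in reverse and at scale: one expensive MC determination anchors the coupling at one N_t, and the cheap decimation relation then transports it to other N_t values.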
Near-Linear Time Insertion-Deletion Codes and (1+ε)-Approximating Edit Distance via Indexing
We introduce fast-decodable indexing schemes for edit distance which can be
used to speed up edit distance computations to near-linear time if one of the
strings is indexed by an indexing string I. In particular, for every length n
and every ε > 0, one can in near-linear time construct a string I of length n
such that indexing any string S, symbol-by-symbol, with I results in a string
S' for which edit distance computations are easy, i.e., one can compute a
(1+ε)-approximation of the edit distance between S' and any other string in
near-linear time.
Our indexing schemes can be used to improve the decoding complexity of
state-of-the-art error correcting codes for insertions and deletions. In
particular, they lead to near-linear time decoding algorithms for the
insertion-deletion codes of [Haeupler, Shahrasbi; STOC '17] and faster decoding
algorithms for list-decodable insertion-deletion codes of [Haeupler, Shahrasbi,
Sudan; ICALP '18]. Interestingly, the latter codes are a crucial ingredient in
the construction of fast-decodable indexing schemes.
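The indexing operation itself is simply symbol-by-symbol pairing of S with I; the paper's contribution is constructing an I under which the paired string admits fast approximate edit distance computation. A minimal sketch of the pairing, with a placeholder index that is emphatically not the paper's construction:

```python
# Sketch of the "indexing" operation: pair each symbol of S with the
# corresponding symbol of an index string I over a helper alphabet, so
# the indexed string S' lives over the product alphabet. Constructing
# an I that makes (1+eps)-approximate edit distance computable in
# near-linear time is the paper's contribution and is not reproduced
# here; the index below is a placeholder.

def index_string(s: str, i: str):
    """Symbol-by-symbol indexing: S'[j] = (S[j], I[j])."""
    assert len(s) == len(i), "index must have the same length as the string"
    return list(zip(s, i))

s = "banana"
i_placeholder = "012012"   # NOT the paper's construction, just a stand-in
s_prime = index_string(s, i_placeholder)
assert s_prime[0] == ("b", "0") and len(s_prime) == len(s)
```

Note that indexing only enlarges the alphabet, not the length, which is why it composes cleanly with the insertion-deletion codes mentioned above.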
Finding approximate palindromes in strings
We introduce a novel definition of approximate palindromes in strings, and
provide an algorithm to find all maximal approximate palindromes in a string
with up to k errors. Our definition is based on the usual edit operations of
approximate pattern matching, and the algorithm we give, for a string of size n
on a fixed alphabet, runs in O(k^2 n) time. We also discuss two
implementation-related improvements to the algorithm, and demonstrate their
efficacy in practice by means of both experiments and an average-case analysis.
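To make the object being searched for concrete, here is a brute-force sketch under one natural edit-based formalization, which may differ from the authors' precise definition: call a substring a k-approximate palindrome if its edit distance to its own reversal is at most k, and report only maximal (non-contained) occurrences. The minimum length cutoff is an illustrative choice.

```python
# Brute-force sketch under ONE possible edit-based formalization (not
# necessarily the paper's exact definition): a substring is a
# k-approximate palindrome if its edit distance to its reversal is at
# most k. The paper's algorithm is far faster than this all-substrings
# enumeration; the sketch only pins down the objects being found.

def edit_distance(a: str, b: str) -> int:
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def approx_palindromes(s: str, k: int, min_len: int = 4):
    """All maximal (start, end) spans within edit distance k of their reverse."""
    hits = [(i, j)
            for i in range(len(s))
            for j in range(i + min_len, len(s) + 1)
            if edit_distance(s[i:j], s[i:j][::-1]) <= k]
    # keep only maximal spans, i.e. spans not contained in another hit
    return [(i, j) for (i, j) in hits
            if not any(a <= i and j <= b and (a, b) != (i, j) for a, b in hits)]

assert approx_palindromes("qtabcbaxy", 0) == [(2, 7)]  # the exact palindrome "abcba"
```

With k > 0 the same routine also reports substrings that become palindromes after a few insertions, deletions, or substitutions, which is the regime the paper's faster algorithm targets.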
- …