Enron versus EUSES: A Comparison of Two Spreadsheet Corpora
Spreadsheets are widely used within companies and often form the basis for
business decisions. Numerous cases are known where incorrect information in
spreadsheets has led to incorrect decisions. Such cases underline the
relevance of research on the professional use of spreadsheets.
Recently, a new dataset became available for research, containing over 15,000
business spreadsheets extracted from the Enron Email Archive. With this
dataset, we 1) aim to obtain a thorough understanding of the characteristics of
spreadsheets used within companies, and 2) compare the characteristics of the
Enron spreadsheets with those of the EUSES corpus, the existing
state-of-the-art set of spreadsheets that is frequently used in spreadsheet
studies.
Our analysis shows that 1) the majority of spreadsheets are not large in
terms of worksheets and formulas, do not have a high degree of coupling, and
their formulas are relatively simple; 2) the spreadsheets from the EUSES corpus
are, with respect to the measured characteristics, quite similar to the Enron
spreadsheets.
Comment: In Proceedings of the 2nd Workshop on Software Engineering Methods in Spreadsheets
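As a rough illustration of the kind of corpus metrics the paper compares, the sketch below counts worksheets and formula cells per file. It assumes openpyxl and a folder of .xlsx files (the folder name is hypothetical, and openpyxl reads only .xlsx, so older .xls files would need conversion first); it is not the authors' analysis pipeline.

    # Per-file spreadsheet metrics: worksheet count and formula-cell count.
    from pathlib import Path
    from openpyxl import load_workbook

    def spreadsheet_metrics(path):
        wb = load_workbook(path, read_only=True, data_only=False)
        n_sheets = len(wb.worksheets)
        n_formulas = sum(
            1
            for ws in wb.worksheets
            for row in ws.iter_rows()
            for cell in row
            if cell.data_type == "f"   # 'f' marks a formula cell
        )
        return n_sheets, n_formulas

    for f in Path("enron_spreadsheets").glob("*.xlsx"):   # hypothetical folder
        print(f.name, *spreadsheet_metrics(f))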
Combining Spreadsheet Smells for Improved Fault Prediction
Spreadsheets are commonly used in organizations as a programming tool for
business-related calculations and decision making. Since faults in spreadsheets
can have severe business impacts, a number of approaches from general software
engineering have been applied to spreadsheets in recent years, among them the
concept of code smells. Smells can, in particular, be used for the task of
fault prediction. An analysis of existing spreadsheet smells, however, revealed
that the predictive power of individual smells can be limited. In this work we
therefore propose a machine-learning-based approach which combines the
predictions of individual smells using an AdaBoost ensemble classifier.
Experiments on two public datasets containing real-world spreadsheet faults
show significant improvements in terms of fault prediction accuracy.
Comment: 4 pages, 1 figure, to be published in the 40th International Conference on Software Engineering: New Ideas and Emerging Results Track
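A minimal sketch of the combination idea, assuming scikit-learn: each spreadsheet cell is described by a vector of individual smell scores, and an AdaBoost ensemble combines them into a single fault prediction. The feature matrix and labels below are synthetic placeholders, not the paper's datasets.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((500, 5))                    # one column per smell metric
    y = (X[:, 0] + X[:, 3] > 1.2).astype(int)   # placeholder fault labels

    # Boosting weights the individual weak (smell-based) predictors so that
    # their combination outperforms any single smell.
    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, y, cv=5, scoring="f1").mean())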
Identifying idiolect in forensic authorship attribution: an n-gram textbite approach
Forensic authorship attribution is concerned with identifying the authors of disputed or anonymous documents, which are potentially evidential in legal cases, through the analysis of linguistic clues left behind by writers. The forensic linguist “approaches this problem of questioned authorship from the theoretical position that every native speaker has their own distinct and individual version of the language [...], their own idiolect” (Coulthard, 2004: 31). However, given the difficulty of empirically substantiating a theory of idiolect, there is growing concern in the field that it remains too abstract to be of practical use (Kredens, 2002; Grant, 2010; Turell, 2010). Stylistic, corpus, and computational approaches to text, however, are able to identify repeated collocational patterns, or n-grams: two- to six-word chunks of language, similar to the popular notion of soundbites, the small segments of no more than a few seconds of speech that journalists recognise as having news value and which characterise the important moments of talk. The soundbite offers an intriguing parallel for authorship attribution studies, and the following question arises: looking at any set of texts by any author, is it possible to identify ‘n-gram textbites’, small textual segments that characterise that author’s writing, providing DNA-like chunks of identifying material?
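A minimal sketch of the textbite idea: gather all two- to six-word n-grams from a candidate author's texts and keep the recurring chunks that never occur in a background set. The toy texts are illustrative assumptions, not the study's data.

    from collections import Counter

    def word_ngrams(text, n_min=2, n_max=6):
        words = text.lower().split()
        for n in range(n_min, n_max + 1):
            for i in range(len(words) - n + 1):
                yield " ".join(words[i:i + n])

    author_texts = [
        "i will revert back to you as soon as possible",
        "please revert back to you with the signed form",
    ]
    background_texts = ["i will get back to you as soon as possible"]

    author_grams = Counter(g for t in author_texts for g in word_ngrams(t))
    background = {g for t in background_texts for g in word_ngrams(t)}
    textbites = [g for g, c in author_grams.items()
                 if c > 1 and g not in background]
    print(textbites)   # e.g. 'revert back to you' as a candidate textbite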
Memory-Efficient Topic Modeling
As one of the simplest probabilistic topic modeling techniques, latent
Dirichlet allocation (LDA) has found many important applications in text
mining, computer vision and computational biology. Recent training algorithms
for LDA can be interpreted within a unified message passing framework. However,
message passing requires storing previous messages, and the required memory
grows linearly with the number of documents and the number of topics. High
memory usage is therefore often a major obstacle to topic modeling of massive
corpora with many topics. To reduce the space complexity, we propose a novel
algorithm for training LDA that does not store previous messages: tiny belief
propagation (TBP). The basic idea of TBP is to relate message passing
algorithms to non-negative matrix factorization (NMF) algorithms, absorbing
the message update into the message passing process and thus avoiding the
storage of previous messages. Experimental results on four large data sets
confirm that TBP performs comparably to, or even better than, current
state-of-the-art training algorithms for LDA, with much lower memory
consumption. TBP makes topic modeling possible when massive corpora cannot fit
in computer memory, for example extracting thematic topics from a 7 GB PUBMED
corpus on a common desktop computer with 2 GB of memory.
Comment: 20 pages, 7 figures
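The NMF connection can be illustrated with an off-the-shelf factorization of the document-term count matrix, which likewise needs no per-token message storage. The sketch below uses scikit-learn's NMF as a stand-in; it is not the authors' TBP implementation, and the documents are toy examples.

    from sklearn.decomposition import NMF
    from sklearn.feature_extraction.text import CountVectorizer

    docs = [
        "gene expression in cancer cells",
        "protein binding and gene regulation",
        "image segmentation with neural networks",
        "object detection in natural images",
    ]
    X = CountVectorizer().fit_transform(docs)   # sparse document-term counts
    nmf = NMF(n_components=2, init="nndsvd")
    W = nmf.fit_transform(X)                    # document-topic weights
    H = nmf.components_                         # topic-word weights
    print(W.round(2))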
Sparse Partially Collapsed MCMC for Parallel Inference in Topic Models
Topic models, and more specifically the class of Latent Dirichlet Allocation
(LDA), are widely used for probabilistic modeling of text. MCMC sampling from
the posterior distribution is typically performed using a collapsed Gibbs
sampler. We propose a parallel sparse partially collapsed Gibbs sampler and
compare its speed and efficiency to state-of-the-art samplers for topic models
on five well-known text corpora of differing sizes and properties. In
particular, we propose and compare two different strategies for sampling the
parameter block with latent topic indicators. The experiments show that the
increase in statistical inefficiency from only partial collapsing is smaller
than commonly assumed, and can be more than compensated by the speedup from
parallelization and sparsity on larger corpora. We also prove that the
partially collapsed samplers scale well with the size of the corpus. The
proposed algorithm is fast, efficient, exact, and can be used in more modeling
situations than the ordinary collapsed sampler.
Comment: Accepted for publication in the Journal of Computational and Graphical Statistics
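For context, a minimal sketch of the baseline the paper improves on: a fully collapsed Gibbs sampler for LDA, in which the topic and word distributions are integrated out and only the topic indicators are sampled. The partially collapsed, sparse, parallel samplers the paper proposes build on this update; the toy corpus and hyperparameters below are assumptions.

    import numpy as np

    def collapsed_gibbs_lda(docs, V, K, iters=200, alpha=0.1, beta=0.01):
        rng = np.random.default_rng(0)
        z = [rng.integers(K, size=len(d)) for d in docs]   # topic indicators
        ndk = np.zeros((len(docs), K))   # document-topic counts
        nkw = np.zeros((K, V))           # topic-word counts
        nk = np.zeros(K)                 # tokens per topic
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] += 1
                nkw[k, w] += 1
                nk[k] += 1
        for _ in range(iters):
            for d, doc in enumerate(docs):
                for i, w in enumerate(doc):
                    k = z[d][i]          # remove the current assignment
                    ndk[d, k] -= 1
                    nkw[k, w] -= 1
                    nk[k] -= 1
                    # collapsed conditional p(z = k | everything else)
                    p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                    k = rng.choice(K, p=p / p.sum())
                    z[d][i] = k
                    ndk[d, k] += 1
                    nkw[k, w] += 1
                    nk[k] += 1
        return ndk, nkw

    docs = [[0, 1, 2, 1], [2, 3, 3, 0], [4, 5, 4, 5]]   # toy word-id corpus
    print(collapsed_gibbs_lda(docs, V=6, K=2)[1].round(1))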