Content-Based Book Recommending Using Learning for Text Categorization
Recommender systems improve access to relevant products and information by
making personalized suggestions based on previous examples of a user's likes
and dislikes. Most existing recommender systems use social filtering methods
that base recommendations on other users' preferences. By contrast,
content-based methods use information about an item itself to make suggestions.
This approach has the advantage of being able to recommend previously unrated
items to users with unique interests and to provide explanations for its
recommendations. We describe a content-based book recommending system that
utilizes information extraction and a machine-learning algorithm for text
categorization. Initial experimental results demonstrate that this approach can
produce accurate recommendations.
Comment: 8 pages, 3 figures. Submission to the Fourth ACM Conference on Digital Libraries
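The categorization step described above can be sketched with a minimal naive Bayes text classifier. The dataset and word features below are illustrative only; the actual system derives its features via information extraction from semi-structured book descriptions.

```python
# A minimal sketch of content-based recommending as text categorization:
# train a naive Bayes model on a user's rated book descriptions, then
# predict a label for an unrated book. Data here is hypothetical.
from collections import Counter, defaultdict
import math

def train(examples):
    """Train naive Bayes on (text, label) pairs."""
    counts = defaultdict(Counter)   # label -> word counts
    labels = Counter()              # label -> document count
    for text, label in examples:
        labels[label] += 1
        counts[label].update(text.lower().split())
    return counts, labels

def score(text, label, counts, labels):
    """Log-probability of a label given the text (add-one smoothing)."""
    vocab = {w for c in counts.values() for w in c}
    total = sum(counts[label].values())
    logp = math.log(labels[label] / sum(labels.values()))
    for w in text.lower().split():
        logp += math.log((counts[label][w] + 1) / (total + len(vocab)))
    return logp

def recommend(text, counts, labels):
    """Predict whether the user would like an unrated book."""
    return max(labels, key=lambda l: score(text, l, counts, labels))

books = [("wizard school adventure magic", "liked"),
         ("courtroom legal thriller drama", "disliked"),
         ("dragon quest magic kingdom", "liked")]
model = train(books)
print(recommend("magic dragon adventure", *model))
```

Because the model scores an item from its own description rather than from other users' ratings, it can rate books no one else has seen, which is the advantage the abstract highlights.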
Crime Analysis Using Self Learning
An unsupervised algorithm for event extraction is proposed. It takes a small number of seed examples and a corpus of text documents as inputs. We are interested in finding relationships that may span the entire length of a document; the goal is to extract relations among mentions that lie across sentences. These mention relations can be binary, ternary, or even quaternary. In this paper, our algorithm concentrates on picking out a specific binary relation in a tagged data set. We use coreference resolution to solve the problem of relation extraction: earlier approaches co-refer identity relations, whereas our approach co-refers independent mention pairs based on feature rules. We propose an approach to coreference resolution that uses the EM (Expectation Maximization) algorithm to train on the data and correlate entities across sentences.
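The EM training the abstract refers to can be sketched as a mention-pair mixture model: a latent variable says whether a pair corefers, the E-step computes its posterior from binary feature likelihoods, and the M-step re-estimates the feature probabilities. The features and data below are illustrative, not the paper's actual feature rules.

```python
# Schematic EM for mention-pair coreference. Each pair is a tuple of
# binary features (e.g. same-string, compatible-type); z = 1 means the
# pair corefers. All values here are toy assumptions.
def em(pairs, n_iter=20):
    k = len(pairs[0])
    prior = 0.5                          # P(z = 1)
    theta = [[0.2] * k, [0.8] * k]       # P(feature j = 1 | z = c)
    post = [0.5] * len(pairs)
    for _ in range(n_iter):
        # E-step: posterior that each pair corefers
        post = []
        for f in pairs:
            like = []
            for c in (0, 1):
                p = 1.0
                for j, v in enumerate(f):
                    p *= theta[c][j] if v else 1 - theta[c][j]
                like.append(p)
            num = prior * like[1]
            post.append(num / (num + (1 - prior) * like[0]))
        # M-step: re-estimate the prior and feature probabilities
        prior = sum(post) / len(post)
        for c, w in ((1, post), (0, [1 - p for p in post])):
            tot = sum(w)
            for j in range(k):
                theta[c][j] = sum(wi for wi, f in zip(w, pairs)
                                  if f[j]) / tot
    return prior, post

pairs = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 1)]
prior, post = em(pairs)
print([round(p, 2) for p in post])
```

Pairs whose features fire together end up with a high coreference posterior, which is the unsupervised signal used in place of labeled identity chains.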
Methods for the de-identification of electronic health records for genomic research
Electronic health records are increasingly being linked to DNA repositories and used as a source of clinical information for genomic research. Privacy legislation in many jurisdictions, and most research ethics boards, require either that personal health information be de-identified or that patient consent or authorization be sought before the data are disclosed for secondary purposes. Here, I discuss how de-identification has been applied in current genomic research projects. Recent metrics and methods that can be used to ensure that the risk of re-identification is low and that disclosures are compliant with privacy legislation and regulations (such as the Health Insurance Portability and Accountability Act Privacy Rule) are reviewed. Although these methods can protect against the known approaches for re-identification, residual risks and specific challenges for genomic research are also discussed.
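A minimal rule-based sketch of the de-identification step: production pipelines combine dictionaries, machine-learned taggers, and re-identification risk metrics, but the core substitution of direct identifiers can be illustrated with a few regular expressions. The patterns and sample note below are toy assumptions covering only a handful of HIPAA-style identifiers.

```python
import re

# Toy rule-based de-identification: replace a few direct identifiers
# (SSN, phone, date, email) with category tags. Real systems cover all
# 18 HIPAA identifiers plus free-text names, which regexes alone miss.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def deidentify(text):
    for pattern, tag in PATTERNS:
        text = pattern.sub(tag, text)
    return text

note = "Seen on 3/14/2019; call 555-123-4567; SSN 123-45-6789."
print(deidentify(note))
```

Removing direct identifiers like these is only the first step; the metrics the abstract mentions then quantify the residual risk that quasi-identifiers (dates, ZIP codes, rare diagnoses) still single a patient out.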
Syntactic Topic Models
The syntactic topic model (STM) is a Bayesian nonparametric model of language
that discovers latent distributions of words (topics) that are both
semantically and syntactically coherent. The STM models dependency parsed
corpora where sentences are grouped into documents. It assumes that each word
is drawn from a latent topic chosen by combining document-level features and
the local syntactic context. Each document has a distribution over latent
topics, as in topic models, which provides the semantic consistency. Each
element in the dependency parse tree also has a distribution over the topics of
its children, as in latent-state syntax models, which provides the syntactic
consistency. These distributions are convolved so that the topic of each word
is likely under both its document and syntactic context. We derive a fast
posterior inference algorithm based on variational methods. We report
qualitative and quantitative studies on both synthetic data and hand-parsed
documents. We show that the STM is a more predictive model of language than
current models based only on syntax or only on topics.
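The convolution of the two distributions can be written schematically as follows; the notation is mine, not necessarily the paper's.

```latex
% \theta_d: document d's distribution over topics (semantic consistency);
% \pi_{z_{p(n)}}: the topic-transition distribution of word n's parent
% p(n) in the dependency parse (syntactic consistency).
p(z_n \mid d, \text{parse}) \;\propto\; \theta_d(z_n)\,\pi_{z_{p(n)}}(z_n)
```

A word's topic must therefore be probable under both its document and the topic of its syntactic parent, which is how the model enforces the joint semantic and syntactic coherence described above.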
Embedding Web-based Statistical Translation Models in Cross-Language Information Retrieval
Although more and more language pairs are covered by machine translation
services, there are still many pairs that lack translation resources.
Cross-language information retrieval (CLIR) is an application which needs
translation functionality of a relatively low level of sophistication since
current models for information retrieval (IR) are still based on a
bag-of-words. The Web provides a vast resource for the automatic construction
of parallel corpora which can be used to train statistical translation models
automatically. The resulting translation models can be embedded in several ways
in a retrieval model. In this paper, we will investigate the problem of
automatically mining parallel texts from the Web and different ways of
integrating the translation models within the retrieval process. Our
experiments on standard test collections for CLIR show that the Web-based
translation models can surpass commercial MT systems in CLIR tasks. These
results open the perspective of constructing a fully automatic query
translation device for CLIR at a very low cost.
Comment: 37 pages
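One simple way to embed a statistical translation model in retrieval is to expand each source-language query term into its most probable target-language translations, weighted by translation probability. The probability table below is a toy assumption; real tables are trained on the Web-mined parallel corpora the abstract describes.

```python
# Toy query translation with a statistical translation table.
# TRANS maps a source term to P(target | source); real tables come from
# IBM-style translation models trained on parallel corpora.
TRANS = {
    "cheval": {"horse": 0.7, "stallion": 0.2, "pony": 0.1},
    "course": {"race": 0.6, "course": 0.3, "running": 0.1},
}

def translate_query(terms, top_k=2):
    """Expand each source term into its top-k weighted translations."""
    weighted = []
    for t in terms:
        options = sorted(TRANS.get(t, {t: 1.0}).items(),
                         key=lambda kv: -kv[1])[:top_k]
        weighted.extend(options)
    return weighted

print(translate_query(["cheval", "course"]))
```

The weights can then feed directly into a bag-of-words retrieval score, which is why CLIR tolerates this relatively unsophisticated level of translation.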
Generating natural language specifications from UML class diagrams
Early phases of software development are known to be problematic and difficult to manage, and errors occurring during these phases are expensive to correct. Many systems have been developed to aid the transition from informal Natural Language requirements to semi-structured or formal specifications. Furthermore, consistency checking is seen by many software engineers as the solution to reduce the number of errors occurring during the software development life cycle and to allow early verification and validation of software systems. However, this is confined to the models developed during analysis and design and fails to include the early Natural Language requirements. This excludes proper user involvement and creates a gap between the original requirements and the updated and modified models and implementations of the system. To improve this process, we propose a system that generates Natural Language specifications from UML class diagrams. We first investigate the variation of the input language used in naming the components of a class diagram, based on the study of a large number of examples from the literature, and then develop rules for removing ambiguities in the subset of Natural Language used within UML. We use WordNet, a linguistic ontology, to disambiguate the lexical structures of the UML string names and generate semantically sound sentences. Our system is developed in Java and is tested on an independent, though academic, case study.
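The generation step can be sketched as template-based verbalization of class-diagram associations. The diagram tuples and templates below are hypothetical; the described system additionally disambiguates the UML string names with WordNet before generating, and is implemented in Java rather than Python.

```python
# Toy verbalization of UML class-diagram associations as Natural
# Language sentences. Each association is a (source class, relation
# label, multiplicity, target class) tuple; all data is illustrative.
def verbalize(assoc):
    src, label, mult, dst = assoc
    if mult == "1..*":
        return f"Each {src} {label} one or more {dst}s."
    return f"Each {src} {label} exactly one {dst}."

diagram = [("Customer", "places", "1..*", "Order"),
           ("Order", "is shipped to", "1", "Address")]
for a in diagram:
    print(verbalize(a))
```

Generating such sentences lets stakeholders who cannot read UML check the model against the original requirements, which is the gap the abstract aims to close.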