Extending, trimming and fusing WordNet for technical documents
This paper describes a tool for the automatic extension and trimming of a multilingual WordNet database for cross-lingual retrieval and multilingual ontology building in intranets and domain-specific document collections. Hierarchies, built from automatically extracted terms and combined with the WordNet relations, are trimmed with a disambiguation method based on the document salience of the words in the glosses. The disambiguation is tested in a cross-lingual retrieval task, showing considerable improvement (7%-11%). The condensed hierarchies can be used as browse interfaces to the documents, complementary to retrieval.
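The salience-based trimming described above can be illustrated with a minimal sketch: score each candidate sense by how frequent its gloss words are in the document collection, and keep the highest-scoring one. The function and data names below are hypothetical, and raw frequency stands in for whatever salience measure the tool actually uses.

```python
from collections import Counter

def disambiguate(term_senses, document_tokens):
    """Pick the sense whose gloss words are most salient in the collection.

    term_senses: hypothetical mapping of sense id -> list of gloss words
    document_tokens: all tokens from the document collection
    """
    # Raw corpus frequency as a simple stand-in for document salience.
    salience = Counter(document_tokens)
    return max(term_senses,
               key=lambda s: sum(salience[w] for w in term_senses[s]))

# Toy example: a finance-heavy collection should select the money sense.
senses = {
    "bank#1": ["financial", "institution", "money"],
    "bank#2": ["river", "slope", "water"],
}
docs = ["money", "transfer", "financial", "report", "money"]
print(disambiguate(senses, docs))  # bank#1
```

In a domain-specific collection, gloss words of the wrong senses rarely occur, so their scores stay low and those branches can be trimmed from the hierarchy.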
Cost Accounting and Pricing Improvement at Helmond Print: Using Xeikon Digital Colour Printing Equipment: A Case study
Helmond Print B.V., a (fictional) Dutch print provider, is facing competitive problems. The student is expected to step into the role of an independent expert advising Helmond Print's owner and manager. The first objective is to let the student discover, from qualitative and quantitative information about Xeikon N.V. machines, that the cost structure is very different from what is currently assumed. The student should try to improve the cost calculations, which will require both linear and non-linear regression analysis. A second objective is to make the student realize that the incorrect cost calculations affected Helmond Print's pricing policy and may have led to the competitive problems faced now. The student should therefore link their investigations to pricing, search for the weak spots in the current pricing policy, and make suggestions for improvement. (Note: a solution to the case can be obtained on simple request.)
Towards a universal index of meaning
The Inter-Lingual-Index (ILI) in the EuroWordNet architecture is an initially unstructured fund of concepts which functions as the link between the various language wordnets. The ILI concepts originate from WordNet 1.5 and have been restructured on the basis of aspects of the internal structure of WordNet, links between WordNet and other resources, and multilingual mappings between the wordnets. This leads to a differentiation of the status of ILI concepts, a reduction of the WordNet polysemy, and a greater connectivity between the wordnets. The restructured ILI represents the first step towards a standardized set of word meanings, is a working platform for further development and testing, and can be put to use in NLP tasks such as (multilingual) information retrieval.
Topologically non-trivial quantum layers
Given a complete non-compact surface embedded in R^3, we consider the Dirichlet Laplacian in a layer of constant width about the surface. Using an intrinsic approach to the layer geometry, we generalise the spectral results of an original paper by Duclos et al. to the situation when the surface does not possess poles. This enables us to consider topologically more complicated layers and state new spectral results. In particular, we are interested in layers built over surfaces with handles or several cylindrically symmetric ends. We also discuss more general regions obtained by compact deformations of certain layers. (15 pages, 6 figures)
Automatic sense clustering in EuroWordNet
This paper addresses ways in which we envisage reducing the fine-grainedness of WordNet and expressing more systematically the relations between its numerous sense distinctions. In the EuroWordNet project, we have distinguished various automatic methods for grouping senses into more coarse-grained sense groups. The resulting clusters reflect aspects of lexical organization, displaying a variety of semantic regularities or generalizations. In this way, the compatibility of the language-specific wordnets in the EuroWordNet multilingual knowledge base is increased.
Improving Graph-to-Text Generation Using Cycle Training
Natural Language Generation (NLG) from graph-structured data is an important step for a number of tasks, including generating explanations, automated reporting, and conversational interfaces. Large generative language models are currently the state of the art for open-ended NLG from graph data. However, these models can produce erroneous text (termed hallucinations). In this paper, we investigate the application of cycle training in order to reduce these errors. Cycle training involves alternating the generation of text from an input graph with the extraction of a knowledge graph, where the model should ensure consistency between the extracted graph and the input graph. Our results show that cycle training improves performance on evaluation metrics (e.g., METEOR, DAE) that consider syntactic and semantic relations, and more generally that cycle training is useful for reducing erroneous output when generating text from graphs.