10,528 research outputs found
Indonesian Language Term Extraction using Multi-Task Neural Network
The rapidly expanding volume of data makes it difficult to extract information and store it as computerized knowledge. Relation extraction and term extraction play a crucial role in resolving this issue. Automatically discovering hidden relationships between terms that appear in a text can help people build computer-based knowledge more quickly. Term extraction is a required component, because identifying the terms that play a significant role in the text is an essential step before determining their relationships. To address this problem for the Indonesian language, we propose an end-to-end system capable of extracting terms from text. Our method combines two multilayer perceptron neural networks that perform Part-of-Speech (PoS) tagging and Noun Phrase Chunking, trained jointly as a single model. With an F-score of 86.80%, our proposed method can be considered a state-of-the-art algorithm for term extraction in the Indonesian language using noun phrase chunking.
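The joint setup described in the abstract (a shared representation feeding two task heads, one for PoS tagging and one for noun-phrase chunking) can be sketched roughly as follows. This is a minimal illustrative forward pass in NumPy, not the authors' implementation; all layer sizes, weights, and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, chosen only for illustration (not from the paper).
EMB_DIM, HIDDEN, N_POS, N_CHUNK = 8, 16, 5, 3

# A shared hidden layer and two task-specific output heads.
W_shared = rng.normal(size=(EMB_DIM, HIDDEN))
W_pos = rng.normal(size=(HIDDEN, N_POS))
W_chunk = rng.normal(size=(HIDDEN, N_CHUNK))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(token_embeddings):
    """Return per-token PoS and chunk-tag distributions from a shared layer."""
    h = np.tanh(token_embeddings @ W_shared)   # representation shared by both tasks
    return softmax(h @ W_pos), softmax(h @ W_chunk)

tokens = rng.normal(size=(4, EMB_DIM))         # a 4-token "sentence"
pos_probs, chunk_probs = forward(tokens)
print(pos_probs.shape, chunk_probs.shape)      # (4, 5) (4, 3)
```

In a multi-task arrangement like this, a single training loss would combine both heads' objectives so the shared layer learns features useful for both tagging tasks.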
New Perspectives in Sinographic Language Processing Through the Use of Character Structure
Chinese characters have a complex and hierarchical graphical structure
carrying both semantic and phonetic information. We use this structure to
enhance the text model and obtain better results in standard NLP operations.
First of all, to tackle the problem of graphical variation we define
allographic classes of characters. Next, the relation of inclusion of a
subcharacter in a character provides us with a directed graph of allographic
classes. We provide this graph with two weights: semanticity (semantic relation
between subcharacter and character) and phoneticity (phonetic relation) and
calculate "most semantic subcharacter paths" for each character. Finally,
by adding the information contained in these paths to unigrams, we claim to
increase the efficiency of text mining methods. We evaluate our method on a
text classification task on two corpora (Chinese and Japanese) of a total of 18
million characters and get an improvement of 3% on an already high baseline of
89.6% precision, obtained by a linear SVM classifier. Other possible
applications and perspectives of the system are discussed.

Comment: 17 pages, 5 figures, presented at CICLing 201
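The "most semantic subcharacter path" idea can be sketched as a maximum-weight path search over a directed graph whose edges are subcharacter-inclusion relations weighted by semanticity. The characters, edge weights, and function names below are purely illustrative assumptions, not the paper's actual data or algorithm.

```python
# Hypothetical inclusion graph: character -> list of (subcharacter, semanticity).
# Weights here are made up for illustration.
graph = {
    "好": [("女", 0.9), ("子", 0.8)],
    "女": [],
    "子": [],
    "妈": [("女", 0.9), ("马", 0.2)],
    "马": [],
}

def most_semantic_path(char):
    """Recursively find the path from char to a leaf maximizing total semanticity."""
    best_path, best_score = [char], 0.0
    for sub, sem in graph.get(char, []):
        sub_path, sub_score = most_semantic_path(sub)
        if sem + sub_score > best_score:
            best_path, best_score = [char] + sub_path, sem + sub_score
    return best_path, best_score

path, score = most_semantic_path("好")
print(path, score)  # ['好', '女'] 0.9
```

The features extracted from such paths could then be appended to character unigrams before training a classifier such as the linear SVM mentioned in the abstract.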