3 research outputs found

    Ontology Based Machine Translation for Bengali as Low-resource Language

    In this research we propose ontology-based Machine Translation with the help of WordNet and UNL Ontology. Example-Based Machine Translation (EBMT) for a low-resource language like Bengali has low-coverage issues: due to the lack of a parallel corpus, it has a high probability of encountering unknown words. We have implemented an EBMT system for a low-resource language pair. The EBMT architecture uses chunk-string templates (CSTs) and an unknown-word translation mechanism. A CST consists of a chunk in the source language, a string in the target language, and word-alignment information. CSTs are prepared automatically from an aligned parallel corpus and WordNet using an English chunker. For unknown-word translation, we used the WordNet hypernym tree and an English-Bengali dictionary. The proposed system first tries to find semantically related English words in WordNet for the unknown word. From these related words, we choose the semantically closest related word whose Bangla translation exists in the English-Bangla dictionary. If no Bangla translation exists, the system uses IPA-based transliteration. For proper nouns, the system uses the Akkhor transliteration mechanism. CSTs improved the wide coverage by 57 points and quality by 48.81 points in human evaluation. Currently 64.29% of the test-set translations produced by the system are acceptable. The combined solution of CSTs and unknown-word handling generated 67.85% acceptable translations from the test set. The unknown-word mechanism improved translation quality by 3.56 points in human evaluation. This research also proposed a way to auto-generate an explanation of each concept using the semantic backgrounds provided by UNL Ontology. 
These explanations are useful for improving the translation quality of unknown words. In more detail, source-language chunks are first generated automatically with the OpenNLP chunker. Initial CSTs are then created for each source-language chunk, and CST alignments for every target sentence are generated from the parallel corpus. The system then produces CST combinations using the word-alignment information, and finally generalizes the CSTs with WordNet to obtain wide coverage. For the auto-generated explanations, the input to the system is a single Universal Word (UW) and the output is an explanation of that UW in English or Japanese. Given a UW, the system first finds its SemanticWordMap, which contains all direct and indirect relations of that particular UW from UNL Ontology; the input of this step is a UW and the output is a WordMap graph. In the next step, conversion rules transform the WordMap graph into UNL; these rules can be specified as "From UWs only" or "From UNL Ontology" according to the user's request, so the input of this step is a WordMap graph and the output is a UNL expression. In the final step, the UNL DeConverter converts the UNL expression into a description in natural language. (University of Electro-Communications)
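The unknown-word fallback described in the abstract — climb the hypernym tree until a related word with a dictionary translation is found, otherwise transliterate — can be sketched as follows. This is a minimal illustration: the tiny hypernym table, the dictionary entries, and the `transliterate` placeholder are toy stand-ins for WordNet, the English-Bangla dictionary, and the IPA/Akkhor transliteration used by the actual system.

```python
HYPERNYMS = {          # toy fragment of a WordNet-style hypernym tree
    "stallion": "horse",
    "horse": "equine",
    "equine": "animal",
}

EN_BN_DICT = {         # toy English-Bangla dictionary
    "horse": "ঘোড়া",
    "animal": "প্রাণী",
}

def transliterate(word: str) -> str:
    """Placeholder for the IPA-based / Akkhor transliteration step."""
    return f"<translit:{word}>"

def translate_unknown(word: str) -> str:
    """Walk up the hypernym tree and return the Bangla translation of
    the closest related word that has a dictionary entry; fall back to
    transliteration if no such word exists."""
    current = word
    while current is not None:
        if current in EN_BN_DICT:
            return EN_BN_DICT[current]
        current = HYPERNYMS.get(current)   # move one level up the tree
    return transliterate(word)

print(translate_unknown("stallion"))   # closest translatable hypernym: "horse"
print(translate_unknown("quark"))      # no hypernym path, so transliterate
```

In the real system the hypernym walk is over WordNet synsets rather than single words, but the control flow — dictionary lookup first, semantic generalization second, transliteration last — is the same.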

    Robots in Nursing - False Rhetoric or Future Reality?: How might robots contribute to hospital nursing in the future? A qualitative study of the perspectives of roboticists and nurses

    Introduction: The challenge of the global nursing shortage, coupled with rising healthcare demand, prompts consideration of technology as a potential solution. Technology in the form of robots is being developed for healthcare applications, but the potential role in nursing has not been researched in the UK. Methods: A three-phase qualitative study was undertaken: interviews with five robotic developers (Phase 1); nine focus groups/interviews with 25 hospital Registered Nurses (RNs) (Phase 2); and four focus groups with 12 nurse leaders (Phase 3). Data were analysed using framework analysis for Phase 1, and reflexive thematic analysis based on the Fundamentals of Care framework for Phase 2 and 3 data. Results: Roboticist interviews confirmed that a taxonomy of potential robotic automation was a useful tool for discussing the role of robots. In Phase 2, RNs described activities that robots might undertake and commented on those they should not. RNs more readily agreed that robots could assist with physical activities than with relational activities. Six potential roles that robots might undertake in future nursing practice were identified from the data, labelled: advanced machine, social companion, responsive runner, helpful co-worker, proxy nurse bot, and feared substitute. Three cross-cutting themes were identified: a fear of the future; a negotiated reality; and a positive opportunity. In Phase 3, nurse leaders considered the RN results, and four themes were identified from their discussions: first impressions of robots in nursing; the essence of nursing; we must do something; and reframing the future. Conclusions: Robots will be a future reality in nursing, playing an assistive role. Nursing must become technically proficient and engage with the development and testing of robots. Nurse leaders must lead policy development and reframe the narrative from substitution to assistance. 
A number of navigational tools have been developed, including a taxonomy of nursing automation and the six robotic roles, which may usefully inform future debate in nursing.

    Entity-based coherence in statistical machine translation: a modelling and evaluation perspective

    Natural-language documents exhibit coherence and cohesion by means of interrelated structures both within and across sentences. Sentences do not stand in isolation from each other, and only a coherent structure makes them understandable and sound natural to humans. In Statistical Machine Translation (SMT), little research exists on translating a document from a source language into a coherent document in the target language: the dominant paradigm is still one that considers sentences independently of each other. There is a need both for a deeper understanding of how to handle specific discourse phenomena and for automatic evaluation of how well these phenomena are handled in SMT. In this thesis we explore an approach that treats sentences as dependent on each other, focussing on the problem of pronoun translation as an instance of a discourse-related non-local phenomenon. We direct our attention to pronoun translation in the form of cross-lingual pronoun prediction (CLPP) and develop a model to tackle this problem. We obtain state-of-the-art results, exhibiting the benefit of having access to the antecedent of a pronoun when predicting the right translation of that pronoun. Experiments also showed that features from the target side are more informative than features from the source side, confirming the linguistic knowledge that referential pronouns need to agree in gender and number with their target-side antecedent. We show our approach to be applicable to the two language pairs English-French and English-German. The experimental setting for CLPP is artificially restricted, both to enable automatic evaluation and to provide a controlled environment. This limitation does not yet allow us to test the full potential of CLPP systems in a more realistic setting that is closer to a full SMT scenario. We therefore provide an annotation scheme, a tool, and a corpus that enable evaluation of pronoun prediction in a more realistic setting. 
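The target-side agreement signal the abstract describes can be illustrated with a deliberately minimal sketch. This is not the thesis's CLPP model; the tiny gender lexicon and the restriction to English "it" translated into French are illustrative assumptions, showing only why knowing the target-side antecedent helps pick the right pronoun.

```python
# A referential English "it" rendered in French must agree in gender
# with its target-side antecedent. The lexicon below is a toy stand-in
# for the antecedent features a real CLPP model would learn.
FR_GENDER = {
    "maison": "f",    # house (feminine)
    "livre": "m",     # book (masculine)
}

def predict_pronoun(source_pronoun: str, target_antecedent: str) -> str:
    """Predict the French translation of English 'it' from the gender
    of its target-side antecedent, backing off to masculine 'il' when
    the antecedent is unknown."""
    if source_pronoun != "it":
        raise ValueError("this sketch only handles English 'it'")
    gender = FR_GENDER.get(target_antecedent, "m")
    return "elle" if gender == "f" else "il"

print(predict_pronoun("it", "maison"))  # feminine antecedent -> "elle"
print(predict_pronoun("it", "livre"))   # masculine antecedent -> "il"
```

The source sentence alone cannot disambiguate "il" from "elle", which is consistent with the abstract's finding that target-side features are the more informative ones.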
The annotated corpus consists of parallel documents translated by a state-of-the-art neural machine translation (NMT) system, where the appropriate target-side pronouns have been chosen by annotators. With this corpus, we exhibit a weakness of our current CLPP systems: they are outperformed by a state-of-the-art NMT system in this more realistic context. The corpus provides a basis for future CLPP shared tasks and allows the research community to further understand and test their methods. The lack of appropriate evaluation metrics that explicitly capture non-local phenomena is one of the main reasons why handling non-local phenomena has not yet been widely adopted in SMT. To overcome this obstacle and evaluate the coherence of translated documents, we define a bilingual model of entity-based coherence, inspired by work on monolingual coherence modelling, and frame it as a learning-to-rank problem. We first evaluate this model on a corpus into which we artificially introduce coherence errors based on typical errors made by CLPP systems. This allows us to assess the quality of the model in a controlled environment with automatically provided gold coherence rankings. Results show that the model can distinguish with high accuracy between a human-authored translation and one with coherence errors, that it can also distinguish between document pairs from two corpora with different degrees of coherence errors, and that the learnt model can be applied successfully even when the distribution of errors in the test set differs from that in the training data, showing its generalization potential. To test our bilingual model of coherence as a discourse-aware SMT evaluation metric, we apply it to more realistic data: we use it to evaluate a state-of-the-art NMT system against post-editing systems whose pronouns have been corrected by our CLPP systems. 
To verify our metric, we reuse our annotated parallel corpus and treat the pronoun annotations as a proxy for human document-level coherence judgements. Experiments show far lower accuracy in ranking translations according to their entity-based coherence than on the artificial corpus, suggesting that the metric has difficulty generalizing to a more realistic setting. Analysis reveals that the system translations in our test corpus do not differ in their pronoun translations in almost half of the document pairs. To circumvent this data-sparsity issue, and to remove the need for parameter learning, we define a score-based SMT evaluation metric which directly uses features from our bilingual coherence model.
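The entity-based coherence idea behind the metric can be sketched with a monolingual entity grid in the spirit of the work that inspired the bilingual model. This is a toy sketch, not the thesis's model: each grid column tracks one entity's grammatical role per sentence (S = subject, O = object, X = other, - = absent), transition probabilities are estimated from a coherent reference grid, and a document is scored by the mean log-probability of its role transitions. The grids and the smoothing constant are invented for illustration.

```python
import math
from collections import Counter

ROLES = "SOX-"  # subject, object, other, absent

def transitions(grid):
    """Role bigrams read down each entity column of the grid
    (rows = sentences, columns = entities)."""
    out = []
    for column in zip(*grid):
        out.extend(zip(column, column[1:]))
    return out

def train(grid):
    """Estimate transition counts from a coherent reference grid."""
    return Counter(transitions(grid))

def coherence(grid, counts, k=0.1):
    """Mean log-probability of the grid's transitions under the trained
    counts, with add-k smoothing over all possible role bigrams."""
    total = sum(counts.values()) + k * len(ROLES) ** 2
    logps = [math.log((counts[t] + k) / total) for t in transitions(grid)]
    return sum(logps) / len(logps)

# Toy grids: a coherent document keeps its entities in stable roles
# across sentences; a shuffled one breaks that continuity.
reference = [["S", "-"], ["S", "O"], ["S", "O"], ["S", "O"]]
coherent  = [["S", "O"], ["S", "O"]]
shuffled  = [["O", "S"], ["S", "O"]]

model = train(reference)
print(coherence(coherent, model) > coherence(shuffled, model))  # True
```

A learning-to-rank formulation, as in the thesis, would train a ranker on pairs of such scores (or richer transition features) rather than comparing raw log-probabilities, and the score-based variant described above drops the parameter learning and uses the features directly.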