Exploiting alignment techniques in MATREX: the DCU machine translation system for IWSLT 2008
In this paper, we describe the machine translation (MT) system developed at DCU that was used for our third participation in the evaluation campaign of the International Workshop on Spoken Language Translation (IWSLT 2008). In this participation, we focus on various techniques for word and phrase alignment to improve system quality. Specifically, we try out our word packing and syntax-enhanced word alignment techniques for the Chinese–English task and, for the first time, for the English–Chinese task. For all translation tasks except Arabic–English, we exploit linguistically motivated bilingual phrase pairs extracted from parallel treebanks. We smooth our translation tables with out-of-domain word translations for the Arabic–English and Chinese–English tasks in order to address the high number of out-of-vocabulary items. We also carried out experiments combining both in-domain and out-of-domain data to improve system performance and, finally, we deploy a majority-voting procedure, combining a language-model-based method and a translation-based method, for case and punctuation restoration. We participated in all the translation tasks and translated both the single-best ASR hypotheses and the correct recognition results. The translation results confirm that our new word and phrase alignment techniques are often helpful in improving translation quality, and that the data combination method we proposed can significantly improve system performance.
Fine-Grained Linguistic Soft Constraints on Statistical Natural Language Processing Models
This dissertation focuses on effective combination of data-driven natural language processing (NLP) approaches with linguistic knowledge sources that are based on manual text annotation or word grouping according to semantic commonalities. I gainfully apply fine-grained linguistic soft constraints -- of syntactic or semantic nature -- on statistical NLP models, evaluated in end-to-end state-of-the-art statistical machine translation (SMT) systems. The introduction of semantic soft constraints involves intrinsic evaluation on word-pair similarity ranking tasks, extension from words to phrases, application in a novel distributional paraphrase generation technique, and an introduction of a generalized framework of which these soft semantic and syntactic constraints can be viewed as instances, and in which they can be potentially combined.
Fine granularity is key in the successful combination of these soft constraints, in many cases. I show how to softly constrain SMT models by adding fine-grained weighted features, each preferring translation of only a specific syntactic constituent. Previous attempts using coarse-grained features yielded negative results. I also show how to softly constrain corpus-based semantic models of words (“distributional profiles”) to effectively create word-sense-aware models, by using semantic word grouping information found in a manually compiled thesaurus. Previous attempts, using hard constraints and resulting in aggregated, coarse-grained models, yielded lower gains.
A novel paraphrase generation technique incorporating these soft semantic constraints is then also evaluated in an SMT system. This paraphrasing technique is based on the Distributional Hypothesis. The main advantage of this novel technique over current “pivoting” techniques for paraphrasing is its independence from parallel texts, which are a limited resource. The evaluation is done by augmenting translation models with paraphrase-based translation rules, where fine-grained scoring of paraphrase-based rules yields significantly higher gains.
The model augmentation includes a novel semantic reinforcement component:
In many cases there are alternative paths of generating a paraphrase-based translation rule. Each of these paths reinforces a dedicated score for the “goodness” of the new translation rule. This augmented score is then used as a soft constraint, in a weighted log-linear feature, letting the translation model learn how much to “trust” the paraphrase-based translation rules.
The work reported here is the first to use distributional semantic similarity measures to improve the performance of an end-to-end phrase-based SMT system. The unified framework for statistical NLP models with soft linguistic constraints enables, in principle, the combination of both semantic and syntactic constraints -- and potentially other constraints, too -- in a single SMT model.
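The weighted log-linear combination described above, where a reinforced "goodness" score for a paraphrase-based rule enters as one soft-constraint feature among others, can be sketched as follows. The feature names, values, and weights below are illustrative assumptions, not the dissertation's actual parameters.

```python
import math

def loglinear_score(features, weights):
    """Score a translation hypothesis as a weighted sum of log feature values,
    the standard log-linear model used in SMT decoding."""
    return sum(weights[name] * math.log(value) for name, value in features.items())

# Hypothetical hypothesis scores: the paraphrase-based rule contributes a
# "goodness" feature alongside translation-model and language-model scores.
features = {
    "tm": 0.4,          # translation model probability (illustrative)
    "lm": 0.2,          # language model probability (illustrative)
    "paraphrase": 0.7,  # reinforced "goodness" score of the paraphrase-based rule
}
# The weight on "paraphrase" is what lets the model learn how much to
# "trust" paraphrase-based rules; here it is fixed by hand.
weights = {"tm": 1.0, "lm": 0.5, "paraphrase": 0.3}

print(loglinear_score(features, weights))
```

In a real system these weights would be tuned (e.g., by MERT) rather than set by hand.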
Unsupervised neural machine translation between the Portuguese language and the Chinese and Korean languages
Master's thesis, Informatics, 2023, Universidade de Lisboa, Faculdade de Ciências.
The purpose of this dissertation is to present a comparative and reproduction study on Unsupervised Neural Machine Translation techniques for the language pairs Portuguese (PT) → Chinese (ZH) and Portuguese (PT) → Korean (KR), taking advantage of online tools and resources. There are two main reasons for the choice of these language pairs. The first relates to the importance of Asian languages, particularly Chinese, in the global panorama, and also to the influence that the Portuguese language has in the world, especially in the southern hemisphere. The second reason is purely academic. Since there is a scarcity of studies in Natural Language Processing (NLP) involving non-Germanic languages (owing to the hegemony of English), we sought to develop a study of the influence of unsupervised translation techniques on little-studied language pairs, in order to test their robustness. Spoken by a quarter of the world's population, the Chinese language is the "ace" in China's deck of cards. According to International Chinese Language Education Week, in 2020 an estimated 200 million non-native speakers had already learned Chinese, and more than 25 million were studying it that year. Given the influence that the Chinese language wields, it becomes imperative to develop tools that bridge communication gaps. In this global context, machine translation emerges as a communication bridge between various cultures and China. South Korea, also known as one of the four Asian tigers, achieved the extraordinary feat of rising from extreme poverty to become one of the most developed countries in the world within two generations. Although it lacks China's economic hegemony, South Korea exerts considerable influence through its soft power in the entertainment field, known as hallyu. This "wave" of Korean pop culture attracts crowds to learn about the culture. To lower the communication barrier between lovers of Korean culture and native speakers, machine translation is a strong ally, because it allows people to interact instantly without having to learn a new language.
Although Portugal has no cultural ties with Korea, there is a strong connection with the Macao Special Administrative Region (Macao SAR), where Portuguese is one of the official languages. Machine Translation between the two official languages is one of the local government's strategic areas, and a Machine Translation laboratory has been established at the Macao Polytechnic Institute with the aim of building a system that can be used in the public administration to assist translators.
In this work, two approaches were pursued: (i) Unsupervised Neural Machine Translation and (ii) the pivot approach. Since the focus of the dissertation is on unsupervised techniques, neither architecture made use of parallel data between the language pairs in question. Specifically, the first approach used monolingual data, while the second introduced a third, pivot language used to bridge the source and target languages. This approach to machine translation arose from the need to create translation systems for language pairs with little or no parallel data. As demonstrated by Koehn and Knowles [2017a], neural machine translation needs large amounts of data in order to outperform Statistical Machine Translation (SMT). For language pairs with few linguistic resources, however, this is not feasible. To that end, the unsupervised machine translation architecture requires only monolingual data. The implementation chosen was that of Artetxe et al. [2018d], which consists of an encoder-decoder architecture. Since it contains a double encoder, both directions were considered for this approach: Portuguese ↔ Chinese and Portuguese ↔ Korean. Besides the reproduction for dissimilar low-resource languages, a replication study of the original article was also carried out using the data of one of the language pairs studied by the authors: English ↔ French.
Another alternative to the lack of parallel corpora is the pivot approach. In this approach, the system makes use of a third language, called the pivot, that links the source language to the target language. This option is considered when abundant parallel data exist between the two languages involved and the pivot. The motivation for this method is to exploit the performance that neural networks achieve when fed with large volumes of data: with large amounts of parallel corpora between all the languages in question and the pivot, the networks' performance compensates for the error propagation introduced by the intermediary language. In our case, the chosen pivot language was English, owing to the strong availability of parallel data between the pivot and the other three languages. The system first translates from Portuguese into English and then translates the pivot into Korean or Chinese. Unlike the first approach, only one direction was considered: Portuguese → Chinese and Portuguese → Korean. To implement this approach, the OpenNMT framework developed by Klein et al. [2017] was used.
The results were evaluated using the BLEU metric [Papineni et al., 2002b]. This metric made it possible to compare the performance of the two architectures and determine which method is more effective for dissimilar low-resource language pairs. In the Portuguese → Chinese and Portuguese → Korean directions, the pivot approach was superior, obtaining a BLEU score of 13.37 points for Portuguese → Chinese and 17.28 points for Portuguese → Korean. With the unsupervised neural machine translation approach, the highest value obtained in the Portuguese → Korean direction was a BLEU of 0.69, while in the Portuguese → Chinese direction it was 0.32 BLEU (out of a total of 100). The unsupervised translation values are in line with those obtained by Guzmán et al. [2019] and Kim et al. [2020]. The explanation given for these low values lies in the quality of the cross-lingual embeddings. The performance of cross-lingual embeddings tends to degrade when mapping distant language pairs and, since the unsupervised machine translation model is initialized with the cross-lingual embeddings, if these are of low quality the model does not converge to a local optimum, which accounts for the values obtained in the dissertation. Of the two methods tested, the pivot approach is the one with the better performance. As both the current literature and the results obtained in this dissertation show, the unsupervised neural method proposed by Artetxe et al. [2018d] is not robust enough to initialize a translation system supported by monolingual texts in distant languages.
It is, however, a promising approach, because it would help close one of the great gaps in the Machine Translation field, namely the lack of good-quality parallel data. Nevertheless, more attention would need to be paid to the problem of cross-lingual embeddings when mapping distant languages. This work provides insight into the study of unsupervised techniques for distant language pairs and offers a solution for building machine translation systems for the Portuguese-Chinese and Portuguese-Korean language pairs using monolingual data.
This dissertation presents a comparative and reproduction study on Unsupervised Neural Machine Translation techniques for the language pairs Portuguese (PT) → Chinese (ZH) and Portuguese (PT) → Korean (KR).
We chose these language pairs for two main reasons. The first refers to the importance that Asian languages play in the global panorama and the influence that Portuguese has in the southern hemisphere. The second reason is purely academic. Since there is a lack of studies in the area of Natural Language Processing (NLP) regarding non-Germanic languages, we focused on studying the influence of unsupervised techniques on under-studied languages.
In this dissertation, we worked on two approaches: (i) Unsupervised Neural Machine Translation; (ii) the pivot approach. The first approach uses only monolingual corpora. As for the second, it uses parallel corpora between the pivot and the non-pivot languages. The unsupervised approach was devised to mitigate the problem of low-resource languages, for which training traditional Neural Machine Translation systems was unfeasible because they require large amounts of data to achieve promising results. As such, unsupervised machine translation requires only monolingual corpora. In this dissertation we chose the implementation of Artetxe et al. [2018d] to develop our work.
Another alternative to the lack of parallel corpora is the pivot approach. In this approach, the system uses a third language (called the pivot) that connects the source language to the target language. The reasoning behind this is to take advantage of the performance of neural networks when fed with large amounts of data, which is enough to counterbalance the error propagation introduced by adding a third language. The results were evaluated using the BLEU metric and showed that for both language pairs, Portuguese → Chinese and Portuguese → Korean, the pivot approach had a better performance, making it the more suitable choice for these dissimilar low-resource language pairs.
Pivot-based Statistical Machine Translation for Morphologically Rich Languages
This thesis describes the research efforts on pivot-based statistical machine translation (SMT) for morphologically rich languages (MRL). We provide a framework to translate to and from morphologically rich languages especially in the context of having little or no parallel corpora between the source and the target languages. We basically address three main challenges. The first one is the sparsity of data as a result of morphological richness. The second one is maximizing the precision and recall of the pivoting process itself. And the last one is making use of any parallel data between the source and the target languages. To address the challenge of data sparsity, we explored a space of tokenization schemes and normalization options. We also examined a set of six detokenization techniques to evaluate detokenized and orthographically corrected (enriched) output. We provide a recipe of the best settings to translate to one of the most challenging languages, namely Arabic. Our best model improves the translation quality over the baseline by 1.3 BLEU points. We also investigated the idea of separation between translation and morphology generation. We compared three methods of modeling morphological features. Features can be modeled as part of the core translation. Alternatively these features can be generated using target monolingual context. Finally, the features can be predicted using both source and target information. In our experimental results, we outperform the vanilla factored translation model. In order to decide on which features to translate, generate or predict, a detailed error analysis should be provided on the system output. As a result, we present AMEANA, an open-source tool for error analysis of natural language processing tasks, targeting morphologically rich languages. The second challenge we are concerned with is the pivoting process itself. We discuss several techniques to improve the precision and recall of the pivot matching. 
One technique to improve the recall works on the level of the word alignment, treating pivoting as an optimization process driven by generating phrase pairs between source and target languages. Although improving the recall of the pivot matching improves the overall translation quality, we also need to increase the precision of the pivot quality. To achieve this, we introduce quality constraint scores to determine the quality of the pivot phrase pairs between source and target languages. We show positive results for different language pairs, which demonstrates the consistency of our approaches. In one of our best models we reach an improvement of 1.2 BLEU points. The third challenge we are concerned with is how to make use of any parallel data between the source and the target languages. We build on the approach of improving the precision of the pivoting process and on methods of combining the pivot system with the direct system built from the parallel data. In one of the approaches, we introduce morphology constraint scores, which are added to the log-linear space of features in order to determine the quality of the pivot phrase pairs. We compare two methods of generating the morphology constraints. One method is based on hand-crafted rules relying on our knowledge of the source and target languages; in the other method, the morphology constraints are induced from available parallel data between the source and target languages, which we also use to build a direct translation model. We then combine both the pivot and direct models to achieve better coverage and overall translation quality. Using induced morphology constraints outperformed the hand-crafted rules and improved over our best model from all previous approaches by 0.6 BLEU points (7.2/6.7 BLEU points over the direct and pivot baselines, respectively). Finally, we introduce smart techniques to combine pivot and direct models. We show that smart selective combination can lead to a large reduction of the pivot model without affecting the performance, and in some cases even improving it.
Improving statistical machine translation through adaptation and learning
With the arrival of free on-line machine translation (MT) systems came the possibility of improving automatic translations with the help of everyday users. One way to achieve such improvements is to ask users themselves for a better translation. The system may have made a mistake and, if the user is able to detect it, letting the user teach the system where it went wrong is valuable help, so that the system does not repeat the mistake in a similar situation. Most of the translation systems found on-line provide a text area for users to suggest a better translation (as in Google's translator) or a ranking system for them to use (as in Microsoft's).
In 2009, as part of the Seventh Framework Programme of the European Commission, the FAUST project started with the goal of developing "machine translation (MT) systems which respond rapidly and intelligently to user feedback". Specifically, one of the project's objectives was to "develop mechanisms for instantaneously incorporating user feedback into the MT engines that are used in production environments, ...". As a member of the FAUST project, this thesis focused on developing one such mechanism.
Formally, the general objective of this work was to design and implement a strategy to improve the translation quality of an already trained Statistical Machine Translation (SMT) system, using translations of input sentences that are corrections of the system's attempt to translate them.
To address this problem we divided it into three specific objectives:
1. Define a relation between the words of a correction sentence and the words in the system's translation, in order to detect the errors that the former is aiming to solve.
2. Include the error corrections in the original system, so it learns how to solve them in case a similar situation occurs.
3. Test the strategy in different scenarios and with different data, in order to validate the applications of the proposed methodology.
The main contributions made to the SMT field that can be found in this Ph.D. thesis are:
- We defined a similarity function that compares an MT system output with a translation reference for that output and aligns the errors made by the system with the correct translations found in the reference. This information is then used to compute an alignment between the original input sentence and the reference.
- We defined a method to perform domain adaptation based on the alignment mentioned before. Using this alignment with an in-domain parallel corpus, we extract new translation units, corresponding both to units that are found in the system and were correctly chosen during translation, and to new units that include the correct translations found in the reference. These new units are then scored and combined with the units in the original system in order to improve its quality in terms of both human and automatic metrics.
- We successfully applied the method to a new task: improving an SMT system's translation quality using post-editions provided by real users of the system. In this case, the alignment was computed over a parallel corpus built from post-editions, extracting translation units corresponding both to units that are found in the system and were correctly chosen during translation, and to new units that include the corrections found in the feedback provided.
- The method proposed in this dissertation is able to achieve significant improvements in translation quality with a small amount of learning material, corresponding to 0.5% of the training material used to build the original system. Results from our evaluations also indicate that the improvement achieved with the domain adaptation strategy is measurable by both automatic and human-based evaluation metrics.
This thesis proposes a new method for improving a Statistical Machine Translation (SMT) system using post-editions of its automatic translations. The strategy can be seen as domain adaptation, with the post-editions obtained from real users of the translation system serving as the in-domain material to adapt to. The method compares the post-editions with the automatic translations in order to detect automatically the places where the translator made an error, so that it can learn from them. Once the errors have been detected, a word-level alignment is computed between the original sentences and the post-editions to extract translation units that are then incorporated into the base system so that the errors are corrected in future translations. Our results show statistically significant improvements from a data set amounting to 0.5% of the material used during training. Along with the automatic quality metrics, we also present a qualitative analysis of the system to validate the results. The improvements in translation are observed mostly in the lexicon and in word reordering, followed by morphological corrections. The strategy, which introduces the concepts of augmented corpus, similarity function, and derived translation units, is tested with two SMT paradigms (N-gram-based and phrase-based translation), with two language pairs (Catalan-Spanish and English-Spanish), and in different domain adaptation scenarios, including an open-domain setting in which the system was adapted using requests collected from real users over the Internet, obtaining similar results across all tests. The results of this research are part of the FAUST project (Feedback Analysis for User adaptive Statistical Translation), a project of the Seventh Framework Programme of the European Commission.
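The error-detection step described above, comparing a system output with its post-edition to locate the spans the user corrected, can be sketched with Python's difflib. This is an illustrative stand-in under simplifying assumptions, not the thesis's actual similarity function.

```python
from difflib import SequenceMatcher

def find_corrections(system_output, post_edition):
    """Return (system_span, corrected_span) pairs where the post-edition
    differs from the system output; the matching spans are the units the
    system already translated correctly."""
    matcher = SequenceMatcher(a=system_output, b=post_edition)
    corrections = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # 'replace', 'delete', or 'insert'
            corrections.append((system_output[i1:i2], post_edition[j1:j2]))
    return corrections

mt_output = ["this", "is", "a", "test"]
post_edit = ["this", "was", "a", "test"]
print(find_corrections(mt_output, post_edit))
```

The non-equal spans would feed the extraction of new translation units, while the equal spans confirm existing ones.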
A study on the impact of neural architectures for Unsupervised Machine Translation
[ES] The use of monolingual corpora for training unsupervised machine translation systems is a matter of notable relevance in this continuously globalizing world we live in, mainly owing to the scarcity of bilingual corpora for the great majority of language pairs and to the limitations this presents for training machine translation systems.
This Master's thesis (TFM) takes as its starting point the unsupervised neural translation systems created by Artetxe et al., named Undreamt and Monoses, and aims to explore the use of various neural architectures close to the current state of the art within the framework of those systems.
To that end, several of the monolingual corpora from the WMT 2014 translation task will be used, measuring the quality of the translations obtained with the BLEU metric and searching for the best configurations for various language pairs, comparing them both with the state of the art and with the metrics reported by Artetxe et al.
[EN] The use of monolingual corpora for training Unsupervised Machine Translation systems is a matter of notable relevance in this continuously globalizing world we live in, mainly due to the scarcity of bilingual corpora for the great majority of language pairs and the serious limitation this represents for the training of Machine Translation systems.
This TFM takes as a starting point the unsupervised Neural Machine Translation systems created by Artetxe et al., named Undreamt and Monoses, and aims to explore, within the frame of said systems, the use of neural architectures that stand close to the current state of the art.
To do that, the corpora used will be monolingual corpora from the WMT 2014 translation task, measuring the quality of the translations achieved using the BLEU metric and looking for the best configurations for various language pairs, comparing these both with the state of the art and with the metrics reported by Artetxe et al.
Sanz Rodríguez, A. (2021). A study on the impact of neural architectures for Unsupervised Machine Translation. Universitat Politècnica de València. http://hdl.handle.net/10251/174572 (TFG)
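Since several of the studies above report results with the BLEU metric, a minimal sentence-level sketch of it may help: modified n-gram precision combined with a brevity penalty. Real evaluations use corpus-level implementations with smoothing; this toy version is for illustration only.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Toy sentence-level BLEU: geometric mean of modified n-gram precisions
    (n = 1..max_n) times a brevity penalty. No smoothing, single reference."""
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        hyp, ref = ngrams(hypothesis, n), ngrams(reference, n)
        overlap = sum(min(count, ref[g]) for g, count in hyp.items())
        if overlap == 0:
            return 0.0  # without smoothing, any zero precision zeroes the score
        log_prec_sum += math.log(overlap / sum(hyp.values())) / max_n
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))
    return bp * math.exp(log_prec_sum)

print(bleu("the cat sat on the mat".split(), "the cat sat on the mat".split()))
```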
Multilingual Lexicon Extraction under Resource-Poor Language Pairs
In general, bilingual and multilingual lexicons are important resources in many natural language processing fields such as information retrieval and machine translation. Such lexicons are usually extracted from bilingual (e.g., parallel or comparable) corpora with external seed dictionaries. However, few such corpora and bilingual seed dictionaries are publicly available for many language pairs such as Korean–French. It is important that such resources for these language pairs be publicly available or easily accessible when a monolingual resource is considered.
This thesis presents efficient approaches for extracting bilingual single-/multi-word lexicons for resource-poor language pairs such as Korean–French and Korean–Spanish. The goal of this thesis is to present several efficient methods of extracting translated single-/multi-words from bilingual corpora based on a statistical method.
Three approaches for single words and one approach for multi-words are proposed. The first approach is the pivot context-based approach (PCA). The PCA uses a pivot language to connect source and target languages. It builds context vectors from two parallel corpora sharing one pivot language and calculates their similarity scores to choose the best translation equivalents. The approach can reduce the effort required when using a seed dictionary for translation by using parallel corpora rather than comparable corpora. The second approach is the extended pivot context-based approach (EPCA). This approach gathers similar context vectors for each source word to augment its context. The approach assumes that similar vectors can enrich contexts. For example, young and youth can augment the context of baby. In the investigation described here, such similar vectors were collected by similarity measures such as cosine similarity. The third approach for single words uses a competitive neural network algorithm (i.e., self-organizing maps, SOMs). The SOM-based approach (SA) uses synonym vectors rather than context vectors to train two different SOMs (i.e., source and target SOMs) in different ways. A source SOM is trained in an unsupervised way, while a target SOM is trained in a supervised way.
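The context-vector comparison underlying the PCA can be sketched as follows. The tiny corpus, the fixed window size, and the use of raw co-occurrence counts are simplifying assumptions; the thesis builds its vectors from parallel corpora sharing a pivot language.

```python
import math
from collections import Counter

def context_vector(corpus_sentences, word, window=2):
    """Co-occurrence counts of tokens appearing within `window` positions of `word`."""
    vector = Counter()
    for sentence in corpus_sentences:
        for i, token in enumerate(sentence):
            if token == word:
                lo, hi = max(0, i - window), i + window + 1
                vector.update(t for t in sentence[lo:hi] if t != word)
    return vector

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

corpus = [["the", "baby", "sleeps"], ["the", "young", "sleeps"], ["a", "dog", "barks"]]
v_baby = context_vector(corpus, "baby")
v_young = context_vector(corpus, "young")
v_dog = context_vector(corpus, "dog")
# "young" shares baby's contexts, so it scores higher than "dog".
print(cosine(v_baby, v_young) > cosine(v_baby, v_dog))
```

The same similarity score, computed across languages through the pivot, is what selects the best translation equivalents in the PCA.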
The fourth approach is the constituent-based approach (CTA), which deals with multi-word expressions (MWEs). This approach reinforces the PCA for multi-words (PCAM). It extracts bilingual MWEs taking all constituents of the source MWEs into consideration. The PCAM identifies MWE candidates by pointwise mutual information first and then adds them to the input data as single units in order to use the PCA directly.
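The pointwise-mutual-information step for identifying MWE candidates can be sketched as follows; restricting candidates to adjacent bigrams and using this toy text are illustrative assumptions.

```python
import math
from collections import Counter

def pmi_bigrams(tokens):
    """PMI of adjacent word pairs: PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ).
    High-PMI bigrams co-occur far more often than chance and are kept as
    MWE candidates above some threshold."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_uni, n_bi = len(tokens), len(tokens) - 1
    scores = {}
    for (x, y), count in bigrams.items():
        p_xy = count / n_bi
        p_x, p_y = unigrams[x] / n_uni, unigrams[y] / n_uni
        scores[(x, y)] = math.log2(p_xy / (p_x * p_y))
    return scores

tokens = "machine translation is hard machine translation is fun".split()
scores = pmi_bigrams(tokens)
# "machine translation" always co-occurs, so its PMI ties for the maximum;
# such candidates would then be merged into single units for the PCA.
print(round(scores[("machine", "translation")], 3))
```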
The experimental results show that the proposed approaches generally perform well for resource-poor language pairs, particularly Korean–French and Korean–Spanish. The PCA and SA demonstrated good performance for such language pairs. The EPCA did not show as strong a performance as expected. The CTA performs well even when word contexts are insufficient. Overall, the experimental results show that the CTA significantly outperforms the PCAM.
In the future, homonyms (i.e., homographs such as lead or tear) should be considered. In particular, the domains of bilingual corpora should be identified. In addition, more parts of speech such as verbs, adjectives, or adverbs could be tested. In this thesis, only nouns are discussed for simplicity. Finally, thorough error analysis should also be conducted.
Abstract
List of Abbreviations
List of Tables
List of Figures
Acknowledgement
Chapter 1 Introduction
1.1 Multilingual Lexicon Extraction
1.2 Motivations and Goals
1.3 Organization
Chapter 2 Background and Literature Review
2.1 Extraction of Bilingual Translations of Single-words
2.1.1 Context-based approach
2.1.2 Extended approach
2.1.3 Pivot-based approach
2.2 Extraction of Bilingual Translations of Multi-Word Expressions
2.2.1 MWE identification
2.2.2 MWE alignment
2.3 Self-Organizing Maps
2.4 Evaluation Measures
Chapter 3 Pivot Context-Based Approach
3.1 Concept of Pivot-Based Approach
3.2 Experiments
3.2.1 Resources
3.2.2 Results
3.3 Summary
Chapter 4 Extended Pivot Context-Based Approach
4.1 Concept of Extended Pivot Context-Based Approach
4.2 Experiments
4.2.1 Resources
4.2.2 Results
4.3 Summary
Chapter 5 SOM-Based Approach
5.1 Concept of SOM-Based Approach
5.2 Experiments
5.2.1 Resources
5.2.2 Results
5.3 Summary
Chapter 6 Constituent-Based Approach
6.1 Concept of Constituent-Based Approach
6.2 Experiments
6.2.1 Resources
6.2.2 Results
6.3 Summary
Chapter 7 Conclusions and Future Work
7.1 Conclusions
7.2 Future Work
References