A puzzle about enkratic reasoning
Enkratic reasoning—reasoning from believing that you ought to do something to an intention to do that thing—seems good. But there is a puzzle about how it could be. Good reasoning preserves correctness, other things equal. But enkratic reasoning does not preserve correctness. This is because what you ought to do depends on your epistemic position, but what it is correct to intend does not. In this paper, I motivate these claims and thus show that there is a puzzle. I then argue that the best solution is to deny that correctness is always independent of your epistemic position. As I explain, a notable upshot is that a central epistemic norm directs us to believe, not simply what is true, but what we are in a position to know.
Creditworthiness and Matching Principles
You are creditworthy for φ-ing only if φ-ing is the right thing to do. Famously though, further conditions are needed too – Kant’s shopkeeper did the right thing, but is not creditworthy for doing so. This case shows that creditworthiness requires that there be a certain kind of explanation of why you did the right thing. The reasons for which you act – your motivating reasons – must meet some further conditions. In this paper, I defend a new account of these conditions. On this account, creditworthiness requires that your motivating reasons be normative reasons, and that the principles from which you act match normative principles.
Edge functionalisation of graphene nanoribbons with a boron dipyrrin complex : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Nanoscience at Massey University, Manawatū, New Zealand
Chemical modification can be used to tune the properties of graphene and graphene nanoribbons, making them promising candidates for carbon-based electronics. The control of edge chemistry provides a route to controlling the properties of graphene nanoribbons, and their self-assembly into larger structures. Mechanically fractured graphene nanoribbons are assumed to contain oxygen functionalities, which enable chemical modification at the nanoribbon edge. The development of graphene nanoribbon edge chemistry is difficult using traditional techniques due to limitations on the characterisation of graphene materials. Through the use of a chromophore with well-defined chemistry, the reactivity of the edges has been investigated. Small aromatic systems were used to understand the reactivity of the boron dipyrrin Cl-BODIPY, and with the aid of spectroscopic and computational methods, the substitution mechanism and properties of the compounds have been investigated.

The synthetic procedure was then applied to graphene nanoribbons. Results from infrared and Raman spectroscopy studies show that edge-functionalisation of graphene nanoribbons with BODIPY was successful, and no modifications to the basal plane have been observed.
Hybridity in MT: experiments on the Europarl corpus
(Way & Gough, 2005) demonstrate that their Marker-based EBMT system is capable of outperforming a word-based
SMT system trained on reasonably large data sets. (Groves & Way, 2005) take this a stage further and demonstrate that
while the EBMT system also outperforms a phrase-based SMT (PBSMT) system, a hybrid 'example-based SMT' system incorporating marker chunks and SMT sub-sentential alignments is capable of outperforming both baseline translation models for French{English translation.
In this paper, we show that similar gains are to be had from constructing a hybrid 'statistical EBMT' system capable
of outperforming the baseline system of (Way & Gough, 2005). Using the Europarl (Koehn, 2005) training and test
sets we show that this time around, although all 'hybrid' variants of the EBMT system fall short of the quality achieved by the baseline PBSMT system, merging
elements of the marker-based and SMT data, as in (Groves & Way, 2005), to create a hybrid 'example-based SMT' system, outperforms the baseline SMT and EBMT systems from which it is derived.
Furthermore, we provide further evidence in favour of hybrid systems by adding an SMT target language model to all EBMT system variants and demonstrate that this too has a positive e®ect on translation quality
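The marker chunks mentioned above come from the Marker Hypothesis, under which closed-class words (determiners, prepositions, conjunctions, pronouns, and so on) are taken to signal constituent boundaries. Below is a minimal sketch of that segmentation idea, assuming invented marker inventories and the common constraint that every chunk contain at least one non-marker word; it is not the exact procedure or word lists used by Way & Gough (2005).

```python
# Sketch of Marker Hypothesis chunking. The marker sets are illustrative
# assumptions, not the inventories used in the cited systems.
MARKERS = {
    "det":  {"the", "a", "an", "this", "that", "these", "those"},
    "prep": {"in", "on", "at", "with", "from", "to", "of", "by"},
    "conj": {"and", "or", "but", "because", "although"},
    "pron": {"i", "you", "he", "she", "it", "we", "they"},
}
ALL_MARKERS = set().union(*MARKERS.values())

def marker_chunks(sentence: str) -> list[str]:
    """Open a new chunk at each marker word, but only close the previous
    chunk once it holds at least one non-marker (content) word."""
    chunks, current, has_content = [], [], False
    for word in sentence.lower().split():
        if word in ALL_MARKERS and has_content:
            chunks.append(" ".join(current))
            current, has_content = [], False
        current.append(word)
        if word not in ALL_MARKERS:
            has_content = True
    if current:
        chunks.append(" ".join(current))
    return chunks

print(marker_chunks("the dog sat on the mat with a bone"))
# ['the dog sat', 'on the mat', 'with a bone']
```

Chunks of this kind, paired across languages, are the sub-sentential fragments that the hybrid systems above merge with SMT alignments.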
What Level of Quality can Neural Machine Translation Attain on Literary Text?
Given the rise of a new approach to MT, Neural MT (NMT), and its promising performance on different text types, we assess the translation quality it can attain on what is perceived to be the greatest challenge for MT: literary text. Specifically, we target novels, arguably the most popular type of literary text. We build a literary-adapted NMT system for the English-to-Catalan translation direction and evaluate it against a system pertaining to the previous dominant paradigm in MT: statistical phrase-based MT (PBSMT). To this end, for the first time we train MT systems, both NMT and PBSMT, on large amounts of literary text (over 100 million words) and evaluate them on a set of twelve widely known novels spanning from the 1920s to the present day. According to the BLEU automatic evaluation metric, NMT is significantly better than PBSMT (p < 0.01) on all the novels considered. Overall, NMT results in an 11% relative improvement (3 points absolute) over PBSMT. A complementary human evaluation on three of the books shows that between 17% and 34% of the translations produced by NMT, depending on the book, are perceived by native speakers of the target language to be of equivalent quality to translations produced by a professional human translator (versus 8% and 20% with PBSMT).

Comment: Chapter for the forthcoming book "Translation Quality Assessment: From Principles to Practice" (Springer).
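As a quick sanity check on the reported gains: a 3-point absolute improvement that corresponds to an 11% relative improvement implies a PBSMT baseline of roughly 27 BLEU. The abstract does not state the baseline score, so the figures below are back-calculated, not reported numbers.

```python
# Back-of-the-envelope check of the reported NMT vs PBSMT gap.
# Only the 3-point absolute and 11% relative gains are reported;
# the baseline and NMT scores below are inferred from them.
absolute_gain = 3.0    # BLEU points (reported)
relative_gain = 0.11   # 11% (reported)

implied_pbsmt = absolute_gain / relative_gain  # ~27.3 BLEU
implied_nmt = implied_pbsmt + absolute_gain    # ~30.3 BLEU

print(f"implied PBSMT baseline: {implied_pbsmt:.1f} BLEU")
print(f"implied NMT score:      {implied_nmt:.1f} BLEU")
```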
Lost in translation: the problems of using mainstream MT evaluation metrics for sign language translation
In this paper we consider the problems of applying corpus-based techniques to minority languages that are neither politically recognised nor have a formally accepted writing system, namely sign languages. We discuss the adoption of an annotated form of sign language data as a suitable corpus for the development of a data-driven machine translation (MT) system, and deal with issues that arise from its use. Useful software tools that facilitate easy annotation of video data are also discussed. Furthermore, we address the problems of using traditional MT evaluation metrics for sign language translation. Based on the candidate translations produced from our example-based machine translation system, we discuss why standard metrics fall short of providing an accurate evaluation and suggest more suitable evaluation methods.
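To make the metric problem concrete, consider an invented sign-gloss example (not drawn from the paper's data): a candidate that keeps every content gloss but uses a different, grammatically valid constituent order is punished heavily by an n-gram metric such as BLEU.

```python
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

# Invented gloss sequences: same content, different (valid) sign order.
reference = [["TOMORROW", "SHOP", "I", "GO"]]
candidate = ["I", "GO", "SHOP", "TOMORROW"]

# Smoothing is needed because short gloss sequences often have zero
# higher-order n-gram matches.
smooth = SmoothingFunction().method1
score = sentence_bleu(reference, candidate, smoothing_function=smooth)
print(f"BLEU: {score:.3f}")  # low, despite full lexical overlap
```

Unigram precision here is perfect, but bigram overlap is one in three and there are no matching trigrams or 4-grams, so the score collapses; this is the kind of mismatch between metric and actual translation quality that the paper argues standard metrics cannot capture.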
Disambiguation strategies for data-oriented translation
The Data-Oriented Translation (DOT) model, originally proposed in (Poutsma, 1998, 2003) and based on Data-Oriented Parsing (DOP) (e.g. (Bod, Scha, & Sima'an, 2003)), is best described as a hybrid model of translation, as it combines examples, linguistic information and a statistical translation model. Although theoretically interesting, it inherits the computational complexity associated with DOP. In this paper, we focus on one computational challenge for this model: efficiently selecting the 'best' translation to output. We present four different disambiguation strategies in terms of how they are implemented in our DOT system, along with experiments which investigate how they compare in terms of accuracy and efficiency.
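To see why the choice of strategy matters, recall that in a DOP-style model many distinct derivations can produce the same output, so the single most probable derivation and the most probable translation (summing probability mass over derivations) can disagree. A toy illustration with invented derivation probabilities:

```python
from collections import defaultdict

# Invented derivation space: (output translation, derivation probability).
# In DOT, each derivation is a sequence of linked source/target fragment
# substitutions; several derivations may yield the same target string.
derivations = [
    ("il a faim", 0.20),       # one high-probability derivation
    ("il est affamé", 0.15),   # ...versus several weaker derivations
    ("il est affamé", 0.15),   #    that all produce the same output
    ("il est affamé", 0.12),
]

# Strategy A: most probable derivation (cheap, Viterbi-style).
best_by_derivation = max(derivations, key=lambda d: d[1])[0]

# Strategy B: most probable translation (sum over derivations; exact
# computation is intractable in general, motivating approximations).
mass = defaultdict(float)
for translation, p in derivations:
    mass[translation] += p
best_by_translation = max(mass, key=mass.get)

print(best_by_derivation)   # 'il a faim'      (0.20)
print(best_by_translation)  # 'il est affamé'  (0.42 total)
```

The accuracy/efficiency trade-off between strategies of these two kinds is the sort of comparison the paper's experiments investigate.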
Example-based controlled translation
The first research on integrating controlled language data in an Example-Based Machine Translation (EBMT) system was published in [Gough & Way, 2003]. We improve on their sub-sentential alignment algorithm to populate the system’s databases with more than six times as many potentially useful fragments. Together with two simple novel improvements—correcting mistranslations in the lexicon, and allowing multiple translations in the lexicon—translation quality improves considerably when target language translations are constrained. We also develop the first EBMT system which attempts to filter the source language data using controlled language specifications. We provide detailed automatic and human evaluations of a number of experiments carried out to test the quality of the system. We observe that our system outperforms Logomedia in a number of tests. Finally, despite conflicting results from different automatic evaluation metrics, we observe a preference for controlling the source data rather than the target translations.
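A minimal sketch of the two lexicon refinements mentioned above, using an invented data layout rather than the system's actual format:

```python
# Hypothetical EBMT lexicon: each source word maps to a ranked list of
# target candidates, so multiple translations can coexist.
lexicon: dict[str, list[str]] = {
    "bank": ["banque", "rive"],   # multiple translations, ranked
    "record": ["record"],         # a mistranslation left by alignment
}

# Correcting a mistranslation: overwrite the bad entry with valid ones.
lexicon["record"] = ["enregistrement", "disque"]

def translate_word(word: str) -> str:
    """Return the top-ranked candidate, falling back to the source word."""
    return lexicon.get(word, [word])[0]

print(translate_word("bank"))    # 'banque'
print(translate_word("letter"))  # 'letter' (no entry: fallback)
```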
Teaching machine translation and translation technology: a contrastive study
The Machine Translation course at Dublin City University is taught to undergraduate students in Applied Computational Linguistics, while Computer-Assisted Translation is taught on two translator-training programmes, one undergraduate and one postgraduate. Given the differing backgrounds of these sets of students, the course material, methods of teaching and assessment all differ. We report here on our experiences of teaching these courses over a number of years, which we hope will be of interest to lecturers of similar existing courses, as well as providing a reference point for others who may be considering the introduction of such material.
