A puzzle about enkratic reasoning
Enkratic reasoning—reasoning from believing that you ought to do something to an intention to do that thing—seems good. But there is a puzzle about how it could be. Good reasoning preserves correctness, other things equal. But enkratic reasoning does not preserve correctness. This is because what you ought to do depends on your epistemic position, but what it is correct to intend does not. In this paper, I motivate these claims and thus show that there is a puzzle. I then argue that the best solution is to deny that correctness is always independent of your epistemic position. As I explain, a notable upshot is that a central epistemic norm directs us to believe, not simply what is true, but what we are in a position to know.
Creditworthiness and Matching Principles
You are creditworthy for φ-ing only if φ-ing is the right thing to do. Famously though, further conditions are needed too – Kant’s shopkeeper did the right thing, but is not creditworthy for doing so. This case shows that creditworthiness requires that there be a certain kind of explanation of why you did the right thing. The reasons for which you act – your motivating reasons – must meet some further conditions. In this paper, I defend a new account of these conditions. On this account, creditworthiness requires that your motivating reasons be normative reasons, and that the principles from which you act match normative principles.
Edge functionalisation of graphene nanoribbons with a boron dipyrrin complex : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Nanoscience at Massey University, Manawatū, New Zealand
Chemical modification can be used to tune the properties of graphene and graphene nanoribbons, making them promising candidates for carbon-based electronics. The control of edge chemistry provides a route to controlling the properties of graphene nanoribbons, and their self-assembly into larger structures. Mechanically fractured graphene nanoribbons are assumed to contain oxygen functionalities, which enable chemical modification at the nanoribbon edge.
The development of graphene nanoribbon edge chemistry is difficult using traditional techniques due to limitations on the characterisation of graphene materials. Through the use of a chromophore with well-defined chemistry, the reactivity of the edges has been investigated. Small aromatic systems were used to understand the reactivity of the boron dipyrrin Cl-BODIPY, and with the aid of spectroscopic and computational methods, the substitution mechanism and properties of the compounds have been investigated.
The synthetic procedure was then applied to graphene nanoribbons. Results from infrared and Raman spectroscopy studies show that edge-functionalisation of graphene nanoribbons with BODIPY was successful, and no modifications to the basal plane have been observed.
Hybridity in MT: experiments on the Europarl corpus
(Way & Gough, 2005) demonstrate that their Marker-based EBMT system is capable of outperforming a word-based SMT system trained on reasonably large data sets. (Groves & Way, 2005) take this a stage further and demonstrate that while the EBMT system also outperforms a phrase-based SMT (PBSMT) system, a hybrid 'example-based SMT' system incorporating marker chunks and SMT sub-sentential alignments is capable of outperforming both baseline translation models for French–English translation.
In this paper, we show that similar gains are to be had from constructing a hybrid 'statistical EBMT' system capable of outperforming the baseline system of (Way & Gough, 2005). Using the Europarl (Koehn, 2005) training and test sets, we show that this time around, although all 'hybrid' variants of the EBMT system fall short of the quality achieved by the baseline PBSMT system, merging elements of the marker-based and SMT data, as in (Groves & Way, 2005), to create a hybrid 'example-based SMT' system outperforms the baseline SMT and EBMT systems from which it is derived. Furthermore, we provide further evidence in favour of hybrid systems by adding an SMT target language model to all EBMT system variants and demonstrating that this too has a positive effect on translation quality.
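The "SMT target language model" mentioned above is typically an n-gram model used to rescore candidate translations by target-language fluency. As a minimal, hypothetical sketch (not the authors' actual setup, which would use a full SMT toolkit), a bigram model with add-one smoothing can rank candidates as follows:

```python
import math
from collections import Counter

class BigramLM:
    """Toy target-language bigram model with add-one smoothing, of the
    kind that could be used to rescore EBMT candidate translations."""

    def __init__(self, sentences):
        self.bigrams = Counter()
        self.unigrams = Counter()
        self.vocab = set()
        for s in sentences:
            tokens = ["<s>"] + s.split() + ["</s>"]
            self.vocab.update(tokens)
            for a, b in zip(tokens, tokens[1:]):
                self.bigrams[(a, b)] += 1
                self.unigrams[a] += 1

    def logprob(self, sentence):
        """Log-probability of a sentence under the smoothed bigram model."""
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        V = len(self.vocab)
        return sum(
            math.log((self.bigrams[(a, b)] + 1) / (self.unigrams[a] + V))
            for a, b in zip(tokens, tokens[1:])
        )

# Rescoring: among several candidate translations, prefer the one the
# target language model scores highest.
lm = BigramLM(["the cat sat", "the dog sat"])
candidates = ["the cat sat", "sat cat the"]
best = max(candidates, key=lm.logprob)
```

Real systems use higher-order models with more sophisticated smoothing (e.g. Kneser-Ney), but the rescoring principle is the same.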
What Level of Quality can Neural Machine Translation Attain on Literary Text?
Given the rise of a new approach to MT, Neural MT (NMT), and its promising performance on different text types, we assess the translation quality it can attain on what is perceived to be the greatest challenge for MT: literary text. Specifically, we target novels, arguably the most popular type of literary text. We build a literary-adapted NMT system for the English-to-Catalan translation direction and evaluate it against a system pertaining to the previous dominant paradigm in MT: statistical phrase-based MT (PBSMT). To this end, for the first time we train MT systems, both NMT and PBSMT, on large amounts of literary text (over 100 million words) and evaluate them on a set of twelve widely known novels spanning from the 1920s to the present day. According to the BLEU automatic evaluation metric, NMT is significantly better than PBSMT (p < 0.01) on all the novels considered. Overall, NMT results in an 11% relative improvement (3 points absolute) over PBSMT. A complementary human evaluation on three of the books shows that between 17% and 34% of the translations, depending on the book, produced by NMT (versus 8% and 20% with PBSMT) are perceived by native speakers of the target language to be of equivalent quality to translations produced by a professional human translator.
Comment: Chapter for the forthcoming book "Translation Quality Assessment: From Principles to Practice" (Springer).
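The BLEU metric used in the evaluation above combines modified n-gram precisions with a brevity penalty. A minimal self-contained sketch of the computation (illustrative only; published scores use standard implementations over whole test sets, not this simplified single-pair version):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Simplified BLEU for one sentence pair: geometric mean of modified
    1..max_n-gram precisions, times a brevity penalty for short output."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        # Clipped overlap: each hypothesis n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # smooth zero counts
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: penalise hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(log_avg)
```

A perfect match scores 1.0; any divergence from the reference lowers the modified precisions and hence the score.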
Lost in translation: the problems of using mainstream MT evaluation metrics for sign language translation
In this paper we consider the problems of applying corpus-based techniques to minority languages that are neither politically recognised nor have a formally accepted writing system, namely sign languages. We discuss the adoption of an annotated form of sign language data as a suitable corpus for the development of a data-driven machine translation (MT) system, and deal with issues that arise from its use. Useful software tools that facilitate easy annotation of video data are also discussed. Furthermore, we address the problems of using traditional MT evaluation metrics for sign language translation. Based on the candidate translations produced from our example-based machine translation system, we discuss why standard metrics fall short of providing an accurate evaluation and suggest more suitable evaluation methods.
Bilingually motivated domain-adapted word segmentation for statistical machine translation
We introduce a word segmentation approach to languages where word boundaries are not orthographically marked, with application to Phrase-Based Statistical Machine Translation (PB-SMT). Instead of using manually segmented monolingual domain-specific corpora to train segmenters, we make use of bilingual corpora and statistical word alignment techniques. First of all, our approach is adapted for the specific translation task at hand by taking the corresponding source (target) language into account. Secondly, this approach does not rely on manually segmented training data, so that it can be automatically adapted for different domains. We evaluate the performance of our segmentation approach on PB-SMT tasks from two domains and demonstrate that our approach scores consistently among the best results across different data conditions.
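The core idea of alignment-driven segmentation can be sketched in a few lines. This is a hypothetical simplification of the approach described above (the paper's actual pipeline uses statistical word aligners over full bilingual corpora): consecutive source characters aligned to the same target word are merged into one source-side token.

```python
def segment_from_alignment(src_chars, alignment):
    """Derive a word segmentation of an unsegmented source string from
    character-to-target-word alignments. `alignment` maps each source
    character index to a target word index (None for unaligned characters,
    which attach to the current token). Consecutive characters aligned to
    the same target word form one token."""
    words, current, current_tgt = [], "", None
    for i, ch in enumerate(src_chars):
        tgt = alignment.get(i)
        # Start a new token when the aligned target word changes.
        if current and tgt is not None and current_tgt is not None and tgt != current_tgt:
            words.append(current)
            current = ""
        current += ch
        if tgt is not None:
            current_tgt = tgt
    if current:
        words.append(current)
    return words

# Characters 0-1 align to target word 0, characters 2-3 to target word 1,
# so the unsegmented string "abcd" is segmented into two tokens.
tokens = segment_from_alignment("abcd", {0: 0, 1: 0, 2: 1, 3: 1})
```

Because the segmentation is induced from the parallel data itself, it adapts automatically to the translation task and domain, which is the property the abstract highlights.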
Hybrid example-based SMT: the best of both worlds?
(Way and Gough, 2005) provide an in-depth comparison of their Example-Based Machine Translation (EBMT) system with a Statistical Machine Translation (SMT) system constructed from freely available tools. According to a wide variety of automatic evaluation metrics, they demonstrated that their EBMT system outperformed the SMT system by a factor of two to one. Nevertheless, they did not test their EBMT system against a phrase-based SMT system. Obtaining their training and test data for English–French, we carry out a number of experiments using the Pharaoh SMT decoder. While better results are seen when Pharaoh is seeded with Giza++ word- and phrase-based data compared to EBMT sub-sentential alignments, in general better results are obtained when combinations of this 'hybrid' data are used to construct the translation and probability models. While for the most part the EBMT system of (Gough & Way, 2004b) outperforms any flavour of the phrase-based SMT systems constructed in our experiments, combining the data sets automatically induced by both Giza++ and their EBMT system leads to a hybrid system which improves on the EBMT system per se for French–English.
