
    Formulation of linguistic regression model based on natural words

    When human experts express their ideas and thoughts, human words are basically employed in these expressions. That is, experts with much professional experience are capable of making assessments using their intuition and experience. Measurements and interpretations of characteristics are taken with uncertainty, because most measured characteristics, analytical results, and field data can be interpreted only intuitively by experts. In such cases, experts may express their judgments using linguistic terms. The difficulty in the direct measurement of certain characteristics makes the estimation of these characteristics imprecise. Such measurements may be dealt with using fuzzy set theory. As Professor L. A. Zadeh has stressed the importance of computation with words, fuzzy sets can take a central role in handling words [12, 13]. From this perspective, the fuzzy logic approach is often thought of as the main, and only useful, tool for dealing with human words. In this paper we present another approach to handling human words instead of fuzzy reasoning: fuzzy regression analysis enables us to treat the computation with words. In order to process linguistic variables, we define vocabulary translation and vocabulary matching, which convert linguistic expressions into membership functions on the interval [0, 1] on the basis of a linguistic dictionary, and vice versa. We employ fuzzy regression analysis to model the assessment process of experts, from linguistic variables describing the features and characteristics of an object to the linguistic expression of the total assessment. The presented process consists of four portions: (1) vocabulary translation, (2) estimation, (3) vocabulary matching and (4) the dictionary. We employed fuzzy quantification theory type 2 for estimating the total assessment in terms of linguistic structural attributes obtained from an expert
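    The vocabulary translation and matching steps described above can be sketched as follows. This is a minimal illustration, assuming a hypothetical three-term dictionary of triangular membership functions on [0, 1]; the paper's actual vocabulary and functions are not specified here.

    ```python
    def triangular(a, b, c):
        """Return a triangular membership function with peak at b."""
        def mu(x):
            if x <= a or x >= c:
                return 0.0
            if x <= b:
                return (x - a) / (b - a)
            return (c - x) / (c - b)
        return mu

    # Illustrative linguistic dictionary (an assumption, not the paper's).
    DICTIONARY = {
        "poor":      triangular(-0.01, 0.0, 0.5),
        "average":   triangular(0.0, 0.5, 1.0),
        "excellent": triangular(0.5, 1.0, 1.01),
    }

    def translate(term):
        """Vocabulary translation: word -> membership function."""
        return DICTIONARY[term]

    def match(mu, grid=None):
        """Vocabulary matching: membership function -> closest word,
        by squared difference of memberships over a sample grid."""
        grid = grid or [i / 20 for i in range(21)]
        def dist(term):
            nu = DICTIONARY[term]
            return sum((mu(x) - nu(x)) ** 2 for x in grid)
        return min(DICTIONARY, key=dist)
    ```

    An estimation step (the paper's fuzzy regression) would sit between `translate` and `match`, producing the membership function of the total assessment.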

    Improving the translation environment for professional translators

    When using computer-aided translation systems in a typical, professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view, as well as from a purely technological side. This paper describes the SCATE research with respect to improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments we performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project
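    Fuzzy matching against a translation memory, one of the SCATE topics above, is commonly implemented as a normalised edit-distance score over tokens; the project's actual matching metrics are richer, so the following is only a baseline sketch.

    ```python
    def edit_distance(a, b):
        """Classic Levenshtein distance via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def fuzzy_match_score(segment, tm_entry):
        """Token-level similarity in [0, 1]; 1.0 is an exact match."""
        d = edit_distance(segment.split(), tm_entry.split())
        longest = max(len(segment.split()), len(tm_entry.split()), 1)
        return 1.0 - d / longest

    def best_match(segment, memory, threshold=0.7):
        """Return the best TM entry at or above the threshold, or None."""
        scored = [(fuzzy_match_score(segment, e), e) for e in memory]
        score, entry = max(scored)
        return entry if score >= threshold else None
    ```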

    An annotation scheme and gold standard for Dutch-English word alignment

    The importance of sentence-aligned parallel corpora has been widely acknowledged. Reference corpora in which sub-sentential translational correspondences are indicated manually are more labour-intensive to create, and hence less widespread. Such manually created reference alignments - also called Gold Standards - have been used in research projects to develop or test automatic word alignment systems. In most translations, translational correspondences are rather complex; for example, word-by-word correspondences can be found only for a limited number of words. A reference corpus in which those complex translational correspondences are aligned manually is therefore also a useful resource for the development of translation tools and for translation studies. In this paper, we describe how we created a Gold Standard for the Dutch-English language pair. We present the annotation scheme, annotation guidelines, annotation tool and inter-annotator results. To cover a wide range of syntactic and stylistic phenomena that emerge from different writing and translation styles, our Gold Standard data set contains texts from different text types. The Gold Standard will be publicly available as part of the Dutch Parallel Corpus
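    One common way to represent such manual alignments is as sets of (source index, target index) links, with precision/recall/F1 between two annotators as a simple inter-annotator agreement measure. This is a generic sketch, not the paper's scheme, which also distinguishes link types; the example sentence pair is illustrative.

    ```python
    def agreement(links_a, links_b):
        """Precision, recall and F1 of annotator A's links against B's."""
        inter = len(links_a & links_b)
        precision = inter / len(links_a) if links_a else 0.0
        recall = inter / len(links_b) if links_b else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    # Dutch "ik zie hem" vs English "I see him", aligned word by word;
    # annotator B missed the last link.
    annotator_a = {(0, 0), (1, 1), (2, 2)}
    annotator_b = {(0, 0), (1, 1)}
    ```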

    ADJECTIVISH INDONESIAN VERBS: A COGNITIVE SEMANTICS PERSPECTIVE

    There has been a deeply rooted belief that parts of speech can be discretely categorized. It is something widely accepted in linguistics. There is a tendency to take such an academic belief for granted; therefore it persists without its degree of empirical truth being examined critically. Those studying linguistics will sooner or later read many linguistics textbooks stating that once a word has its own category, there is no potential for the word to have another word category. Most people learning linguistics consider this as something necessary. This linguistic phenomenon is not just taken to be true; it comes to be taken as something conclusive. Factually, there are Indonesian verbs behaving adjectivishly. They are, to some extent, verbs, yet to another extent, they are adjectives. This is evidenced by the fact that they have the properties of adjectives. These linguistic phenomena demonstrate that some Indonesian verbs have a stronger quality of verbness; that is, some Indonesian verbs are verbier than others. Based on the data found, Indonesian transitive verbs have a higher potential to behave adjectivishly than Indonesian intransitive ones. A certain kind of Indonesian transitive verb can be treated adjectivishly. This finding shows that the degree of word-category discreteness, particularly for verbs, is not clear-cut. Word categories can, to some extent, be fuzzy. The fuzzy quality can be referred to the attribution of adjective properties to Indonesian transitive verbs. It means that categorizing word class is not as simple as we thought before

    Knowledge discovery for friction stir welding via data driven approaches: Part 2 – multiobjective modelling using fuzzy rule based systems

    In this final part of this extensive study, a new systematic data-driven fuzzy modelling approach has been developed, taking into account both the modelling accuracy and its interpretability (transparency) as attributes. For the first time, a data-driven modelling framework has been proposed, designed and implemented in order to model the intricate FSW behaviours relating to AA5083 aluminium alloy, consisting of the grain size, mechanical properties, as well as internal process properties. As a result, ‘Pareto-optimal’ predictive models have been successfully elicited which, through validations on real data for the aluminium alloy AA5083, have been shown to be accurate, transparent and generic despite the limited number of data points used for model training and testing. Compared with analytically based methods, the proposed data-driven modelling approach provides a more effective way to construct prediction models for FSW when there is an apparent lack of fundamental process knowledge
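    The 'Pareto-optimal' model selection above can be sketched as a dominance filter over candidates scored on two competing objectives, here prediction error and rule count as a proxy for interpretability, both minimised. Model names and numbers are illustrative, not from the study.

    ```python
    def dominates(m1, m2):
        """m1 dominates m2 if it is no worse on both objectives and
        strictly better on at least one."""
        e1, c1 = m1["error"], m1["rules"]
        e2, c2 = m2["error"], m2["rules"]
        return e1 <= e2 and c1 <= c2 and (e1 < e2 or c1 < c2)

    def pareto_front(models):
        """Keep only models not dominated by any other candidate."""
        return [m for m in models
                if not any(dominates(o, m) for o in models if o is not m)]

    # Hypothetical candidate fuzzy rule-based models.
    candidates = [
        {"name": "A", "error": 0.08, "rules": 25},
        {"name": "B", "error": 0.12, "rules": 6},
        {"name": "C", "error": 0.15, "rules": 30},  # dominated by A
    ]
    ```

    A accepts more rules for lower error, B trades error for a compact rule base, and C is strictly worse than A, so only A and B survive the filter.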

    Temporal fuzzy association rule mining with 2-tuple linguistic representation

    This paper reports on an approach that contributes towards the problem of discovering fuzzy association rules that exhibit a temporal pattern. The novel application of the 2-tuple linguistic representation identifies fuzzy association rules in a temporal context, whilst maintaining the interpretability of linguistic terms. Iterative Rule Learning (IRL) with a Genetic Algorithm (GA) simultaneously induces rules and tunes the membership functions. The discovered rules were compared with those from a traditional method of discovering fuzzy association rules, and the results demonstrate how the traditional method can lose information because rules occur at the intersection of membership function boundaries. New information can be mined with the proposed approach, both by improving upon rules discovered with the traditional method and by discovering new rules
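    The 2-tuple linguistic representation used above encodes a numeric value on a term scale s_0..s_g as a pair (s_i, alpha) with alpha in [-0.5, 0.5), so translating back loses no information. A minimal sketch, assuming an illustrative five-term scale:

    ```python
    TERMS = ["very_low", "low", "medium", "high", "very_high"]  # s_0..s_4

    def to_two_tuple(beta):
        """Symbolic translation: beta in [0, g] -> (term, alpha)."""
        i = int(round(beta))
        return TERMS[i], beta - i

    def from_two_tuple(term, alpha):
        """Inverse translation back to the numeric value beta."""
        return TERMS.index(term) + alpha
    ```

    Because alpha records the exact offset from the nearest term, `from_two_tuple(*to_two_tuple(beta))` recovers beta exactly, which is what preserves interpretability without discarding precision.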

    A framework for the selection of the right nuclear power plant

    Civil nuclear reactors are used for the production of electrical energy. In the nuclear industry, vendors propose several nuclear reactor designs with sizes from 35–45 MWe up to 1600–1700 MWe. The choice of the right design is a multidimensional problem, since a utility has to include not only financial factors such as the levelised cost of electricity (LCOE) and the internal rate of return (IRR), but also the so-called “external factors” like the required spinning reserve, the impact on local industry and the social acceptability. Therefore it is necessary to balance the advantages and disadvantages of each design during the entire life cycle of the plant, usually 40–60 years. In the scientific literature there are several techniques for solving this multidimensional problem. Unfortunately it does not seem possible to apply these methodologies as they are, since the problem is too complex and it is difficult to provide consistent and trustworthy expert judgments. This paper fills the gap, proposing a two-step framework for choosing the best nuclear reactor at the pre-feasibility study phase. The paper shows in detail how to use the methodology, comparing the choice of a small-medium reactor (SMR) with a large reactor (LR), characterised, according to the International Atomic Energy Agency (2006), by an electrical output respectively lower and higher than 700 MWe
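    The LCOE named among the financial factors above is conventionally computed as discounted lifetime costs divided by discounted lifetime electricity output. A minimal sketch; the cash-flow series below are illustrative, not plant data.

    ```python
    def lcoe(costs, outputs_mwh, rate):
        """Levelised cost of electricity.

        costs[t]       -- total cost in year t (capital + O&M + fuel)
        outputs_mwh[t] -- electricity generated in year t, in MWh
        rate           -- annual discount rate
        """
        disc_cost = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
        disc_out = sum(o / (1 + rate) ** t
                       for t, o in enumerate(outputs_mwh))
        return disc_cost / disc_out

    # Toy two-year example: build in year 0, generate in year 1.
    example = lcoe([100.0, 10.0], [0.0, 50.0], 0.0)  # cost units per MWh
    ```

    Discounting the output as well as the costs is what makes LCOE sensitive to plant lifetime, which is why the 40–60 year horizon matters when comparing an SMR with an LR.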