Improving the efficacy of the lean index through the quantification of qualitative lean metrics
Multiple lean metrics, each representing performance for a different aspect of lean, can be consolidated into one holistic measure called the lean index, of which there are two types. This article establishes that qualitative lean indices are subjective, while quantitative ones lack scope. Techniques for quantifying qualitative lean metrics are then appraised so that the lean index becomes a hybrid of both, increasing confidence in the information derived from it. This ensures that every detail of lean within a system is quantified, allowing daily tracking of lean. The techniques are demonstrated in a print packaging manufacturing case
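A hybrid lean index of the kind the abstract describes can be sketched as a weighted aggregation in which qualitative ratings are first mapped onto the same numeric scale as the quantitative metrics. This is an illustrative sketch only: the metric names, Likert scale, and weights below are assumptions, not values from the article.

```python
# Hypothetical sketch of a hybrid lean index: qualitative metrics are
# quantified (Likert ratings mapped onto [0, 1]) and then combined with
# already-normalized quantitative metrics via a weighted average.
# All metric names and weights here are illustrative assumptions.

def quantify_likert(rating, scale_max=5):
    """Map a 1..scale_max Likert rating onto [0, 1]."""
    return (rating - 1) / (scale_max - 1)

def lean_index(quantitative, qualitative, weights):
    """Weighted average of normalized metric scores, all in [0, 1]."""
    scores = dict(quantitative)
    scores.update({k: quantify_likert(v) for k, v in qualitative.items()})
    total_weight = sum(weights.values())
    return sum(weights[k] * scores[k] for k in scores) / total_weight

quantitative = {"setup_time_norm": 0.8, "first_pass_yield": 0.95}
qualitative = {"5s_audit": 4, "visual_management": 3}  # Likert 1-5
weights = {"setup_time_norm": 0.3, "first_pass_yield": 0.3,
           "5s_audit": 0.2, "visual_management": 0.2}

print(round(lean_index(quantitative, qualitative, weights), 3))  # 0.775
```

Because every metric, qualitative or quantitative, ends up on the same [0, 1] scale, the single index can be recomputed daily as the abstract suggests.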
Replication issues in syntax-based aspect extraction for opinion mining
Reproducing experiments is an important instrument to validate previous work
and build upon existing approaches. It has been tackled numerous times in
different areas of science. In this paper, we introduce an empirical
replicability study of three well-known algorithms for syntactic centric
aspect-based opinion mining. We show that reproducing results continues to be a
difficult endeavor, mainly due to the lack of details regarding preprocessing
and parameter setting, as well as due to the absence of available
implementations that clarify these details. We consider these important
threats to the validity of research in the field, especially when compared to
other NLP problems where public datasets and code availability are critical
validity components. We conclude by encouraging code-based research, which we
believe plays a key role in helping researchers better understand the
state of the art and generate continuous advances.
Comment: Accepted in the EACL 2017 SR
Improving the translation environment for professional translators
When using computer-aided translation systems in a typical, professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view, as well as from a purely technological side.
This paper describes the SCATE research with respect to improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human-computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments we performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project
Fuzzy Logic in Clinical Practice Decision Support Systems
Computerized clinical guidelines can provide significant benefits to health outcomes and costs; however, their effective implementation presents significant problems. The vagueness and ambiguity inherent in natural (textual) clinical guidelines are not readily amenable to formulating automated alerts or advice. Fuzzy logic allows us to formalize the treatment of vagueness in a decision support architecture. This paper discusses sources of fuzziness in clinical practice guidelines. We consider how fuzzy logic can be applied and give a set of heuristics for the clinical-guideline knowledge engineer for addressing uncertainty in practice guidelines. We describe the specific applicability of fuzzy logic to the decision support behavior of Care Plan On-Line, an intranet-based chronic care planning system for General Practitioners
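The core idea of formalizing vagueness with fuzzy logic can be illustrated with a membership function: instead of a hard threshold, a vague guideline term holds to a degree between 0 and 1. The term and breakpoints below (an "elevated" blood-pressure reading between 130 and 140 mmHg) are illustrative assumptions for the sketch, not clinical advice and not taken from the paper.

```python
# Illustrative sketch: a rising (shoulder) fuzzy membership function for a
# vague guideline term such as "elevated systolic blood pressure".
# The breakpoints 130 and 140 mmHg are example values, not from the paper.

def rising_membership(x, lo, hi):
    """Membership degree rising linearly from 0 at lo to 1 at hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

# Degree to which each reading satisfies the fuzzy term "elevated":
for bp in (125, 135, 145):
    print(bp, rising_membership(bp, lo=130, hi=140))
```

A decision-support rule can then fire with graded strength (e.g. raise an alert whose priority scales with the membership degree) rather than flipping abruptly at a single cut-off, which is the behavior a crisp encoding of the textual guideline would impose.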
Translation Memory Retrieval Methods
Translation Memory (TM) systems are one of the most widely used translation
technologies. An important part of TM systems is the matching algorithm that
determines what translations get retrieved from the bank of available
translations to assist the human translator. Although detailed accounts of the
matching algorithms used in commercial systems cannot be found in the
literature, it is widely believed that edit distance algorithms are used. This
paper investigates and evaluates the use of several matching algorithms,
including the edit distance algorithm that is believed to be at the heart of
most modern commercial TM systems. This paper presents results showing how well
various matching algorithms correlate with human judgments of helpfulness
(collected via crowdsourcing with Amazon's Mechanical Turk). A new algorithm
based on weighted n-gram precision that can be adjusted for translator length
preferences consistently returns translations judged to be most helpful by
translators for multiple domains and language pairs.
Comment: 9 pages, 6 tables, 3 figures; appeared in Proceedings of the 14th
Conference of the European Chapter of the Association for Computational
Linguistics, April 201
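The two families of matching scores the abstract contrasts can be sketched side by side: a character-level edit-distance similarity, of the kind widely believed to underlie commercial TM systems, and a simple n-gram precision score in the spirit of the paper's weighted n-gram approach. This is a rough sketch under assumptions, not the paper's exact formulation; the example sentences are invented.

```python
# Sketch of two TM matching scores (assumptions, not the paper's method):
# (1) character-level edit similarity via difflib, and
# (2) the fraction of a candidate's token bigrams that also occur in the
#     query -- a bare-bones n-gram precision.

from difflib import SequenceMatcher

def edit_similarity(a, b):
    """Character-level similarity in [0, 1] (difflib's ratio)."""
    return SequenceMatcher(None, a, b).ratio()

def ngram_precision(candidate, query, n=2):
    """Fraction of the candidate's n-grams that also occur in the query."""
    def ngrams(s):
        toks = s.split()
        return [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    cand = ngrams(candidate)
    if not cand:
        return 0.0
    query_set = set(ngrams(query))
    return sum(g in query_set for g in cand) / len(cand)

query = "press the red button to start the machine"
tm_entry = "press the red button to stop the machine"
print(round(edit_similarity(query, tm_entry), 2))
print(round(ngram_precision(tm_entry, query), 2))
```

Both scores rank this TM entry as a close match, but they penalize the differing segment differently; a weighted n-gram variant would additionally let the translator tune how strongly candidate length affects the score.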