4,629 research outputs found

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As far as possible, an author-supplied abstract, a number of keywords and a classification are provided for each reference. In some cases our own comments are added; their purpose is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily country of publication) and the language of the document. After a description of the scope of the survey, the classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstracts, classifications and comments.
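    For readers unfamiliar with the technique, a decision table simply maps each combination of condition values to an action. The following is a minimal illustrative sketch in Python; the order-approval policy is invented for illustration and is not taken from the report.

    # Minimal sketch of a decision table: each rule (a column of a classic
    # decision table) maps one full combination of condition values to an action.
    from typing import NamedTuple

    class Case(NamedTuple):
        credit_ok: bool   # condition 1
        in_stock: bool    # condition 2

    DECISION_TABLE = {
        Case(True,  True):  "approve",
        Case(True,  False): "backorder",
        Case(False, True):  "reject",
        Case(False, False): "reject",
    }

    def decide(case: Case) -> str:
        return DECISION_TABLE[case]

    if __name__ == "__main__":
        print(decide(Case(credit_ok=True, in_stock=False)))  # -> "backorder"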

    Recognition of Japanese handwritten characters with machine learning techniques

    The recognition of Japanese handwritten characters has long been a challenge for researchers. The large number of classes, their graphic complexity, and the coexistence of three different writing systems make this problem particularly difficult compared with Western scripts. For decades, attempts have been made to address the problem with traditional OCR (Optical Character Recognition) techniques, with mixed results. With the recent popularization of machine learning techniques based on neural networks, this research has been revitalized, bringing new approaches whose performance is comparable to human recognition. Furthermore, these techniques have enabled collaboration with very different disciplines, such as the Humanities or East Asian studies, leading to advances that would not have been possible without this interdisciplinary work. In this thesis, these techniques are studied to a level of understanding sufficient to carry out our own experiments, training neural network models on public datasets of Japanese characters. However, the scarcity of public datasets makes the researchers' task remarkably difficult. Our proposal to mitigate this problem is a web application that allows researchers to easily collect samples of Japanese characters through the collaboration of any user. Once the application is fully operational, the samples collected up to that point will be used to create a new dataset in a specific format. Finally, the new data can be used to run comparative experiments against the previous neural network models.
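    The abstract does not name the datasets or architectures used. As a hedged illustration of the kind of experiment it describes, a small convolutional classifier can be trained on Kuzushiji-MNIST, a public Japanese handwritten-character dataset shipped with torchvision; the sketch below assumes PyTorch and is not the thesis's actual setup.

    # Minimal sketch: small CNN trained on the public KMNIST (Kuzushiji-MNIST) dataset.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    transform = transforms.ToTensor()
    train_set = datasets.KMNIST("data", train=True, download=True, transform=transform)
    test_set = datasets.KMNIST("data", train=False, download=True, transform=transform)

    model = nn.Sequential(                       # small CNN for 28x28 grayscale characters
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
        nn.Linear(128, 10),                      # KMNIST has 10 character classes
    )

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(3):
        for images, labels in DataLoader(train_set, batch_size=128, shuffle=True):
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

    model.eval()
    with torch.no_grad():
        correct = sum(
            (model(x).argmax(1) == y).sum().item()
            for x, y in DataLoader(test_set, batch_size=256)
        )
    print(f"test accuracy: {correct / len(test_set):.3f}")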

    μŒμ„±μ–Έμ–΄ μ΄ν•΄μ—μ„œμ˜ μ€‘μ˜μ„± ν•΄μ†Œ

    Thesis (Ph.D.) -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, August 2022. Advisor: Nam Soo Kim.
    Ambiguity in language is inevitable. It arises because, although language is a means of communication, a concept one person has in mind can never be conveyed to another in a perfectly identical form. Although it is unavoidable, ambiguity in language understanding often leads to the breakdown or failure of communication. Language ambiguity exists at various levels, but not every ambiguity needs to be resolved. Each task and domain exhibits its own kind of ambiguity, and it is crucial to identify the ambiguity that can be well defined and resolved, and then to draw the boundary between the ambiguous readings. In this dissertation, we investigate the types of ambiguity that appear in spoken language processing, especially in intention understanding, and conduct research to define and resolve them. Although the phenomenon occurs in various languages, its degree and character depend on the language investigated. We focus on cases where the ambiguity comes from the gap between the amount of information carried by spoken language and by text. Specifically, we study Korean, in which sentence form and intention often change with prosody. In Korean, the same text can be read with multiple intentions owing to multi-functional sentence enders, frequent pro-drop, wh-intervention, and so on. Since such utterances can be problematic for intention understanding, we first define this type of ambiguity and construct a corpus that helps detect ambiguous sentences. In constructing the corpus, we consider the directivity and rhetoricalness of each sentence; these form the criteria for classifying the intention of a spoken utterance as a statement, question, command, rhetorical question, or rhetorical command. Using a corpus of transcribed spoken language annotated with sufficiently high inter-annotator agreement (kappa = 0.85), we show that language models trained on colloquial corpora are effective in classifying ambiguous text when only textual data is given, and we qualitatively analyze the characteristics of the task. We do not handle ambiguity at the text level alone. To find out whether actual disambiguation is possible given a speech input, we design an artificial spoken language corpus composed only of ambiguous sentences and resolve the ambiguity with various attention-based neural network architectures. In this process, we observe that disambiguation is most effective when the textual and acoustic inputs co-attend to each other's features, especially when the audio processing module conveys attention information to the text module in a multi-hop manner. Finally, assuming that the ambiguity of intention understanding has been resolved by the proposed strategies, we present a brief roadmap of how the results can be utilized in industry and research.
    By integrating the text-based ambiguity detection module and the speech-based intention understanding module, we can build a system that handles ambiguity efficiently while reducing error propagation. Such a system can be combined with a dialogue manager to form a task-oriented dialogue system capable of chit-chat, or it can be used to reduce errors in multilingual settings such as speech translation, beyond merely monolingual conditions. Throughout the dissertation, we aim to show that ambiguity resolution for intention understanding in a prosody-sensitive language is achievable and can be exploited in industry and research. We hope that this study helps tackle chronic ambiguity issues in other languages and domains, linking linguistic science and engineering approaches, and to that end we share the resources, results, and code used in this research.
    Contents:
    1 Introduction: 1.1 Motivation; 1.2 Research Goal; 1.3 Outline of the Dissertation
    2 Related Work: 2.1 Spoken Language Understanding; 2.2 Speech Act and Intention (2.2.1 Performatives and statements; 2.2.2 Illocutionary act and speech act; 2.2.3 Formal semantic approaches); 2.3 Ambiguity of Intention Understanding in Korean (2.3.1 Ambiguities in language; 2.3.2 Speech act and intention understanding in Korean)
    3 Ambiguity in Intention Understanding of Spoken Language: 3.1 Intention Understanding and Ambiguity; 3.2 Annotation Protocol (3.2.1 Fragments; 3.2.2 Clear-cut cases; 3.2.3 Intonation-dependent utterances); 3.3 Data Construction (3.3.1 Source scripts; 3.3.2 Agreement; 3.3.3 Augmentation; 3.3.4 Train split); 3.4 Experiments and Results (3.4.1 Models; 3.4.2 Implementation; 3.4.3 Results); 3.5 Findings and Summary (3.5.1 Findings; 3.5.2 Summary)
    4 Disambiguation of Speech Intention: 4.1 Ambiguity Resolution (4.1.1 Prosody and syntax; 4.1.2 Disambiguation with prosody; 4.1.3 Approaches in SLU); 4.2 Dataset Construction (4.2.1 Script generation; 4.2.2 Label tagging; 4.2.3 Recording); 4.3 Experiments and Results (4.3.1 Models; 4.3.2 Results); 4.4 Summary
    5 System Integration and Application: 5.1 System Integration for Intention Identification (5.1.1 Proof of concept; 5.1.2 Preliminary study); 5.2 Application to Spoken Dialogue System (5.2.1 What is 'Free-running'; 5.2.2 Omakase chatbot); 5.3 Beyond Monolingual Approaches (5.3.1 Spoken language translation; 5.3.2 Dataset; 5.3.3 Analysis; 5.3.4 Discussion); 5.4 Summary
    6 Conclusion and Future Work
    Bibliography; Abstract (In Korean); Acknowledgment
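    The abstract only outlines the architecture. The following is a hedged sketch of an audio-text co-attention intention classifier in the spirit it describes; the module choices, dimensions, and hyperparameters are assumptions for illustration, not the author's implementation.

    # Hedged sketch of a text-audio co-attention intention classifier (assumed
    # shapes and modules; not the dissertation's actual architecture).
    import torch
    import torch.nn as nn

    class CoAttentionIntentClassifier(nn.Module):
        def __init__(self, vocab_size=8000, n_mels=80, d=128, n_classes=5, hops=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d)
            self.text_enc = nn.GRU(d, d, batch_first=True, bidirectional=True)
            self.audio_enc = nn.GRU(n_mels, d, batch_first=True, bidirectional=True)
            # audio->text attention applied repeatedly ("multi-hop")
            self.hops = nn.ModuleList(
                [nn.MultiheadAttention(2 * d, num_heads=4, batch_first=True) for _ in range(hops)]
            )
            self.classifier = nn.Linear(2 * d, n_classes)  # statement / question / command /
                                                           # rhetorical question / rhetorical command

        def forward(self, tokens, mels):
            text, _ = self.text_enc(self.embed(tokens))       # (B, T_text, 2d)
            audio, _ = self.audio_enc(mels)                   # (B, T_audio, 2d)
            for attn in self.hops:
                # the acoustic stream attends to the text stream and its summary
                # is folded back into the text representation on each hop
                ctx, _ = attn(query=audio, key=text, value=text)
                text = text + ctx.mean(dim=1, keepdim=True)   # broadcast acoustic context
            return self.classifier(text.mean(dim=1))          # (B, n_classes)

    model = CoAttentionIntentClassifier()
    logits = model(torch.randint(0, 8000, (2, 12)), torch.randn(2, 200, 80))
    print(logits.shape)  # torch.Size([2, 5])

    Here the repeated audio-to-text attention is a loose stand-in for the multi-hop transfer of attention information from the audio module to the text module that the abstract mentions.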

    Proceedings of the Fifth Italian Conference on Computational Linguistics CLiC-it 2018 : 10-12 December 2018, Torino

    On behalf of the Program Committee, a very warm welcome to the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018). This edition of the conference is held in Torino. The conference is locally organised by the University of Torino and hosted in its prestigious main lecture hall "Cavallerizza Reale". The CLiC-it conference series is an initiative of the Italian Association for Computational Linguistics (AILC) which, after five years of activity, has clearly established itself as the premier national forum for research and development in the fields of Computational Linguistics and Natural Language Processing, where leading researchers and practitioners from academia and industry meet to share their research results, experiences, and challenges.

    Advanced document data extraction techniques to improve supply chain performance

    In this thesis, a novel machine learning technique for extracting text-based information from scanned images has been developed. The extraction is performed in the context of scanned invoices and bills used in financial transactions. These documents contain a considerable amount of data that must be extracted, refined, and stored digitally before it can be used for analysis, and converting this data into a digital format is often a time-consuming process. Automation and data optimisation show promise as methods for reducing the time required and the cost of Supply Chain Management (SCM) processes, especially Supplier Invoice Management (SIM), Financial Supply Chain Management (FSCM) and Supply Chain procurement processes. This thesis uses a cross-disciplinary approach involving Computer Science and Operational Management to explore the benefit of automated invoice data extraction in business and its impact on SCM. The study adopts a multimethod approach based on empirical research, surveys, and interviews performed on selected companies.
    The expert system developed in this thesis focuses on two distinct areas of research: text/object detection and text extraction. For text/object detection, the Faster R-CNN model was analysed. While this model yields outstanding results in object detection, its performance degrades when image quality is low. The Generative Adversarial Network (GAN) model is proposed in response to this limitation: its generator network is implemented with the help of the Faster R-CNN model, and its discriminator relies on PatchGAN. The output of the GAN model is text data with bounding boxes. For text extraction from the bounding boxes, a novel data extraction framework was designed, consisting of XML processing (where an existing OCR engine is used), bounding-box pre-processing, text clean-up, OCR error correction, spell checking, type checking, pattern-based matching and, finally, a learning mechanism for automating future data extraction. Whichever fields the system extracts successfully are provided in key-value format.
    The efficiency of the proposed system was validated using existing datasets such as SROIE and VATI. Real-time data was validated using invoices collected by two companies that provide invoice automation services in various countries. Currently, these scanned invoices are sent to an OCR system such as OmniPage, Tesseract, or ABBYY FRE to extract text blocks, and a rule-based engine is then used to extract the relevant data. While this methodology is robust, the companies surveyed were not satisfied with its accuracy and sought new, optimised solutions. To confirm the results, the engines were used to return XML-based files with the identified text and metadata, and this XML output was then fed into the new system for information extraction. The system uses both the existing OCR engine and a novel, self-adaptive, learning-based OCR engine based on the GAN model for better text identification. Experiments were conducted on various invoice formats to further test and refine its extraction capabilities. For cost optimisation and the analysis of spend classification, additional data were provided by another company in London with expertise in reducing its clients' procurement costs. This data was fed into the system to obtain a deeper level of spend classification and categorisation.
    This helped the company to reduce its reliance on human effort and allowed for greater efficiency compared with performing similar tasks manually using Excel sheets and Business Intelligence (BI) tools. The intention behind the development of this methodology was twofold: first, to develop and test a novel solution that does not depend on any specific OCR technology; and second, to increase information extraction accuracy over that of existing methodologies. Finally, the thesis evaluates the real-world need for the system and the impact it would have on SCM. The newly developed method is generic and can extract text from any given invoice, making it a valuable tool for optimising SCM. In addition, the system uses a template-matching approach to ensure the quality of the extracted information.
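    The abstract does not list the actual rules used in the extraction framework. As a hedged illustration of the pattern-based matching step that returns successfully extracted fields in key-value format, here is a minimal sketch with hypothetical field names and patterns.

    # Hedged sketch of pattern-based key-value extraction from OCR text output
    # (hypothetical field patterns; not the thesis's actual rule set).
    import re

    FIELD_PATTERNS = {
        "invoice_number": re.compile(r"invoice\s*(?:no|number|#)\s*[:.]?\s*([A-Z0-9-]+)", re.I),
        "invoice_date":   re.compile(r"date\s*[:.]?\s*(\d{1,2}[/.-]\d{1,2}[/.-]\d{2,4})", re.I),
        "total_amount":   re.compile(r"total\s*(?:due|amount)?\s*[:.]?\s*([$£€]?\s*[\d,]+\.\d{2})", re.I),
    }

    def extract_fields(ocr_text: str) -> dict:
        """Return whichever fields match as a key-value dict; missing fields are simply absent."""
        fields = {}
        for name, pattern in FIELD_PATTERNS.items():
            match = pattern.search(ocr_text)
            if match:
                fields[name] = match.group(1).strip()
        return fields

    sample = "ACME Ltd\nInvoice No: INV-2041\nDate: 12/03/2021\nTotal due: $1,284.50"
    print(extract_fields(sample))
    # {'invoice_number': 'INV-2041', 'invoice_date': '12/03/2021', 'total_amount': '$1,284.50'}

    In the full framework described above, such a step would sit downstream of OCR and bounding-box pre-processing and upstream of type checking and the learning mechanism.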