23 research outputs found

    Taguchi based Design of Sequential Convolution Neural Network for Classification of Defective Fasteners

    Fasteners play a critical role in securing various parts of machinery. Deformations such as dents, cracks, and scratches on the surface of fasteners are caused by material properties and incorrect handling of equipment during production processes. As a result, quality control is required to ensure safe and reliable operations. The existing defect inspection method relies on manual examination, which consumes a significant amount of time, money, and other resources, and its accuracy cannot be guaranteed because of human error. Automatic defect detection systems have proven more effective than manual inspection for defect analysis. However, computational techniques such as convolutional neural networks (CNNs) and other deep learning approaches are still evolving, and the full potential of a CNN is realised only by carefully selecting its design parameter values. In this study, an attempt has been made to develop a robust automatic inspection system using Taguchi-based design of experiments and analysis. The dataset used to train the system was created manually for M14 nuts and has two labeled classes, Defective and Non-defective, with a total of 264 images. The proposed sequential CNN achieves 96.3% validation accuracy and 0.277 validation loss at a learning rate of 0.001. Comment: 13 pages, 6 figures.
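    The abstract reports only the task (binary Defective/Non-defective classification of M14 nut images) and the selected learning rate of 0.001; the layer configuration chosen through the Taguchi analysis is not given. A minimal sketch of what such a sequential CNN could look like, with assumed filter counts, kernel sizes, and input resolution, is shown below in Python/Keras.

# Illustrative sketch only: the abstract gives the task (binary
# Defective/Non-defective classification of M14 nut images) and the selected
# learning rate of 0.001, but not the layer configuration chosen through the
# Taguchi analysis. Filter counts, kernel sizes, and input size below are
# assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sequential_cnn(input_shape=(128, 128, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # Defective vs. Non-defective
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

    In a Taguchi design of experiments, factors such as the number of filters, kernel size, and learning rate would be varied over an orthogonal array of such configurations, and the combination giving the best validation accuracy (reported here as 96.3%) would be retained.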

    A literature review of Artificial Intelligence applications in railway systems

    Nowadays it is widely accepted that Artificial Intelligence (AI) is significantly influencing a large number of domains, including railways. In this paper, we present a systematic literature review of the current state of the art of AI in railway transport. In particular, we analysed and discussed papers from a holistic railway perspective, covering sub-domains such as maintenance and inspection, planning and management, safety and security, autonomous driving and control, revenue management, transport policy, and passenger mobility. This review makes an initial step towards shaping the role of AI in future railways and provides a summary of the current focuses of AI research connected to rail transport. We reviewed 139 scientific papers covering the period from 2010 to December 2020. We found that the major research efforts have been devoted to AI for rail maintenance and inspection, while very little or no research was found on AI for rail transport policy and revenue management. The remaining sub-domains received mild to moderate attention. AI applications are promising and could act as a game-changer in tackling multiple railway challenges. However, at the moment, AI research in railways is still mostly at an early stage. Future research can be expected to develop advanced combined AI applications (e.g. with optimization), to use AI in decision making, to deal with uncertainty, and to tackle newly arising cybersecurity challenges.

    Automatic Multi-Label Image Classification Model for Construction Site Images

    Thesis (Master's)--Seoul National University Graduate School: Department of Architecture, College of Engineering, August 2019. Advisor: ๋ฐ•๋ฌธ์„œ. With recent advances in image analysis technology, various attempts have been made to manage construction projects using photographs collected on site. In particular, as imaging equipment has improved, the number of photographs produced on construction sites has surged, further raising their potential value. However, most of these photographs are stored without proper classification, which makes it very difficult to extract the project information needed from them. Current practice, in which users review and classify each photograph manually, requires substantial time and effort, and conventional image analysis techniques that rely on hand-crafted features are limited in their ability to learn the characteristics of complex construction site photographs in a general way. To cope with the highly varied and dynamically changing appearance of such photographs, this study applies a deep convolutional neural network (CNN), which has shown high performance in image classification, to develop a model that automatically assigns appropriate keywords to individual construction site photographs. Because deeper CNN architectures can effectively learn high-dimensional invariant features, they are well suited to the complex problem of classifying construction site photographs. Accordingly, this study developed a CNN-based model that automatically assigns suitable keywords to each photograph so that the photographs needed on site can be retrieved quickly and accurately. In particular, because most construction site photographs are associated with more than one label, a multi-label classification model was adopted. The primary aim is to extract diverse project-related information from construction photographs and so improve their usability, and ultimately to support efficient construction management through photographic data. The research proceeded as follows. To train the model, photographs covering six trades were collected from actual construction sites and open-source search engines, and a dataset with ten labels, including sub-categories, was constructed for training. Representative CNN architectures were then compared, and ResNet-18, which showed the best performance, was selected as the final model. In the experiments, the model achieved an average accuracy of 91%, confirming the feasibility of automatically classifying construction site photographs. This study is significant in that it confirms that CNNs, which have recently performed well in image analysis in other fields, can be used to classify construction site photographs automatically, and in that it is the first study to apply multi-label classification to this problem. In practice, automatic photograph classification is expected to reduce the burden of manual sorting and to increase the usability of construction site photographs. However, because this study does not consider correlations or dependencies among labels, future work aims to improve accuracy by additionally training the model on hierarchical relations among the labels, and to extend the training labels to lower-level keywords so that more diverse information can be obtained from site photographs.
    Activity recognition in construction is a prerequisite step for various tasks and is thus critical for successful project management. In the last several years, the computer vision community has blossomed, taking advantage of the exploding number of construction images and deploying visual analytics for cumbersome construction tasks. However, the current annotation practice itself, which is a critical preliminary step for prompt image retrieval and image understanding, remains both time-consuming and labor-intensive. Because previous attempts to make the process more efficient could not handle the dynamic nature of construction images and showed limited performance in classifying construction activities, this research aims to develop a model that is robust both to a wide range of appearances and to the multi-composition of construction activity images. The proposed model adopts a deep convolutional neural network to learn high-dimensional features with less human engineering and to annotate multiple semantic labels in the images. The results showed that the model was capable of distinguishing different trades of activity at different stages. An average accuracy of 83% and a maximum accuracy of 91% hold promise for an actual implementation of automated activity recognition in construction operations. Ultimately, it demonstrates a potential method for providing an automated and reliable procedure for monitoring construction activity.

    3D Localization of Defects in Facility Inspections

    Wind tunnels are crucial facilities that support the aerospace industry. However, these facilities are large, complex, and pose unique maintenance and inspection requirements. Manual inspections to identify defects such as cracks, missing fasteners, leaks, and foreign objects are important but labor- and schedule-intensive. The goal of this thesis is to use small Unmanned Aircraft Systems with onboard cameras and computer vision-based analysis to automate the inspection of the interior and exterior of NASA's critical wind tunnel facilities. Missing fasteners are detected as the defect class, and existing fasteners are detected to provide potential future missing-fastener sites for preventative maintenance. These detections are performed both in 2D on the images and in 3D space to provide a visual reference and a real-world location to facilitate repairs. A dataset was created consisting of images taken along a grid-like pattern of an interior tunnel section in the AEDC National Full-Scale Aerodynamics Complex at NASA Ames Research Center. To localize the defects, object detection was used to create image-level bounding boxes of the fasteners and missing fasteners, and photogrammetry was used to create a correspondence between 3D real-world locations and 2D image locations. The image-level bounding boxes and the 2D-to-3D correspondences are then combined to determine the 3D locations of the defects. On the test data, the method successfully localized all of the objects in 3D space with no false positives.
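    The abstract describes the combination step only at a high level: 2D bounding boxes from object detection are merged with the 2D-to-3D correspondences recovered by photogrammetry to obtain 3D defect locations. One standard way to realise such a step, shown below purely as an illustrative sketch rather than the thesis code, is linear (DLT) triangulation of a detection's box centre across the multiple calibrated views produced by the photogrammetric reconstruction.

# Illustrative sketch only, not the thesis code: a standard linear (DLT)
# triangulation that turns a defect's 2D detections in several calibrated
# images into a single 3D point, given the camera projection matrices
# recovered by the photogrammetric reconstruction.
import numpy as np

def triangulate_point(projections, points_2d):
    """projections: list of 3x4 numpy camera projection matrices P = K [R | t].
    points_2d: matching list of (u, v) pixel coordinates, e.g. the centre of
    the detected bounding box in each image. Returns the least-squares 3D
    point in world coordinates."""
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        # Each view contributes two linear constraints on the homogeneous X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Solve A X = 0 subject to ||X|| = 1 via SVD; the solution is the last
    # right singular vector.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

    With the camera poses and intrinsics produced by the photogrammetry step, each image in which the same fastener or missing-fastener site is detected contributes one projection matrix and one pixel location, and the least-squares intersection of the corresponding viewing rays gives the defect's position in the facility's coordinate frame.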