
    Automated Teeth Extraction and Dental Caries Detection in Panoramic X-ray

    Dental caries is one of the most common chronic diseases, affecting the majority of people at least once during their lifetime. This costly disease accounts for 5-10% of the healthcare budget in developing countries. Caries lesions appear as the result of dental biofilm metabolic activity, caused by bacteria (most prominently Streptococcus mutans) feeding on uncleaned sugars and starches in the oral cavity. Also known as tooth decay, caries is primarily diagnosed by general dentists based solely on clinical assessment. Since in many cases dental problems cannot be detected by simple observation, dental x-ray imaging has become a standard tool for domain experts, i.e. dentists and radiologists, to distinguish dental diseases such as proximal caries. Among the different dental radiography methods, panoramic or orthopantomogram (OPG) imaging is commonly performed as the initial step of an assessment. OPG images are captured with a small dose of radiation and can depict the entire patient dentition in a single image. Dental caries can sometimes be hard for general dentists to identify when they rely only on visual inspection of dental radiographs: tooth decay can easily be misinterpreted as shadows for various reasons, such as low image quality. Moreover, OPG images have poor quality, and structures are not presented with strong edges due to low contrast, uneven exposure, etc. Thus, disease detection using panoramic radiography is a very challenging task. With the recent development of Artificial Intelligence (AI) in dentistry, and with the introduction of Convolutional Neural Networks (CNNs) for image classification, developing medical decision support systems has become a topic of interest in both academia and industry. More accurate CNN-based decision support systems can enhance dentists' diagnostic performance, resulting in improved dental care for patients. In this thesis, the first automated teeth extraction system for panoramic images using evolutionary algorithms is proposed. In contrast to intraoral radiography methods, panoramic images are captured with the x-ray film outside the patient's mouth. Therefore, panoramic x-rays contain regions outside the jaw, which makes teeth segmentation extremely difficult. Since an image of each tooth is needed separately to build a caries detection model, segmentation of teeth from the OPG image is essential, yet the absence of significant pixel intensity differences between regions in OPG radiographs makes it hard to implement. Consequently, an automated system is introduced that takes an OPG as input and gives images of single teeth as output. Since only a few studies have addressed a similar task for panoramic radiography, there is room for improvement. A genetic algorithm is applied along with different image processing methods to perform teeth extraction through jaw extraction, jaw separation, and teeth-gap valley detection, respectively. The proposed system is compared to the state of the art in teeth extraction on other image types. After the teeth are segmented from each image, a model based on various untrained and pretrained CNN architectures is proposed to detect dental caries for each tooth. An autoencoder-based model, along with well-known CNN architectures, is used for feature extraction, followed by capsule networks to perform classification.
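The teeth-gap valley detection step lends itself to a brief illustration. The sketch below, in Python, is a minimal assumed version of the idea: sum pixel intensities column by column over a cropped jaw strip and treat local minima of that projection as candidate gaps between neighbouring teeth. The function name, smoothing window, and minimum spacing are illustrative, and the genetic-algorithm refinement used in the thesis is not shown.

import numpy as np
from scipy.signal import find_peaks

def gap_valley_columns(jaw_strip, min_separation=40):
    """Locate candidate gaps between adjacent teeth in a horizontal jaw strip.

    jaw_strip: 2-D grayscale array (rows x cols) cropped around one dental arch.
    Returns column indices of intensity-projection valleys, which tend to fall
    in the darker gaps between neighbouring teeth.
    """
    # Vertical projection: total intensity of each image column.
    projection = jaw_strip.astype(np.float64).sum(axis=0)
    # Smooth the projection to suppress noise before valley detection.
    kernel = np.ones(15) / 15.0
    smoothed = np.convolve(projection, kernel, mode="same")
    # Valleys of the projection are peaks of its negation.
    valleys, _ = find_peaks(-smoothed, distance=min_separation)
    return valleys

# Usage (hypothetical file name): split the strip at the detected valleys.
# strip = cv2.imread("jaw_strip.png", cv2.IMREAD_GRAYSCALE)
# teeth = np.split(strip, gap_valley_columns(strip), axis=1)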
The dataset of panoramic x-rays was prepared by the authors, with labels provided by an expert radiologist. The proposed model demonstrated an acceptable detection rate of 86.05% and an increase in caries detection speed. Considering the challenges of performing such a task on low-quality OPG images, this work is a step towards a fully automated, efficient caries detection model to assist domain experts.
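To make the feature-extraction stage concrete, the following sketch uses a frozen pretrained ResNet-18 from torchvision as one possible CNN backbone for single-tooth crops. The choice of architecture, input size, and file name are assumptions for illustration; the thesis compares several untrained and pretrained architectures and feeds the resulting features to a capsule-network classifier, which is not shown here.

import torch
from torchvision import models, transforms

# Frozen pretrained CNN used purely as a feature extractor for tooth crops.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the ImageNet classification head
backbone.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # OPG crops are grayscale
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# tooth = PIL.Image.open("tooth_crop.png")               # hypothetical crop
# with torch.no_grad():
#     features = backbone(preprocess(tooth).unsqueeze(0))  # shape (1, 512)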

    Classification of dental x-ray images

    Forensic dentistry is concerned with identifying people based on their dental records. Forensic specialists have a large number of cases to investigate, and it has therefore become important to automate forensic identification systems. Radiographs acquired after a person is deceased are called post-mortem (PM) radiographs, and radiographs acquired while the person is alive are called ante-mortem (AM) radiographs. Dental biometrics automatically analyzes dental radiographs to identify deceased individuals. While ante-mortem identification is usually possible through comparison of many biometric identifiers, post-mortem identification is impossible using behavioral biometrics (e.g. speech, gait). Moreover, under severe circumstances, such as those encountered in mass disasters (e.g. airplane crashes and natural disasters such as tsunamis), most physiological biometrics may not be usable for identification because the soft tissues of the body decay to unidentifiable states. Under such circumstances, the best candidates for post-mortem biometric identification are dental features, because of their survivability and diversity. In my work, I present two techniques to classify periapical images as maxilla (upper jaw) or mandible (lower jaw) images, and a third technique to classify dental bitewing images as horizontally flipped/rotated or un-flipped/un-rotated. In the first technique, I present an algorithm that classifies whether a given dental periapical image shows the maxilla or the mandible using texture analysis of the jaw bone. While the bone analysis method is manual, in the second technique I propose an automated approach for identifying dental periapical images using a crown curve detection algorithm. The third proposed algorithm works in an automated manner on a large database of dental bitewing images, classifying each bitewing image as horizontally flipped or un-flipped in a time-efficient manner.
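The jaw-bone texture analysis mentioned above can be pictured with gray-level co-occurrence matrix (GLCM) statistics, a common choice for trabecular-bone texture. The sketch below is only an assumed illustration of that kind of feature; the thesis does not specify the exact descriptors, and the scikit-image function names follow the spelling used since version 0.19.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def bone_texture_features(roi):
    """Summarise trabecular-bone texture in a periapical ROI with GLCM statistics.

    roi: 2-D uint8 grayscale patch taken from the jaw-bone region of the image.
    Returns a small feature vector (contrast, homogeneity, energy, correlation)
    that a conventional classifier could use to separate maxilla from mandible.
    """
    glcm = graycomatrix(
        roi,
        distances=[1, 2, 4],                              # pixel offsets
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],  # 0, 45, 90, 135 degrees
        levels=256,
        symmetric=True,
        normed=True,
    )
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])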

    ํŒŒ๋…ธ๋ผ๋งˆ๋ฐฉ์‚ฌ์„ ์˜์ƒ์—์„œ ๋”ฅ๋Ÿฌ๋‹ ์‹ ๊ฒฝ๋ง์„ ์ด์šฉํ•œ ์น˜์„ฑ ๋‚ญ๊ณผ ์ข…์–‘์˜ ์ž๋™ ์ง„๋‹จ ๋ฐฉ๋ฒ•

    Doctoral thesis, Seoul National University Graduate School, School of Dentistry (Department of Dentistry), February 2021. Advisor: ์ด์›์ง„.
Objective: The purpose of this study was to automatically diagnose odontogenic cysts and tumors of the jaw on panoramic radiographs using a deep convolutional neural network. A novel deep convolutional neural network framework with data augmentation was proposed for detection and classification of multiple diseases. Methods: A deep convolutional neural network modified from YOLOv3 was developed for detecting and classifying odontogenic cysts and tumors of the jaw. Our dataset of 1,282 panoramic radiographs comprised 350 dentigerous cysts, 302 periapical cysts, 300 odontogenic keratocysts, 230 ameloblastomas, and 100 normal jaws with no disease. In addition, the number of radiographs was augmented 12-fold by flip, rotation, and intensity changes. An intersection-over-union (IoU) threshold of 0.5 was used to assess detection and classification performance. The classification performance of the developed convolutional neural network was evaluated by calculating sensitivity, specificity, accuracy, and AUC (area under the ROC curve) for the diseases of the jaw. Results: The overall classification performance for the diseases improved from 78.2% sensitivity, 93.9% specificity, 91.3% accuracy, and 0.86 AUC with the unaugmented dataset to 88.9% sensitivity, 97.2% specificity, 95.6% accuracy, and 0.94 AUC with the augmented dataset. The convolutional neural network trained on the augmented dataset had the following sensitivities, specificities, accuracies, and AUCs: 91.4%, 99.2%, 97.8%, and 0.96 for dentigerous cysts; 82.8%, 99.2%, 96.2%, and 0.92 for periapical cysts; 98.4%, 92.3%, 94.0%, and 0.97 for odontogenic keratocysts; 71.7%, 100%, 94.3%, and 0.86 for ameloblastomas; and 100.0%, 95.1%, 96.0%, and 0.94 for normal jaws, respectively. Conclusion: A novel convolutional neural network framework was developed for automatically diagnosing odontogenic cysts and tumors of the jaw on panoramic radiographs using data augmentation. The proposed convolutional neural network model showed high sensitivity, specificity, accuracy, and AUC despite the limited number of panoramic images involved.
1. Objective: Cysts and tumors arising in the oral and maxillofacial region are sometimes not detected early, so appropriate treatment is delayed or not provided. To address this problem, computer-aided diagnosis using a deep convolutional neural network, a machine-learning technique based on artificial neural networks, can provide more accurate and faster results. In this study, a deep neural network was therefore developed to automatically detect and diagnose, on panoramic radiographs, four diseases that frequently occur in the oral and maxillofacial region (dentigerous cyst, periapical cyst, odontogenic keratocyst, and ameloblastoma), and its accuracy was evaluated. 2. Methods: A deep neural network based on YOLOv3 was built to detect and diagnose odontogenic cysts and tumors of the jaw on panoramic radiographs. A total of 1,182 panoramic radiographs, acquired from patients with histopathologically confirmed diagnoses at Seoul National University Dental Hospital between 1999 and 2017, were analyzed: 350 dentigerous cysts, 302 periapical cysts, 300 odontogenic keratocysts, and 230 ameloblastomas. In addition, 100 normal panoramic radiographs without disease were selected as a control group. The panoramic radiograph data were augmented 12-fold by gamma correction, rotation, and flipping. Of these data, 60% were used as the training set, 20% as the validation set, and 20% as the test set. The developed deep neural network was evaluated using 5-fold cross-validation, and its performance was measured with accuracy, sensitivity, specificity, and the AUC (area under the curve) from ROC analysis. 3. Results: Without data augmentation, the developed network showed 78.2% sensitivity, 93.9% specificity, 91.3% accuracy, and 0.86 AUC; with data augmentation it showed improved performance of 88.9% sensitivity, 97.2% specificity, 95.6% accuracy, and 0.94 AUC. Dentigerous cysts showed 91.4% sensitivity, 99.2% specificity, 97.8% accuracy, and 0.96 AUC; periapical cysts 82.8%, 99.2%, 96.2%, and 0.92; odontogenic keratocysts 98.4%, 92.3%, 94.0%, and 0.97; ameloblastomas 71.7%, 100%, 94.3%, and 0.86; and normal jaws 100%, 95.1%, 96.0%, and 0.97, respectively. 4. Conclusion: A deep neural network was developed to automatically detect and diagnose odontogenic cysts and tumors on panoramic radiographs. Despite the limited number of panoramic radiographs, the data augmentation technique yielded excellent sensitivity, specificity, and accuracy. The developed system is useful for diagnosing these diseases early and treating patients in a timely manner.
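The 12-fold augmentation by flip, rotation, and intensity change can be sketched as follows. The parameter grid (three gamma values, two horizontal-flip states, two rotation angles, giving 3 x 2 x 2 = 12 variants) is an assumption chosen only to illustrate how a factor of 12 might arise; the thesis does not list the exact values.

import cv2
import numpy as np

def augment_12_fold(image):
    """Return 12 augmented copies of a uint8 grayscale panoramic radiograph,
    combining gamma (intensity) changes, horizontal flips, and small rotations.
    The parameter grid is illustrative, not taken from the thesis."""
    h, w = image.shape[:2]
    variants = []
    for gamma in (0.8, 1.0, 1.25):  # intensity change via gamma correction
        lut = np.clip(((np.arange(256) / 255.0) ** gamma) * 255.0, 0, 255).astype(np.uint8)
        toned = lut[image]
        for flipped in (toned, cv2.flip(toned, 1)):  # original / horizontal flip
            for angle in (-5.0, 5.0):  # small in-plane rotations
                M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
                variants.append(cv2.warpAffine(flipped, M, (w, h)))
    return variants  # 3 * 2 * 2 = 12 images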