7,950 research outputs found

    Ultrasonographic evaluation of submucosal thickness in oral submucous fibrosis patients: a cross-sectional study

    Purpose: To evaluate the role of ultrasonography in oral submucous fibrosis (OSMF) patients. Material and methods: A total of 150 subjects were divided equally into six groups (Group I: 25 healthy subjects; Group II: 25 healthy subjects with habit; Group III: 25 OSMF stage I; Group IV: 25 OSMF stage II; Group V: 25 OSMF stage III; and Group VI: 25 OSMF stage IVA). The grading of OSMF was done according to the clinical classification given by Khanna and Andrade (2005). After fulfilling the inclusion and exclusion criteria, each subject underwent extraoral ultrasonographic evaluation of submucosal thickness and of vascularity in terms of peak systolic velocity (PSV), bilaterally on the buccal and labial mucosa. Furthermore, statistical comparison of the groups was performed, and the sensitivity and specificity of the USG measurements were obtained in comparison with the clinical diagnosis. The statistical analysis was performed using SPSS ver. 20.0. Results: A statistically significant increase in mean submucosal thickness and a decrease in PSV were found with advancing severity of OSMF. In the ultrasonographic diagnosis of OSMF, submucosal thickness had a sensitivity, specificity, PPV, NPV, and accuracy of 80%, 100%, 100%, 71.4%, and 87%, respectively, whereas PSV failed to classify the lesion. Conclusions: Because the severity of the disease showed a direct relationship with submucosal thickness and an inverse relationship with PSV, habit-induced alterations in submucosal thickness can be seen on USG even when they cannot be appreciated on clinical examination. Hence, USG can be a promising tool for early diagnosis, assessment of severity, and evaluation of prognosis in OSMF.
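    The diagnostic indices reported above follow directly from a 2x2 confusion matrix. Below is a minimal sketch of that arithmetic in Python; the counts are hypothetical values chosen only to reproduce the reported percentages and are not the study's actual data.

        # Diagnostic indices from a 2x2 confusion matrix.
        # The tp/fp/tn/fn counts below are hypothetical, not the study's data.
        def diagnostic_indices(tp, fp, tn, fn):
            sensitivity = tp / (tp + fn)                 # true positive rate
            specificity = tn / (tn + fp)                 # true negative rate
            ppv = tp / (tp + fp)                         # positive predictive value
            npv = tn / (tn + fn)                         # negative predictive value
            accuracy = (tp + tn) / (tp + fp + tn + fn)
            return sensitivity, specificity, ppv, npv, accuracy

        # Illustrative counts for 150 subjects (100 OSMF, 50 controls):
        print(diagnostic_indices(tp=80, fp=0, tn=50, fn=20))
        # -> (0.8, 1.0, 1.0, 0.714..., 0.866...)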

    Quantitative Screening of Cervical Cancers for Low-Resource Settings: Pilot Study of Smartphone-Based Endoscopic Visual Inspection After Acetic Acid Using Machine Learning Techniques

    Background: Approximately 90% of global cervical cancer (CC) cases occur in low- and middle-income countries. In most cases, CC can be detected early through routine screening programs, including cytology-based tests. However, it is logistically difficult to offer such programs in low-resource settings because of limited resources and infrastructure and few trained experts. Visual inspection after application of acetic acid (VIA) has been widely promoted and is routinely recommended as a viable form of CC screening in resource-constrained countries. Digital images of the cervix acquired during the VIA procedure offer better quality assurance and visualization, leading to higher diagnostic accuracy and reduced variability in detection rates. However, a colposcope is bulky, expensive, electricity-dependent, and needs routine maintenance, and a specialist must be present to confirm the grade of abnormality from its images. Recently, smartphone-based imaging systems have made a significant impact on the practice of medicine by offering a cost-effective, rapid, and noninvasive method of evaluation. Furthermore, computer-aided analyses, including image processing-based methods and machine learning techniques, have also shown great potential for high impact on medical evaluations.
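    As a rough illustration of the machine learning component, the sketch below fine-tunes a lightweight pretrained backbone to classify cervix images as VIA-negative or VIA-positive. The backbone choice, image size, and the random stand-in data are assumptions made for the example, not the pipeline described in the study.

        # Sketch: binary VIA image classifier fine-tuned from a lightweight pretrained backbone.
        import torch
        import torch.nn as nn
        from torchvision import models

        # Hypothetical stand-in batch; in practice these would be smartphone cervigrams
        # resized to 224x224 and normalized, with VIA-negative/positive labels.
        images = torch.randn(32, 3, 224, 224)
        labels = torch.randint(0, 2, (32,))
        loader = torch.utils.data.DataLoader(
            torch.utils.data.TensorDataset(images, labels), batch_size=16, shuffle=True)

        model = models.mobilenet_v2(weights="DEFAULT")           # small enough for mobile deployment
        model.classifier[1] = nn.Linear(model.last_channel, 2)   # two classes: VIA-negative / VIA-positive
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        criterion = nn.CrossEntropyLoss()

        model.train()
        for batch_images, batch_labels in loader:                # one illustrative pass over the data
            optimizer.zero_grad()
            loss = criterion(model(batch_images), batch_labels)
            loss.backward()
            optimizer.step()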

    SCALING ARTIFICIAL INTELLIGENCE IN ENDOSCOPY: FROM MODEL DEVELOPMENT TO MACHINE LEARNING OPERATIONS FRAMEWORKS

    This thesis explores the integration of artificial intelligence (AI) in Otolaryngology – Head and Neck Surgery, focusing on advancements in computer vision for endoscopy and surgical procedures. It begins with a comprehensive review of the state of the art of AI and computer vision in this field, identifying areas for further exploration. The primary aim was to develop a computer vision system for the analysis of endoscopic images and videos. The research involved designing tools for detecting and segmenting neoplasms in the upper aerodigestive tract (UADT) and assessing vocal fold motility, which is crucial in laryngeal cancer staging. Further, the study delves into the potential of vision foundation models, such as vision transformers trained via self-supervision, to reduce the need for expert annotations, which is particularly beneficial in fields with limited data. Additionally, the research includes the development of a web application for enhancing and speeding up the annotation process in UADT endoscopy, under the umbrella of Machine Learning Operations (MLOps). The thesis covers various phases of research, starting with the definition of the conceptual framework and methodology, termed "Videomics". It includes a literature review on AI in clinical endoscopy, focusing on Narrow Band Imaging (NBI) and convolutional neural networks (CNNs). The research progresses through different stages, from quality assessment of endoscopic images to in-depth characterization of neoplastic lesions. It also addresses the need for standards in the reporting of medical computer vision studies and evaluates the application of AI in dynamic vision scenarios such as vocal fold motility. A significant part of the research investigates the use of general-purpose vision algorithms ("foundation models") and the commoditization of machine learning algorithms, using nasal polyps and oropharyngeal cancer as case studies. Finally, the thesis discusses the development of ENDO-CLOUD, a cloud-based system for videolaryngoscopy analysis, highlighting the challenges and solutions in data management and the large-scale deployment of AI models in medical imaging.
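    One way foundation models reduce the annotation burden mentioned above is linear probing: a small classifier is trained on frozen features from a self-supervised vision transformer. The sketch below assumes a DINOv2 backbone loaded via torch.hub and uses random tensors as stand-ins for annotated endoscopic frames; both are illustrative assumptions, not the thesis' actual pipeline.

        # Sketch: linear probe on frozen features from a self-supervised ViT backbone.
        import torch
        from sklearn.linear_model import LogisticRegression

        # Assumed torch.hub entry point for a self-supervised ViT backbone.
        backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
        backbone.eval()

        def embed(images):
            # images: float tensor of shape (N, 3, 224, 224), ImageNet-normalized
            with torch.no_grad():
                return backbone(images).cpu().numpy()

        # Hypothetical stand-ins for annotated endoscopic frames and binary labels.
        train_images, val_images = torch.randn(32, 3, 224, 224), torch.randn(8, 3, 224, 224)
        train_labels, val_labels = torch.randint(0, 2, (32,)).numpy(), torch.randint(0, 2, (8,)).numpy()

        clf = LogisticRegression(max_iter=1000).fit(embed(train_images), train_labels)
        print("validation accuracy:", clf.score(embed(val_images), val_labels))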

    Use of texture feature maps for the refinement of information derived from digital intraoral radiographs of lytic and sclerotic lesions

    The aim of this study was to examine whether additional digital intraoral radiography (DIR) image preprocessing based on textural description methods improves the recognition and differentiation of periapical lesions. (1) DIR image analysis protocols incorporating k-means clustering (CLU), texture features derived from co-occurrence matrices, first-order features (FOF), gray-tone difference matrices, run-length matrices (RLM), and local binary patterns were used to transform 161 input DIR images into textural feature maps. These maps were used to determine the capacity of each DIR representation technique to yield information about the shape of a structure, its pattern, and adequate tissue contrast. The effectiveness of the textural feature maps for lesion detection was assessed independently by two radiologists, followed by interrater agreement analysis. (2) High sensitivity and specificity in the recognition of radiological features of lytic lesions, i.e., radiodensity, border definition, and tissue contrast, were achieved by CLU, FOF energy, and RLM. Detection of sclerotic lesions was refined with the use of RLM, and FOF texture contributed substantially to the high sensitivity of the diagnosis of sclerotic lesions. (3) Specific DIR texture-based methods markedly increased the sensitivity of the DIR technique. Therefore, textural feature mapping constitutes a promising diagnostic tool for improving recognition of the dimensions and possibly the internal structure of periapical lesions.
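    For readers unfamiliar with the texture descriptors listed above, the sketch below computes co-occurrence (GLCM) features and a local binary pattern map with scikit-image; the synthetic input array, distances, angles, and LBP parameters are illustrative choices rather than the study's protocol.

        # Sketch: GLCM features and a local binary pattern map for one image.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

        # Stand-in for an 8-bit DIR crop; in practice a loaded radiograph would be used.
        rng = np.random.default_rng(0)
        image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)

        glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        contrast = graycoprops(glcm, "contrast").mean()          # averaged over distances and angles
        energy = graycoprops(glcm, "energy").mean()

        lbp_map = local_binary_pattern(image, P=8, R=1, method="uniform")
        print(contrast, energy, lbp_map.shape)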

    Mesh-to-raster based non-rigid registration of multi-modal images

    Region of interest (ROI) alignment in medical images plays a crucial role in diagnostics, procedure planning, treatment, and follow-up. Frequently, a model is represented as a triangulated mesh while the patient data are provided by CAT scanners as pixel or voxel data. Previously, we presented a 2D method for curve-to-pixel registration. This paper contributes (i) a general mesh-to-raster (M2R) framework to register ROIs in multi-modal images; (ii) a 3D surface-to-voxel application; and (iii) a comprehensive quantitative evaluation in 2D using ground truth provided by the simultaneous truth and performance level estimation (STAPLE) method. The registration is formulated as a minimization problem whose objective consists of a data term, which involves the signed distance function of the ROI from the reference image, and a higher-order elastic regularizer for the deformation. The evaluation is based on quantitative light-induced fluorescence (QLF) and digital photography (DP) of decalcified teeth. STAPLE is computed on 150 image pairs from 32 subjects, each showing one corresponding tooth in both modalities. The ROI in each image is manually marked by three experts (900 curves in total). In the QLF-DP setting, our approach significantly outperforms the mutual information-based registration algorithm implemented with the Insight Segmentation and Registration Toolkit (ITK) and Elastix.
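    The minimization described above has the general form E(u) = data term + alpha * regularizer. The toy sketch below evaluates such an energy for a contour warped into a reference image's signed distance map; it uses a simple first-order smoothness penalty for brevity instead of the paper's higher-order elastic regularizer, and all parameters are illustrative.

        # Toy sketch of the registration energy: data term from the ROI's signed
        # distance function plus a (simplified, first-order) smoothness regularizer.
        import numpy as np

        def registration_energy(sdf_ref, contour_points, displacement, alpha=0.1):
            # sdf_ref: signed distance map of the ROI in the reference image, shape (H, W)
            # contour_points: (N, 2) pixel coordinates of the moving contour
            # displacement: (N, 2) displacement vector for each contour point
            warped = contour_points + displacement
            rows = np.clip(np.round(warped[:, 0]).astype(int), 0, sdf_ref.shape[0] - 1)
            cols = np.clip(np.round(warped[:, 1]).astype(int), 0, sdf_ref.shape[1] - 1)
            data_term = np.sum(sdf_ref[rows, cols] ** 2)             # warped contour should hit the zero level set
            smoothness = np.sum(np.diff(displacement, axis=0) ** 2)  # neighboring points should move similarly
            return data_term + alpha * smoothness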

    Machine Learning/Deep Learning in Medical Image Processing

    Many recent studies on medical image processing have involved the use of machine learning (ML) and deep learning (DL). This special issue, "Machine Learning/Deep Learning in Medical Image Processing", was launched to provide an opportunity for researchers in the area of medical image processing to highlight recent developments made in their fields with ML/DL. Seven excellent papers that cover a wide variety of medical/clinical aspects were selected for this special issue.