2 research outputs found

    Progressive Hartley image secret sharing for high-quality image recovery

    No full text
    This paper introduces an efficient approach to progressive image secret sharing based on the Discrete Hartley Transform (DHT), offering distinct advantages over existing methods. By applying the DHT in the color domain, our approach achieves progressive sharing of color images with minimal visual impact while maintaining robustness, and it optimizes quantization table selection to maximize image recovery fidelity. Extensive experiments demonstrate the efficacy of the proposed method, highlighting its potential for secure image sharing and the high image quality enabled by the real-valued DHT. Histogram analysis further confirms strong security and high-quality image reconstruction. Objective assessment values of [Formula: see text] (PSNR), [Formula: see text] (NCC), [Formula: see text] (NAE), and [Formula: see text] (SSIM) underscore the superiority of our method. These results position our work as a promising solution for secure image sharing and distribution and motivate further research.
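
    As context for the transform the abstract relies on (not the paper's sharing algorithm itself), the following is a minimal sketch of the real-valued 2D Discrete Hartley Transform. The `dht2`/`idht2` names and the FFT-based identity H = Re(F) - Im(F), which holds for real input, are illustrative assumptions.

```python
import numpy as np


def dht2(img):
    """2D Discrete Hartley Transform of a real-valued array.

    Uses the identity H = Re(F) - Im(F), where F is the 2D DFT of the
    real input; this follows from cas(t) = cos(t) + sin(t).
    """
    F = np.fft.fft2(np.asarray(img, dtype=np.float64))
    return F.real - F.imag


def idht2(H):
    """Inverse 2D DHT: the transform is self-inverse up to a 1/(M*N) scale."""
    rows, cols = H.shape
    return dht2(H) / (rows * cols)


# Round-trip check on a random single-channel "image".
x = np.random.rand(8, 8)
assert np.allclose(idht2(dht2(x)), x)
```

    Because the DHT of a real image is itself real and self-inverse up to a scale factor, the shares can be handled without complex arithmetic, which is consistent with the "real-valued DHT" advantage the abstract highlights.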

    OTONet: Deep Neural Network for Precise Otoscopy Image Classification

    No full text
    Otoscopy is a diagnostic procedure for visualizing the external ear canal and eardrum, facilitating the detection of various ear pathologies and conditions. Timely otoscopy image classification offers significant advantages, including early detection, reduced patient anxiety, and personalized treatment plans. This paper introduces OTONet, a novel framework tailored to otoscopy image classification. It leverages octave 3D convolution together with feature- and region-focus modules to build an accurate and robust classifier that distinguishes between various otoscopic conditions. The architecture is designed to efficiently capture and process the spatial and feature information present in otoscopy images. On a public otoscopy dataset, OTONet reaches a classification accuracy of 99.3% and an F1 score of 99.4% across 11 classes of ear conditions. A comparative analysis shows that OTONet surpasses other established machine learning models, including ResNet50, ResNet50V2, VGG16, DenseNet169, and ConvNeXtTiny, across various evaluation metrics. The research contributes improved diagnostic accuracy, reduced human error, and expedited diagnostics, and shows potential for telemedicine applications.
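
    The abstract does not detail OTONet's octave 3D convolution, but a minimal 2D sketch of the general octave-convolution idea (Chen et al., 2019) illustrates the high/low-frequency feature split it builds on. The `OctaveConv2d` class, the `alpha` channel split, and all parameter choices below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class OctaveConv2d(nn.Module):
    """Minimal octave convolution: features are split into a full-resolution
    high-frequency branch and a half-resolution low-frequency branch, with
    information exchanged between the two branches at every layer."""

    def __init__(self, in_ch, out_ch, alpha=0.5, kernel_size=3, padding=1):
        super().__init__()
        in_lo, out_lo = int(alpha * in_ch), int(alpha * out_ch)
        in_hi, out_hi = in_ch - in_lo, out_ch - out_lo
        self.hh = nn.Conv2d(in_hi, out_hi, kernel_size, padding=padding)
        self.hl = nn.Conv2d(in_hi, out_lo, kernel_size, padding=padding)
        self.lh = nn.Conv2d(in_lo, out_hi, kernel_size, padding=padding)
        self.ll = nn.Conv2d(in_lo, out_lo, kernel_size, padding=padding)

    def forward(self, x_hi, x_lo):
        # High-frequency output: same-scale conv plus upsampled low branch.
        y_hi = self.hh(x_hi) + F.interpolate(
            self.lh(x_lo), scale_factor=2, mode="nearest")
        # Low-frequency output: same-scale conv plus pooled high branch.
        y_lo = self.ll(x_lo) + self.hl(F.avg_pool2d(x_hi, 2))
        return y_hi, y_lo


# Example: 32 input channels split evenly between branches, 64 output channels.
conv = OctaveConv2d(32, 64)
x_hi = torch.randn(1, 16, 64, 64)   # high-frequency half, full resolution
x_lo = torch.randn(1, 16, 32, 32)   # low-frequency half, half resolution
y_hi, y_lo = conv(x_hi, x_lo)       # shapes: (1, 32, 64, 64), (1, 32, 32, 32)
```

    Processing part of the features at half resolution reduces memory and compute while retaining a full-resolution path for fine detail, which is the general motivation for octave convolution; how OTONet extends this to 3D and combines it with its feature- and region-focus modules is described in the paper itself.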