4 research outputs found

    Artistic Style Recognition: Combining Deep and Shallow Neural Networks for Painting Classification

    This study’s main goal is to create a useful software application for finding and classifying fine art images in museums and art galleries. The digitization of art collections has created a growing need for tools that can quickly analyze and organize artworks by artistic style. To increase the accuracy of style categorization, the proposed technique has two phases. In the first phase, the input image is split into five sub-patches, and a deep convolutional neural network (DCNN) trained specifically for this task classifies each patch individually. The second phase is a decision-making module based on a shallow neural network, trained on the probability vectors produced by the first-phase classifier; it combines the results from the five patches to infer the final style classification for the input image. A key advantage of this approach is that it operates on probability vectors rather than images, and the second phase is trained separately from the first, which helps compensate for errors made during the first phase and improves the accuracy of the final classification. To evaluate the proposed method, six pre-trained CNN models, namely AlexNet, VGG-16, VGG-19, GoogLeNet, ResNet-50, and InceptionV3, were employed as first-phase classifiers, with the second-phase classifier implemented as a shallow neural network. Experiments were conducted on four representative art datasets: the Australian Native Art dataset, the WikiArt dataset, ILSVRC, and Pandora 18k. The findings show that the proposed strategy substantially surpasses existing methods in style categorization accuracy and precision. Overall, the study contributes to building efficient software systems for analyzing and categorizing fine art images, making them more accessible to the general public through digital platforms. Using pre-trained models, we attained an accuracy of 90.7%; with fine-tuning and transfer learning, our model performed better, reaching an accuracy of 96.5%.
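The two-phase pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the five-patch layout (four quadrants plus a same-sized centre crop), the stub classifier, and the averaging decision rule are all assumptions; in the paper, the first phase is a fine-tuned DCNN and the second phase is a trained shallow neural network over the concatenated probability vectors.

```python
import numpy as np

def extract_patches(image):
    """Split an image into five sub-patches: four quadrants plus a
    same-sized centre crop (one plausible reading of the paper's
    five-patch scheme; the exact layout is an assumption)."""
    h, w = image.shape[:2]
    h2, w2 = h // 2, w // 2
    quadrants = [
        image[:h2, :w2],              # top-left
        image[:h2, w2:w2 * 2],        # top-right
        image[h2:h2 * 2, :w2],        # bottom-left
        image[h2:h2 * 2, w2:w2 * 2],  # bottom-right
    ]
    top, left = h2 // 2, w2 // 2
    centre = image[top:top + h2, left:left + w2]
    return quadrants + [centre]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def first_phase_classify(patch, n_styles, rng):
    """Stand-in for the fine-tuned DCNN: returns a probability
    vector over the style classes for one patch."""
    return softmax(rng.normal(size=n_styles))

def second_phase_decide(prob_vectors):
    """Stand-in decision module: the paper trains a shallow neural
    network on the first-phase probability vectors; here we simply
    average them and take the argmax."""
    fused = np.mean(prob_vectors, axis=0)
    return int(np.argmax(fused)), fused

rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))
n_styles = 18  # e.g. the 18 classes of Pandora 18k

patches = extract_patches(image)
probs = [first_phase_classify(p, n_styles, rng) for p in patches]
style, fused = second_phase_decide(probs)
print(len(patches), fused.shape, style)
```

Because the second phase consumes only probability vectors, it is cheap to train and can be trained separately from (and after) the per-patch classifier, as the abstract notes.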

    Enhancing Workplace Safety: PPE_Swin—A Robust Swin Transformer Approach for Automated Personal Protective Equipment Detection

    Accidents in the construction industry often result from non-compliance with personal protective equipment (PPE) requirements, and the diversity of construction environments makes automatic PPE detection difficult. Traditional image models such as convolutional neural networks (CNNs) and vision transformers (ViTs) struggle to capture both local and global features in construction safety imagery. This study introduces PPE_Swin, a new approach for automating PPE detection in the construction industry. By combining global and local feature extraction using the self-attention mechanism of Swin-Unet, it addresses challenges related to accurate segmentation, robustness to image variations, and generalization across different environments. To train and evaluate the system, we compiled a new dataset that enables more reliable and accurate PPE detection in diverse construction scenarios. Our approach achieves a remarkable 97% accuracy in detecting workers with and without PPE, surpassing existing state-of-the-art methods. This research presents an effective solution for enhancing worker safety on construction sites by automating PPE compliance detection.
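The local-versus-global distinction the abstract draws can be illustrated with a minimal numpy sketch of windowed (Swin-style) self-attention next to plain global self-attention. This is not the paper's Swin-Unet: it omits learned projections, shifted windows, and the U-Net decoder, and only shows how window partitioning restricts each token's attention context to its local window.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Plain (unparameterised) self-attention: every token attends to
    every other token in x, shape (tokens, dim). Cost grows
    quadratically with the number of tokens."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    return softmax(scores) @ x

def windowed_self_attention(x, grid, window):
    """Swin-style local attention: the (grid x grid) token map is cut
    into non-overlapping (window x window) windows and attention is
    computed inside each window only, keeping cost linear in the
    number of tokens."""
    dim = x.shape[-1]
    fmap = x.reshape(grid, grid, dim)
    out = np.empty_like(fmap)
    for i in range(0, grid, window):
        for j in range(0, grid, window):
            block = fmap[i:i + window, j:j + window].reshape(-1, dim)
            attended = self_attention(block)
            out[i:i + window, j:j + window] = attended.reshape(window, window, dim)
    return out.reshape(-1, dim)

rng = np.random.default_rng(1)
grid, window, dim = 8, 4, 16
tokens = rng.normal(size=(grid * grid, dim))

local = windowed_self_attention(tokens, grid, window)   # local context only
global_out = self_attention(tokens)                      # full global context
print(local.shape, global_out.shape)
```

In Swin-style models, alternating window partitions (shifted between layers) let local windows exchange information across layers, which is how local and global features end up combined.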

    A Deep Learning-based Privacy-Preserving Model for Smart Healthcare in Internet of Medical Things using Fog Computing

    With the emergence of COVID-19, smart healthcare, the Internet of Medical Things, and big data-driven medical applications have become even more important. The biomedical data produced is highly confidential and private. Unfortunately, conventional health systems cannot support such a colossal amount of biomedical data, so data is typically stored and shared through the cloud. The shared data is then used for different purposes, such as research and the discovery of unprecedented facts. Typically, biomedical data appears in textual form (e.g., test reports, prescriptions, and diagnoses). Unfortunately, such data is prone to security threats and attacks, such as privacy and confidentiality breaches. Although significant progress has been made on securing biomedical data, most existing approaches incur long delays and cannot support real-time responses. This paper proposes a novel fog-enabled privacy-preserving model called [Formula: see text] sanitizer, which uses deep learning to improve the healthcare system. The proposed model is based on a Convolutional Neural Network with Bidirectional LSTM and effectively performs Medical Entity Recognition. The experimental results show that [Formula: see text] sanitizer outperforms state-of-the-art models with 91.14% recall, 92.63% precision, and a 92% F1-score. The sanitization model shows 28.77% improved utility preservation compared to the state of the art.
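The recognize-then-sanitize pipeline described above can be sketched as follows. This is only an illustration of the flow: the paper's entity recognizer is a CNN with Bidirectional LSTM, which this sketch replaces with a toy dictionary lookup so the example stays self-contained; the entity list and placeholder format are hypothetical.

```python
import re

# Toy stand-in for the learned Medical Entity Recognition model.
# The real model is a CNN + BiLSTM; this dictionary is illustrative only.
MEDICAL_ENTITIES = {"diabetes", "insulin", "hypertension", "metformin"}

def recognise_entities(text):
    """Return the tokens the (stand-in) model flags as sensitive
    medical entities in the input text."""
    tokens = re.findall(r"[A-Za-z]+", text)
    return [t for t in tokens if t.lower() in MEDICAL_ENTITIES]

def sanitise(text):
    """Fog-side sanitization step: replace each recognised entity
    with a placeholder before the record is shared to the cloud."""
    for entity in recognise_entities(text):
        text = re.sub(rf"\b{entity}\b", "[REDACTED]", text)
    return text

report = "Patient shows signs of diabetes; prescribed insulin daily."
clean = sanitise(report)
print(clean)
```

Running the recognizer in the fog layer, close to the data source, is what lets this kind of model avoid the cloud round-trip delays the abstract criticizes, while the redacted text retains enough structure to stay useful for downstream research (the "utility preservation" the paper measures).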