14 research outputs found

    Medical Image Segmentation by Marker-Controlled Watershed and

    Get PDF

    Advanced approach for Moroccan administrative documents digitization using pre-trained models CNN-based: character recognition

    Get PDF
    In the digital age, efficient digitization of administrative documents is a real challenge, particularly for languages with complex scripts such as those used in Moroccan documents. The subject matter of this article is the digitization of Moroccan administrative documents using pre-trained convolutional neural networks (CNNs) for advanced character recognition. This research aims to address the unique challenges of accurately digitizing various Moroccan scripts and layouts, which are crucial in the digital transformation of administrative processes. Our goal was to develop an efficient and highly accurate character recognition system specifically tailored for Moroccan administrative texts. The tasks involved comprehensive analysis and customization of pre-trained CNN models and rigorous performance testing against a diverse dataset of Moroccan administrative documents. The methodology entailed a detailed evaluation of different CNN architectures trained on a dataset representative of various types of characters used in Moroccan administrative documents. This ensured the adaptability of the models to real-world scenarios, with a focus on accuracy and efficiency in character recognition. The results were remarkable. DenseNet121 achieved a 95.78% accuracy rate on the Alphabet dataset, whereas VGG16 recorded a 99.24% accuracy on the Digits dataset. DenseNet169 demonstrated 94.00% accuracy on the Arabic dataset, 99.9% accuracy on the Tifinagh dataset, and 96.24% accuracy on the French Special Characters dataset. Furthermore, DenseNet169 attained 99.14% accuracy on the Symbols dataset. In addition, ResNet50 achieved 99.90% accuracy on the Character Type dataset, enabling accurate determination of the dataset to which a character belongs. In conclusion, this study signifies a substantial advancement in the field of Moroccan administrative document digitization. The CNN-based approach showcased in this study significantly outperforms traditional character recognition methods. These findings not only contribute to the digital processing and management of documents but also open new avenues for future research in adapting this technology to other languages and document types
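
    The paper does not include code, but the transfer-learning setup it describes can be sketched roughly as follows, assuming a TensorFlow/Keras environment with character images stored one folder per class; the dataset path, image size, head layers, and training settings are illustrative assumptions, not the authors' configuration.

        # Hedged sketch: fine-tune a pre-trained DenseNet121 on one character sub-dataset.
        import tensorflow as tf
        from tensorflow.keras.applications import DenseNet121
        from tensorflow.keras import layers, models

        NUM_CLASSES = 28           # e.g. the Arabic sub-dataset; adjust per sub-dataset
        IMG_SIZE = (64, 64)        # assumed character image size

        # Character images organised as one folder per class (assumed layout).
        train_ds = tf.keras.utils.image_dataset_from_directory(
            "arabic_chars/train", image_size=IMG_SIZE, batch_size=32)
        val_ds = tf.keras.utils.image_dataset_from_directory(
            "arabic_chars/val", image_size=IMG_SIZE, batch_size=32)

        # Frozen ImageNet-pretrained backbone with a small classification head on top.
        base = DenseNet121(include_top=False, weights="imagenet",
                           input_shape=IMG_SIZE + (3,), pooling="avg")
        base.trainable = False
        model = models.Sequential([
            base,
            layers.Dense(256, activation="relu"),
            layers.Dropout(0.3),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(train_ds, validation_data=val_ds, epochs=10)

    The same skeleton applies to the other backbones reported above (VGG16, DenseNet169, ResNet50) by swapping the imported application model.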

    Convolutional neural network-based skin cancer classification with transfer learning models

    Get PDF
    Skin cancer is a medical condition characterized by abnormal growth of skin cells. This occurs when the DNA within these skin cells becomes damaged. In addition, it is a prevalent form of cancer that can result in fatalities if not identified in its early stages. A skin biopsy is a necessary step in determining the presence of skin cancer. However, this procedure requires time and expertise. In recent times, artificial intelligence and deep learning algorithms have exhibited superior performance compared with humans in visual tasks. This result can be attributed to improved processing capabilities and the availability of vast datasets. Automated classification driven by these advancements has the potential to facilitate the early identification of skin cancer. Traditional diagnostic methods might overlook certain cases, whereas artificial intelligence-powered approaches offer a broader perspective. Transfer learning is a widely used technique in deep learning, involving the use of pre-trained models. These models are extensively implemented in healthcare, especially in diagnosing and studying skin lesions. Similarly, convolutional neural networks (CNNs) have recently established themselves as highly robust autonomous feature extractors that can achieve excellent accuracy in skin cancer detection. The primary goal of this study was to build deep-learning models designed to perform binary classification of skin cancer into benign and malignant categories. The tasks to resolve are as follows: partitioning the database by allocating 80% of the images to the training set and the remaining 20% to the test set; applying a preprocessing procedure that augments the dataset and resizes the images to meet the specific requirements of each model used in our research; and, finally, building deep learning models to perform the classification task. The methods used are a CNN model and two transfer learning models, i.e., Visual Geometry Group 16 (VGG16) and Visual Geometry Group 19 (VGG19). They are applied to dermoscopic images from the International Skin Imaging Collaboration (ISIC) Archive dataset to classify skin lesions into two classes and to conduct a comparative analysis. Our results indicated that the VGG16 model outperformed the others, achieving an accuracy of 87% and a loss of 38%. Additionally, the VGG16 model demonstrated the best recall, precision, and F1-score. Both the VGG16 and VGG19 models outperformed the CNN model in this classification task. In conclusion, the significance of this study stems from the fact that deep learning-based clinical decision support systems have proven to be highly beneficial, offering valuable recommendations to dermatologists during their diagnostic procedures.
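
    As a rough illustration of the described pipeline (80/20 split, augmentation and resizing, frozen VGG16 backbone with a binary head), the sketch below uses assumed Keras APIs and an assumed isic/benign and isic/malignant folder layout; the split seed, augmentation settings, and head are examples rather than the authors' code.

        # Hedged sketch: binary benign/malignant classification with a pre-trained VGG16.
        # (Input preprocessing specific to VGG16 is omitted for brevity.)
        import tensorflow as tf
        from tensorflow.keras.applications import VGG16
        from tensorflow.keras import layers, models

        IMG_SIZE = (224, 224)      # VGG16's usual input size

        # 80% training / 20% test split from one image folder (assumed layout).
        train_ds = tf.keras.utils.image_dataset_from_directory(
            "isic", validation_split=0.2, subset="training", seed=42,
            image_size=IMG_SIZE, batch_size=32, label_mode="binary")
        test_ds = tf.keras.utils.image_dataset_from_directory(
            "isic", validation_split=0.2, subset="validation", seed=42,
            image_size=IMG_SIZE, batch_size=32, label_mode="binary")

        # Light augmentation, then a frozen VGG16 backbone and a sigmoid output.
        base = VGG16(include_top=False, weights="imagenet",
                     input_shape=IMG_SIZE + (3,), pooling="avg")
        base.trainable = False
        model = models.Sequential([
            tf.keras.Input(shape=IMG_SIZE + (3,)),
            layers.RandomFlip("horizontal"),
            layers.RandomRotation(0.1),
            base,
            layers.Dense(128, activation="relu"),
            layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy",
                               tf.keras.metrics.Precision(),
                               tf.keras.metrics.Recall()])
        model.fit(train_ds, validation_data=test_ds, epochs=10)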

    Multilingual character recognition dataset for Moroccan official documents

    No full text
    This article focuses on the construction of a dataset for multilingual character recognition in Moroccan official documents. The dataset covers languages such as Arabic, French, and Tamazight and is built programmatically to ensure data diversity. It consists of sub-datasets such as Uppercase alphabet (26 classes), Lowercase alphabet (26 classes), Digits (9 classes), Arabic (28 classes), Tifinagh letters (33 classes), Symbols (14 classes), and French special characters (16 classes). The dataset construction process involves collecting representative fonts and generating multiple character images using a Python script, presenting a comprehensive variety essential for robust recognition models. Moreover, this dataset contributes to the digitization of these diverse official documents and archival papers, which is essential for preserving cultural heritage and enabling advanced text recognition technologies. The need for this work arises from the advancements in character recognition techniques and the significance of large-scale annotated datasets. The proposed dataset contributes to the development of robust character recognition models for practical applications.
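
    The article describes generating the character images programmatically with a Python script; a minimal sketch of that idea using Pillow is shown below, where the font paths, canvas size, and output layout are assumptions rather than the published script.

        # Hedged sketch: render each character with several fonts into labelled image folders.
        from pathlib import Path
        from PIL import Image, ImageDraw, ImageFont

        FONTS = ["fonts/Amiri-Regular.ttf", "fonts/DejaVuSans.ttf"]   # assumed representative fonts
        CHARS = {"digits": "123456789", "tifinagh": "ⴰⴱⴳ"}            # a few sample classes only
        OUT = Path("dataset")

        for subset, chars in CHARS.items():
            for ch in chars:
                class_dir = OUT / subset / f"U{ord(ch):04X}"          # one folder per character class
                class_dir.mkdir(parents=True, exist_ok=True)
                for i, font_path in enumerate(FONTS):
                    font = ImageFont.truetype(font_path, 48)
                    img = Image.new("L", (64, 64), color=255)         # white 64x64 canvas
                    draw = ImageDraw.Draw(img)
                    draw.text((32, 32), ch, font=font, fill=0, anchor="mm")  # centred glyph
                    img.save(class_dir / f"{i}.png")

    Varying fonts, sizes, and small distortions per character is what provides the diversity described above.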

    Design and analysis of a recommendation system based on collaborative filtering techniques for big data

    Get PDF
    Online search has become very popular, and users can easily look up any movie title; however, they still have to select a title that suits their taste, otherwise they will have difficulty choosing the film they want to watch. The process of choosing or searching for a film in a large film database is currently time-consuming and tedious. Users spend extensive time on the internet or on several movie-viewing sites before they find a film that matches their taste. This happens especially because people hesitate between choices and quickly change their minds. Hence, the recommendation system becomes critical. This study aims to reduce user effort and facilitate the movie search task. Further, we used the root mean square error (RMSE) metric to evaluate and compare the different models adopted in this paper. These models were employed with the aim of developing a classification model for predicting movies. Thus, we tested and evaluated several collaborative filtering techniques. We used four approaches to implement sparse matrix completion algorithms: k-nearest neighbors, matrix factorization, co-clustering, and slope-one.
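
    The abstract does not name a library, but the four listed techniques are all available in scikit-surprise, so a rough comparison under that assumption could look as follows (the built-in MovieLens sample stands in for the actual data).

        # Hedged sketch: compare the four collaborative filtering approaches by RMSE.
        from surprise import Dataset, KNNBasic, SVD, CoClustering, SlopeOne
        from surprise.model_selection import cross_validate

        data = Dataset.load_builtin("ml-100k")      # MovieLens ratings as stand-in data

        algorithms = {
            "k-nearest neighbors": KNNBasic(),
            "matrix factorization (SVD)": SVD(),
            "co-clustering": CoClustering(),
            "slope-one": SlopeOne(),
        }
        for name, algo in algorithms.items():
            scores = cross_validate(algo, data, measures=["RMSE"], cv=5, verbose=False)
            print(f"{name}: mean RMSE = {scores['test_rmse'].mean():.4f}")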

    Adapted Active Contours for Catadioptric Images using a Non-euclidean Metrics

    No full text

    IDAGEmb: An Incremental Data Alignment Based on Graph Embedding

    No full text
    In dynamic information systems, data alignment addresses challenges like data heterogeneity, integration, and interoperability by connecting diverse datasets. To ensure the stability and effectiveness of these alignments over time, an incremental process may be required, allowing the alignments to be updated as the data evolves. While embedding-based methods are valuable for handling incremental data in the graph learning field, they are underexplored in data alignment. However, before implementing such an approach, it is essential to verify the stability of the embeddings in order to guarantee their reliability and temporal consistency. We therefore study the most promising model (i.e., Node2Vec), which exhibits favourable stability, particularly with respect to node embeddings. Despite potential variability in pairwise similarities, the idea of an incremental approach remains reliable, especially with a fixed model. Implementing such an approach can efficiently manage data dynamics in information systems with reduced resource needs. By applying this incremental process to data alignment, it will be possible to efficiently manage heterogeneous data in dynamic information system environments, while minimising resource requirements.
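
    As a toy illustration of the incremental alignment idea (not the IDAGEmb implementation), assume node embeddings from a previously trained, fixed model are already stored; newly ingested entities can then be aligned to existing ones by cosine similarity, with random vectors standing in for real embeddings.

        # Hedged sketch: align new nodes to stored embeddings by cosine similarity.
        import numpy as np

        rng = np.random.default_rng(0)
        existing = {f"entity_{i}": rng.normal(size=64) for i in range(100)}   # stored embeddings
        incoming = {f"new_{j}": rng.normal(size=64) for j in range(5)}        # newly ingested nodes

        names = list(existing)
        E = np.stack([existing[n] for n in names])
        E = E / np.linalg.norm(E, axis=1, keepdims=True)                      # normalise once

        for name, vec in incoming.items():
            v = vec / np.linalg.norm(vec)
            sims = E @ v                              # cosine similarities to all stored nodes
            best = int(np.argmax(sims))
            if sims[best] >= 0.8:                     # alignment threshold (assumed value)
                print(f"{name} aligned with {names[best]} (similarity {sims[best]:.2f})")
            else:
                print(f"{name} left unaligned for now")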

    Overview on Data Ingestion and Schema Matching

    No full text
    This overview traced the evolution of data management, transitioning from traditional ETL processes to addressing contemporary challenges in Big Data, with a particular emphasis on data ingestion and schema matching. It explored the classification of data ingestion into batch, real-time, and hybrid processing, underscoring the challenges associated with data quality and heterogeneity. Central to the discussion was the role of schema mapping in data alignment, proving indispensable for linking diverse data sources. Recent advancements, notably the adoption of machine learning techniques, were significantly reshaping the landscape. The paper also addressed current challenges, including the integration of new technologies and the necessity for effective schema matching solutions, highlighting the continuously evolving nature of schema matching in the context of Big Data.
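
    As one concrete (and deliberately simplistic) example of the schema matching task discussed here, attribute names from two sources can be paired by string similarity; the schemas and threshold below are made up for illustration.

        # Hedged sketch: naive name-based schema matching with standard-library string similarity.
        from difflib import SequenceMatcher

        source_schema = ["customer_id", "full_name", "birth_date", "email_address"]
        target_schema = ["cust_id", "name", "date_of_birth", "e_mail", "phone"]

        def similarity(a: str, b: str) -> float:
            """Similarity between two attribute names, in [0, 1]."""
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        for src in source_schema:
            best = max(target_schema, key=lambda tgt: similarity(src, tgt))
            score = similarity(src, best)
            if score >= 0.5:                                   # assumed acceptance threshold
                print(f"{src} -> {best} (score {score:.2f})")
            else:
                print(f"{src} -> no confident match")

    The machine learning approaches surveyed in the paper replace this hand-written similarity with learned matchers, but the overall matching loop is the same.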

    Node2Vec Stability: Preliminary Study to Ensure the Compatibility of Embeddings with Incremental Data Alignment

    No full text
    In dynamic information systems, data alignment addresses challenges like data heterogeneity, integration, and interoperability by connecting diverse datasets. To ensure the stability and effectiveness of these alignments over time, an incremental process may be required, allowing the alignments to be updated as the data evolves. While embedding-based methods are valuable for handling incremental data in the graph learning field, they are underexplored in data alignment. However, before implementing such an approach, it is essential to verify the stability of the embeddings in order to guarantee their reliability and temporal consistency. We therefore study the most promising model (i.e., Node2Vec), which exhibits favourable stability, particularly with respect to node embeddings. Despite potential variability in pairwise similarities, the idea of an incremental approach remains reliable, especially with a fixed model. Implementing such an approach can efficiently manage data dynamics in information systems with reduced resource needs. By applying this incremental process to data alignment, it will be possible to efficiently manage heterogeneous data in dynamic information system environments, while minimising resource requirements.
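
    A minimal version of the stability check described here could compare the embeddings of the same nodes across two training runs, after aligning the runs with an orthogonal Procrustes transform; the synthetic matrices below merely stand in for real Node2Vec outputs.

        # Hedged sketch: per-node embedding stability between two runs.
        import numpy as np
        from scipy.linalg import orthogonal_procrustes

        rng = np.random.default_rng(1)
        run_a = rng.normal(size=(100, 64))                        # 100 node embeddings, run 1
        run_b = run_a + 0.1 * rng.normal(size=(100, 64))          # run 2: a perturbed re-training stand-in

        # Align run 2 onto run 1, then measure each node's cosine similarity with itself.
        R, _ = orthogonal_procrustes(run_b, run_a)
        aligned = run_b @ R
        cos = np.sum(aligned * run_a, axis=1) / (
            np.linalg.norm(aligned, axis=1) * np.linalg.norm(run_a, axis=1))
        print(f"mean per-node stability: {cos.mean():.3f} (min {cos.min():.3f})")

    High per-node similarity under such a check is what makes it safe to keep a fixed model and align only the newly arriving data.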