8 research outputs found

    Automated Detection of Dental Caries from Oral Images using Deep Convolutional Neural Networks

    The urgent demand for accurate and efficient diagnostic methods to combat oral diseases, particularly dental caries, has led to the exploration of advanced techniques. Dental caries, caused by bacterial activity that weakens tooth enamel, can result in severe cavities and infections if not treated promptly. Despite existing imaging techniques, consistent and early diagnosis remains challenging. Traditional approaches, such as visual and tactile examinations, are prone to variations in expertise, necessitating more objective diagnostic tools. This study leverages deep learning to propose an explainable methodology for automated dental caries detection in images. Using pre-trained convolutional neural networks (CNNs), including VGG-16, VGG-19, DenseNet-121, and Inception V3, we investigate different models and preprocessing techniques, such as histogram equalization and Sobel edge detection, to enhance the detection process. Comprehensive experiments on a dataset of 884 oral images demonstrate the efficacy of the proposed approach in achieving accurate caries detection. Notably, the VGG-16 model achieves the best accuracy, 98.3%, using the stochastic gradient descent (SGD) optimizer with Nesterov momentum. This research contributes an interpretable deep-learning-based solution for automated dental caries detection, enhancing diagnostic accuracy and offering potential insights for dental health assessment.
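A minimal sketch of the histogram-equalization preprocessing step the abstract mentions, on a toy grayscale "image" represented as a flat list of intensities; the function name and the sample pixels are illustrative, not from the paper.

```python
# Illustrative sketch of histogram equalization: remap grayscale
# intensities so their cumulative distribution is roughly uniform,
# which stretches low-contrast images across the full range.

def equalize_histogram(pixels, levels=256):
    """Return pixels remapped with the standard CDF-based equalization formula."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function (CDF) of intensities
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value
    n = len(pixels)
    scale = (levels - 1) / (n - cdf_min) if n > cdf_min else 0
    return [round((cdf[p] - cdf_min) * scale) for p in pixels]

# A dark, low-contrast 4-pixel "image" gets stretched across 0..255
print(equalize_histogram([10, 10, 12, 14]))  # [0, 0, 128, 255]
```

In a real pipeline this step would run on each oral image before it is fed to the CNN.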

    Real-time Twitter Sentiment Analysis for Moroccan Universities using Machine Learning and Big Data Technologies

    In recent years, sentiment analysis (SA) has attracted the interest of researchers in several domains, including higher education, where it can be applied to measure the quality of the services supplied by higher education institutions and to construct a university ranking mechanism from social media such as Twitter. Hence, this study presents a novel system for real-time Twitter sentiment prediction about Moroccan public universities. It consists of two phases: an offline sentiment analysis phase and a real-time prediction phase. In the offline phase, the collected French tweets about twelve Moroccan universities were classified by sentiment as ‘positive’, ‘negative’, or ‘neutral’ using six machine learning algorithms (random forest, multinomial Naive Bayes, logistic regression, decision tree, linear support vector classifier, and extreme gradient boosting) with the term frequency-inverse document frequency (TF-IDF) and count vectorizer feature extraction techniques. The results reveal that the random forest classifier coupled with TF-IDF obtained the best test accuracy, 90%. This model was then applied to real-time tweets. The real-time prediction pipeline comprises the Twitter streaming API for data collection, Apache Kafka for data ingestion, Apache Spark for real-time sentiment analysis, Elasticsearch for real-time data exploration, and Kibana for data visualization. The obtained results can be used by the Moroccan Ministry of Higher Education, Scientific Research and Innovation in its decision-making processes.
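A compact sketch of the TF-IDF feature extraction the study pairs with its classifiers; the three-document corpus and the exact weighting variant (raw term frequency times smoothed IDF) are assumptions for illustration, not the paper's configuration.

```python
# Sketch of TF-IDF: terms frequent in one document but rare across the
# corpus get high weights; ubiquitous terms are down-weighted.
import math

def tf_idf(corpus):
    """Return one {term: weight} dict per document."""
    docs = [doc.lower().split() for doc in corpus]
    n = len(docs)
    df = {}  # document frequency of each term
    for tokens in docs:
        for term in set(tokens):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for tokens in docs:
        vec = {}
        for term in tokens:
            tf = tokens.count(term) / len(tokens)
            idf = math.log(n / df[term]) + 1  # +1 keeps common terms non-zero
            vec[term] = tf * idf
        vectors.append(vec)
    return vectors

vecs = tf_idf(["great university", "great campus", "poor service"])
# "great" appears in 2 of 3 docs, so it weighs less than the rarer "poor"
print(vecs[2]["poor"] > vecs[0]["great"])  # True
```

The resulting sparse vectors are what a classifier such as random forest would consume.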

    Cloud-based sentiment analysis for measuring customer satisfaction in the Moroccan banking sector using Naïve Bayes and Stanford NLP

    In a world where we produce 2.5 quintillion bytes of data every day, sentiment analysis has been key to making sense of that data. However, processing huge volumes of text in real time requires building a data processing pipeline that minimizes the latency of handling data streams. In this paper, we explain and evaluate our proposed real-time customer sentiment analysis pipeline for the Moroccan banking sector, fed by data from the web and social networks and built from open-source big data tools: Apache Kafka for data ingestion, Apache Spark for in-memory data processing, Apache HBase for storing tweets and the satisfaction indicator, Elasticsearch and Kibana for visualization, and Node.js for the web application. The performance evaluation of the Naïve Bayes model shows that the accuracy reached 76.19% for French tweets, while for English tweets the result was unsatisfactory at 56%. To remedy this problem, we used Stanford CoreNLP, which reaches a precision of 80.7% on English tweets.
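A hedged sketch of a multinomial Naïve Bayes sentiment classifier in the spirit of the model evaluated above; the two training "tweets", the tokenization, and the Laplace smoothing choice are all illustrative assumptions.

```python
# Multinomial Naive Bayes: pick the label maximizing
# log P(label) + sum over words of log P(word | label), Laplace-smoothed.
import math
from collections import Counter, defaultdict

def train_nb(samples):
    """samples: list of (tokens, label). Returns (priors, word counts, vocab)."""
    priors = Counter(label for _, label in samples)
    counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in samples:
        counts[label].update(tokens)
        vocab.update(tokens)
    return priors, counts, vocab

def predict_nb(priors, counts, vocab, tokens):
    total = sum(priors.values())
    best, best_score = None, -math.inf
    for label, prior in priors.items():
        denom = sum(counts[label].values()) + len(vocab)  # Laplace smoothing
        score = math.log(prior / total)
        for w in tokens:
            score += math.log((counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

model = train_nb([(["service", "excellent"], "positive"),
                  (["very", "slow", "service"], "negative")])
print(predict_nb(*model, ["excellent"]))  # positive
```

In the pipeline described above, such a model would sit inside the Spark stage, scoring each incoming tweet.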

    Self-Attention-Based Bi-LSTM Model for Sentiment Analysis on Tweets about Distance Learning in Higher Education

    To limit the spread of COVID-19, countries around the world implemented prevention measures such as lockdowns, social distancing, and the closure of educational institutions. As a result, most academic activities shifted to distance learning. This study proposes a deep learning approach for analyzing people’s sentiments (positive, negative, and neutral) on Twitter regarding distance learning in higher education. We collected and pre-processed 24,642 English tweets about distance learning posted between July 20, 2022, and November 06, 2022. A self-attention-based Bi-LSTM model with GloVe word embeddings was then used for sentiment classification. The proposed model’s performance was compared to LSTM (Long Short-Term Memory), Bi-LSTM (Bidirectional LSTM), and CNN-Bi-LSTM (Convolutional Neural Network-Bi-LSTM). Our model obtains the best test accuracy, 95%, on a stratified 90:10 split. The results reveal generally neutral sentiments about distance learning in higher education, followed by positive sentiments, particularly in psychology and computer science, and negative sentiments in biology and chemistry. According to the obtained results, the proposed approach outperforms state-of-the-art methods.
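A small sketch of the self-attention pooling that such a Bi-LSTM model typically places over its hidden states: each timestep is scored, the scores are softmax-normalized, and the hidden states are combined as a weighted sum. The 2-dimensional toy "hidden states" and the scoring vector `w` are invented values, not the paper's trained parameters.

```python
# Additive self-attention pooling over a sequence of hidden states.
import math

def attention_pool(hidden_states, w):
    """Score each timestep with dot(h, w), softmax the scores, and
    return (attention weights, weighted-sum context vector)."""
    scores = [sum(hi * wi for hi, wi in zip(h, w)) for h in hidden_states]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for stability
    total = sum(exps)
    alphas = [e / total for e in exps]
    dim = len(hidden_states[0])
    context = [sum(a * h[d] for a, h in zip(alphas, hidden_states))
               for d in range(dim)]
    return alphas, context

# Three 2-dim "Bi-LSTM outputs"; the second scores highest and dominates
alphas, context = attention_pool([[0.1, 0.0], [2.0, 1.0], [0.2, 0.1]],
                                 w=[1.0, 1.0])
print(max(alphas) == alphas[1])  # True
```

The context vector would then feed a dense softmax layer producing the positive/negative/neutral prediction.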

    Implications of Reduced-Precision Computations in HPC: Performance, Energy and Error

    Error-tolerating applications are increasingly common in the emerging field of real-time HPC. Proposals have been made at the hardware level to take advantage of inherent perceptual limitations, redundant data, or reduced-precision input [20], as well as to reduce system costs or improve power efficiency [19]. At the same time, work on floating-point to fixed-point conversion tools [9] allows us to trade off algorithm exactness for a more efficient implementation. In this work, we aim to leverage existing, HPC-oriented hardware architectures while including in the precision tuning an adaptive selection of floating- and fixed-point arithmetic. Our proposed solution takes advantage of the programmers' application domain knowledge by involving them in the first step of the interaction chain. We rely on annotations written by the programmer in the input file to know which variables of a computational kernel should be converted to fixed-point. The second stage replaces the floating-point variables in the kernel with fixed-point equivalents. It also adds to the original source code the utility functions that perform data type conversions from floating-point to fixed-point and vice versa. The output of the second stage is a new version of the kernel source code that uses fixed-point computation instead of floating-point computation. As opposed to typical custom-width hardware designs, we rely only on the standard 16-bit, 32-bit, and 64-bit types. We also explore the impact of the fixed-point representation on auto-vectorization, and we discuss the effect of our solution in terms of time-to-solution, error, and energy-to-solution.
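The float-to-fixed conversion idea can be sketched as follows: values are scaled by 2^f, stored as integers, and arithmetic tracks the implied scale. The Q15 format (15 fractional bits in a standard-width integer) chosen here is our assumption for illustration; the tool itself operates on C/C++ source.

```python
# Fixed-point arithmetic sketch: a real number x is stored as the
# integer round(x * 2^FRAC_BITS); multiplications shift back to keep
# the number of fractional bits constant.

FRAC_BITS = 15  # Q15: 15 fractional bits

def to_fixed(x):
    return round(x * (1 << FRAC_BITS))

def to_float(q):
    return q / (1 << FRAC_BITS)

def fixed_mul(a, b):
    # The product of two Q15 numbers has 30 fractional bits; shift back to 15
    return (a * b) >> FRAC_BITS

a, b = to_fixed(0.75), to_fixed(0.5)
print(to_float(fixed_mul(a, b)))  # 0.375
```

The conversion utilities the second stage injects into the kernel source play the role of `to_fixed`/`to_float` here, while the kernel body works entirely on the integer representation.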

    A framework for automatic and parameterizable memoization

    Improving execution time and energy efficiency is needed for many applications and usually requires sophisticated code transformations and compiler optimizations. One such optimization technique is memoization, which saves the results of computations so that future computations with the same inputs can be avoided. In this article we present a framework that automatically applies memoization techniques to C/C++ applications. The framework is based on automatic code transformations using a source-to-source compiler and on a memoization library. With the framework, users can select functions to memoize as long as they obey certain restrictions imposed by our current memoization library. We show the use of the framework and the associated memoization technique, and its impact on reducing the execution time and energy consumption of four representative benchmarks.

    ANTAREX: A DSL-Based approach to adaptively optimizing and enforcing extra-functional properties in high performance computing

    The ANTAREX project relies on a Domain Specific Language (DSL) based on Aspect Oriented Programming (AOP) concepts to allow applications to enforce extra-functional properties such as energy efficiency and performance and to optimize Quality of Service (QoS) in an adaptive way. The DSL approach allows the definition of energy-efficiency, performance, and adaptivity strategies as well as their enforcement at runtime through application autotuning and resource and power management. In this paper, we present an overview of the ANTAREX DSL and some of its capabilities through a number of examples, including how the DSL is applied in the context of one of the project use cases.
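As a loose analogy (not the ANTAREX DSL itself), the AOP idea of weaving monitoring concerns around a kernel without editing its source can be sketched with a decorator; the `monitor` wrapper, the 1-second QoS budget, and the `kernel` function are all invented for illustration.

```python
# Aspect-style separation of concerns: timing/QoS logic is woven around
# the kernel by a wrapper instead of being mixed into the kernel body.
import time

def monitor(threshold_s):
    """Wrap a function so each call is timed and QoS violations counted."""
    def weave(func):
        def wrapped(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed = time.perf_counter() - start
            wrapped.violations += int(elapsed > threshold_s)
            return result
        wrapped.violations = 0
        return wrapped
    return weave

@monitor(threshold_s=1.0)
def kernel(n):
    return sum(i * i for i in range(n))

kernel(1000)
print(kernel.violations)  # fast call stays within the 1 s budget: 0
```

In the DSL, the analogous strategies live in separate aspect files, and an autotuner (rather than a fixed threshold) decides how to react at runtime.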

    Autotuning and adaptivity in energy efficient HPC systems: The ANTAREX toolbox (invited paper)

    Designing and optimizing applications for energy-efficient High Performance Computing systems up to the Exascale era is an extremely challenging problem. This paper presents the toolbox developed in the ANTAREX European project for autotuning and adaptivity in energy-efficient HPC systems. In particular, the modules of the ANTAREX toolbox are described, as well as some preliminary results of its application to two target use cases.