17 research outputs found

    Multimodal fission for interaction systems

    Computer systems were born of scientific needs, but their success came from mainstream adoption. This motivated researchers to develop systems that satisfy users' needs and aim at democratizing their use on a large scale. Current technological progress has created the need for machines that are ever more powerful, easy to use, and responsive to users' needs. To reach these goals, machines must be able to interact harmoniously with the user, which is only possible if they can understand human communication. Human communication relies on several natural modalities such as speech, gestures, gaze, and facial expressions. Inspired by human communication, multimodal systems were developed to combine several modalities according to the task, the user's preferences, and communicative intentions. This thesis falls within that framework; its main theme is multimodal fission for interaction systems. Our research has three main objectives. First, we propose an architecture, useful in any multimodal system, that implements a multimodal fission module; it is modeled, formally specified, and refined using colored Petri nets. Second, we built a domain ontology describing the environment of the multimodal system. This model also contains the scenarios applicable for carrying out fission, stored as patterns; our fission algorithm relies on this pattern technique.
We defined two patterns: 1) the fission pattern, which selects the elementary sub-tasks of a complex command, and 2) the modality pattern, which associates each sub-task with the adequate modality or modalities. Third, we proposed a new context-based method using Bayesian networks to resolve ambiguity and uncertainty in a multimodal fission system. These techniques were validated through case studies using colored Petri nets and the CPN-Tools simulator. Two applications were implemented: 1) an interface for controlling a robot, usable to assist disabled or elderly people, implemented to validate the pattern technique in the fission process; and 2) a GPS interface that gives a car driver driving directions, implemented to validate our new context-based method using a Bayesian network in the presence of ambiguity or uncertainty.
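The two-pattern pipeline described in the abstract can be sketched as follows. All commands, sub-task names, and modality assignments below are invented for illustration; the thesis defines its actual patterns over a domain ontology.

```python
# Hypothetical fission pattern: decomposes a complex command into
# elementary sub-tasks (names are illustrative, not from the thesis).
FISSION_PATTERNS = {
    "fetch_object": ["locate_object", "move_to_object", "grasp_object", "return"],
}

# Hypothetical modality pattern: maps each sub-task to the output
# modality or modalities judged adequate for it.
MODALITY_PATTERNS = {
    "locate_object": ["speech"],
    "move_to_object": ["display", "speech"],
    "grasp_object": ["display"],
    "return": ["speech"],
}

def fission(command):
    """Split a complex command into (sub-task, modalities) pairs."""
    subtasks = FISSION_PATTERNS.get(command, [command])
    return [(t, MODALITY_PATTERNS.get(t, ["speech"])) for t in subtasks]

for task, modalities in fission("fetch_object"):
    print(task, "->", modalities)
```

A real fission module would draw these tables from the ontology and consult the Bayesian-network layer when several modality assignments remain plausible.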

    Using Probabilistic Temporal Logic PCTL and Model Checking for Context Prediction

    Context prediction is a promising research topic with many challenges and opportunities. Indeed, despite the constant evolution of context-aware systems, context prediction remains a complex task due to the lack of formal approaches. In this paper, we propose a new approach to enhance context prediction using a probabilistic temporal logic and model checking. The probabilistic temporal logic PCTL provides efficient expressivity and temporal-logic reasoning suited to the dynamic and non-deterministic nature of the system's environment, while probabilistic model checking automatically verifies that a probabilistic system satisfies a property with a given likelihood. Our approach allows a formal expression of multidimensional context prediction. Tested on real data, our model achieved 78% accuracy in predicting future activities.
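The quantity a PCTL bounded-until formula such as P>=p [ true U<=k target ] asks about is the probability of reaching a target state within k steps of a discrete-time Markov chain. A minimal sketch, with invented context states and transition probabilities (the paper's actual model and data differ):

```python
# Toy discrete-time Markov chain over context states.
# States and probabilities are illustrative assumptions only.
TRANSITIONS = {
    "home":    {"home": 0.6, "commute": 0.4},
    "commute": {"work": 0.8, "home": 0.2},
    "work":    {"work": 1.0},
}

def prob_reach(state, target, k):
    """Probability of reaching `target` from `state` within k steps,
    i.e. the value checked by P>=p [ true U<=k target ]."""
    if state == target:
        return 1.0
    if k == 0:
        return 0.0
    return sum(p * prob_reach(nxt, target, k - 1)
               for nxt, p in TRANSITIONS[state].items())

print(round(prob_reach("home", "work", 3), 3))  # 0.512
```

A model checker such as PRISM computes the same value by solving the underlying linear recurrences rather than enumerating paths.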

    Privacy Preserving Face Recognition in Cloud Robotics : A Comparative Study

    Abstract: Real-time robotic applications are constrained by the robot's limited on-board resources. The speed of robot face recognition can be improved by incorporating cloud technology. However, transmitting data to cloud servers exposes it to security and privacy attacks, so encryption algorithms need to be set up. This paper studies the security and performance of potential encryption algorithms and their impact on the accuracy of the deep-learning-based face recognition task. To this end, experiments are conducted for robot face recognition through various deep learning algorithms after encrypting the images of the ORL database using cryptography-based and image-processing-based algorithms.
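The pipeline the paper studies — encrypt on the robot, transmit, decrypt (or process encrypted data) in the cloud — can be illustrated with a toy symmetric cipher. The XOR stream cipher below is *not* one of the algorithms compared in the paper and is not secure; it only shows the encrypt-before-upload step on raw image bytes.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key by iterated SHA-256.
    Toy construction for illustration; not cryptographically secure."""
    out, block = bytearray(), key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out.extend(block)
    return bytes(out[:n])

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """XOR the data with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

pixels = bytes(range(16))             # stand-in for raw image bytes
ct = xor_encrypt(pixels, b"robot-key")
pt = xor_encrypt(ct, b"robot-key")    # XOR cipher is its own inverse
assert pt == pixels and ct != pixels
```

A real deployment would use a vetted cipher (e.g. AES) or one of the image-scrambling schemes the paper compares, since the choice directly affects recognition accuracy on the server side.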

    Cognitive IoT-based e-Learning System : enabling context-aware remote schooling during the pandemic

    Abstract: The 2019–2020 coronavirus pandemic had far-reaching consequences beyond the spread of the disease and efforts to cure it. Today, it is obvious that the pandemic devastated key sectors ranging from health to economy, culture, and education. As far as education is concerned, one direct result of the spread of the pandemic was the suspension of traditional in-person classroom courses in favor of remote learning and homeschooling, exploiting e-learning technologies. These technologies, however, face many challenges, mostly centered on the efficiency of the delivery methods, interactivity, and knowledge testing. These issues raise the need for an advanced smart educational system that assists home-schooled students, provides teachers with a range of smart new tools, and enables a dynamic and interactive e-learning experience. Technologies like the Internet of Things (IoT) and artificial intelligence (AI), including cognitive models and context-awareness, can be a driving force in the future of e-learning, opening many opportunities to overcome the limitations of existing remote learning systems and provide an efficient, reliable, augmented learning experience. Furthermore, virtual reality (VR) and augmented reality (AR), introduced in education as a way of asynchronous learning, can be a second driving force of future synchronous learning: the teacher and students can see each other in a virtual class even if they are geographically spread across a city, a country, or the globe. The main goal of this work is to design and provide a model supporting intelligent teaching assistance and engaging e-learning activity. This paper presents ViRICTA, a new intelligent system proposing an end-to-end solution with a technology stack integrating the Internet of Things and artificial intelligence.
The designed system aims to enable a valuable learning experience, providing efficient, interactive, and proactive context-aware smart learning services.

    One-Dimensional CNN Approach for ECG Arrhythmia Analysis in Fog-Cloud Environments

    Cardiovascular diseases are considered the number one cause of death across the globe and can be primarily identified by the abnormal heart rhythms of the patients. By generating electrocardiogram (ECG) signals, wearable Internet of Things (IoT) devices can consistently track the patient's heart rhythms. Although Cloud-based approaches for ECG analysis can achieve some levels of accuracy, they still have limitations, such as high latency. Conversely, the Fog computing infrastructure is more powerful than edge devices but less capable than Cloud computing of executing computationally intensive data analytic software. The Fog infrastructure can consist of Fog-based gateways directly connected to the wearable devices, offering advanced benefits including low latency and high quality of service. To address these issues, a modular one-dimensional convolutional neural network (1D-CNN) approach is proposed in this work. The inference module of the proposed approach is deployable over the Fog infrastructure for analysing the ECG signals and initiating emergency countermeasures with minimum delay, whereas its training module is executable on the computationally enriched Cloud data centers. The proposed approach achieves an F1-measure score ≈1 on the MIT-BIH Arrhythmia database when applying the GridSearch algorithm with cross-validation. The approach has also been implemented on a single-board computer and a Google Colab-based hybrid Fog-Cloud infrastructure and integrated into a remote patient monitoring system, showing a 25% improvement in overall response time.
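The core operation of a 1D-CNN inference module — sliding a learned kernel along the ECG signal and applying a nonlinearity — can be sketched in a few lines. The sample values and kernel below are invented; the paper's network is trained on MIT-BIH data with GridSearch-tuned hyperparameters.

```python
def conv1d(signal, kernel, stride=1):
    """Valid-mode 1-D convolution (cross-correlation) of a signal with a kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

def relu(xs):
    """Rectified linear activation applied elementwise."""
    return [max(0.0, x) for x in xs]

# A short ECG-like sample and a difference (edge-detecting) kernel,
# both illustrative only.
ecg = [0.0, 0.1, 0.9, 0.2, 0.0, -0.1, 0.0]
feature_map = relu(conv1d(ecg, [-1.0, 0.0, 1.0]))
print(feature_map)
```

In the Fog deployment described, only this lightweight forward pass runs on the gateway; weight updates happen in the Cloud training module.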

    Deep convolutional neural network regularization for alcoholism detection using EEG signals

    Alcoholism is attributed to regular or excessive drinking of alcohol and leads to disturbance of the neuronal system in the human brain. This results in certain malfunctioning of neurons that can be detected by an electroencephalogram (EEG) using several electrodes placed at appropriate positions on the scalp. It is of great interest to classify an EEG activity as that of a normal or an alcoholic person using data from the minimum possible number of electrodes (or channels). Due to the complex nature of EEG signals, accurate classification of alcoholism using only a small dataset is a challenging task. Artificial neural networks, specifically convolutional neural networks (CNNs), provide efficient and accurate results in various pattern-based classification problems. In this work, we apply a CNN to raw EEG data and demonstrate how we achieved 98% average accuracy by optimizing a baseline CNN model and outperforming its results across a range of performance evaluation metrics on the University of California at Irvine Machine Learning (UCI-ML) EEG dataset. The article explains the stepwise improvement of the baseline model using dropout, batch normalization, and kernel regularization, and compares the two models in a way that can benefit practitioners who aim to develop similar CNN classification models. A performance comparison with other approaches using the same dataset is also provided.
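Two of the regularization techniques named above, inverted dropout and kernel (L2) regularization, reduce to a few lines each. This is a framework-free sketch with invented rates and weights, not the paper's Keras/TensorFlow configuration.

```python
import random

def dropout(activations, rate, training=True, rng=random.Random(0)):
    """Inverted dropout: during training, zero each unit with probability
    `rate` and rescale survivors by 1/(1-rate); identity at inference."""
    if not training or rate == 0.0:
        return list(activations)
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

def l2_penalty(weights, lam):
    """Kernel (L2) regularization term added to the training loss."""
    return lam * sum(w * w for w in weights)

regularized_loss_term = l2_penalty([0.5, -1.2, 0.3], lam=0.01)
hidden = dropout([0.7, 0.2, 0.9, 0.4], rate=0.5)
```

Batch normalization, the third technique, additionally standardizes each mini-batch of activations before scaling and shifting with learned parameters.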

    Smart Architecture Energy Management through Dynamic Bin-Packing Algorithms on Cloud

    Smart home architecture is suitable for progressive and symmetric urbanization. Data generated by smart-home appliances using the Internet of Things should be stored in the cloud, where computing resources can analyze it and derive decisive patterns quickly. This additional storage requirement, largely comprising unfiltered data, escalates the need for host machines, which carries the extra overhead of energy consumption; thus, extra cost has to be borne by service providers. Various static algorithms have already been proposed to improve the energy management of cloud data centers by reducing the number of active bins. These algorithms cannot cater to today's heterogeneous requests, generated on cloud machines by people from diverse work environments, while adhering to quality requirements. Therefore, this paper proposes and implements dynamic bin-packing approaches for smart architecture that can significantly reduce energy consumption without compromising makespan, resource utilization, or Quality of Service (QoS) parameters. The novelty of the proposed dynamic approaches, in comparison to the existing static approaches, is that they dynamically create and dissolve virtual machines as requests arrive and complete, a dire need of present computing paradigms, by attaching a time-frame to each virtual machine. Simulations were performed in Java, and the dynamic energy-utilized best-fit-decreasing bin-packing technique produced the best results in most runs.
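The packing step underlying the best-fit-decreasing variant can be sketched as follows; request sizes and the host capacity are invented. This shows only the static placement step — the paper's dynamic contribution additionally attaches a time-frame to each virtual machine so hosts can be dissolved when requests complete.

```python
def best_fit_decreasing(requests, bin_capacity):
    """Best-fit-decreasing: sort requests largest first, place each in the
    fullest host (bin) that still fits it, and power on a new host only
    when none fits. Returns the remaining capacity of each active host."""
    bins = []  # remaining capacity per active host
    for r in sorted(requests, reverse=True):
        # the "best" bin is the feasible one with least remaining capacity
        candidates = [i for i, free in enumerate(bins) if free >= r]
        if candidates:
            best = min(candidates, key=lambda i: bins[i])
            bins[best] -= r
        else:
            bins.append(bin_capacity - r)  # power on a new host
    return bins

hosts = best_fit_decreasing([4, 8, 1, 4, 2, 1], bin_capacity=10)
print(len(hosts))  # 2 hosts suffice for these requests
```

Minimizing the number of active hosts is what translates into the energy savings the paper targets, since idle hosts can be switched off.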