20 research outputs found

    Détection des écarts de tendance et analyse prédictive pour le traitement des flux d’événements en temps réel

    Information systems produce different types of event logs. The historical data contained in event logs can reveal important information about the execution of a business process. To be useful, the growing volume of collected data must be processed to extract relevant information. In many situations, it may be desirable to look for trends in these logs. In particular, trends computed by processing and analyzing the sequence of events generated by multiple instances of the same process serve as a basis for producing forecasts about current executions of the process. The objective of this thesis is to propose a generic framework for real-time trend analysis over such event streams. First, we show how trends of various types can be computed over event logs in real time, using a generic framework called the trend distance workflow. Many common computations over event streams turn out to be special cases of this workflow, depending on how its parameters are instantiated. The natural continuation of static trend analysis is the use of learning algorithms. We therefore combine the concepts of event stream processing and machine learning to create a framework that enables the computation of different types of predictions over event logs. The proposed framework is generic: by providing different definitions for a handful of event functions, several different types of predictions can be computed using the same basic workflow. Both approaches were implemented and evaluated experimentally by extending an existing event stream processing engine called BeepBeep. Experimental results show that deviations from a reference trend can be detected in real time for streams producing up to thousands of events per second
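The window/trend/distance decomposition that the abstract describes can be illustrated with a minimal sketch. All names, the choice of event-type frequencies as the "trend", and the L1 distance are illustrative assumptions, not the thesis's actual BeepBeep implementation:

```python
from collections import Counter, deque

def trend_distance_stream(events, reference, window_size, threshold, distance):
    """Sliding-window trend distance (illustrative sketch).

    For each full window of the last `window_size` events, compute a trend
    and emit True when its distance to the `reference` trend exceeds
    `threshold`.
    """
    window = deque(maxlen=window_size)  # oldest event drops off automatically
    for event in events:
        window.append(event)
        if len(window) == window_size:
            # The "trend" here is the relative frequency of each event type
            counts = Counter(window)
            trend = {k: v / window_size for k, v in counts.items()}
            yield distance(trend, reference) > threshold

def l1_distance(a, b):
    """L1 distance between two frequency distributions given as dicts."""
    keys = set(a) | set(b)
    return sum(abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in keys)
```

For example, a stream that alternates between two event types and then degenerates to a single type first stays within threshold, then triggers a deviation flag once the window no longer matches the uniform reference. Instantiating the window size, trend function, and distance metric differently recovers different special cases, which is the point of the generic workflow.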

    Advances in Information Security and Privacy

    With the recent pandemic emergency, many people are spending their days working remotely and have increased their use of digital resources for both work and entertainment. As a result, the amount of digital information handled online has increased dramatically, and we can observe a significant increase in the number of attacks, breaches, and hacks. This Special Issue aims to establish the state of the art in protecting information by mitigating information risks. This objective is reached by presenting both surveys on specific topics and original approaches and solutions to specific problems. In total, 16 papers have been published in this Special Issue

    Human Enhancement Technologies and Our Merger with Machines

    A cross-disciplinary approach is offered to consider the challenge of emerging technologies designed to enhance human bodies and minds. Perspectives from philosophy, ethics, law, and policy are applied to a wide variety of enhancements, including integration of technology within human bodies, as well as genetic, biological, and pharmacological modifications. Humans may be permanently or temporarily enhanced with artificial parts by manipulating (or reprogramming) human DNA and through other enhancement techniques (and combinations thereof). We are on the cusp of significantly modifying (and perhaps improving) the human ecosystem. This evolution necessitates a continuing effort to re-evaluate current laws and, if appropriate, to modify such laws or develop new laws that address enhancement technology. A legal, ethical, and policy response to current and future human enhancements should strive to protect the rights of all involved and to recognize the responsibilities of humans to other conscious and living beings, regardless of what they look like or what abilities they have (or lack). A potential ethical approach is outlined in which rights and responsibilities should be respected even if enhanced humans are perceived by non-enhanced (or less-enhanced) humans as “no longer human” at all

    Catalog 2021-2022


    A study on the Probabilistic Interval-based Event Calculus

    Complex Event Recognition is the subfield of Artificial Intelligence that aims to design and construct systems that quickly process large and often heterogeneous streams of data and deduce in a timely manner – based on definitions set by domain experts – the occurrence of non-trivial and interesting incidents. The purpose of such systems is to provide useful insights into involved and demanding situations that would otherwise be difficult to monitor, and to assist decision making. Uncertainty and noise are inherent in such data streams and, therefore, Probability Theory becomes necessary in order to deal with them. The probabilistic recognition of Complex Events can be done in a timepoint-based or an interval-based manner. This thesis focuses on PIEC, a state-of-the-art probabilistic, interval-based Complex Event Recognition algorithm. We present the algorithm and examine it in detail. We study its correctness through a series of mathematical proofs of its soundness and completeness. Afterwards, we provide a thorough experimental evaluation and comparison to point-based probabilistic Event Recognition methods. Our evaluation shows that PIEC consistently displays better Recall, often at the expense of generally worse Precision. We then focus on cases where PIEC performs significantly better and cases where it falls short, in an effort to detect and state its main strengths and weaknesses. We also set out general directions for further research on the topic, parts of which are already in progress
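To make the notion of probabilistic maximal intervals concrete, here is a simplified quadratic-time sketch; it is an illustrative assumption, not the actual PIEC algorithm (which uses prefix sums to achieve linear time). An interval of timepoints is treated as credible when its mean event probability reaches a threshold, and only intervals not strictly contained in another credible interval are kept:

```python
def maximal_credible_intervals(probs, threshold):
    """Return all maximal intervals (start, end), inclusive and 0-based,
    whose mean probability is at least `threshold`.

    Simplified O(n^2) sketch of interval-based recognition, not PIEC itself.
    """
    n = len(probs)
    candidates = []
    for i in range(n):
        best = None
        total = 0.0
        for j in range(i, n):
            total += probs[j]
            # Mean over [i, j] is at least `threshold` iff
            # total >= threshold * interval length
            if total >= threshold * (j - i + 1):
                best = j  # longest credible interval starting at i so far
        if best is not None:
            candidates.append((i, best))
    # Keep only intervals not contained in another credible interval
    return [c for c in candidates
            if not any(o != c and o[0] <= c[0] and c[1] <= o[1]
                       for o in candidates)]
```

For instance, with pointwise probabilities `[0.2, 0.9, 0.9, 0.2, 0.2]` and threshold `0.5`, the sketch returns the two overlapping maximal intervals `(0, 3)` and `(1, 4)`; overlapping maximal intervals are possible, which is one way interval-based recognition differs from timepoint-based recognition.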

    Orthographic practices in SMS text messaging as a case signifying diachronic change in linguistic and semiotic resources

    From 1998, SMS text messaging diffused in the UK from an innovation associated with a small minority, mainly adolescents, to a method of written communication practised routinely by people of all ages and social profiles. From its earliest use, and continuing to the time of writing in 2015, SMS texting has attracted strong evaluation in public sphere commentary, often focused on its spelling. This thesis presents analysis of SMS orthographic choice as practised by a sample of adolescents and young adults in England, with data collected between 2000 and 2012. A three-level analytical framework attends to the textual evidence of SMS orthographic practices in situated use; respondents’ accounts of their choices of spelling in text messaging as a literacy practice; and the metadiscursive evaluation of text messaging spelling in situated interaction and in the public sphere. I present analysis of a variety of representations of SMS orthographic choice, including facsimile texts, electronic corpus data, questionnaire survey responses and transcripts of recorded interviews. This mixed-methods empirical approach enables a cross-verified, longitudinal perspective on respondents’ practices, and on the wider significance of SMS orthographic choice, as expressed in private and public commentary. I argue that the spelling used in SMS exemplifies features, patterns, and behaviours that are found in other forms of digitally-mediated interaction, and in previous and concurrent vernacular literacy practices. I present SMS text messaging as one of the intertextually-related forms of self-published written interaction which mark a diachronic shift towards re-regulated forms of orthographic convention, so disrupting attitudes to standard English spelling. I consider some implications represented by SMS spelling choice for the future of written conventions in standardised English, and for teaching and learning about spelling and literacy in formal educational settings

    Contributions to Desktop Grid Computing : From High Throughput Computing to Data-Intensive Sciences on Hybrid Distributed Computing Infrastructures

    Since the mid-90s, Desktop Grid Computing – i.e. the idea of using a large number of remote PCs distributed on the Internet to execute large parallel applications – has proved to be an efficient paradigm to provide a large computational power at a fraction of the cost of a dedicated computing infrastructure. This document presents my contributions over the last decade to broaden the scope of Desktop Grid Computing. My research has followed three different directions. The first direction has established new methods to observe and characterize Desktop Grid resources and developed experimental platforms to test and validate our approach in conditions close to reality. The second line of research has focused on integrating Desktop Grids in e-science Grid infrastructure (e.g. EGI), which requires addressing many challenges such as security, scheduling, quality of service, and more. The third direction has investigated how to support large-scale data management and data-intensive applications on such infrastructures, including support for the new and emerging data-oriented programming models. This manuscript not only reports on the scientific achievements and the technologies developed to support our objectives, but also on the international collaborations and projects I have been involved in, as well as the scientific mentoring which motivates my candidature for the Habilitation à Diriger les Recherches

    Concept Mapping Strategy For Academic Writing Tutorial In Open And Distant Learning Higher Institution

    Universitas Terbuka (UT), an open and distance higher education institution in Indonesia, conducts an in-service teacher education program. In order to complete the program, the students – mostly teachers – have to submit a final academic paper. In fact, most UT students have difficulty writing this academic paper. UT offers an academic writing course to address this problem, yet most students still view academic writing as a difficult assignment to complete. UT therefore has to find an appropriate instructional strategy that can facilitate students in writing the academic paper. One instructional strategy that can be selected to address academic writing problems is concept mapping. The aim of this study is to elaborate the implementation of concept mapping as an instructional strategy to help open and distance learning students complete academic writing assignments. A design-based research approach was applied to measure the effectiveness of the concept mapping strategy in helping students gain academic writing skills. The steps of the research and development model of Borg, Gall and Gall, consisting of instructional design and development phases, were implemented in this study. The results of this study indicated that students were supported by, and enjoyed, the process of academic writing using the concept mapping strategy