
    Usage of Network Simulators in Machine-Learning-Assisted 5G/6G Networks

    Without any doubt, Machine Learning (ML) will be an important driver of future communications due to its foreseen performance when applied to complex problems. However, the application of ML to networking systems raises concerns among network operators and other stakeholders, especially regarding trustworthiness and reliability. In this paper, we examine the role of network simulators in bridging the gap between ML and communications systems. In particular, we present an architectural integration of simulators into ML-aware networks for training, testing, and validating ML models before they are applied to the operative network. Moreover, we provide insights into the main challenges resulting from this integration and discuss how they can be overcome. Finally, we illustrate the integration of network simulators into ML-assisted communications through a proof-of-concept testbed implementation of a residential Wi-Fi network.
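
    To make the simulator-in-the-loop workflow described above concrete, the minimal sketch below trains a model purely on simulator-generated Wi-Fi scenarios and gates deployment on its simulated validation error. The WiFiSimulator class, its features, and the error threshold are invented placeholders, not the authors' testbed or API.

```python
# Minimal sketch of a simulator-in-the-loop ML workflow.
# WiFiSimulator and its methods are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor


class WiFiSimulator:
    """Toy stand-in for a residential Wi-Fi network simulator."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def sample_scenarios(self, n):
        # Features: number of stations, channel load, mean signal strength (dBm).
        stations = self.rng.integers(1, 20, size=n)
        load = self.rng.uniform(0.0, 1.0, size=n)
        rssi = self.rng.uniform(-90, -30, size=n)
        X = np.column_stack([stations, load, rssi])
        # Synthetic throughput label produced by the simulator's internal model.
        y = 100.0 * (1 - load) / stations + 0.2 * (rssi + 90) + self.rng.normal(0, 2, n)
        return X, y


sim = WiFiSimulator()
X_train, y_train = sim.sample_scenarios(5000)   # offline training data from the simulator
X_val, y_val = sim.sample_scenarios(1000)       # held-out scenarios for validation

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Gate deployment on simulated validation error before touching the live network.
val_error = np.mean(np.abs(model.predict(X_val) - y_val))
if val_error < 5.0:  # threshold chosen arbitrarily for illustration
    print(f"Validated in simulation (MAE={val_error:.2f} Mbps); ready for the operative network.")
else:
    print(f"Rejected: simulated MAE {val_error:.2f} Mbps exceeds the deployment threshold.")
```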

    A Review of Further Directions for Artificial Intelligence, Machine Learning, and Deep Learning in Smart Logistics

    Industry 4.0 concepts and technologies drive the ongoing development of micro- and macro-economic entities by focusing on the principles of interconnectivity, digitalization, and automation. In this context, artificial intelligence is seen as one of the major enablers for Smart Logistics and Smart Production initiatives. This paper systematically analyzes the scientific literature on artificial intelligence, machine learning, and deep learning in the context of Smart Logistics management in industrial enterprises. Furthermore, based on the results of the systematic literature review, the authors present a conceptual framework that draws implications from recent research findings and insights and can be used to direct and initiate future research in the field of artificial intelligence (AI), machine learning (ML), and deep learning (DL) in Smart Logistics.

    Application of hybrid algorithms and Explainable Artificial Intelligence in genomic sequencing

    DNA sequencing is one of the fields that has advanced the most in recent years within clinical genetics and human biology. However, the large amount of data generated through next generation sequencing (NGS) techniques requires advanced data analysis processes that are sometimes complex and beyond the capabilities of clinical staff. Therefore, this work aims to shed light on the possibilities of applying hybrid algorithms and explainable artificial intelligence (XAI) to data obtained through NGS. The suitability of each architecture will be evaluated phase by phase in order to offer final recommendations that allow implementation in the clinical sequencing workflow.
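
    As a purely illustrative sketch of the kind of explainability step this line of work targets, the snippet below applies a model-agnostic XAI technique (permutation importance) to invented variant-level features of the sort an NGS pipeline might produce. The feature names, data, and model are assumptions and do not reflect the architectures evaluated in the paper.

```python
# Illustrative only: model-agnostic explainability (permutation importance)
# on synthetic variant-level features; names and data are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
# Hypothetical per-variant features: read depth, allele frequency,
# mapping quality, and a conservation score.
X = np.column_stack([
    rng.integers(10, 200, n),        # read depth
    rng.uniform(0.0, 1.0, n),        # allele frequency
    rng.uniform(20, 60, n),          # mapping quality
    rng.uniform(0.0, 1.0, n),        # conservation score
])
# Synthetic "pathogenic vs. benign" label driven by frequency and conservation.
y = ((X[:, 1] > 0.4) & (X[:, 3] > 0.6)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance gives a ranked, model-agnostic view of which
# inputs drive the model's predictions on held-out data.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(["depth", "allele_freq", "map_qual", "conservation"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```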

    Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications

    There has been a growing interest in model-agnostic methods that can make deep learning models more transparent and explainable to a user. Some researchers recently argued that for a machine to achieve a certain degree of human-level explainability, it needs to provide causally understandable explanations to humans, a property known as causability. A specific class of algorithms with the potential to provide causability are counterfactuals. This paper presents an in-depth systematic review of the diverse existing body of literature on counterfactuals and causability for explainable artificial intelligence. We performed an LDA topic modelling analysis under a PRISMA framework to find the most relevant literature articles. This analysis resulted in a novel taxonomy that considers the grounding theories of the surveyed algorithms, together with their underlying properties and applications in real-world data. This research suggests that current model-agnostic counterfactual algorithms for explainable AI are not grounded in a causal theoretical formalism and, consequently, cannot promote causability to a human decision-maker. Our findings suggest that the explanations derived from major algorithms in the literature capture spurious correlations rather than cause-effect relationships, leading to sub-optimal, erroneous, or even biased explanations. This paper also advances the literature with new directions and challenges for promoting causability in model-agnostic approaches to explainable artificial intelligence.
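
    For readers unfamiliar with the concept, the snippet below shows a deliberately naive, model-agnostic counterfactual search on a toy classifier: it perturbs an input until the predicted class flips and keeps the closest such point. It is not one of the surveyed algorithms and, consistent with the paper's argument about such approaches, it offers no causal guarantees.

```python
# Naive random-perturbation counterfactual search on a toy classifier.
# For illustration only; makes no causal claims.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)


def find_counterfactual(x, target, n_trials=5000, scale=1.5):
    """Search for the closest perturbation of x that the model labels as `target`."""
    best, best_dist = None, np.inf
    for _ in range(n_trials):
        candidate = x + rng.normal(scale=scale, size=x.shape)
        if clf.predict(candidate.reshape(1, -1))[0] == target:
            dist = np.linalg.norm(candidate - x)
            if dist < best_dist:
                best, best_dist = candidate, dist
    return best, best_dist


x = np.array([-1.0, -1.0])              # a point the model classifies as 0
cf, dist = find_counterfactual(x, target=1)
print("original prediction:", clf.predict(x.reshape(1, -1))[0])
print("counterfactual:", cf, "at distance", round(dist, 3))
```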

    Towards Interpretable Deep Learning Models for Knowledge Tracing

    As an important technique for modeling the knowledge states of learners, traditional knowledge tracing (KT) models have been widely used to support intelligent tutoring systems and MOOC platforms. Driven by the fast advancement of deep learning techniques, deep neural networks have recently been adopted to design new KT models that achieve better prediction performance. However, the lack of interpretability of these models has severely impeded their practical application, as their outputs and working mechanisms are obscured by non-transparent decision processes and complex inner structures. We therefore propose to adopt a post-hoc method to tackle the interpretability issue of deep learning based knowledge tracing (DLKT) models. Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model by backpropagating relevance from the model's output layer to its input layer. The experimental results show the feasibility of using the LRP method to interpret the DLKT model's predictions and partially validate the computed relevance scores at both the question level and the concept level. We believe this is a solid step towards fully interpreting DLKT models and promoting their practical application in the education domain.
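
    The toy sketch below illustrates the LRP epsilon-rule on a small feed-forward network, showing how relevance is propagated from the output layer back to the inputs while being (approximately) conserved. The network, weights, and rule choice are illustrative assumptions; the paper applies LRP to an RNN-based DLKT model, which requires additional handling of recurrent gates not shown here.

```python
# Toy LRP epsilon-rule on a two-layer feed-forward network (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Tiny network: input (4) -> hidden (3, ReLU) -> output (1)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

x = rng.normal(size=4)
a1 = np.maximum(0.0, x @ W1 + b1)   # hidden activations
out = a1 @ W2 + b2                  # model output (the relevance to redistribute)


def lrp_epsilon(a_prev, W, relevance, eps=1e-6):
    """Redistribute relevance from a layer's output to its inputs (epsilon rule)."""
    z = a_prev @ W                      # pre-activations (biases ignored here)
    z = z + eps * np.sign(z)            # stabiliser avoids division by zero
    s = relevance / z                   # relevance per unit of pre-activation
    return a_prev * (W @ s)             # element-wise input contributions


R_out = out                             # start from the prediction itself
R_hidden = lrp_epsilon(a1, W2, R_out)   # output -> hidden layer
R_input = lrp_epsilon(x, W1, R_hidden)  # hidden -> input layer

print("relevance per input feature:", np.round(R_input, 3))
print("conservation check:", round(float(R_input.sum()), 3), "~", round(float(out[0]), 3))
```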