
    Temperature dependence of the charge carrier mobility in gated quasi-one-dimensional systems

    The many-body Monte Carlo method is used to evaluate the frequency-dependent conductivity and the average mobility of a system of hopping charges, electronic or ionic, on a one-dimensional chain or channel of finite length. Two cases are considered: in one the chain is connected to electrodes, and in the other the chain is confined, giving zero dc conduction. The charge concentration is varied using a gate electrode. At low temperatures and in the presence of an injection barrier, the mobility is an oscillatory function of density. This is due to charge-density pinning: mobility changes arise from the co-operative pinning and unpinning of the charge distribution. At high temperatures, we find that the electron-electron interaction reduces the mobility monotonically with density, but perhaps not as much as one might intuitively expect, because the path summation favours the in-phase contributions to the mobility, i.e. the sequential paths in which each carrier has to wait for the one in front to exit, and so on. The carrier interactions produce a frequency-dependent mobility that is of the same order as the change in the dc mobility with density, i.e. a comparably weak effect. However, when combined with an injection barrier or intrinsic disorder, the interactions reduce the free volume and amplify the disorder by making it non-local; this can explain the premature onset of frequency dependence in the conductivity of some high-mobility quasi-one-dimensional organic materials. Comment: 9 pages, 8 figures, to be published in Physical Review
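    For intuition about the hopping picture described above, the following is a minimal, illustrative Monte Carlo sketch (not the authors' many-body code) of hard-core charges hopping on a one-dimensional periodic chain under a weak driving field, with the carrier density fixed externally to mimic the gate electrode; all parameter values are assumptions.

```python
# Illustrative sketch only: Metropolis hopping of hard-core charges on a 1D
# chain with a weak bias. Parameters are assumed, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_sites = 100     # chain length (assumed)
density = 0.3     # carrier concentration set by the "gate" (assumed)
field = 0.05      # bias per hop in units of kT (assumed)
steps = 20000     # attempted hops

n_carriers = int(density * n_sites)
occupied = np.zeros(n_sites, dtype=bool)
occupied[rng.choice(n_sites, size=n_carriers, replace=False)] = True
positions = np.flatnonzero(occupied)

displacement = 0
for _ in range(steps):
    i = rng.integers(n_carriers)          # pick a carrier at random
    d = 1 if rng.random() < 0.5 else -1   # attempt a hop left or right
    target = (positions[i] + d) % n_sites # periodic chain (a confined chain would block the ends)
    if occupied[target]:
        continue                          # hard-core exclusion: wait for the carrier in front to move
    # Metropolis acceptance: hops along the field are free, hops against it are suppressed
    if d == 1 or rng.random() < np.exp(-field):
        occupied[positions[i]] = False
        occupied[target] = True
        positions[i] = target
        displacement += d

print("drift per attempted hop (crude mobility proxy):", displacement / steps)
```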

    Explainable product backorder prediction exploiting CNN: Introducing explainable models in businesses

    Due to its expected positive impact on business, the application of artificial intelligence has increased widely. The decision-making procedures of such models are often complex and not easily understandable to a company's stakeholders, i.e. the people who have to follow up on recommendations or try to understand a system's automated decisions. This opaqueness and black-box nature might hinder adoption, as users struggle to make sense of and trust the predictions of AI models. Recent research on eXplainable Artificial Intelligence (XAI) has focused mainly on explaining models to AI experts for the purpose of debugging and improving model performance. In this article, we explore how such systems could be made explainable to stakeholders. To do so, we propose a new convolutional neural network (CNN)-based explainable predictive model for product backorder prediction in inventory management. Backorders are orders that customers place for products that are currently not in stock. The company then takes the risk of producing or acquiring the backordered products, while in the meantime customers can cancel their orders if this takes too long, leaving the company with unsold items in its inventory. Hence, for their strategic inventory management, companies need to make decisions based on assumptions. Our argument is that these tasks can be improved by offering explanations for AI recommendations. Our research therefore investigates how such explanations could be provided, employing Shapley additive explanations to explain the overall model's priorities in decision-making. In addition, we introduce locally interpretable surrogate models that can explain any individual prediction of a model. The experimental results demonstrate effectiveness in predicting backorders in terms of standard evaluation metrics and outperform known related works with an AUC of 0.9489. Our approach demonstrates how current limitations of predictive technologies can be addressed in the business domain.
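    As a rough illustration of the two explanation techniques named in the abstract, the sketch below pairs SHAP values (to summarize a model's overall feature priorities) with a hand-rolled local linear surrogate fitted around a single prediction; the stand-in gradient-boosting model, the synthetic data, and the feature names are assumptions, not the article's CNN or its backorder dataset.

```python
# Illustrative sketch only: global SHAP feature priorities plus a simple
# local linear surrogate around one prediction. Model and data are stand-ins.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
feature_names = ["national_inv", "lead_time", "in_transit_qty", "forecast_3_month"]  # assumed
X = rng.normal(size=(2000, len(feature_names)))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) < -0.5).astype(int)  # synthetic "backorder" label

model = GradientBoostingClassifier().fit(X, y)

# Global priority: mean absolute SHAP value per feature
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:200])
print(dict(zip(feature_names, np.abs(shap_values).mean(axis=0).round(3))))

# Local surrogate: weighted linear fit on perturbations around one instance
x0 = X[0]
Z = x0 + rng.normal(scale=0.3, size=(500, len(feature_names)))
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1))   # proximity kernel
p = model.predict_proba(Z)[:, 1]
surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=weights)
print(dict(zip(feature_names, surrogate.coef_.round(3))))  # local feature effects for this prediction
```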

    What's new in spine surgery


    The Automation of the Taxi Industry – Taxi Drivers’ Expectations and Attitudes Towards the Future of their Work

    Advocates of autonomous driving predict that the occupation of taxi driver could be made obsolete by shared autonomous vehicles (SAVs) in the long term. Through interviews with German taxi drivers, we investigate how they perceive the changes that advancing automation will bring to the future of their business. Our study contributes insights into how the work of taxi drivers could change with the advent of autonomous driving: while the task of driving standard trips could be taken over by SAVs, taxi drivers are certain that other areas of their work, such as providing supplementary services and assistance to passengers, would constitute a limit to such forms of automation, though probably with a shifting role for taxi drivers, one that focuses on the sociality of the work. Our findings illustrate how taxi drivers see the future of their work, suggesting design implications for tools that take various forms of assistance into account, and demonstrating how important it is to involve taxi drivers in the co-design of future taxis and SAV services.

    Arabic Sentiment Analysis with Noisy Deep Explainable Model

    Sentiment Analysis (SA) is an indispensable task for many real-world applications. Compared to low-resource languages (e.g., Arabic, Bengali), most SA research is conducted on high-resource languages (e.g., English, Chinese). Moreover, the reasons behind any prediction of Arabic sentiment analysis methods that exploit advanced artificial intelligence (AI)-based approaches are black-box-like and quite difficult to understand. This paper proposes an explainable sentiment classification framework for the Arabic language by introducing a noise layer into Bi-Directional Long Short-Term Memory (BiLSTM) and Convolutional Neural Network (CNN)-BiLSTM models to overcome the over-fitting problem. The proposed framework can explain specific predictions by training a local surrogate explainable model, revealing why a particular sentiment (positive or negative) is predicted. We carried out experiments on public benchmark Arabic SA datasets. The results show that adding noise layers improves sentiment analysis performance for the Arabic language by reducing overfitting, and that our method outperforms some known state-of-the-art methods. In addition, the introduced explainability combined with the noise layer could make the model more transparent and accountable, and hence help the adoption of AI-enabled systems in practice. Comment: This is the pre-print version of our accepted paper at the 7th International Conference on Natural Language Processing and Information Retrieval (ACM NLPIR'2023)
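    The following is a minimal Keras sketch of the core idea, a noise layer inserted into a BiLSTM sentiment classifier as a regularizer against over-fitting; the layer sizes, vocabulary size, and noise level are assumed and do not reproduce the paper's exact architecture.

```python
# Illustrative sketch only: BiLSTM sentiment classifier with a Gaussian noise
# layer after the embedding. Hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, max_len = 20000, 100   # assumed
model = models.Sequential([
    layers.Input(shape=(max_len,)),            # token-id sequences
    layers.Embedding(vocab_size, 128),
    layers.GaussianNoise(0.2),                 # noise layer, active only during training
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),     # positive vs. negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```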

    Trust your guts: fostering embodied knowledge and sustainable practices through voice interaction

    Despite various attempts to prevent food waste and motivate conscious food handling, household members find it difficult to correctly assess the edibility of food. With the rise of ambient voice assistants, we conducted a design case study to support households' in situ decision-making process in collaboration with our voice agent prototype, Fischer Fritz. To this end, we conducted 15 contextual inquiries to understand food practices at home. Furthermore, we interviewed six fish experts to inform the design of our voice agent with respect to how to guide consumers and teach food literacy. Finally, we created a prototype and discussed with 15 consumers its impact and its capability to convey embodied knowledge to the human, who is engaged as a sensor. Our design research goes beyond current Human-Food Interaction automation approaches by emphasizing the human-food relationship in technology design and demonstrating future complementary human-agent collaboration that aims to increase humans' competence to sense, think, and act.

    Unveiling Black-boxes: Explainable Deep Learning Models for Patent Classification

    Recent technological advancements have led to a large number of patents in a diverse range of domains, making them challenging for human experts to analyze and manage. State-of-the-art methods for multi-label patent classification rely on deep neural networks (DNNs), which are complex and often considered black boxes due to their opaque decision-making processes. In this paper, we propose a novel deep explainable patent classification framework that introduces layer-wise relevance propagation (LRP) to provide human-understandable explanations for predictions. We train several DNN models, including Bi-LSTM, CNN, and CNN-BiLSTM, and propagate the predictions backward from the output layer to the input layer of the model to identify the relevance of words for individual predictions. Based on the relevance scores, we then generate explanations by visualizing relevant words for the predicted patent class. Experimental results on two datasets comprising two million patent texts demonstrate high performance in terms of various evaluation measures. The explanations generated for each prediction highlight important relevant words that align with the predicted class, making the prediction more understandable. Explainable systems have the potential to facilitate the adoption of complex AI-enabled methods for patent classification in real-world applications. Comment: This is the pre-print of the manuscript submitted to the World Conference on eXplainable Artificial Intelligence (xAI2023), Lisbon, Portugal. The published manuscript can be found at https://doi.org/10.1007/978-3-031-44067-0_2
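    To make the backward relevance pass concrete, here is a hand-rolled sketch of the LRP-epsilon rule on a toy dense ReLU network; the paper applies LRP to Bi-LSTM/CNN models over patent text, so this is only an illustration of the propagation rule itself, with all shapes and weights assumed.

```python
# Illustrative sketch only: LRP-epsilon on a toy two-layer ReLU network,
# propagating the predicted class score back to the input features.
import numpy as np

rng = np.random.default_rng(0)

def lrp_dense(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the layer inputs a (LRP-epsilon rule)."""
    z = a @ W + b                          # pre-activations of this layer
    s = R_out / (z + eps * np.sign(z))     # stabilized relevance share per output unit
    return a * (W @ s)                     # relevance assigned to each input unit

# toy network: 5 input "word" features -> 4 hidden units -> 3 classes (assumed sizes)
W1, b1 = rng.normal(size=(5, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 3)), np.zeros(3)

x = rng.normal(size=5)
h = np.maximum(0.0, x @ W1 + b1)           # forward pass with ReLU
scores = h @ W2 + b2
pred = scores.argmax()

# start from the predicted class score and propagate relevance back to the input
R_out = np.zeros(3)
R_out[pred] = scores[pred]
R_hidden = lrp_dense(h, W2, b2, R_out)
R_input = lrp_dense(x, W1, b1, R_hidden)
print("relevance per input feature:", R_input.round(3))
```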

    Designing for ethical innovation: a case study on ELSI co-design in emergency

    The ever more pervasive ‘informationalization’ of crisis management and response brings both unprecedented opportunities and challenges. Recent years have seen growing attention to ethical, legal and social issues (ELSI) in the field of Information and Communication Technology. However, disclosing (and addressing) ELSI in design is still a challenge because such issues are inherently relational, arising from interactions between people, the material and design of the artifact, and the context. In this article, we discuss approaches for addressing the ‘deeper’ and ‘wider’ political implications, values, and ethical, legal and social implications that arise between practices, people and technology. Based on a case study from the BRIDGE project, which provided the opportunity for deep engagement with these issues through concrete exploration of and experimentation with technologically augmented practices of emergency response, we present insights from our interdisciplinary work aimed at making design and innovation projects ELSI-aware. Crucially, our study shows the need for a shift from privacy by design towards designing for privacy, collaboration, trust, accessibility, ownership, transparency and so on, acknowledging that these are emergent practices that we cannot control by design but can help to design for. This calls for approaches that make ELSI explicit and addressable at design time.