38 research outputs found

    Analysis of Spectrum Occupancy Using Machine Learning Algorithms

    In this paper, we analyze spectrum occupancy using different machine learning techniques. Both supervised techniques (naive Bayesian classifier (NBC), decision trees (DT), support vector machine (SVM), linear regression (LR)) and an unsupervised algorithm (hidden Markov model (HMM)) are studied to find the technique with the highest classification accuracy (CA). A detailed comparison of the supervised and unsupervised algorithms in terms of computational time and classification accuracy is performed. The classified occupancy status is further utilized to evaluate the probability of secondary user outage for future time slots, which can be used by system designers to define spectrum allocation and spectrum sharing policies. Numerical results show that SVM is the best algorithm among all the supervised and unsupervised classifiers. Based on this, we propose a new SVM algorithm combined with the firefly algorithm (FFA), which is shown to outperform all other algorithms.
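
    To illustrate the kind of supervised occupancy classification compared in the paper, a minimal scikit-learn sketch is shown below. It is not the authors' implementation: the energy features, labels, and SVM parameters are assumptions, and the firefly-based tuning is only noted in a comment.

    # Minimal sketch of SVM-based spectrum occupancy classification.
    # Illustrative only: features, labels, and parameters are assumed,
    # not taken from the paper.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Assumed toy data: received-energy measurements per time slot,
    # labeled 0 (channel idle, noise only) or 1 (channel occupied).
    energy = np.concatenate([rng.normal(1.0, 0.3, 500),
                             rng.normal(2.5, 0.5, 500)])
    labels = np.concatenate([np.zeros(500), np.ones(500)])
    X = energy.reshape(-1, 1)

    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.3, random_state=0)

    # RBF-kernel SVM; in the paper's setting, the kernel parameters could be
    # tuned by a metaheuristic such as the firefly algorithm (FFA).
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_train, y_train)
    print("classification accuracy:", clf.score(X_test, y_test))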

    CHALLENGES IN THE DEPLOYMENT AND OPERATION OF MACHINE LEARNING IN PRACTICE

    Machine learning has recently emerged as a powerful technique to increase operational efficiency or to develop new value propositions. However, translating a prediction algorithm into an operationally usable machine learning model is a time-consuming and in many ways challenging task. In this work, we aim to systematically elicit the challenges in deployment and operation to enable broader practical dissemination of machine learning applications. To this end, we first identify relevant challenges with a structured literature analysis. Subsequently, we conduct an interview study with machine learning practitioners across various industries, perform a qualitative content analysis, and identify challenges organized along three distinct categories as well as six overarching clusters. Finally, results from both the literature and the interviews are evaluated with a comparative analysis. Key issues identified include automated strategies for data drift detection and handling, standardization of machine learning infrastructure, and appropriate communication and expectation management
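
    One of the key issues named above, automated data drift detection, is commonly approached with per-feature distribution tests on incoming data. The sketch below is a generic illustration rather than a method from the interview study; the reference/current arrays and the significance threshold are assumptions.

    # Minimal sketch of data drift detection using a two-sample
    # Kolmogorov-Smirnov test per feature (one common approach;
    # not a method prescribed by the study).
    import numpy as np
    from scipy.stats import ks_2samp

    def detect_drift(reference, current, alpha=0.01):
        """Flag features whose live distribution differs from the training reference."""
        drifted = []
        for j in range(reference.shape[1]):
            result = ks_2samp(reference[:, j], current[:, j])
            if result.pvalue < alpha:
                drifted.append((j, result.statistic, result.pvalue))
        return drifted

    # Assumed toy data: feature 1 shifts in production.
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, size=(1000, 3))
    current = rng.normal(0.0, 1.0, size=(500, 3))
    current[:, 1] += 1.5
    print(detect_drift(reference, current))  # expect feature index 1 to be flagged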

    Toward the Sustainable Development of Machine Learning Applications in Industry 4.0

    As the level of digitization in industrial environments increases, companies are striving to improve efficiency and resilience to unplanned disruptions through the development of machine learning (ML)-based applications. Still, sustainable deployment and operation beyond proofs of concept is a challenging and resource-intensive task in dynamic environments such as Industry 4.0, often impeding practical adoption in the long term and thus sustainable ML product development. In this work, we systematically identify these challenges along the phases of the CRISP-ML process model by applying a design science research approach. To this end, we conducted 15 interviews with data science practitioners in Industry 4.0. Following a qualitative content analysis, design requirements and design principles for the development and sustainable long-term deployment of ML systems are derived to address identified challenges such as robustness to, and management of, data drift caused by time dependencies and machine/product differences, missing metadata, interfaces to other IT systems, expectation management, and MLOps guidelines

    Tire Changes, Fresh Air, and Yellow Flags: Challenges in Predictive Analytics for Professional Racing

    Our goal is to design a prediction and decision system for real-time use during a professional car race. In designing a knowledge discovery process for racing, we faced several challenges that were overcome only when domain knowledge of racing was carefully infused into statistical modeling techniques. In this article, we describe how we leveraged expert knowledge of the domain to produce a real-time decision system for tire changes within a race. Our forecasts have the potential to impact how racing teams can optimize strategy by making tire-change decisions to benefit their rank position. Our work significantly expands previous research on sports analytics, as it is the only work on analytical methods for within-race prediction and decision making for professional car racing

    DExT: Detector Explanation Toolkit

    State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications. Previous work fails to produce explanations for both bounding box and classification decisions, and generally makes individual explanations for various detectors. In this paper, we propose an open-source Detector Explanation Toolkit (DExT), which implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. We suggest various multi-object visualization methods to merge the explanations of multiple objects detected in an image as well as the corresponding detections in a single image. The quantitative evaluation shows that the Single Shot MultiBox Detector (SSD) is explained more faithfully than other detectors, regardless of the explanation method. Both quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides the most trustworthy explanations among the selected methods across all detectors. We expect that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions
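
    For readers unfamiliar with the gradient-based explanation family that DExT builds on, the sketch below computes a SmoothGrad saliency map for a single class score of a generic image classifier. It is a simplified stand-in rather than the DExT API; the model, noise level, and sample count are assumptions, and DExT applies the same idea to both the box-regression and classification outputs of detectors.

    # Simplified SmoothGrad saliency sketch (not the DExT API).
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=None).eval()  # untrained stand-in model

    def smoothgrad(image, target_class, n_samples=25, sigma=0.1):
        """Average input gradients over noisy copies of the image."""
        grads = torch.zeros_like(image)
        for _ in range(n_samples):
            noisy = (image + sigma * torch.randn_like(image)).requires_grad_(True)
            score = model(noisy)[0, target_class]
            score.backward()
            grads += noisy.grad
        return (grads / n_samples).abs().sum(dim=1)  # aggregate over colour channels

    image = torch.rand(1, 3, 224, 224)
    saliency = smoothgrad(image, target_class=0)
    print(saliency.shape)  # torch.Size([1, 224, 224])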
