38 research outputs found

    Provable Robustness for Streaming Models with a Sliding Window

    The literature on provable robustness in machine learning has primarily focused on static prediction problems, such as image classification, in which input samples are assumed to be independent and model performance is measured as an expectation over the input distribution. Robustness certificates are derived for individual input instances under the assumption that the model is evaluated on each instance separately. However, in many deep learning applications, such as online content recommendation and stock market analysis, models use historical data to make predictions, and robustness certificates based on the assumption of independent input samples are not directly applicable. In this work, we focus on the provable robustness of machine learning models in the context of data streams, where inputs are presented as a sequence of potentially correlated items. We derive robustness certificates for models that use a fixed-size sliding window over the input stream. Our guarantees hold for the average model performance across the entire stream and are independent of stream size, making them suitable for large data streams. We perform experiments on speech detection and human activity recognition tasks and show that our certificates can produce meaningful performance guarantees against adversarial perturbations.
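The sliding-window setting the abstract describes can be illustrated with a minimal sketch: a model is queried on each fixed-size window of the stream, and performance is averaged over the whole stream. The toy model, stream, and labels below are hypothetical, not from the paper.

```python
from collections import deque

def stream_accuracy(model, stream, labels, window_size=3):
    """Average accuracy of a model applied to a fixed-size sliding
    window over a data stream (illustrative setup only)."""
    window = deque(maxlen=window_size)
    correct, total = 0, 0
    for x, y in zip(stream, labels):
        window.append(x)
        if len(window) == window_size:      # wait until the window fills
            pred = model(tuple(window))     # the model sees the whole window
            correct += int(pred == y)
            total += 1
    return correct / total

# toy model: predicts 1 if the window sum is positive
model = lambda w: int(sum(w) > 0)
acc = stream_accuracy(model, [1, -1, 2, 3, -5, 1], [1, 1, 1, 1, 0, 1])  # 0.75
```

Because consecutive windows overlap, a single perturbed stream item affects up to `window_size` predictions, which is exactly why per-instance certificates do not transfer directly to this setting.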

    Implementing decision tree-based algorithms in medical diagnostic decision support systems

    As a branch of healthcare, medical diagnosis can be defined as identifying a disease based on the patient’s signs and symptoms. To this end, the required information is gathered from sources such as the physical examination, medical history, and general information of the patient. The development of smart classification models for medical diagnosis is of great interest among researchers, mainly because machine learning and data mining algorithms can detect hidden trends among the features of a database. Hence, classifying medical datasets with smart techniques paves the way for more efficient medical diagnostic decision support systems. Several databases have been provided in the literature to investigate different aspects of diseases. As an alternative to the available diagnosis tools and methods, this research applies the machine learning algorithms Classification and Regression Tree (CART), Random Forest (RF), and Extremely Randomized Trees, or Extra Trees (ET), to develop classification models that can be implemented in computer-aided diagnosis systems. As a decision tree (DT), CART is fast to build and applies to both quantitative and qualitative data. For classification problems, RF and ET combine a number of weak learners, such as CART trees, into an ensemble. We employed the Wisconsin Breast Cancer Database (WBCD), the Z-Alizadeh Sani dataset for coronary artery disease (CAD), and the databank gathered at Ghaem Hospital’s dermatology clinic on the response of patients with common and/or plantar warts to cryotherapy and/or immunotherapy. To classify the breast cancer type based on the WBCD, the RF and ET methods were employed; the developed RF and ET models predicted the WBCD type with 100% accuracy in all cases. To choose the proper treatment approach for warts, as well as for CAD diagnosis, the CART methodology was employed. The error analysis revealed that the proposed CART models attain the highest precision for the applications of interest, outperforming the models reported in the literature. The outcome of this study supports the idea that methods like CART, RF, and ET not only improve diagnostic precision but also reduce the time and expense needed to reach a diagnosis. However, since these strategies are highly sensitive to the quality and quantity of the input data, more extensive databases with a greater number of independent parameters may be required before the developed models see wider practical use.
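The RF/ET workflow the abstract describes can be sketched with scikit-learn, whose bundled breast cancer dataset is the Wisconsin diagnostic variant, a close relative of the WBCD used in the paper. The train/test split and hyperparameters below are illustrative, not the paper's.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# scikit-learn ships the Wisconsin diagnostic breast cancer data
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

accs = {}
for Model in (RandomForestClassifier, ExtraTreesClassifier):
    # both ensembles aggregate many randomized decision trees (weak learners)
    clf = Model(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    accs[Model.__name__] = accuracy_score(y_te, clf.predict(X_te))
```

On a held-out split like this, tree ensembles typically score well above 90% on this dataset, though the paper's reported 100% accuracy depends on its own data preparation and evaluation protocol.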

    Defending Adversarial Attacks on Cloud-aided Automatic Speech Recognition Systems

    With the advancement of deep learning based speech recognition technology, an increasing number of cloud-aided automatic voice assistant applications, such as Google Home and Amazon Echo, and cloud AI services, such as IBM Watson, are emerging in our daily life. In a typical usage scenario, after keyword activation, the user's voice is recorded and submitted to the cloud for automatic speech recognition (ASR), and further actions may be triggered depending on the user's commands. However, recent research shows that deep learning based systems can be easily attacked by adversarial examples, and ASR systems have subsequently been found to be vulnerable to audio adversarial examples. Unfortunately, very few works on defending against audio adversarial attacks are known in the literature, and constructing a generic and robust defense mechanism remains an open problem. In this work, we propose several proactive defense mechanisms against targeted audio adversarial examples in ASR systems via code modulation and audio compression. We then show the effectiveness of the proposed strategies through extensive evaluation on a natural dataset.
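The compression-style preprocessing idea can be illustrated with a minimal sketch: re-quantizing the waveform to a coarser bit depth washes out perturbations smaller than the quantization step. This is an illustrative stand-in, not the paper's exact code modulation or compression scheme.

```python
import numpy as np

def quantize_audio(x, bits=8):
    """Re-quantize a float waveform in [-1, 1] to `bits` bits: a simple
    compression-style preprocessing that suppresses small perturbations
    (illustrative, not the paper's exact defense)."""
    levels = 2 ** (bits - 1)
    return np.round(x * levels) / levels

clean = np.sin(np.linspace(0, 2 * np.pi, 16000))
rng = np.random.default_rng(0)
perturbed = clean + 1e-4 * rng.standard_normal(16000)  # tiny "adversarial" noise

# after quantization, clean and perturbed signals differ by at most one level
residual = np.max(np.abs(quantize_audio(perturbed) - quantize_audio(clean)))
```

Here the quantization step is 1/128, far larger than the injected 1e-4 noise, so the two quantized signals agree except at rounding boundaries; real attacks with larger budgets require correspondingly stronger defenses.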

    Knee Osteoarthritis Prediction Driven by Deep Learning and the Kellgren-Lawrence Grading

    Degenerative osteoarthritis of the knee (KOA) affects the knee compartments and worsens over 10–15 years. Knee osteoarthritis is the major cause of activity restriction and impairment in older persons, and the interpretation of a visual examination depends on the clinician's expertise. Achieving early detection therefore requires fast, accurate, and affordable methods. Deep learning (DL) convolutional neural networks (CNNs) are among the most accurate approaches to knee osteoarthritis diagnosis, but CNNs require a significant amount of training data. Knee X-rays can be analyzed by models that use deep learning to extract features, reducing the number of training cycles needed. This study proposes a DL system based on a VGG16 network with a SoftMax output, trained on five-class knee X-rays (Normal, Doubtful, Mild, Moderate, Severe). Two deep CNNs are used to grade knee OA instantly according to the Kellgren-Lawrence (KL) methodology. The experimental analysis uses two sets of 1650 knee X-ray images, each consisting of 514 normal, 477 doubtful, 232 mild, 221 moderate, and 206 severe cases of knee osteoarthritis. The suggested model for knee osteoarthritis (OA) identification and severity prediction from knee X-ray radiographs achieves a classification accuracy of more than 95%, with training and validation accuracy of 95% and 87%, respectively.
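The final step of such a system, mapping the network's five-class output to a KL grade, can be sketched as follows. The logits are hypothetical; a real system would produce them from a trained VGG16-based model.

```python
import numpy as np

# the five Kellgren-Lawrence severity classes used in the study
KL_GRADES = ["Normal", "Doubtful", "Mild", "Moderate", "Severe"]

def softmax(z):
    z = z - z.max()            # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl_grade(logits):
    """Map a five-class output head to a Kellgren-Lawrence grade
    (illustrative logits, not real model output)."""
    probs = softmax(np.asarray(logits, dtype=float))
    return KL_GRADES[int(np.argmax(probs))]

grade = kl_grade([0.2, 0.1, 2.3, 0.4, -1.0])   # highest logit is class 2
```

Since softmax is monotone, the predicted grade is simply the argmax of the logits; the softmax probabilities matter when a confidence score per grade is also reported.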

    Haptic technology in society: A sentiment analysis of public engagement

    With the rapid rise of metaverse spaces, the integration of haptic technology has become an integral part of users' immersion and engagement across domains. This study sheds light on public engagement with haptic technology applications in various contexts. The YouTube Application Programming Interface was used to extract data about haptic technology within a specific time period. Public engagement was estimated from users' cognitive and behavioral engagement with haptic-related video materials. A topic modeling approach was used to extract the main topics associated with haptic technology, and the main types of users' emotions and their word associations were extracted using the association rule mining technique. The results showed that public engagement with haptic technology was mostly concentrated in a few categories (e.g., gaming), and joy and positive sentiments were strongly associated with the extracted topics. The findings offer new insights into the tendency to utilize haptic technology across communities, and the outcomes can help industries and decision makers understand how to improve the integration of haptic technology according to the needs of specific domains.
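The association-rule step can be illustrated with a tiny single-antecedent miner over word sets from comments. The toy comment data and thresholds are hypothetical; the study likely used a dedicated library over its YouTube corpus.

```python
from collections import Counter
from itertools import combinations

def association_rules(transactions, min_support=0.4, min_conf=0.6):
    """Tiny single-antecedent association-rule miner: returns
    (antecedent, consequent, support, confidence) tuples."""
    n = len(transactions)
    item_count = Counter(w for t in transactions for w in set(t))
    pair_count = Counter(frozenset(p) for t in transactions
                         for p in combinations(sorted(set(t)), 2))
    rules = []
    for pair, c in pair_count.items():
        if c / n < min_support:            # prune infrequent pairs
            continue
        a, b = tuple(pair)
        for x, y in ((a, b), (b, a)):      # try both rule directions
            conf = c / item_count[x]
            if conf >= min_conf:
                rules.append((x, y, round(c / n, 2), round(conf, 2)))
    return rules

comments = [["haptic", "joy"], ["haptic", "joy", "gaming"], ["gaming", "joy"]]
rules = association_rules(comments)  # e.g. haptic -> joy with confidence 1.0
```

A rule like `haptic -> joy` with high confidence is the kind of word-emotion association the abstract reports for the gaming category.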

    Policy Smoothing for Provably Robust Reinforcement Learning

    The study of provable adversarial robustness for deep neural networks (DNNs) has mainly focused on static supervised learning tasks such as image classification. However, DNNs have been used extensively in real-world adaptive tasks such as reinforcement learning (RL), making such systems vulnerable to adversarial attacks as well. Prior works on provable robustness in RL seek to certify the behaviour of the victim policy at every time-step against a non-adaptive adversary using methods developed for the static setting. But in the real world, an RL adversary can infer the defense strategy used by the victim agent by observing the states, actions, etc. from previous time-steps and adapt itself to produce stronger attacks in future steps. We present an efficient procedure, designed specifically to defend against an adaptive RL adversary, that can directly certify the total reward without requiring the policy to be robust at each time-step. Our main theoretical contribution is to prove an adaptive version of the Neyman-Pearson Lemma -- a key lemma for smoothing-based certificates -- where the adversarial perturbation at a particular time can be a stochastic function of current and previous observations and states as well as previous actions. Building on this result, we propose policy smoothing, where the agent adds Gaussian noise to its observation at each time-step before passing it through the policy function. Our robustness certificates guarantee that the final total reward obtained by policy smoothing remains above a certain threshold, even though the actions at intermediate time-steps may change under the attack. Our experiments on various environments such as Cartpole, Pong, Freeway, and Mountain Car show that our method can yield meaningful robustness guarantees in practice.
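The policy-smoothing mechanism itself is simple to sketch: wrap a base policy so that every observation is perturbed with Gaussian noise before the policy acts on it. The toy one-dimensional base policy and the noise scale below are illustrative choices, not the paper's trained agents.

```python
import numpy as np

def smoothed_policy(policy, sigma=0.1, rng=None):
    """Policy smoothing: add Gaussian noise to each observation before
    querying the base policy (sketch of the mechanism; sigma is an
    illustrative choice)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    def act(obs):
        obs = np.asarray(obs, dtype=float)
        noisy = obs + sigma * rng.standard_normal(obs.shape)
        return policy(noisy)
    return act

# toy base policy on a 1-D pole angle: push right iff the angle is positive
base = lambda obs: int(obs[0] > 0)
smoothed = smoothed_policy(base, sigma=0.05)
action = smoothed([0.5])   # far from the decision boundary, noise rarely flips it
```

The injected noise is what makes smoothing-based certificates possible: the distribution of the smoothed agent's behaviour changes only gradually under bounded observation perturbations, which underpins the paper's bound on the total reward.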