252 research outputs found
Learning to Auto Weight: Entirely Data-driven and Highly Efficient Weighting Framework
Example weighting is an effective solution to the training-bias problem;
however, most previous methods rely on human knowledge and require laborious
tuning of hyperparameters. In this paper, we
propose a novel example weighting framework called Learning to Auto Weight
(LAW). The proposed framework finds step-dependent weighting policies
adaptively, and can be jointly trained with target networks without any
assumptions or prior knowledge about the dataset. It consists of three key
components: a Stage-based Searching Strategy (3SM) shrinks the huge search
space over a complete training process; a Duplicate Network Reward (DNR)
gives more accurate supervision by removing randomness from the search
process; and a Full Data Update (FDU) further improves updating efficiency.
Experimental results demonstrate the superiority of the weighting policies
explored by LAW over the standard training pipeline. Compared with baselines,
LAW finds better weighting schedules that achieve superior accuracy on both
biased CIFAR and ImageNet.
Comment: Accepted by AAAI 2020
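The core mechanism this line of work builds on, scaling each training example's contribution to the loss by a per-example weight, can be sketched as follows. This is a generic illustration, not the LAW algorithm itself; with uniform weights it recovers the ordinary mean loss of standard training.

```python
import numpy as np

def weighted_loss(losses, weights):
    # Weighted mean of per-example losses; uniform weights
    # recover the ordinary mean used in standard training.
    losses = np.asarray(losses, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * losses) / np.sum(weights))
```

A step-dependent weighting policy, of the kind LAW searches for, would supply a different `weights` vector at each training step.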
Do humans and machines have the same eyes? Human-machine perceptual differences on image classification
Trained computer vision models are assumed to solve vision tasks by imitating
human behavior learned from training labels. Most efforts in recent vision
research focus on measuring the model task performance using standardized
benchmarks. Limited work has been done to understand the perceptual difference
between humans and machines. To fill this gap, our study first quantifies and
analyzes the statistical distributions of mistakes from the two sources. We
then explore human vs. machine expertise after ranking tasks by difficulty
levels. Even when humans and machines have similar overall accuracies, the
distribution of answers may vary. Leveraging the perceptual difference between
humans and machines, we empirically demonstrate a post-hoc human-machine
collaboration that outperforms either humans or machines alone.
Comment: Paper under review
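One simple post-hoc collaboration scheme, deferring to whichever source is more reliable on a given input, can be sketched as below. The confidence-threshold rule here is an assumed illustration, not the specific combiner used in the paper.

```python
def collaborate(human_pred, machine_pred, machine_conf, threshold=0.9):
    # Accept the machine's answer only when it is confident;
    # otherwise fall back to the human's answer.
    return machine_pred if machine_conf >= threshold else human_pred
```

Because human and machine mistakes are distributed differently, such a combiner can beat either source alone even when their overall accuracies are similar.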
Understanding Health Video Engagement: An Interpretable Deep Learning Approach
Health misinformation on social media devastates physical and mental health,
invalidates health gains, and potentially costs lives. Understanding how health
misinformation is transmitted is an urgent goal for researchers, social media
platforms, health sectors, and policymakers to mitigate those ramifications.
Deep learning methods have been deployed to predict the spread of
misinformation. While achieving state-of-the-art predictive performance,
deep learning methods lack interpretability due to their black-box nature.
To remedy this gap, this study proposes a novel interpretable deep learning
approach, Generative Adversarial Network based Piecewise Wide and Attention
Deep Learning (GAN-PiWAD), to predict health misinformation transmission in
social media. Improving upon state-of-the-art interpretable methods, GAN-PiWAD
captures the interactions among multi-modal data, offers unbiased estimation of
the total effect of each feature, and models the dynamic total effect of each
feature when its value varies. We select features according to social exchange
theory and evaluate GAN-PiWAD on 4,445 misinformation videos. The proposed
approach outperformed strong benchmarks. Interpretation of GAN-PiWAD indicates
video description, negative video content, and channel credibility are key
features that drive viral transmission of misinformation. This study
contributes to IS with a novel interpretable deep learning method that is
generalizable to understand other human decision factors. Our findings provide
direct implications for social media platforms and policymakers to design
proactive interventions to identify misinformation, control transmissions, and
manage infodemics.
Comment: WITS 2021 Best Paper Award
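The dynamic total-effect curves described above are specific to GAN-PiWAD, but the underlying idea, tracing how a model's output responds as one feature varies while the others stay fixed (as in individual conditional expectation plots), can be sketched generically:

```python
import numpy as np

def feature_effect(model, x, feature_idx, values):
    # Vary one feature over a grid of values while holding the
    # remaining features at their observed values, and record
    # the model's output at each grid point.
    outputs = []
    for v in values:
        x_mod = np.array(x, dtype=float)
        x_mod[feature_idx] = v
        outputs.append(model(x_mod))
    return outputs
```

Here `model` is any callable scoring function; the name and interface are illustrative assumptions, not part of the GAN-PiWAD implementation.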
An Interpretable Deep Learning Approach to Understand Health Misinformation Transmission on YouTube
Health misinformation on social media devastates physical and mental health, invalidates health gains, and potentially costs lives. Deep learning methods have been deployed to predict the spread of misinformation, but they lack interpretability due to their black-box nature. To remedy this gap, this study proposes a novel interpretable deep learning approach, Generative Adversarial Network based Piecewise Wide and Attention Deep Learning (GAN-PiWAD), to predict health misinformation transmission on social media. GAN-PiWAD captures the interactions among multi-modal data, offers unbiased estimation of the total effect of each feature, and models the dynamic total effect of each feature. Interpretation of GAN-PiWAD indicates that video description, negative video content, and channel credibility are key features that drive viral transmission of misinformation. This study contributes to IS with a novel interpretable deep learning method that is generalizable to understanding human decisions. We provide direct implications for designing interventions to identify misinformation, control transmissions, and manage infodemics.
Discovering Barriers to Opioid Addiction Treatment from Social Media: A Similarity Network-Based Deep Learning Approach
Opioid use disorder (OUD) refers to the physical and psychological reliance on opioids. OUD costs US healthcare systems $504 billion annually and poses a significant mortality risk for patients. Understanding and mitigating the barriers to OUD treatment is a high-priority area. Current OUD treatment studies rely on surveys with low response rates because of social stigma. In this paper, we explore social media as a new data source for studying OUD treatment. We develop SImilarity Network-based DEep Learning (SINDEL) to discover barriers to OUD treatment from patient narratives and to address the challenge of morphs. SINDEL reaches an F1 score of 76.79%. Thirteen types of OUD treatment barriers were identified and verified by domain experts. This study contributes to the IS literature by proposing a novel deep-learning-based analytical approach with impactful implications for health practitioners.
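The F1 score reported above is the harmonic mean of precision and recall; a minimal computation from raw true-positive, false-positive, and false-negative counts:

```python
def f1_score(tp, fp, fn):
    # Precision: fraction of predicted positives that are correct.
    # Recall: fraction of actual positives that are recovered.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```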