Stress in pigs poses significant challenges to animal welfare and productivity in modern pig farming, contributing to increased antimicrobial use and the rise of antimicrobial resistance (AMR). This study addresses stress classification in pregnant sows by evaluating five deep learning models: ConvNeXt, EfficientNet_V2, MobileNet_V3, RegNet, and Vision Transformer (ViT). These models were applied to stress detection from facial images using an expanded dataset. A facial image dataset of primiparous sows was collected at Scotland’s Rural College (SRUC), and the images were categorized into Low-Stress (LS) and High-Stress (HS) groups based on expert behavioural assessments and cortisol level analysis. The selected deep learning models were then trained on this enriched dataset, and their performance was evaluated on unseen data using cross-validation. The Vision Transformer (ViT) model outperformed the others across the dataset of annotated facial images, achieving an average accuracy of 0.75, an F1 score of 0.78 for high-stress detection, and consistent batch-level performance (up to an F1 score of 0.88). These findings highlight the efficacy of transformer-based models for automated stress detection in sows, supporting early intervention strategies to enhance welfare, optimize productivity, and mitigate AMR risks in livestock production.
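As a minimal sketch of the approach summarized above (not the authors' implementation), the snippet below shows how a pretrained Vision Transformer could be fine-tuned for the two-class LS/HS facial-image task. The ViT variant (ViT-B/16), framework (PyTorch/torchvision), and all hyperparameters are illustrative assumptions only.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ViT-B/16 backbone (assumed variant).
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)

# Replace the classification head with a two-class output:
# Low-Stress (0) vs. High-Stress (1).
model.heads.head = nn.Linear(model.heads.head.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # assumed learning rate

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of sow facial images (N, 3, 224, 224)."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)            # (N, 2) class scores
    loss = criterion(logits, labels)  # cross-entropy against LS/HS labels
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same head-replacement pattern applies to the other backbones compared in the study (ConvNeXt, EfficientNet_V2, MobileNet_V3, RegNet), differing only in the attribute holding the final classification layer.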