Sentiment analysis has become vital for understanding consumer attitudes, guiding product development, and informing strategic decisions. Although LLMs such as GPT-3.5 and GPT-4 deliver strong zero-shot performance, they can be cost-prohibitive and raise privacy concerns. In contrast, Small Language Models (SLMs) provide a lighter and more deployable solution, but their ability to match LLM accuracy, especially in zero-shot scenarios, remains underexplored. In this experimental study, we examine whether ensembles of zero-shot SLMs can serve as a viable alternative to proprietary LLMs in sentiment classification tasks. We investigate five commonly used SLMs (Phi2 Mini, Mistral, Llama, Gemma, Aya) and compare them to GPT-based models (GPT-3.5, GPT-4, GPT-4 omni, GPT-4 omni mini) across seven English-language datasets. By automating prompt generation and filtering responses based on a strict output format, we maintain a purely zero-shot approach. We form SLM ensembles via majority voting and evaluate their performance on accuracy, weighted precision, and weighted F1. We also measure inference time to assess cost and scalability trade-offs. Results show that SLM ensembles, as a form of decision fusion, consistently outperform single SLMs, significantly boosting metrics in zero-shot settings. Compared with GPT models, the ensemble achieves accuracy comparable to GPT-3.5 and even rivals GPT-4 on certain prompts; however, GPT-4 retains a slight edge in both precision and F1 score. Moreover, local SLM ensembles incur higher latency yet offer potential advantages in data privacy and operational control. This experimental study's findings illuminate the feasibility of employing lightweight, zero-shot SLM ensembles for sentiment analysis, providing organizations with an effective and more flexible alternative to exclusively relying on large proprietary models.
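The decision-fusion step described above, combining per-model sentiment labels by majority vote, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the model outputs and label set (`pos`/`neg`/`neu`) are hypothetical, and the tie-breaking rule (fall back to the first model's label) is an assumption, since the abstract does not specify one.

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse sentiment labels from several models by majority vote.

    predictions: list of label lists, one inner list per model,
    all aligned on the same sequence of input texts.
    Ties fall back to the first model's label (assumed policy).
    """
    fused = []
    for labels in zip(*predictions):  # labels for one input, across models
        counts = Counter(labels)
        top_label, top_count = counts.most_common(1)[0]
        winners = [lbl for lbl, c in counts.items() if c == top_count]
        fused.append(labels[0] if len(winners) > 1 else top_label)
    return fused

# Hypothetical zero-shot outputs from three SLMs on four reviews
model_a = ["pos", "neg", "neu", "pos"]
model_b = ["pos", "pos", "neu", "neg"]
model_c = ["neg", "neg", "neu", "neg"]
print(majority_vote([model_a, model_b, model_c]))
# → ['pos', 'neg', 'neu', 'neg']
```

Majority voting is attractive here because it requires no extra training or calibration: each SLM stays purely zero-shot, and fusion happens only over their filtered, strictly formatted labels.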