Owner-Manager Perceived Relationship Between ICT Adoption and SME Performance in Busiro West Wakiso District, Uganda
This study explored the relationship between ICT adoption and SME performance from the perspective of owner-managers in Busiro West, Wakiso District. More specifically, it set out (i) to assess the relationship between ICT adoption and SME performance in Busiro West, Wakiso District, (ii) to identify the current challenges to ICT adoption by SMEs in Wakiso, and (iii) to analyse the level of ICT adoption in SMEs. With 140 respondents, the study adopted a mixed-methods, cross-sectional survey design. The findings revealed that: (i) there is a significant positive relationship between ICT adoption and SME performance (r = 0.913; p-value = 0.000); (ii) the challenges that impeded ICT adoption included high costs, limited skills, lack of infrastructure, and security concerns; the study also confirmed that SME owner-managers know the benefits of ICT adoption, but several have failed to consistently adopt and use it because of these challenges; and (iii) the level of ICT adoption among the SMEs was low. Conclusions were drawn and recommendations made with regard to government intervention, service providers' initiatives, the owner-manager's role, and general public sensitisation. The study contributed to the body of knowledge confirming the importance of ICT adoption to business performance and success.
Keywords: ICT adoption, SME Performance, Owner-manager, Uganda
DOI: 10.7176/EJBM/14-24-06
Publication date: December 31st 202
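As a side note for readers, the r reported above is a Pearson correlation coefficient. Below is a minimal sketch of how such a statistic is computed, using hypothetical scores rather than the study's survey data.

# Minimal sketch: computing a Pearson correlation of the kind reported
# above (r = 0.913; p-value = 0.000). The scores are hypothetical
# placeholders, not the study's data.
from scipy.stats import pearsonr

ict_adoption_scores = [2.1, 3.4, 4.0, 2.8, 4.5, 3.9]      # hypothetical survey means
sme_performance_scores = [2.5, 3.6, 4.2, 3.0, 4.6, 4.0]   # hypothetical survey means

r, p_value = pearsonr(ict_adoption_scores, sme_performance_scores)
print(f"r = {r:.3f}, p-value = {p_value:.3f}")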
MasakhaNEWS: News Topic Classification for African languages
African languages are severely under-represented in NLP research due to a lack of datasets covering several NLP tasks. While there are individual language-specific datasets that are being expanded to different tasks, only a handful of NLP tasks (e.g. named entity recognition and machine translation) have standardized benchmark datasets covering several geographically and typologically diverse African languages. In this paper, we develop MasakhaNEWS -- a new benchmark dataset for news topic classification covering 16 languages widely spoken in Africa. We provide an evaluation of baseline models by training classical machine learning models and fine-tuning several language models. Furthermore, we explore several alternatives to full fine-tuning of language models that are better suited for zero-shot and few-shot learning, such as cross-lingual parameter-efficient fine-tuning (like MAD-X), pattern-exploiting training (PET), prompting language models (like ChatGPT), and prompt-free sentence-transformer fine-tuning (SetFit and the Cohere Embedding API). Our evaluation in the zero-shot setting shows the potential of prompting ChatGPT for news topic classification in low-resource African languages, achieving an average performance of 70 F1 points without leveraging additional supervision like MAD-X. In the few-shot setting, we show that with as little as 10 examples per label, we achieve more than 90% (i.e. 86.0 F1 points) of the performance of full supervised training (92.6 F1 points) using the PET approach.
Comment: Accepted to IJCNLP-AACL 2023 (main conference)
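As a rough illustration of the classical machine learning baselines the abstract mentions, the sketch below trains a TF-IDF + logistic regression news topic classifier and scores it with weighted F1. The texts and labels are hypothetical placeholders, not the MasakhaNEWS data, and this is not the authors' exact setup.

# Sketch of a classical ML baseline for news topic classification:
# TF-IDF features + logistic regression, evaluated with weighted F1.
# All texts and labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

train_texts = ["Election results announced in the capital", "Local team wins the league final"]
train_labels = ["politics", "sports"]
test_texts = ["Parliament debates the new budget"]
test_labels = ["politics"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)
print("weighted F1:", f1_score(test_labels, clf.predict(test_texts), average="weighted"))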
Creating Dependency: Land and Gift-giving Practices in Uganda
President Museveni's re-election in February 2011 demonstrated once more the Ugandan leader's ability to remain in control ever since he took power in 1986 at the head of a guerrilla movement. Some of the campaign themes dealt with land and administration, others with security and the role of the armed forces in bringing peace back to the country. Museveni's populist stance in favour of squatters, in places where user rights are threatened by the progress of individual titling, came out prominently. Actual gifts and many promises of money, land, new districts, as well as offers of protection, were made during the campaign. These were meant to foster moral indebtedness and political support for the regime and its leader, making it difficult to break off from such an uneven relationship. This paper focuses on the double-edged politics of dependency and protection in Uganda.
MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition
African languages are spoken by over a billion people, but are underrepresented in NLP research and development. The challenges impeding progress include the limited availability of annotated datasets, as well as a lack of understanding of the settings where current methods are effective. In this paper, we make progress towards solutions for these challenges, focusing on the task of named entity recognition (NER). We create the largest human-annotated NER dataset for 20 African languages, and we study the behavior of state-of-the-art cross-lingual transfer methods in an Africa-centric setting, demonstrating that the choice of source language significantly affects performance. We show that choosing the best transfer language improves zero-shot F1 scores by an average of 14 points across 20 languages compared to using English. Our results highlight the need for benchmark datasets and models that cover typologically-diverse African languages.
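To make the transfer setting concrete: in zero-shot cross-lingual NER, a model fine-tuned on a source language is applied unchanged to text in a target language. Below is a minimal sketch using the Hugging Face transformers pipeline; the model id is a hypothetical placeholder, not a model released with the paper.

# Sketch of zero-shot cross-lingual NER transfer: a token-classification
# model fine-tuned on one source language tags text in an unseen target
# language. The model id is a hypothetical placeholder.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-org/xlmr-ner-source-lang",  # hypothetical model id
    aggregation_strategy="simple",          # merge subword pieces into entity spans
)

# Hypothetical Swahili sentence; assume the model saw no Swahili NER training data.
for entity in ner("Rais wa Kenya alitembelea Nairobi jana."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))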
MasakhaNER: Named entity recognition for African languages
We take a step towards addressing the underrepresentation of the African continent in NLP research by bringing together different stakeholders to create the first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages. We detail the characteristics of these languages to help researchers and practitioners better understand the challenges they pose for NER tasks. We analyze our datasets and conduct an extensive empirical evaluation of state-of-the-art methods across both supervised and transfer learning settings. Finally, we release the data, code, and models to inspire future research on African NLP.
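Since the data is publicly released, one would typically load it through the Hugging Face datasets library. In the sketch below, the hub id "masakhaner" and the "yor" (Yoruba) configuration are assumptions based on common naming conventions, so verify them against the actual release.

# Sketch: loading one MasakhaNER language with the `datasets` library.
# The hub id "masakhaner" and the config "yor" (Yoruba) are assumed,
# not confirmed by the abstract above.
from datasets import load_dataset

ds = load_dataset("masakhaner", "yor")
example = ds["train"][0]
# NER data is token-level: parallel lists of tokens and integer tag ids.
print(example["tokens"])
tag_names = ds["train"].features["ner_tags"].feature.names
print([tag_names[t] for t in example["ner_tags"]])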