1,702 research outputs found
Distributed Ledger Technology (DLT) Applications in Payment, Clearing, and Settlement Systems: A Study of Blockchain-Based Payment Barriers and Potential Solutions, and DLT Application in Central Bank Payment System Functions
Payment, clearing, and settlement systems are essential components of the financial markets and exert considerable influence on the overall economy. Despite considerable technological advancements in payment systems, conventional systems still depend on centralized architectures, with inherent limitations and risks. Distributed ledger technology (DLT) is regarded as a potential solution to transform payment and settlement processes and to address certain challenges posed by the centralized architecture of traditional payment systems (Bank for International Settlements, 2017). While proof-of-concept projects have demonstrated the technical feasibility of DLT, significant barriers still hinder its adoption and implementation. The overarching objective of this thesis is to contribute to the developing area of DLT application in payment, clearing, and settlement systems, which is still in its initial stages of development and lacks a substantial body of scholarly literature and empirical research. This is achieved by identifying the socio-technical barriers to the adoption and diffusion of blockchain-based payment systems and the solutions proposed to address them. Furthermore, the thesis examines and classifies various applications of DLT in central bank payment system functions, offering insights into the motivations, DLT platforms used, and consensus algorithms for applicable use cases. To achieve these objectives, the methodology involved a systematic literature review (SLR) of academic literature on blockchain-based payment systems, complemented by a thematic analysis of data collected from various sources regarding DLT applications in central bank payment system functions, such as central bank white papers, industry reports, and policy documents.
The study's findings on barriers to blockchain-based payment systems and proposed solutions challenge the prevailing emphasis on technological and regulatory barriers in the literature and industry discourse. They highlight the importance of considering the broader socio-technical context and identifying barriers across all five dimensions of the socio-technical framework: technological, infrastructural, user practices/market, regulatory, and cultural. Furthermore, the research identified seven DLT applications in central bank payment system functions, grouped into three overarching themes: central banks' operational responsibilities in payment and settlement systems, issuance of central bank digital money, and regulatory oversight/supervisory functions, along with other ancillary functions. Each of these applications has a unique motivation or value proposition, which is the underlying reason for utilizing DLT in that particular use case.
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence ("AI") and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics, and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Rethink Digital Health Innovation: Understanding Socio-Technical Interoperability as Guiding Concept
This dissertation searches for a theoretical framework for developing complex digital health innovations in such a way that they have better prospects of actually arriving in everyday care practice. For although there is no shortage of either demand for or ideas for digital health innovations, the flood of solutions successfully established in practice fails to materialize. This insufficient diffusion success of a developed solution, often pathologized as 'pilotitis', becomes apparent especially when the planned innovation comes with greater ambition and complexity. Seasoned critics will immediately think of heretical counter-questions: for instance, what exactly should count as a complex digital health innovation, and is it even possible to find a universal formula that can guarantee the successful diffusion of digital health innovations? Both questions are not only legitimate; they ultimately lead to the two research strands to which I explicitly devote myself in this dissertation.
In a first block, I work out a delimitation of those digital health innovations that currently receive particular attention in the literature and in practice because of their high potential to improve care and their resulting complexity. More precisely, I examine dominant objectives and the challenges that accompany them. Within the work in this research strand, four objectives crystallize: 1. supporting continuous, collaborative care processes across diverse care providers (also known as inter-organizational care pathways); 2. actively involving patients in their care processes (also known as patient empowerment or patient engagement); 3. strengthening cross-sector collaboration between research and care practice, up to and including learning health systems; and 4. establishing data-centered value creation for healthcare, given the increasing availability of valid data, new processing methods (keyword: artificial intelligence), and the numerous possibilities for their use. The focus of this dissertation is therefore less on self-contained, clearly delimitable innovations (for example, a symptom-diary app for documenting complaints). Rather, this doctoral thesis addresses those innovation projects that pursue one or more of the objectives above, add another technological puzzle piece to complex information system landscapes, and thus, in interplay with diverse other IT systems, contribute to improving healthcare and/or its organization.
In engaging with these objectives and the associated challenges of system development, the problem of fragmented healthcare IT landscapes moved into focus. This refers to the unfortunate state in which different information and application systems cannot interact with one another as desired. Information flows and care processes are interrupted as a result and must be compensated elsewhere through error-prone extra effort (for example, duplicate documentation). To counter these limitations on effectiveness and efficiency, precisely those IT system silos must be dismantled. All of the objectives above subordinate themselves to this defragmenting effect in that they seek to bring together 1. different care providers, 2. care teams and patients, 3. research and care, or 4. diverse data sources and modern analysis technologies. But here a complex circularity arises. On the one hand, the digital health innovations addressed in this work seek ways to defragment information system landscapes. On the other hand, their limited success rate is rooted, among other things, in precisely the existing fragmentation they seek to resolve.
This insight opens up the second research strand of this work, which engages intensively with the property of 'interoperability'. It examines how this property should take on a central role for innovation projects in the digital health domain. Interoperability describes, simply put, the ability of two or more systems to perform common tasks together. It thus represents the core concern of the identified objectives and is the pivot point whenever a developed solution is to be integrated into a concrete target environment. From a technically dominated viewpoint, this is about ensuring valid, performant, and secure communication scenarios so that the aforementioned breaks in information flow between technical subsystems are removed. A purely technical understanding of interoperability, however, does not suffice to cover the variety of diffusion barriers facing digital health innovations. The lack of adequate reimbursement options within the statutory framework, for example, or a poor fit with the specific care process, are not purely technical problems. Rather, a basic stance of information systems research (Wirtschaftsinformatik) comes into play here: it understands information systems, including those of healthcare, as socio-technical systems and always considers technology in connection with the people who use it, are influenced by it, or organize it. If a digital health innovation that promises added value in line with the objectives above is to be integrated into an existing healthcare information system landscape, it must be 'interoperable' from technical as well as non-technical points of view.
Admittedly, the necessity of interoperability is recognized in science, policy, and practice, and positive movements of the domain toward more interoperability can be sensed. However, a technical understanding still dominates, and the potential of this property as a guiding motif for innovation management has so far remained largely untapped. This is exactly where the main contribution of this doctoral thesis comes in: it proposes a socio-technical conceptualization and contextualization of interoperability for future digital health innovations. Based on the literature and on expert input, a framework is developed, the Digital Health Innovation Interoperability Framework, intended above all to help innovators and innovation funders increase the probability of diffusion into practice. Many insights and messages are tied to this framework, which I would like to summarize for this prologue as follows:
1. To best align the development of digital health innovations toward successful integration into a specific target environment, realizing a novel value proposition and ensuring socio-technical interoperability are the two interrelated main tasks of an innovation process.
2. Ensuring interoperability is a management task that must be actively owned, and it is influenced by project-specific conditions as well as by external and internal dynamics.
3. Socio-technical interoperability in the context of digital health innovations can be defined across seven interdependent levels: political and regulatory conditions; contractual conditions; care and business processes; usage; information; applications; IT infrastructure.
4. To ensure interoperability at each of these levels, differentiated strategies must be defined, which can be located on a continuum between compatibility requirements on the side of the innovation and the motivation of adaptations on the side of the target environment.
5. Striving for more interoperability promotes both the sustainable success of the individual digital health innovation and the defragmentation of existing information system landscapes, and thus contributes to improving healthcare.
Admittedly, the last of these five messages carries the coloring of a conviction more than that of a result of scientific proof. Nevertheless, I regard this insight, personal as it may be, as a maxim of the domain to which I feel I belong: healthcare IT system development.
Specialized translation at work for a small expanding company: my experience of internationalization into Chinese for Bioretics© S.r.l.
Global markets are currently immersed in two all-encompassing and unstoppable processes: internationalization and globalization. While the former pushes companies to look beyond the borders of their country of origin to forge relationships with foreign trading partners, the latter fosters standardization across countries by reducing spatiotemporal distances and breaking down geographical, political, economic and socio-cultural barriers. In recent decades, another domain has appeared to propel these unifying drives: Artificial Intelligence, together with its high technologies aiming to implement human cognitive abilities in machinery. The "Language Toolkit - Le lingue straniere al servizio dell'internazionalizzazione dell'impresa" project, promoted by the Department of Interpreting and Translation (Forlì Campus) in collaboration with the Romagna Chamber of Commerce (Forlì-Cesena and Rimini), seeks to help Italian SMEs make their way into the global market. It is precisely within this project that this dissertation was conceived. Its purpose is to present the translation and localization project from English into Chinese of a series of texts produced by Bioretics© S.r.l.: an investor deck, the company website and part of the installation and use manual of the Aliquis© framework software, its flagship product. This dissertation is structured as follows: Chapter 1 presents the project and the company in detail; Chapter 2 outlines the internationalization and globalization processes and the Artificial Intelligence market both in Italy and in China; Chapter 3 provides the theoretical foundations for every aspect related to specialized translation, including website localization; Chapter 4 describes the resources and tools used to perform the translations; Chapter 5 proposes an analysis of the source texts; Chapter 6 is a commentary on translation strategies and choices.
A Proposed Artificial Intelligence-Based System for Developing E-management Skills in Saudi Primary Schools
This study aims to investigate the impact of artificial intelligence-driven solutions on school leaders' proficiencies. Leaders have the responsibility of making decisions in educational institutions as well as carrying out routine tasks daily. Artificial intelligence-assisted applications make noteworthy contributions to the field of educational management. The scope of this study is limited to selected features: data analytics, a chatbot, and an e-survey. The basic design of this study started with analyzing the literature in this domain. This was followed by designing a system consisting of four modules: building a dashboard, predicting students' results, creating a chatbot for responding to parents' queries, and creating an e-survey for measuring staff satisfaction. The prominent finding of this study is the significant impact of artificial intelligence on leaders' competencies.
Talking about personal recovery in bipolar disorder: Integrating health research, natural language processing, and corpus linguistics to analyse peer online support forum posts
Background: Personal recovery, "living a satisfying, hopeful and contributing life even with the limitations caused by the illness" (Anthony, 1993), is of particular value in bipolar disorder, where symptoms often persist despite treatment. So far, personal recovery has only been studied in researcher-constructed environments (interviews, focus groups). Support forum posts can serve as a complementary naturalistic data source. Objective: The overarching aim of this thesis was to study personal recovery experiences that people living with bipolar disorder have shared in online support forums, by integrating health research, natural language processing (NLP), and corpus linguistics in a mixed-methods approach within a pragmatic research paradigm, while considering ethical issues and involving people with lived experience. Methods: This mixed-methods study analysed: 1) previous qualitative evidence on personal recovery in bipolar disorder from interviews and focus groups; 2) who self-reports a bipolar disorder diagnosis on the online discussion platform Reddit; 3) the relationship of mood and posting in mental health-specific Reddit forums (subreddits); 4) discussions of personal recovery in bipolar disorder subreddits. Results: A systematic review of qualitative evidence resulted in the first framework for personal recovery in bipolar disorder, POETIC (Purpose & meaning, Optimism & hope, Empowerment, Tensions, Identity, Connectedness). Mainly young or middle-aged US-based adults self-report a bipolar disorder diagnosis on Reddit. Of these, those experiencing more intense emotions appear to be more likely to post in mental health support subreddits. Their personal recovery-related discussions in bipolar disorder subreddits primarily focussed on three domains: Purpose & meaning (particularly reproductive decisions, work), Connectedness (romantic relationships, social support), and Empowerment (self-management, personal responsibility).
Support forum data highlighted personal recovery issues that came up exclusively, or more frequently, online compared to previous evidence from interviews and focus groups. Conclusion: This project is the first to analyse non-reactive data on personal recovery in bipolar disorder. By indicating the key areas people focus on when posting freely about personal recovery, and the language they use, it provides a helpful starting point for formal and informal carers seeking to understand the concerns of people diagnosed with bipolar disorder and to consider how best to offer support.
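As a concrete illustration of the corpus-linguistics side of such an analysis, the sketch below computes log-likelihood (G2) keyness, a standard statistic for finding words over- or under-represented in a target corpus (for example, forum posts) relative to a reference corpus. The toy token lists are invented for illustration; the abstract does not specify which statistics the thesis itself used.

```python
import math
from collections import Counter

def keyness(target_tokens, reference_tokens):
    """Log-likelihood (G2) keyness: scores how strongly each word is
    over- or under-represented in the target corpus vs the reference."""
    t_counts, r_counts = Counter(target_tokens), Counter(reference_tokens)
    t_total, r_total = len(target_tokens), len(reference_tokens)
    scores = {}
    for word in set(t_counts) | set(r_counts):
        a, b = t_counts[word], r_counts[word]          # observed frequencies
        e1 = t_total * (a + b) / (t_total + r_total)   # expected in target
        e2 = r_total * (a + b) / (t_total + r_total)   # expected in reference
        g2 = 2 * ((a * math.log(a / e1) if a else 0.0) +
                  (b * math.log(b / e2) if b else 0.0))
        scores[word] = round(g2, 2)
    return scores

# Hypothetical tokenized forum posts vs a general reference sample
forum = "recovery hope work recovery meds recovery".split()
reference = "work day work life day hope".split()
scores = keyness(forum, reference)
print(scores["recovery"] > scores["work"])  # "recovery" is key in the forum corpus
```

Words ranked by G2 against a general reference corpus are one way such a study could surface the distinctive vocabulary of recovery-related posts.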
The Texture of Everyday Life: Carceral Realism and Abolitionist Speculation
Exploring the ways in which prisons shape the subjectivity of free-world thinkers, and the ways that subjectivity is expressed in literary texts, this dissertation develops the concept of carceral realism: a cognitive and literary mode that represents prisons and police as the only possible response to social disorder. As this dissertation illustrates, this form of consciousness is experienced as racial paranoia and is expressed in literary texts, which reflect and help to reify it. Through this process of cultural reification, carceral realism increasingly insists on itself as the only possible mode of thinking. As I argue, however, carceral realism actually stands in a dialectical relationship to abolitionist speculation: the active imagining of a world without prisons and police and/or the conditions necessary to actualize such a world. In much the same way that carceral realism embeds itself in realist literary forms, abolitionist speculation plays a constitutive role in the utopian literary tradition.
In order to elaborate these concepts, this dissertation begins with a meta-consideration of how cultural productions by incarcerated people are typically framed. Building upon the work of scholars and incarcerated authors' own interventions in questions of consciousness, authorship, textual production, and study, this chapter contrasts that typical frame with a method of abolitionist reading. Chapter two applies this methodology to Edward Bunker's 1977 novel The Animal Factory and Claudia Rankine's 2010 poem Citizen in order to develop the concept of carceral realism and demonstrate how it has developed from the 1970s to the present. In order to lay out the historical foundations of the modern prison, chapter three looks back to the late 18th century and situates the emergence of the penitentiary within debates regarding race, citizenship, and state power. Returning to the 1970s, chapter four investigates the role universities have played in the formation of carceral realism and the complex relationship Chicanos and Asian Americans have to prisons and police by analogizing the institutionalization of prison literary study to the formation of ethnic studies. Chapter five draws this project to a conclusion by developing the concept of abolitionist speculation, or the active imagining of a world without prisons or the police and/or the conditions necessary to realize such a world, which I identify as both a constitutive generic feature of utopian literature and something that exceeds literature altogether. In doing so, this dissertation establishes an ongoing historical relationship between the social reproduction of prisons and literary forms that cuts across time, geography, race, gender, and genre.
SCALE: Scaling up the Complexity for Advanced Language Model Evaluation
Recent strides in Large Language Models (LLMs) have saturated many NLP benchmarks (even professional domain-specific ones), emphasizing the need for novel, more challenging ones to properly assess LLM capabilities. In this paper, we introduce a novel NLP benchmark that poses challenges to current LLMs across four key dimensions: processing long documents (up to 50K tokens), utilizing domain-specific knowledge (embodied in legal texts), multilingual understanding (covering five languages), and multitasking (comprising legal document-to-document Information Retrieval, Court View Generation, Leading Decision Summarization, Citation Extraction, and eight challenging Text Classification tasks). Our benchmark comprises diverse legal NLP datasets from the Swiss legal system, allowing for a comprehensive study of the underlying non-English, inherently multilingual federal legal system. Despite recent advances, efficiently processing long documents for intense review/analysis tasks remains an open challenge for language models. Also, comprehensive, domain-specific benchmarks requiring high expertise to develop are rare, as are multilingual benchmarks. This scarcity underscores our contribution's value, considering most public models are trained predominantly on English corpora, while other languages remain understudied, particularly for practical domain-specific NLP tasks. Our benchmark allows for testing and advancing state-of-the-art LLMs. As part of our study, we evaluate several pre-trained multilingual language models on our benchmark to establish strong baselines as a point of reference. Despite the large size of our datasets (tens to hundreds of thousands of examples), existing publicly available models struggle with most tasks, even after in-domain pretraining. We publish all resources (benchmark suite, pre-trained models, code) under a fully permissive open CC BY-SA license.
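Baselines on multi-class tasks like the text-classification tasks above are commonly reported with macro-averaged F1, which weights rare categories as heavily as frequent ones. A minimal self-contained sketch of that metric follows; the class labels are invented for illustration and are not taken from the benchmark itself.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then average unweighted,
    so rare classes (common in legal corpora) count as much as frequent ones."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Toy predictions for a hypothetical 3-class legal text classification task
y_true = ["civil", "civil", "criminal", "public", "criminal", "civil"]
y_pred = ["civil", "criminal", "criminal", "public", "criminal", "civil"]
print(round(macro_f1(y_true, y_pred), 3))  # prints 0.867
```

In practice a library implementation (for example, scikit-learn's `f1_score` with `average="macro"`) would be used; the hand-rolled version just makes the averaging explicit.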
The Knowledge Graph Construction in the Educational Domain: Take an Australian School Science Course as an Example
The evolution of Internet technology and artificial intelligence has changed the ways we gain knowledge, which has expanded to every aspect of our lives. In recent years, knowledge graph technology, as one of the artificial intelligence techniques, has been widely used in the educational domain. However, few studies are dedicated to the construction of knowledge graphs for K-10 education in Australia; most existing studies focus only on the theory level, and little research shows the practical pipeline steps needed to complete the complex flow of constructing an educational knowledge graph. Apart from that, most studies focus on concept entities and their relations but ignore the features of concept entities and the relations between learning knowledge points and required learning outcomes. To overcome these shortcomings and provide the data foundation for downstream research and applications in this educational domain, the construction processes of building a knowledge graph for Australian K-10 education were analyzed at the theory level and implemented in a practical way in this research. We took the Year 9 science course as a typical data source example fed to the proposed method, called K10EDU-RCF-KG, to construct this educational knowledge graph and to enrich the features of entities in the knowledge graph. In the construction pipeline, a variety of techniques were employed. Firstly, POI and OCR techniques were applied to convert Word and PDF files into text, followed by developing an educational resources management platform where the machine-readable text could be stored in a relational database management system. Secondly, we designed an architecture framework as the guide for the construction pipeline.
According to this architecture, the educational ontology was initially designed, and a backend microservice was developed to perform entity extraction and relation extraction using NLP-NER and probabilistic association rule mining algorithms, respectively. We also adopted the NLP-POS technique to identify neighboring adjectives related to entities in order to enrich the features of these concept entities. In addition, a subject dictionary was introduced during the refinement process of the knowledge graph, which reduced the noise rate of the knowledge graph entities. Furthermore, learning outcome entities were directly linked to topic knowledge point entities, which provides a clear and efficient way to identify which learning objectives relate to a given learning unit. Finally, a set of REST APIs for querying this educational knowledge graph was developed.
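To make the association-rule side of the relation-extraction step concrete, the sketch below mines simple A -> B rules between concept mentions, treating each sentence as one transaction and filtering by support and confidence. The concepts, thresholds, and data are invented for illustration and are not taken from the thesis or the actual Year 9 course materials.

```python
from itertools import combinations
from collections import Counter

# Toy "course documents": each sentence is one transaction of concept
# mentions (hypothetical Year 9 science concepts, not real course data).
sentences = [
    {"energy", "force", "motion"},
    {"energy", "motion"},
    {"atom", "element"},
    {"energy", "force"},
    {"atom", "element", "compound"},
]

def mine_rules(transactions, min_support=0.3, min_confidence=0.6):
    """Mine A -> B rules between concept pairs: support = P(A and B),
    confidence = P(B | A). Pairs passing both thresholds become
    candidate 'related-to' edges in the knowledge graph."""
    n = len(transactions)
    item_counts = Counter(i for t in transactions for i in t)
    pair_counts = Counter()
    for t in transactions:
        for a, b in combinations(sorted(t), 2):
            pair_counts[(a, b)] += 1
    rules = []
    for (a, b), c in pair_counts.items():
        support = c / n
        if support < min_support:
            continue
        for x, y in ((a, b), (b, a)):        # test the rule in both directions
            confidence = c / item_counts[x]
            if confidence >= min_confidence:
                rules.append((x, y, round(support, 2), round(confidence, 2)))
    return sorted(rules)

for rule in mine_rules(sentences):
    print(rule)
```

On this toy corpus, pairs such as (atom, element) and (energy, force) survive the thresholds and would be emitted as candidate graph edges for later refinement against the subject dictionary.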
The Globalization of Artificial Intelligence: African Imaginaries of Technoscientific Futures
Imaginaries of artificial intelligence (AI) have transcended geographies of the Global North and become increasingly entangled with narratives of economic growth, progress, and modernity in Africa. This raises several issues, such as the entanglement of AI with global technoscientific capitalism and its impact on the dissemination of AI in Africa. The lack of African perspectives on the development of AI exacerbates concerns of raciality and inclusion in the scientific research, circulation, and adoption of AI. My argument in this dissertation is that innovation in AI, in both its sociotechnical imaginaries and political economies, excludes marginalized countries, nations and communities in ways that not only bar their participation in the reception of AI, but also bar them from being part and parcel of its creation.
Underpinned by decolonial thinking, and perspectives from science and technology studies and African studies, this dissertation looks at how AI is reconfiguring the debate about development and modernization in Africa and the implications for local sociotechnical practices of AI innovation and governance. I examined AI in international development and industry across Kenya, Ghana, and Nigeria, by tracing Canada's AI4D Africa program and following AI start-ups at AfriLabs. I used multi-sited case studies and discourse analysis to examine the data collected from interviews, participant observations, and documents.
In the empirical chapters, I first examine how local actors understand the notion of decolonizing AI and show that it has become a sociotechnical imaginary. I then investigate the political economy of AI in Africa and argue that, despite Western efforts to integrate the African AI ecosystem globally, the AI epistemic communities on the continent continue to be excluded from dominant AI innovation spaces. Finally, I examine the emergence of a Pan-African AI imaginary and argue that AI governance can be understood as a state-building experiment in post-colonial Africa. The main issue at stake is that the lack of African perspectives in AI has negative impacts on innovation and limits the fair distribution of AI's benefits across nations, countries, and communities, while at the same time excluding globally marginalized epistemic communities from the imagination and creation of AI.