33 research outputs found

    Compute North vs. Compute South: the uneven possibilities of compute-based AI governance around the globe

    Get PDF
    Governments have begun to view AI compute infrastructures, including advanced AI chips, as a geostrategic resource. This is partly because “compute governance” is believed to be emerging as an important tool for governing AI systems. In this governance model, states that host AI compute capacity within their territorial jurisdictions are likely to be better placed to impose their rules on AI systems than states that do not. In this study, we provide the first attempt at mapping the global geography of public cloud GPU compute, one particularly important category of AI compute infrastructure. Using a census of hyperscale cloud providers’ cloud regions, we observe that the world is divided into “Compute North” countries that host AI compute relevant for AI development (i.e. training), “Compute South” countries whose AI compute is more relevant for AI deployment (i.e. running inference), and “Compute Desert” countries that host no public cloud AI compute at all. We generate potential explanations for the results using expert interviews, discuss the implications for AI governance and technology geopolitics, and consider possible future trajectories.
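    The paper's three-way taxonomy can be illustrated with a small sketch. Everything below is hypothetical: the country names, the region counts, and the simple thresholding rule are illustrative assumptions, not the paper's census data or method.

```python
# Toy illustration of the "Compute North" / "Compute South" / "Compute Desert"
# taxonomy. The counts below are invented; the actual study uses a census of
# hyperscale cloud providers' cloud regions.

def classify_country(training_regions: int, inference_regions: int) -> str:
    """Classify a country by the public cloud GPU compute it hosts."""
    if training_regions > 0:
        return "Compute North"   # hosts GPU compute relevant for AI training
    if inference_regions > 0:
        return "Compute South"   # hosts GPU compute relevant mainly for inference
    return "Compute Desert"      # hosts no public cloud AI compute at all

# Hypothetical (training_regions, inference_regions) counts per country.
census = {"Country A": (4, 6), "Country B": (0, 2), "Country C": (0, 0)}
labels = {name: classify_country(t, i) for name, (t, i) in census.items()}
```

    Note that in the study itself the distinction rests on what kind of AI compute a country's cloud regions host (training-grade versus inference-grade GPUs), not on a simple count threshold as in this toy rule.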

    Expectations of Artificial Intelligence and the Performativity of Ethics: Implications for Communication Governance

    Get PDF
    This article draws on the sociology of expectations to examine the construction of expectations of ‘ethical AI’ and considers the implications of these expectations for communication governance. We first analyse a range of public documents to identify the key actors, mechanisms and issues which structure societal expectations around artificial intelligence (AI) and an emerging discourse on ethics. We then explore expectations of AI and ethics through a survey of members of the public. Finally, we discuss the implications of our findings for the role of AI in communication governance. We find that, despite societal expectations that we can design ethical AI, and public expectations that developers and governments should share responsibility for the outcomes of AI use, there is a significant divergence between these expectations and the ways in which AI technologies are currently used and governed in large-scale communication systems. We conclude that discourses of ‘ethical AI’ are generically performative, but to become more effective we need to acknowledge the limitations of contemporary AI and the requirement for extensive human labour to meet the challenges of communication governance. An effective ethics of AI requires domain-appropriate AI tools, updated professional practices, dignified places of work, and robust regulatory and accountability frameworks.

    Your Business Needs To Invest In Artificial Intelligence

    Get PDF
    Artificial intelligence (AI) is defined as “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention” by the Brookings Institution. According to Amazon, AI is “the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition”. AI has the ability to transform business, as there are many applications of AI technology available today that could improve the operations and financials of a business. AI has turned capabilities that were previously thought of as futuristic into a reality which businesses can capitalize on today. The goal of this paper is to simplify the complex topic of artificial intelligence into a sales proposal that communicates value to business leaders and other decision makers, both to educate these individuals and to sell them on AI programs for their business. In educating the audience, I will be able to create an understanding of these complex systems. Potential buyers will be guided through information addressing the four main questions decision makers are likely to ask when assessing the potential of investing in AI for their business: what applications AI has in business, what impacts these systems will have on their business, how their business could potentially be limited by AI, and what the future of AI technology is. In answering these four core questions, potential buyers will be provided the information necessary to make a purchasing decision to implement AI within their organization. From a sales perspective, a sales manager will have the necessary research and insight into how to present AI technology to decision makers in order to close these potential customers on purchasing AI technology.

    The AI Quid Pro Quo Problem: Suggesting a Framework for Patents Involving Artificial Intelligence-Assisted or -Created Inventions

    Full text link
    Innovation involving artificial intelligence (AI) is rapidly expanding and diffusing into other areas of technology. Additionally, inventors have been using AI to assist in developing new technology for quite a while and have likely received patents from the United States Patent and Trademark Office (USPTO or “Office”) for their inventions without disclosing the AI involved in the patentable subject matter. As AI has become increasingly present in the implementation of new technology, the question of whether an AI can be an inventor has arisen. In Thaler v. Iancu and on appeal, the courts have affirmatively said no. However, this decision implicates the reliability, clarity, and incentivization of patents involving this subject matter. Given how fast AI technology has been developing, Congress should act to modernize the Patent Act to account for AI-assisted or -created inventions. In the meantime, this Note suggests that the USPTO, through its regulatory power, can help alleviate these concerns by creating an identification system and requiring inventors to disclose the kind of AI involved in the assistance or creation of subject matter in patent applications.

    At the confluence of digital rights and climate & environmental justice: A Landscape Review

    Get PDF
    This research report, based on research conducted by The Engine Room from October 2021 to April 2022, is part of a larger body of work around the intersection of digital rights with environmental and climate justice, supported by the Ford Foundation, Ariadne, and the Mozilla Foundation. This research project aims to better equip digital rights funders to craft grantmaking strategies that maximise impact on these issues. This report was published alongside several other publications, including issue briefs by the Association for Progressive Communications (APC), BSR, and the Open Environmental Data Project and Open Climate. All publications can be found at https://engn.it/climatejusticedigitalright

    Implementing digital health technologies in intensive care: a mixed methods study with development of an implementation framework

    Get PDF
    Background: In the context of the digital transformation of healthcare, technologies such as tablet-based remote patient monitoring systems promise to improve patient-related outcomes and reduce the workload of healthcare staff. However, the introduction of novel digital technologies into routine clinical practice, e.g. in intensive care units (ICUs), is still lagging behind. In the context of implementing a remote patient monitoring system, we aimed to explore expectations of ICU staff regarding patient monitoring, validate them, and develop an implementation framework for digital health technologies in the ICU. Methods: We followed an exploratory research approach using mixed methods. The data collection included semi-structured interviews, field visits and focus groups, and an online cross-sectional survey to validate the insights gained. We derived the implementation framework applying inductive and deductive analysis. The deduction was oriented towards the categories of the Consolidated Framework for Implementation Research and the Expert Recommendations for Implementing Change. Results: Staff expectations regarding novel patient monitoring solutions included introducing wireless sensors, enhanced usability and optimized alarm management. Many false-positive alarms due to poor alarm hygiene were considered problematic, and more training with new devices was demanded. In the validation study, staff members stated that high rates of false-positive alarms (n=60, 70% chose “Strongly agree” or “Agree”) and too many sensor cables (n=66, 77%) would disturb patient care. They supported using remote patient monitoring for earlier alerts (n=55, 65%) and artificial-intelligence-powered clinical decision support systems for early detection of complications (n=67, 79%). To promote usage of such systems, respondents suggested more interoperability (n=79, 93%), high usability (n=78, 93%) and more training with technologies (n=75, 90%).
High-quality, regular staff training, clear leadership commitment, and feedback opportunities for staff should be established to improve implementation. The presented framework compiles strategies to apply before, during and in the general context of the implementation, focusing on usability and adaptability of the intervention, staff involvement, communication, and evaluation strategies. Conclusions: The implementation of digital health technology in specialized settings like the ICU requires a high level of staff resources and commitment. It is important to test the adaptability of the technology and improve it with a user-centered approach in design and implementation. The implementation involves interdisciplinary staff engagement and clear communication of the project, and implementation requirements and conditions should be continuously reassessed. The presented framework may guide implementation leaders towards a sustainable and user-centered introduction of digital health technology in the ICU.

    Digital twins of the Earth with and for humans

    Get PDF
    Digital twins of the Earth are digital representations of the Earth system, spanning scales and domains. Their purpose is to monitor, forecast and assess the Earth system and the consequences of human interventions on it. Providing users with the capability to interact with and interrogate the system, digital twins of the Earth are decision support systems for addressing environmental challenges. By informing humans of their impact on the Earth system, digital twins aspire to promote new pathways moving forward. By answering causal queries through intervention analysis, they can enhance evidence-based policy making. Existing digital twins of the Earth are primarily technological information systems that represent the physical world. However, as the social and physical worlds are intrinsically interconnected, we argue that humans must be accounted for both within and outside digital twins of the Earth: within twins to represent human impacts and responses that are integral to the Earth system, and outside twins to govern access and development and to guide responsible use of information acquired from twins. Incorporating human interactions in digital twins of the Earth represents a transformative frontier, promising unparalleled insights into Earth system dynamics and empowering humans to act.

    Expanding Civil Rights to Combat Digital Discrimination on the Basis of Poverty

    Get PDF
    Low-income people suffer from digital discrimination on the basis of their socio-economic status. Automated decision-making systems, often powered by machine learning and artificial intelligence, shape the opportunities of those experiencing poverty because they serve as gatekeepers to the necessities of modern life. Yet in the existing legal regime, it is perfectly legal to discriminate against people because they are poor. Poverty is not a protected characteristic, unlike race, gender, disability, religion, or certain other identities. This lack of legal protection has accelerated digital discrimination against the poor, fueled by the scope, speed, and scale of big data networks. This Article highlights four areas where data-centric technologies adversely impact low-income people by excluding them from opportunities or targeting them for exploitation: tenant screening, credit scoring, higher education, and targeted advertising. Currently, there are numerous proposals to combat algorithmic bias by updating analog-era civil rights laws for our datafied society, as well as to bolster civil rights within comprehensive data privacy protections and algorithmic accountability standards. On this precipice for legislative reform, it is time to include socio-economic status as a protected characteristic in antidiscrimination laws for the digital age. This Article explains how protecting low-income people within emerging legal frameworks would provide a valuable counterweight against opaque and unaccountable digital discrimination, which undermines any vision of economic justice.

    Achieving Trustable Explanations Through Multi-Task Learning Neural Networks

    Get PDF
    Artificial intelligence is becoming more prominent in high-risk domains such as criminal justice and health care, and as a result, legislators call for insight into AI systems. This insight requires explanations that both ground the decisions and allow us to learn from opaque systems. The field of explainable artificial intelligence is gaining traction as a result, aiming to build trust, safety, and accountability into artificial intelligence systems. Previous literature shows several methods for generating explanations for artificial intelligence systems, but several questions remain. One of them is how we can trust these explanations. This thesis explores the current state of the art in explainable artificial intelligence methods and designs an architecture based on multi-task learning, enabling pre-existing neural networks to add trustable explanations as a native part of the network. We argue for using explanations based on principles from the social sciences in our architecture. We present findings indicating that the architecture retains the positive qualities of a multi-task learner while providing explanations. We show that counterfactual explanations by domain experts can be used to augment data to let multi-task learners excel on sparse data. Our novel loss function integrates the numerical sign difference between the gradient of the explanation and the gradient of the primary task. Through this loss, the architecture ensures that all shared information is utilized similarly. As a result, one can gain increased trust in the explanations from the artificial intelligence system.
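    The sign-difference idea behind the loss can be sketched in a few lines. This is a hedged illustration, not the thesis's implementation: the gradient vectors are invented stand-ins, and the penalty shown is simply the fraction of shared parameters whose gradients from the two heads disagree in sign.

```python
# Hedged sketch of a sign-difference penalty for multi-task learning: for
# parameters shared between the primary-task head and the explanation head,
# measure how often their gradients point in opposite directions. The real
# loss in the thesis operates on per-parameter gradients inside a neural
# network; the vectors below are illustrative stand-ins.

def sign(x: float) -> int:
    """Return -1, 0, or 1 depending on the sign of x."""
    return (x > 0) - (x < 0)

def sign_difference_penalty(grad_task, grad_expl) -> float:
    """Fraction of shared parameters whose two gradients disagree in sign."""
    assert len(grad_task) == len(grad_expl)
    disagreements = sum(
        1 for g_t, g_e in zip(grad_task, grad_expl) if sign(g_t) != sign(g_e)
    )
    return disagreements / len(grad_task)

g_task = [0.5, -0.2, 0.1, -0.4]  # hypothetical gradients from the primary task
g_expl = [0.3, 0.2, 0.1, -0.1]   # hypothetical gradients from the explanation head
penalty = sign_difference_penalty(g_task, g_expl)  # one of four disagree -> 0.25
```

    Adding such a term to the training objective pushes the network to use shared parameters consistently across both heads, which is the sense in which the thesis argues the explanations become more trustworthy.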
