7 research outputs found
Content Moderation in the Metaverse Could Be a New Frontier to Attack Freedom of Expression
This commentary examines the challenges that metaverse platforms face in cross-border content moderation, focusing on the implications for freedom of expression and nondiscrimination. It highlights the difficulty of determining what content to remove, for which users, and how to do so, with serious implications for freedom of expression and our shared sense of reality. Proto-metaverse platforms such as Roblox and Minecraft face similar questions but have not yet encountered major cross-jurisdictional conflicts. As the experience of traditional social media platforms reveals, content moderation is not merely a question of law and policy but also of geopolitics and government priorities. To avoid a “lowest common denominator effect,” in which freedom of expression is infringed upon worldwide and discrimination is entrenched, this commentary argues that metaverse platforms must clarify their moderation policies, assess their entry into specific markets against local laws and their own values, and be prepared to exit overly restrictive markets.
The Blueprint for an AI Bill of Rights: In Search of Enaction, at Risk of Inaction
The US is promoting a new vision of a "Good AI Society" through its recent AI Bill of Rights, which offers a promising vision of community-oriented equity unique amongst peer countries. However, it leaves the door open to potential rights violations. Moreover, while it may have some federal impact, it is non-binding, and without concrete legislation the private sector is likely to ignore it.
Supporting Trustworthy AI Through Machine Unlearning
Machine unlearning (MU) is often analyzed in terms of how it can facilitate the “right to be forgotten.” In this commentary, we show that MU can support the OECD’s five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice. We also argue that the implementation of MU is not without ethical risks. To address these concerns and amplify the positive impact of MU, we offer policy recommendations across six categories to encourage the research and uptake of this potentially highly influential new technology.
Digital Sovereignty: A Descriptive Analysis and a Critical Evaluation of Existing Models
Digital sovereignty is a popular yet still emerging concept. It is claimed by and related to various global actors, whose narratives are often competing and mutually inconsistent. This article offers a mapping of the types of national digital sovereignty that are emerging, while testing their effectiveness in response to radical changes and challenges. To do this, we systematically analyse a corpus of 271 peer-reviewed articles to identify descriptive features (how digital sovereignty is pursued) and value features (why digital sovereignty is pursued), which we use to produce four models: the rights-based model, market-oriented model, centralisation model, and state-based model. We evaluate their effectiveness within a framework of robust governance that accounts for the models’ ability to absorb the disruptions caused by technological advancements, geopolitical changes, and evolving societal norms. We find that none of the models fully combine comprehensive regulation of digital technologies with a sufficient degree of responsiveness to fast-paced technological innovation and social and economic shifts. This paper’s analysis offers valuable lessons to policymakers who wish to implement an effective and robust form of digital sovereignty.
Safety and Privacy in Immersive Extended Reality: An Analysis and Policy Recommendations
Extended reality (XR) technologies have experienced cycles of development—“summers” and “winters”—for decades, but their overall trajectory is one of increasing uptake. In recent years, immersive extended reality (IXR) applications, a kind of XR that encompasses immersive virtual reality (VR) and augmented reality (AR) environments, have become especially prevalent. The European Union (EU) is exploring regulating this type of technology, and this article seeks to support this endeavor. It outlines safety and privacy harms associated with IXR, analyzes to what extent the existing EU framework for digital governance—including the General Data Protection Regulation, Product Safety Legislation, ePrivacy Directive, Digital Markets Act, Digital Services Act, and AI Act—addresses these harms, and offers some recommendations to EU legislators on how to fill regulatory gaps and improve current approaches to the governance of IXR.
New deepfake regulations in China are a tool for social stability, but at what cost?
China is pushing ahead of the European Union and the United States with its new synthetic content regulations. New draft provisions would place more responsibility on platforms to preserve social stability, with potential costs for online freedoms. They show that the Chinese Communist Party is prepared to protect itself against the unique threats of emerging technologies.
Anniversary AI reflections
For our fifth anniversary, we reconnected with authors of recent Comments and Perspectives in Nature Machine Intelligence and asked them how the topic they wrote about has developed. We also wanted to know what other topics in AI they found exciting, surprising or worrying, and what their hopes and expectations are for AI in 2024 and the next five years. A recurring theme is the ongoing development of large language models and generative AI, their transformative effect on the scientific process, and concerns about ethical implications.