16 research outputs found

    Designing an AI governance framework: From research-based premises to meta-requirements

    Get PDF
    The development and increasing use of artificial intelligence (AI), particularly in high-risk application areas, call for attention to the governance of AI systems. Organizations and researchers have proposed AI ethics principles, but translating those principles into practice-oriented frameworks has proven difficult. This paper develops meta-requirements for organizational AI governance frameworks to help translate ethical AI principles into practice and to align operations with the forthcoming European AI Act. We adopt a design science research approach: we first put forward research-based premises and then report the design method employed in an industry-academia research project. On this basis, we present seven meta-requirements for AI governance frameworks. The paper contributes to information systems (IS) research on AI governance by collating knowledge into meta-requirements and advancing a design approach to AI governance. The study underscores that governance frameworks need to incorporate the characteristics of AI, its contexts, and the different sources of requirements.

    A Grand Challenges-Based Research Agenda for Scholarly Communication and Information Science [MIT Grand Challenge PubPub Participation Platform]

    Get PDF
    Identifying Grand Challenges: A global and multidisciplinary community of stakeholders came together in March 2018 to identify, scope, and prioritize a common vision for specific grand research challenges related to the fields of information science and scholarly communications. The participants included domain researchers in academia, practitioners, and those who are aiming to democratize scholarship. An explicit goal of the summit was to identify research needs related to barriers in the development of scalable, interoperable, socially beneficial, and equitable systems for scholarly information, and to explore the development of non-market approaches to governing the scholarly knowledge ecosystem. To spur discussion and exploration, grand challenge provocations were suggested by participants and framed into one of three sections: scholarly discovery, digital curation and preservation, and open scholarship. A few people participated in all three segments, but most attended discussions around a single topic. To create the guest list for our three workshop target areas, we invited participants whose expertise provided diversity across several facets: in addition to expertise in the specific focus area, we aimed for the participants in each track to be diverse across sectors, disciplines, and regions of the world. Each track had approximately 20-25 people from different parts of the world, including the United States, European Union, South Africa, and India. Domain researchers brought perspectives from a range of scientific disciplines, while practitioners brought perspectives from different roles (drawn from commercial, non-profit, and governmental sectors). Nevertheless, we were constrained by our social networks and by the location of the workshop in Cambridge, Massachusetts, and most of the participants were affiliated with US and European institutions. During our discussions, it quickly became clear that the grand challenges themselves cannot be neatly categorized into discovery, curation and preservation, and open scholarship, or even, for that matter, limited to library and information science. Several cross-cutting themes emerged: a strong need to include underrepresented voices and communities outside of mainstream publishing and academic institutions; a need to identify incentives that will motivate people to change their own approaches and processes toward a more open and trusted framework; and a need to identify collaborators and partners from multiple disciplines in order to build strong programs. The discussions were full of energy, insights, and enthusiasm for inclusive participation, and concluded with a desire for a global call to action to spark changes that will enable more equitable and open scholarship. Some important and productive tensions surfaced in our discussions, particularly around the best paths forward on the challenges we identified. On many core topics, however, there was widespread agreement among participants, especially on the urgent need to address the exclusion of so many people around the globe from knowledge production and access, and the troubling overrepresentation in the scholarly record of white, male, English-language voices. Ultimately, all agreed that we have an obligation to enrich and greatly expand this space so that our communities can be catalysts for change.
    The report is organized as follows.
    Towards a more inclusive, open, equitable, and sustainable scholarly knowledge ecosystem: Vision; Broadest impacts; Recommendations for broad impact.
    Research landscape: Challenges, threats, and barriers; Challenges to participation in the research community; Restrictions on forms of knowledge; Threats to integrity and trust; Threats to the durability of knowledge; Threats to individual agency; Incentives to sustain a scholarly knowledge ecosystem that is inclusive, equitable, trustworthy, and sustainable; Grand Challenges research areas; Recommendations for research areas and programs.
    Targeted research questions and research challenges: Legal, economic, policy, and organizational design for enduring, equitable, open scholarship; Measuring, predicting, and adapting to use and utility across scholarly communities; Designing and governing algorithms in the scholarly knowledge ecosystem to support accountability, credibility, and agency; Integrating oral and tacit knowledge into the scholarly knowledge ecosystem.
    Integrating research, practice, and policy: The need for leadership to coordinate research, policy, and practice initiatives; Role of libraries and archives as advocates and collaborators; Incorporating values of openness, sustainability, and equity into scholarly infrastructure and practice; Funders, catalysts, and coordinators; Recommendations for integrating research, practice, and policy.

    Algorithmic Fairness, Algorithmic Discrimination

    Get PDF
    There has been an explosion of concern about the use of computers to make decisions affecting humans, from hiring to lending approvals to setting prison terms. Many have pointed out that using computer programs to make these decisions may result in the propagation of biases or otherwise lead to undesirable outcomes; some have called for increased transparency, while others have called for algorithms to be tuned to produce more racially balanced outcomes. Attention to the problem is likely to grow as computers make increasingly important and sophisticated decisions in our daily lives. Drawing on both the computer science and legal literature on algorithmic fairness, this paper makes four major contributions to the debate over algorithmic discrimination. First, it provides a legal response to a recent flurry of work in computer science seeking to incorporate fairness into algorithmic decision-makers, demonstrating that legal rules generally apply as side constraints, not as fairness functions that can be optimized. Second, by looking at the problem through the lens of discrimination law, the paper recognizes that the problems posed by computational decision-makers closely resemble the historical, institutional discrimination that discrimination law has evolved to control, rebutting the claim that the problem is truly novel because it involves computerized decision-making. Third, the paper responds to calls for transparency in computational decision-making by demonstrating that transparency is unnecessary for providing accountability, and that discrimination law itself provides a model for how to deal with cases of unfair algorithmic discrimination, with or without transparency. Fourth, the paper addresses a problem that has divided the literature: how to correct for discriminatory results produced by algorithms. Rather than seeing the problem as a binary one, I offer a third way, which disaggregates the process of correcting algorithmic decision-makers into two separate decisions: a decision to reject an old process and a separate decision to adopt a new one. These two decisions are subject to different legal requirements, providing added flexibility to firms and agencies seeking to avoid the worst kinds of discriminatory outcomes. Examples of disparate outcomes generated by algorithms, combined with the novelty of computational decision-making, are prompting many to push for new regulations to require algorithmic fairness. In the end, however, current discrimination law provides most of the answers for the wide variety of fairness-related claims likely to arise in the context of computational decision-makers, regardless of the specific technology underlying them.
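
    The paper's first contribution turns on the distinction between fairness as a term in an optimized objective and fairness as a side constraint. A minimal Python sketch of that distinction follows; the function and parameter names (fairness_as_objective, fairness_as_side_constraint, max_disparity) are hypothetical illustrations, not the paper's formalism:

    # Illustrative sketch only: contrasts two framings of fairness.
    # "models" is any iterable of candidate decision rules; "accuracy" and
    # "disparity" are caller-supplied scoring functions (assumptions, not
    # definitions from the paper).

    def fairness_as_objective(models, accuracy, disparity, lam=0.5):
        """CS-style framing: fold fairness into one optimizable objective."""
        return max(models, key=lambda m: accuracy(m) - lam * disparity(m))

    def fairness_as_side_constraint(models, accuracy, disparity, max_disparity=0.05):
        """Legal-style framing: a hard constraint screens out impermissible
        candidates first; accuracy is optimized only among those that remain."""
        permissible = [m for m in models if disparity(m) <= max_disparity]
        if not permissible:
            raise ValueError("no candidate satisfies the side constraint")
        return max(permissible, key=accuracy)

    Under the objective framing, a sufficiently large accuracy gain can always buy back a fairness violation; under the side-constraint framing it cannot, which is the paper's point about how legal rules generally operate.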

    The Survey, Taxonomy, and Future Directions of Trustworthy AI: A Meta Decision of Strategic Decisions

    Full text link
    When making strategic decisions, we are often confronted with overwhelming information to process. The situation can be further complicated when pieces of evidence contradict each other or are paradoxical. The challenge then becomes how to determine which information is useful and which should be discarded. This process is known as a meta-decision. Likewise, when it comes to using Artificial Intelligence (AI) systems for strategic decision-making, placing trust in the AI itself becomes a meta-decision, given that many AI systems are viewed as opaque "black boxes" that process large amounts of data. Trusting an opaque system involves deciding on the level of Trustworthy AI (TAI). We propose a new approach to this issue by introducing a novel taxonomy, or framework, of TAI that encompasses three crucial domains corresponding to different levels of trust: articulate, authentic, and basic. To underpin these domains, we define ten dimensions along which trust is measured: explainability/transparency, fairness/diversity, generalizability, privacy, data governance, safety/robustness, accountability, reproducibility, reliability, and sustainability. We aim to use this taxonomy to conduct a comprehensive survey and explore different TAI approaches from a strategic decision-making perspective.
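
    As a reading aid, the taxonomy lends itself to a simple data structure. The sketch below lists the three domains and ten dimensions named in the abstract; how the dimensions are assigned to domains is our assumption for illustration, since the abstract does not spell out the mapping:

    # Illustrative sketch: the three TAI domains and ten trust dimensions
    # from the abstract. The domain-to-dimension assignment below is an
    # assumption, not the authors' stated mapping.
    TAI_TAXONOMY = {
        "articulate": ["explainability/transparency", "fairness/diversity",
                       "generalizability"],
        "authentic": ["privacy", "data governance", "safety/robustness",
                      "accountability"],
        "basic": ["reproducibility", "reliability", "sustainability"],
    }

    def dimensions_for(domain):
        """Return the trust dimensions assessed within a given TAI domain."""
        return TAI_TAXONOMY[domain]

    # Example: dimensions_for("basic") -> ["reproducibility", "reliability",
    # "sustainability"]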

    Algorithmic Indirect Discrimination, Fairness, and Harm

    Get PDF
    Over the past decade, scholars, institutions, and activists have voiced strong concerns about the potential of automated decision systems to indirectly discriminate against vulnerable groups. This article analyses the ethics of algorithmic indirect discrimination and argues that what is morally bad about such discrimination can be explained by the fact that it causes harm. The article first sketches elements of the technical and conceptual background, including definitions of direct and indirect algorithmic differential treatment. It then introduces three prominent accounts of fairness as potential explanations of the badness of algorithmic indirect discrimination, but argues that all three are vulnerable to powerful levelling-down-style objections. Instead, the article demonstrates how proper attention to the way differences in decision scenarios affect the distribution of harms can account for our intuitions in prominent cases. Finally, the article considers a potential objection based on the fact that certain forms of algorithmic indirect discrimination appear to distribute rather than cause harm, and notes that we can explain how such distributions cause harm by attending to differences in individual and group vulnerability.

    Inteligência artificial, algoritmos e policiamento preditivo no poder público federal brasileiro [Artificial intelligence, algorithms, and predictive policing in the Brazilian federal government]

    Get PDF
    Undergraduate thesis (Trabalho de Conclusão de Curso), Universidade de Brasília, Faculdade de Direito, 2019. The emergence of information technologies, with the use of big data by governments and companies and the development of algorithmic and artificial intelligence systems, signaled the beginning of a new stage of capitalism: Surveillance Capitalism. In view of this, we sought to determine how the Brazilian government is positioned in relation to algorithmic and artificial intelligence (AI) initiatives. A bibliographic review was carried out in parallel with exploratory research, based on data collected from the websites of the Ministries and other public agencies using Boolean queries in Google's search tools. The results were analyzed, and the information technology initiatives were tabulated and their data recorded. However, the lack of an official, central registry of the systems in use prevented a strictly quantitative analysis. In view of the large number of initiatives mapped, it was decided to study the Sinesp Big Data and Artificial Intelligence for Public Security project, developed by the Ministry of Justice and Public Security (MJSP) in partnership with the Department of Computing of the Federal University of Ceará (UFC). This choice was motivated by the project's objectives and by the fact that its operation is oriented toward predictive policing. Public information about Sinesp Big Data proved scarce, so electronic communications were sent to those responsible for the project, along with requests for access to information, supported by Law No. 12.527/11 (Law on Access to Information), to the MJSP and the UFC. Only vague answers to the questions were received, along with refusals to contribute on the grounds of confidentiality agreements. Nevertheless, it was possible to establish that the development of the project is directly linked to Law No. 13.675/18 (Law of the Unified Public Security System) and to the police intelligence systems implemented by the Government of Ceará. Considering the data collected, and using theoretical categories from Foucault, Deleuze, and Zuboff as well as studies on the ethical governance of information technologies, it was possible to identify the strong disciplinary character of Sinesp Big Data, which, it is argued, will perform a disciplinary governance of individuals deemed risky, aligning with the figure of the polypanopticon and presenting itself as a black-box system with likely discriminatory effects.

    Future Work

    Get PDF
    The Industrial Revolution. The Digital Age. These revolutions radically altered the workplace and society. We may be on the cusp of a new era, one that will rival or even surpass these historic disruptions. Technology such as artificial intelligence, robotics, virtual reality, and cutting-edge monitoring devices is developing at a rapid pace. These technologies have already begun to infiltrate the workplace and will continue to do so at ever-increasing speed and breadth. This Article addresses the impact of these emerging technologies on the workplace of the present and the future. Drawing upon interviews with leading technologists, the Article explains the basics of these technologies, describes their current applications in the workplace, and predicts how they are likely to develop in the future. It then examines the legal and policy issues implicated by the adoption of technology in the workplace, most notably job losses, employee classification, privacy intrusions, discrimination, safety and health, and impacts on disabled workers. These changes will surely strain a workplace regulatory system that is ill-equipped to handle them. What is unclear is whether the strain will be so great that the system breaks, resulting in a new paradigm of work. Whether we are on the brink of a workplace revolution or a more modest evolution, emerging technology will exacerbate the inadequacies of our current workplace laws. This Article discusses possible legislative and judicial reforms designed to ameliorate these problems and stave off the possibility of a collapse that would leave a critical mass of workers without any meaningful protection, power, or voice. The most far-reaching of these options is a proposed “Law of Work” that would address the wide-ranging and interrelated issues posed by these new technologies via a centralized regulatory scheme. This proposal, as well as other more narrowly focused reforms, highlights the major impacts of technology on our workplace laws, underscores both the current and future shortcomings of those laws, and serves as a foundation for further research and discussion on the future of work.

    Survey of Trustworthy AI: A Meta Decision of AI

    Get PDF
    When making strategic decisions, we are often confronted with overwhelming information to process. The situation can be further complicated when pieces of evidence contradict each other or are paradoxical. The challenge then becomes how to determine which information is useful and which should be discarded. This process is known as a meta-decision. Likewise, when it comes to using Artificial Intelligence (AI) systems for strategic decision-making, placing trust in the AI itself becomes a meta-decision, given that many AI systems are viewed as opaque "black boxes" that process large amounts of data. Trusting an opaque system involves deciding on the level of Trustworthy AI (TAI). We propose a new approach to this issue by introducing a novel taxonomy, or framework, of TAI that encompasses three crucial domains corresponding to different levels of trust: articulate, authentic, and basic. To underpin these domains, we define ten dimensions along which trust is measured: explainability/transparency, fairness/diversity, generalizability, privacy, data governance, safety/robustness, accountability, reproducibility, reliability, and sustainability. We aim to use this taxonomy to conduct a comprehensive survey and explore different TAI approaches from a strategic decision-making perspective.
    Keywords: Cloud-based Computational Decision; Artificial Intelligence; Machine Learning. SDG 9: Industry, innovation and infrastructure.