Framing TRUST in Artificial Intelligence (AI) Ethics Communication: Analysis of AI Ethics Guiding Principles through the Lens of Framing Theory
With the rapid proliferation of Artificial Intelligence (AI) technologies in our society, many corporations, governments, research institutions, and NGOs have produced and published AI ethics guidance documents. These include principles, guidelines, frameworks, assessment lists, training modules, blogs, and principle-to-practice strategies. The priorities, focus, and articulation of these numerous documents vary considerably. Though they all aim and claim to ensure AI is used for the common good, the actual outcomes of AI systems in various social applications have sparked ethical dilemmas and scholarly debates. This study analyses the AI ethics principles and guidelines texts published by three pioneers from three different sectors - Microsoft Corporation, the National Institute of Standards and Technology (NIST), and the AI HLEG set up by the European Commission - through the lens of Framing Theory from media and communication studies. The TRUST framings extracted from recent academic AI literature are used as a standard construct to study the ethics framings in the selected texts. The institutional framing of AI principles and guidelines shapes an institution's AI ethics in a soft (there is no legal binding) but strong (incorporating the priorities of its position and societal role) way. An institution's framing of AI principles relates directly to the ethics of the AI actor, which in turn calls for risk mitigation and problem resolution across the AI development and deployment cycle. It has therefore become important to examine institutional AI ethics communication. This paper offers a Comm-Tech perspective on the ethics of the evolving technologies known under the umbrella term Artificial Intelligence and the human moralities governing them.
AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing
Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for "Good". This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or: contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, I will illustrate challenges and pitfalls when determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The questions are: What is the problem / What is a problem?, Who defines the problem?, What is the role of knowledge?, and What are important side effects and dynamics? The illustration will use an example from the domain of "AI for Social Good", more specifically "Data Science for Social Good". Even if the importance of these questions may be known at an abstract level, they do not get asked sufficiently in practice, as shown by an exploratory study of 99 contributions to recent conferences in the field. Turning these challenges and pitfalls into a positive recommendation, as a conclusion I will draw on another characteristic of computer-science thinking and practice to make these impediments visible and attenuate them: "attacks" as a method for improving design. This results in the proposal of ethics pen-testing as a method for helping AI designs to better contribute to the Common Good.
Comment: to appear in Paladyn. Journal of Behavioral Robotics; accepted on 27-10-201
Steps to an Ecology of Networked Knowledge and Innovation: Enabling new forms of collaboration among sciences, engineering, arts, and design
SEAD network White Papers Report. The final White Papers (posted at http://seadnetwork.wordpress.com/white-paper-abstracts/final-white-papers/) represent a spectrum of interests in advocating for transdisciplinarity among arts, sciences, and technologies. All authors submitted plans of action and identified stakeholders they perceived as instrumental in carrying out such plans. The individual efforts led to an international scope. One important characteristic of this collection is that the papers do not represent a collective aim toward an explicit initiative; rather, they offer a broad array of views on barriers faced and prospective solutions. In summary, the collected White Papers and associated Meta-analyses began as an effort to take the pulse of the SEAD community as broadly as possible. The ideas they generated provide a fruitful basis for gauging trends and challenges in facilitating the growth of the network and implementing future SEAD initiatives. National Science Foundation Grant No. 1142510. Additional funding was provided by the ATEC program at the University of Texas at Dallas and the Institute for Applied Creativity at Texas A&M University.
Neural-symbolic integration for fairness in AI
Deep learning has achieved state-of-the-art results in various application domains ranging from image recognition to language translation and game playing. However, it is now generally accepted that deep learning alone has not been able to satisfy the requirements of fairness and, ultimately, trust in Artificial Intelligence (AI). In this paper, we propose an interactive neural-symbolic approach for fairness in AI based on the Logic Tensor Network (LTN) framework. We show that the extraction of symbolic knowledge from LTN-based deep networks, combined with fairness constraints, offers a general method for instilling fairness into deep networks via continual learning. Explainable AI approaches, which otherwise could identify but not fix fairness issues, are shown to be enriched with an ability to improve fairness results. Experimental results on three real-world data sets used to predict income, credit risk, and recidivism in financial applications show that our approach can satisfy fairness metrics while maintaining state-of-the-art classification performance.
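The abstract above describes encoding fairness constraints into network training. The paper's LTN-based method is not reproduced here; the sketch below is a minimal, generic illustration of the underlying idea - expressing a fairness criterion (demographic parity) as a differentiable penalty added to a classifier's loss. All data, names, and the penalty weight are hypothetical choices for illustration only.

```python
# Illustrative sketch (NOT the paper's LTN implementation): a logistic
# regression trained with a demographic-parity penalty, i.e. the squared
# difference between mean predicted positive rates of two groups.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: features X, binary labels y, binary protected attribute a.
# The label is deliberately correlated with a, so an unconstrained model
# learns a group gap.
n = 400
X = rng.normal(size=(n, 3))
a = (rng.random(n) < 0.5).astype(float)
y = ((X[:, 0] + 0.5 * a + rng.normal(scale=0.5, size=n)) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_gap(w, lam):
    """Cross-entropy + lam * (group gap)^2, its gradient, and |gap|."""
    p = sigmoid(X @ w)
    gap = p[a == 1].mean() - p[a == 0].mean()
    # gradient of the cross-entropy term
    g_ce = X.T @ (p - y) / n
    # gradient of gap via the chain rule: d sigmoid(z)/dz = p * (1 - p)
    s = p * (1 - p)
    d_gap = (X[a == 1].T @ s[a == 1]) / (a == 1).sum() \
          - (X[a == 0].T @ s[a == 0]) / (a == 0).sum()
    return g_ce + lam * 2.0 * gap * d_gap, abs(gap)

def train(lam, steps=500, lr=0.5):
    w = np.zeros(3)
    for _ in range(steps):
        g, _ = loss_grad_gap(w, lam)
        w -= lr * g
    return w

gap_plain = loss_grad_gap(train(lam=0.0), 0.0)[1]   # unconstrained model
gap_fair = loss_grad_gap(train(lam=10.0), 0.0)[1]   # penalized model
```

Raising `lam` trades classification accuracy for a smaller gap between the two groups' positive-prediction rates; the penalized model's `gap_fair` should come out smaller than `gap_plain`.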
Engineering Advantage, Spring 2011
https://digitalcommons.calpoly.edu/ceng_news/1001/thumbnail.jp
Computational socioeconomics
Uncovering the structure of socioeconomic systems and timely estimation of socioeconomic status are significant for economic development. The understanding of socioeconomic processes provides foundations to quantify global economic development, to map regional industrial structure, and to infer individual socioeconomic status. In this review, we present a brief manifesto for a new interdisciplinary research field named Computational Socioeconomics, followed by a detailed introduction to data resources, computational tools, data-driven methods, theoretical models, and novel applications at multiple resolutions, including the quantification of global economic inequality and complexity, the mapping of regional industrial structure and urban perception, the estimation of individual socioeconomic status and demographics, and the real-time monitoring of emergent events. This review, together with the pioneering works we have highlighted, should attract increasing interdisciplinary attention and induce a methodological shift in future socioeconomic studies.