Society-in-the-Loop: Programming the Algorithmic Social Contract
Recent rapid advances in Artificial Intelligence (AI) and Machine Learning
have raised many questions about the regulatory and governance mechanisms for
autonomous machines. Many commentators, scholars, and policy-makers now call
for ensuring that algorithms governing our lives are transparent, fair, and
accountable. Here, I propose a conceptual framework for the regulation of AI
and algorithmic systems. I argue that we need tools to program, debug and
maintain an algorithmic social contract, a pact between various human
stakeholders, mediated by machines. To achieve this, we can adapt the concept
of human-in-the-loop (HITL) from the fields of modeling and simulation, and
interactive machine learning. In particular, I propose an agenda I call
society-in-the-loop (SITL), which combines the HITL control paradigm with
mechanisms for negotiating the values of various stakeholders affected by AI
systems, and monitoring compliance with the agreement. In short, "SITL = HITL + Social Contract."
Comment: (in press), Ethics and Information Technology, 201
Sustainability Standards and Stakeholder Engagement: Lessons From Carbon Markets
Stakeholders play an increasingly active role in private governance, including development of standards for measuring sustainability. Building on prior studies focused on standards and stakeholder engagement, we use an innovation management theoretical lens to compare stakeholder engagement and standards developed in two carbon markets: the Climate Action Reserve and the U.N.’s Clean Development Mechanism. We develop and test hypotheses regarding how different processes of stakeholder engagement in standard development affect the number, identity, and age of stakeholders involved, as well as the variation and quality of the resulting standards. In doing so, we contribute to the growing literature on stakeholder engagement in developing sustainability standards.
Exploring Societal Computing based on the Example of Privacy
Data privacy in online systems like Facebook and Amazon has become an increasingly prominent topic in recent years. This thesis consists of four projects that address issues at the intersection of privacy and software engineering.
First, little is known about how users and developers perceive privacy or about which concrete measures would mitigate their privacy concerns. To investigate privacy requirements, we conducted an online survey with closed and open questions and collected 408 valid responses. Our results show that users often reduce privacy to security, with data sharing and data breaches being their biggest concerns. Users are more concerned about the content of their documents and their personal data, such as location, than about their interaction data. Unlike users, developers clearly prefer technical measures like data anonymization and think that privacy laws and policies are less effective. We also observed interesting differences between people from different geographies. For example, people from Europe are more concerned about data breaches than people from North America, and people from Asia/Pacific and Europe consider content and metadata more critical for privacy than people from North America do. Our results contribute to developing a user-driven privacy framework that is based on empirical evidence in addition to the legal, technical, and commercial perspectives.
Second, a related challenge is to make privacy more understandable in complex systems that may offer many user-interface options, which may change often. As social network platforms have evolved, the ability for users to control how and with whom information is shared introduces challenges concerning the configuration and comprehension of privacy settings. To address these concerns, our crowdsourced approach simplifies the understanding of privacy settings by using data collected from 512 users over a 17-month period to generate visualizations that allow users to compare their personal settings to an arbitrary subset of individuals of their choosing. To validate our approach, we conducted an online survey with closed and open questions and collected 59 valid responses, after which we conducted follow-up interviews with 10 respondents. Our results showed that 70% of respondents found visualizations using crowdsourced data useful for understanding privacy settings, and 80% preferred a crowdsourced tool for configuring their privacy settings over current privacy controls.
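The core of such a crowd-based comparison can be sketched in a few lines. The setting names, visibility levels, and their ordering below are illustrative placeholders, not the thesis's actual data model; the sketch only shows the idea of ranking a user's choice against a chosen cohort's choices.

```python
from collections import Counter

# Hypothetical visibility levels, ordered from most open to most restrictive.
LEVELS = ["public", "friends_of_friends", "friends", "only_me"]

def compare_to_cohort(my_settings, cohort_settings):
    """For each privacy setting, report how the user's choice compares with
    the distribution of choices in a user-chosen cohort (crowdsourced data).
    my_settings maps setting name -> level; cohort_settings is a list of
    such dicts, one per cohort member."""
    report = {}
    for name, mine in my_settings.items():
        counts = Counter(s[name] for s in cohort_settings if name in s)
        total = sum(counts.values())
        # Share of the cohort whose choice is stricter than the user's.
        stricter = sum(c for lvl, c in counts.items()
                       if LEVELS.index(lvl) > LEVELS.index(mine))
        report[name] = {
            "your_choice": mine,
            "cohort_share_stricter": stricter / total if total else None,
            "cohort_mode": counts.most_common(1)[0][0] if counts else None,
        }
    return report

me = {"posts": "public", "friend_list": "friends"}
cohort = [{"posts": "friends", "friend_list": "friends"},
          {"posts": "only_me", "friend_list": "public"},
          {"posts": "friends", "friend_list": "friends"}]
report = compare_to_cohort(me, cohort)
```

A visualization layer would then render, per setting, the user's position against the cohort distribution; the data reduction above is the part the crowd supplies.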
Third, as software evolves over time, it might introduce bugs that breach users' privacy. Further, system-wide policy changes could make users' settings more or less private than before. We present a novel technique that end-users can apply to detect changes in privacy, i.e., regression testing for privacy. Using a social approach for detecting privacy bugs, we present two prototype tools. Our evaluation shows the feasibility and utility of our approach for detecting privacy bugs. We highlight two interesting case studies on the bugs that were discovered using our tools. To the best of our knowledge, this is the first technique that leverages regression testing for detecting privacy bugs from an end-user perspective.
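A minimal sketch of the regression-testing idea: snapshot a user's effective settings before and after a platform change, and flag any setting whose visibility silently became more open. The level ordering and setting names are hypothetical; the thesis's prototype tools are not described at this level of detail.

```python
# Hypothetical visibility levels, ordered from most open to most restrictive.
LEVELS = ["public", "friends", "only_me"]

def privacy_regressions(before, after):
    """Compare two snapshots of a user's effective privacy settings
    (e.g., captured before and after a platform update) and return the
    settings that became *more open* -- the privacy analogue of a
    regression-test failure."""
    regressions = []
    for name, old in before.items():
        new = after.get(name, old)
        if LEVELS.index(new) < LEVELS.index(old):  # lower index = more open
            regressions.append((name, old, new))
    return regressions

before = {"photos": "friends", "email": "only_me"}
after  = {"photos": "public",  "email": "only_me"}
found = privacy_regressions(before, after)  # [('photos', 'friends', 'public')]
```

Run against every release, such a check turns "did this update quietly expose my photos?" into an automatable assertion rather than a manual audit.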
Fourth, approaches to addressing these privacy concerns typically require substantial extra computational resources, which might be beneficial where privacy is concerned but can have a significant negative impact with respect to Green Computing and sustainability, another major societal concern. Spending more computation time means spending more energy and other resources, making the software system less sustainable. Ideally, we would like techniques for designing software systems that address these privacy concerns but are also sustainable - systems where privacy could be achieved "for free", i.e., without having to spend extra computational effort. We describe how privacy can indeed be achieved for free - as an accidental and beneficial side effect of doing some existing computation - in web applications and online systems that have access to user data. We show the feasibility, sustainability, and utility of our approach and what types of privacy threats it can mitigate.
Finally, we generalize the problem of privacy and its tradeoffs. As Social Computing has increasingly captivated the general public, it has become a popular research area for computer scientists. Social Computing research focuses on online social behavior and on using artifacts derived from it to provide recommendations and other useful community knowledge. Unfortunately, some of that behavior and knowledge incurs societal costs, particularly with regard to Privacy, which is viewed quite differently by different populations and regulated differently in different locales. But clever technical solutions to those challenges may impose additional societal costs, e.g., by consuming substantial resources at odds with Green Computing, another major area of societal concern. We propose a new crosscutting research area, Societal Computing, that focuses on the technical tradeoffs among computational models and application domains that raise significant societal issues. We highlight some of the relevant research topics and open problems that we foresee in Societal Computing. We feel that these topics, and Societal Computing in general, need to gain prominence, as they will provide useful avenues of research leading to increasing benefits for society as a whole.
Moral Machine or Tyranny of the Majority?
With Artificial Intelligence systems increasingly applied in consequential
domains, researchers have begun to ask how these systems ought to act in
ethically charged situations where even humans lack consensus. In the Moral
Machine project, researchers crowdsourced answers to "Trolley Problems"
concerning autonomous vehicles. Subsequently, Noothigattu et al. (2018)
proposed inferring linear functions that approximate each individual's
preferences and aggregating these linear models by averaging parameters across
the population. In this paper, we examine this averaging mechanism, focusing on
fairness concerns in the presence of strategic effects. We investigate a simple
setting where the population consists of two groups, with the minority
constituting an α < 0.5 share of the population. To simplify the
analysis, we consider the extreme case in which within-group preferences are
homogeneous. Focusing on the fraction of contested cases where the minority
group prevails, we make the following observations: (a) even when all parties
report their preferences truthfully, the fraction of disputes where the
minority prevails is less than proportionate in α; (b) the degree of
sub-proportionality grows more severe as the level of disagreement between the
groups increases; (c) when parties report preferences strategically, pure
strategy equilibria do not always exist; and (d) whenever a pure strategy
equilibrium exists, the majority group prevails 100% of the time. These
findings raise concerns about stability and fairness of preference vector
averaging as a mechanism for aggregating diverging voices. Finally, we discuss
alternatives, including randomized dictatorship and median-based mechanisms.
Comment: To appear in the proceedings of AAAI 202
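The averaging mechanism under scrutiny is easy to simulate. The sketch below uses random Gaussian "dilemma" feature vectors and two homogeneous groups with orthogonal unit preference vectors - illustrative choices of ours, not taken from the paper - and reproduces observation (a): with α = 0.3, the minority prevails on noticeably fewer than 30% of contested cases.

```python
import numpy as np

rng = np.random.default_rng(0)

def minority_win_rate(u, v, alpha, n_cases=200_000):
    """Fraction of contested cases decided in the minority's favor when
    the two groups' linear preference models are averaged with weights
    (1 - alpha) for the majority and alpha for the minority."""
    w = (1 - alpha) * u + alpha * v               # population-averaged model
    x = rng.standard_normal((n_cases, len(u)))    # random dilemma features
    maj_pred, min_pred, avg_pred = np.sign(x @ u), np.sign(x @ v), np.sign(x @ w)
    contested = maj_pred != min_pred              # groups disagree on outcome
    return np.mean(avg_pred[contested] == min_pred[contested])

u = np.array([1.0, 0.0])   # majority preferences (homogeneous within group)
v = np.array([0.0, 1.0])   # minority preferences
rate = minority_win_rate(u, v, alpha=0.3)
print(rate)  # well below the proportionate 0.3 (analytically ~0.26 here)
```

Sweeping the angle between u and v in this sketch also illustrates observation (b): as the groups' preferences diverge further, the minority's share of contested wins shrinks even more.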
CHARACTERIZING ENABLING INNOVATIONS AND ENABLING THINKING
The pursuit of innovation is ingrained throughout society, whether in business via the introduction of offerings, in non-profits' mission-driven initiatives, in universities' and agencies' drive for discoveries and inventions, or in governments' desire to improve the quality of life of their citizens. Yet, despite these pursuits, innovations with long-lasting, significant impact remain an infrequent outcome in most domains. The seemingly random nature of these results stems, in part, from the definitions of innovation and the models based on such definitions. Although there is debate on this topic, the comprehensive and pragmatic perspective developed in this work defines innovation as the introduction of a novel or different idea into practice that has a positive impact on society. To date, models of innovation have focused on, for example, new technological advances, new approaches to connectivity in systems, new conceptual frameworks, or even new dimensions of performance - all effectively building on the first half of the definition of innovation and encouraging its pursuit based on the novelty of ideas. However, as explored herein, achieving profound results by innovating on demand might require a perspective that focuses on the impact of an innovation. In this view, innovation does not only entail doing new things, but consciously driving them towards achieving impact through proactive design behaviors. Explicit consideration of the impact dimension has been missing from innovation models, even though it may arguably be the most important dimension, since it represents the outcome of innovation.
The Future of Science Governance: A review of public concerns, governance and institutional response
Citizens AND HYdrology (CANDHY): conceptualizing a transdisciplinary framework for citizen science addressing hydrological challenges
Widely available digital technologies are empowering citizens who are increasingly well informed and involved in numerous water, climate, and environmental challenges. Citizen science can serve many different purposes, from the "pleasure of doing science" to complementing observations, increasing scientific literacy, and supporting collaborative behaviour to solve specific water management problems. Still, procedures on how to incorporate citizens' knowledge effectively to inform policy and decision-making are lagging behind. Moreover, general conceptual frameworks are unavailable, preventing the widespread uptake of citizen science approaches for more participatory cross-sectorial water governance. In this work, we identify the shared constituents, interfaces, and interlinkages between hydrological sciences and other academic and non-academic disciplines in addressing water issues. Our goal is to conceptualize a transdisciplinary framework for valuing citizen science and advancing the hydrological sciences. Joint efforts between hydrological, computer, and social sciences are envisaged for integrating human sensing and behavioural mechanisms into the framework. Expanding opportunities of online communities complement the fundamental value of on-site surveying and indigenous knowledge. This work is promoted by the Citizens AND HYdrology (CANDHY) Working Group established by the International Association of Hydrological Sciences (IAHS).
Model(ing) Privacy: Empirical Approaches to Privacy Law and Governance
How open is innovation? A retrospective and ideas forward
This paper sheds fresh light on our 2010 paper "How Open Is Innovation?" by taking into consideration notable developments in innovation over the last decade. The original paper developed four types of openness: sourcing, acquiring, selling, and revealing. Reflecting on important technological, organizational, and societal changes in the past decade, we highlight how these changes prompt novel questions for open innovation. While the core features of the original framework still stand, many new questions have emerged in recent years. We end by charting a path for future research that emphasizes opportunities, costs, and tradeoffs between different modes of open innovation; the need to better understand the nature of data; new organizational designs and legal instruments; and multilevel aspects and relationships that affect the extent and nature of openness.