23 research outputs found
Multi-user norm consensus
Many agents act in environments with multiple human users, from care robots to smart assistants. When interacting in multi-user environments, it is paramount that these agents act as all users expect. However, it is not always possible to have well-defined collective preferences, nor to easily infer them from individual preferences. This is especially true in fast-changing environments, such as a device placed in a public space that users can enter and exit freely. In response, this paper proposes a model to represent individual preferences about the behaviour of an agent and a mechanism to find multi-user consensuses over these preferences. Norms can then be generated to ensure that, when the agent follows them, it acts according to the preferences of all users. We formalise what a consensus norm is and what properties the set of consensus norms should satisfy (i.e., generating the minimum number of norms while maximising the coverage of user preferences). We provide an optimisation approach to find this set of norms and show that our approach satisfies the aforementioned properties.
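The trade-off described here (fewest norms, full preference coverage) has the flavour of a minimum set-cover problem. The following is a minimal sketch under that assumption, with hypothetical norm and preference names; the paper's actual optimisation model is richer than this brute-force illustration.

```python
from itertools import combinations

def minimal_consensus_norms(norms, preferences):
    """Find a smallest set of candidate norms covering every user
    preference. Brute force over subsets by increasing size, so it is
    only suitable for small illustrative instances.
    norms: dict mapping norm name -> set of preferences it satisfies."""
    for size in range(1, len(norms) + 1):
        for subset in combinations(norms, size):
            covered = set().union(*(norms[n] for n in subset))
            if preferences <= covered:
                return set(subset)
    return None  # no subset of norms covers all preferences

# Hypothetical preferences of users sharing a device in a public space
prefs = {"mute_at_night", "no_recording_guests", "notify_on_listen"}
candidates = {
    "n1": {"mute_at_night"},
    "n2": {"no_recording_guests", "notify_on_listen"},
    "n3": {"mute_at_night", "notify_on_listen"},
}
print(minimal_consensus_norms(candidates, prefs))  # two norms suffice
```

Here `{n1, n2}` covers all three preferences, so no single norm or larger set is needed.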
Predicting Privacy Preferences for Smart Devices as Norms
Smart devices, such as smart speakers, are becoming ubiquitous, and users expect these devices to act in accordance with their preferences. In particular, since these devices gather and manage personal data, users expect them to adhere to their privacy preferences. However, the current approach of gathering these preferences consists of asking users directly, which usually triggers automatic responses that fail to capture their true preferences. In response, in this paper we present a collaborative filtering approach to predict user preferences as norms. These preference predictions can be readily adopted, or can serve to assist users in determining their own preferences. Using a dataset of privacy preferences of smart assistant users, we test the accuracy of our predictions.
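A user-based collaborative filter of the kind the abstract mentions can be sketched as follows: predict a user's missing privacy preference from the preferences of the most similar users. The preference matrix and similarity measure here are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def predict_preference(ratings, user, item, k=2):
    """User-based collaborative filtering sketch: predict a missing
    preference as a similarity-weighted average over the k most similar
    users who expressed a preference for the item.
    ratings: users x items array, np.nan marks an unknown preference."""
    def cosine(u, v):
        common = ~np.isnan(u) & ~np.isnan(v)
        if not common.any():
            return 0.0
        a, b = u[common], v[common]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    mask = ~np.isnan(ratings[:, item])
    mask[user] = False  # exclude the target user
    candidates = np.where(mask)[0]
    sims = np.array([cosine(ratings[user], ratings[c]) for c in candidates])
    top = candidates[np.argsort(sims)[-k:]]          # k nearest neighbours
    top_sims = np.sort(sims)[-k:]
    if top_sims.sum() == 0:
        return float(np.nanmean(ratings[:, item]))   # fall back to the mean
    return float(top_sims @ ratings[top, item] / top_sims.sum())

# Hypothetical privacy-preference matrix (1 = allow, 0 = deny)
R = np.array([
    [1.0, 0.0, 1.0, np.nan],   # target user, item 3 unknown
    [1.0, 0.0, 1.0, 1.0],      # identical on known items
    [0.0, 1.0, 0.0, 0.0],      # opposite preferences
])
print(predict_preference(R, user=0, item=3))  # -> 1.0
```

The prediction follows the identical neighbour (user 1) rather than the dissimilar one, which is exactly the intuition behind recommending preferences the user can then adopt or adjust.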
Encoding Ethics to Compute Value‑Aligned Norms
Norms have been widely enacted in human and agent societies to regulate individuals' actions. However, although legislators may have ethics in mind when establishing norms, moral values are not always explicitly considered. This paper advances the state of the art by providing a method for selecting the norms to enact within a society that best align with the moral values of that society. Our approach to aligning norms and values is grounded in the ethics literature. Specifically, from the literature's study of the relations between norms, actions, and values, we formally define how actions and values relate through the so-called value judgment function, and how norms and values relate through the so-called norm promotion function. We show that both functions provide the means to compute value alignment for a set of norms. Moreover, we detail how to cast our decision-making problem as an optimisation problem: finding the norms that maximise value alignment. We also show how to solve our problem using off-the-shelf optimisation tools. Finally, we illustrate our approach with a specific case study on the European Values Study.
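The selection step described above can be sketched as a small search: score each candidate norm set by how much its norms promote each moral value, weighted by the society's value weights. The promotion scores, value weights, and incompatibility pairs below are hypothetical; the paper solves this with proper optimisation tools rather than enumeration.

```python
from itertools import combinations

def best_norm_set(promotion, value_weights, incompatible, max_norms):
    """Exhaustive sketch of norm selection: pick the set of norms
    maximising value alignment, i.e. the weighted sum of how much each
    norm promotes each moral value, subject to pairwise incompatibility
    constraints. Illustrative brute force, not the paper's encoding."""
    norms = list(promotion)
    best, best_score = set(), float("-inf")
    for size in range(max_norms + 1):
        for subset in combinations(norms, size):
            if any({a, b} <= set(subset) for a, b in incompatible):
                continue  # skip sets containing incompatible norm pairs
            score = sum(value_weights[v] * promotion[n][v]
                        for n in subset for v in value_weights)
            if score > best_score:
                best, best_score = set(subset), score
    return best, best_score

# Hypothetical norm-promotion degrees in [-1, 1] for two moral values
promotion = {
    "curfew":    {"security": 0.8, "freedom": -0.5},
    "free_move": {"security": -0.2, "freedom": 0.9},
    "patrols":   {"security": 0.6, "freedom": 0.1},
}
weights = {"security": 0.5, "freedom": 0.5}
norms, score = best_norm_set(promotion, weights,
                             incompatible=[("curfew", "free_move")],
                             max_norms=2)
print(norms, round(score, 2))  # free movement plus patrols wins
```

With equal value weights, the curfew's cost to freedom outweighs its security gain, so the aligned choice pairs free movement with patrols.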
Value alignment in participatory budgeting
Participatory budgeting empowers citizens to take an active role in shaping their government's policies by influencing the allocation of a limited budget. In this process, citizens file various proposals and then collectively decide which ones should receive funding through a voting system. While participatory budgets have garnered significant attention in research and practice, one aspect overlooked so far is the ethical dimension of the proposals. Thus, beyond just gauging citizen preferences, we propose also to consider how these initiatives align with the government's core values. Specifically, we apply optimisation techniques to solve a multi-criteria decision problem that considers both citizen support and value alignment when choosing the proposals to fund. We illustrate our method in two real case studies and analyse how we can combine both criteria in an egalitarian way that does not necessarily compromise the will of citizens and may encourage governments to broaden their objectives and increase the allocated budget.
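One simple way to read the multi-criteria problem is as a budgeted selection maximising a combination of citizen support and value alignment. The sketch below uses a convex combination of the two criteria and brute-force search; the proposal data, the weighting parameter, and the scalarisation are assumptions, whereas the paper combines the criteria in an egalitarian way via proper optimisation.

```python
from itertools import combinations

def select_proposals(proposals, budget, alpha=0.5):
    """Multi-criteria participatory-budgeting sketch: choose proposals
    within budget maximising alpha * support + (1 - alpha) * alignment,
    with both criteria assumed normalised to [0, 1]. Brute force over
    subsets, for small illustrative instances only."""
    names = list(proposals)
    best, best_score = (), float("-inf")
    for size in range(len(names) + 1):
        for subset in combinations(names, size):
            cost = sum(proposals[p]["cost"] for p in subset)
            if cost > budget:
                continue  # respects the limited budget
            score = sum(alpha * proposals[p]["support"]
                        + (1 - alpha) * proposals[p]["alignment"]
                        for p in subset)
            if score > best_score:
                best, best_score = subset, score
    return set(best), best_score

# Hypothetical proposals: cost, citizen support, value alignment
props = {
    "bike_lanes": {"cost": 40, "support": 0.9, "alignment": 0.7},
    "park":       {"cost": 60, "support": 0.6, "alignment": 0.9},
    "parking":    {"cost": 50, "support": 0.8, "alignment": 0.2},
}
chosen, score = select_proposals(props, budget=100)
print(chosen, round(score, 2))
```

With equal weighting, the well-aligned park edges out the popular but poorly aligned parking proposal, illustrating how value alignment can shift funding without overturning citizen support.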
Towards Pluralistic Value Alignment: Aggregating Value Systems through ℓp-Regression
Dealing with the challenges of an interconnected, globalised world requires handling plurality. This is no exception when considering value-aligned intelligent systems, since the values to align with should capture this plurality. So far, most of the value-alignment literature has considered only a single value system. Thus, this paper advances the state of the art by proposing a method for the aggregation of value systems. By exploiting recent results in the social choice literature, we formalise our aggregation problem as an optimisation problem. We then cast this problem as an ℓp-regression problem. By doing so, we provide a general theoretical framework to model and solve the above-mentioned problem. Our aggregation method allows us to consider a range of ethical principles, from utilitarian (maximum utility) to egalitarian (maximum fairness). We illustrate the aggregation of value systems by considering real-world data from the European Values Study, and we show how different consensus value systems can be obtained depending on the ethical principle of choice.
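The role of p can be illustrated with the classic closed-form cases of ℓp aggregation: minimising the summed deviations component-wise gives the median for p = 1 (utilitarian), the mean for p = 2, and the midrange as p → ∞ (egalitarian, minimising the worst-off deviation). The value systems below are hypothetical; the paper solves general ℓp-regression rather than these special cases.

```python
import numpy as np

def aggregate_value_systems(systems, p):
    """l_p aggregation sketch: consensus value system x minimising the
    component-wise l_p deviation from the individual systems v_i.
    Closed forms for the classic cases only:
      p = 1   -> component-wise median (utilitarian)
      p = 2   -> component-wise mean
      p = inf -> component-wise midrange (egalitarian)"""
    V = np.asarray(systems, dtype=float)
    if p == 1:
        return np.median(V, axis=0)
    if p == 2:
        return V.mean(axis=0)
    if p == float("inf"):
        return (V.min(axis=0) + V.max(axis=0)) / 2
    raise ValueError("closed form only for p in {1, 2, inf}")

# Hypothetical value systems: weights over (security, freedom)
systems = [[0.9, 0.1], [0.5, 0.5], [0.4, 0.6]]
print(aggregate_value_systems(systems, 1))            # median consensus
print(aggregate_value_systems(systems, float("inf"))) # midrange consensus
```

Note how the ℓ1 consensus follows the majority of value systems, while the ℓ∞ consensus is pulled toward the outlier so that no individual is too far from the result, which is the utilitarian/egalitarian contrast the abstract describes.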
Fluorescein labelled cationic carbosilane dendritic systems for biological studies
Cationic carbosilane dendrimers and dendrons labelled with one fluorescein unit have been synthesized. For dendrimers (generations 1–3), a random procedure was followed by successive addition of two types of thiol compounds to vinyl-terminated derivatives, first one with –NH3Cl and then one with –NMe2HCl functions, subsequent reaction with FITC, and finally quaternization with MeI. For dendrons, the use of compounds with an –NH2 group at the focal point and –NMe2 functions at the periphery allowed us to obtain the corresponding fluoresceinated cationic derivatives. The toxicity of these dendritic molecules was studied by MTT assay and their interaction with siRNA Nef by electrophoresis. Finally, the second-generation dendrimer and its dendriplexes with siRNA Nef were chosen as a model to analyse their in vivo biodistribution in a BALB/c mouse model. The highest levels for dendriplexes were found in spleen and liver, followed by lymph nodes, while lower levels were found in kidneys. This distribution is in accordance with long circulation times.
Funding: Ministerio de Economía y Empresa, Comunidad de Madrid, Ministerio de Educación y Ciencia.
Moral values in norm decision making
Both agent and human societies often use norms to coordinate their ongoing activities. Nevertheless, choosing the 'right' set of norms to regulate these societies constitutes an open problem. Firstly, intrinsic norm relationships may lead to inconsistencies in the chosen set of norms. Secondly, and more importantly, there is an increasing demand for including ethical considerations in the decision-making process. This paper focuses on choosing the 'right' norms by considering moral values together with society's partial preferences over these values and the extent to which candidate norms promote them. The resulting decision-making problem can then be encoded as a linear program, and hence solved by state-of-the-art solvers. Furthermore, we empirically test several optimisation scenarios so as to determine the system's performance and the characteristics of the problem that affect its hardness.
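The encoding the abstract refers to can be pictured as a 0/1 program: one binary variable per candidate norm, an objective built from value-promotion scores, and constraints such as x_a + x_b ≤ 1 for mutually exclusive norm pairs. The sketch below enumerates the 0/1 assignments of a tiny hypothetical instance; the paper hands the same kind of encoding to a solver instead.

```python
from itertools import product

# Hypothetical objective coefficients: value-promotion score per norm
scores = {"n_quiet": 0.4, "n_share": 0.7, "n_private": 0.6}
# Norm relationship constraint: these two norms cannot co-hold
exclusive = [("n_share", "n_private")]

names = list(scores)
best, best_val = None, float("-inf")
for x in product((0, 1), repeat=len(names)):  # all 0/1 assignments
    sel = {n for n, bit in zip(names, x) if bit}
    if any(a in sel and b in sel for a, b in exclusive):
        continue  # violates an exclusivity constraint x_a + x_b <= 1
    val = sum(scores[n] for n in sel)
    if val > best_val:
        best, best_val = sel, val
print(best, round(best_val, 2))
```

The exclusivity constraint forces a choice between the two conflicting norms, and the objective picks the better-promoting one alongside the compatible norm, mirroring how the linear program resolves norm inconsistencies while optimising value promotion.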
Exploiting moral values to choose the right norms
Norms constitute regulative mechanisms extensively enacted in groups, organisations, and societies. However, 'choosing the right norms to establish' constitutes an open problem that requires the consideration of a number of constraints (such as norm relations) and preference criteria (e.g., over the moral values involved). This paper advances the state of the art in the Normative Multiagent Systems literature by formally defining this problem and by proposing its encoding as a linear program so that it can be solved automatically.