390 research outputs found

    Giving permission implies giving choice

    When we examine different forms of acts within the framework of Dutch criminal law, asking whether an act is permitted or not, we encounter a distinction. On the one hand, a certain act may be permitted by a competent normative authority. On the other hand, an act may be weakly permitted under Dutch criminal law without a competent normative authority having enacted that permission. The article presents a formalisation of weak and strong permission in deontic logic, based on the logic of enactment. A permission that follows from the absence of a prohibition we call a weak permission; this permission is not enacted. A strong permission is always enacted (implicitly or explicitly) and implies giving a choice. The distinction between these two types of permission is a consequence of the universality of a normative system under the closure rule: 'whatever is not forbidden, is permitted'.
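
    As a quick formal sketch (our notation, not necessarily the article's), the distinction can be written with standard deontic operators, where O is obligation, F is prohibition, P is permission, and E is an assumed "enacted by a competent normative authority" operator:

```latex
% Weak permission: nothing more than the absence of a prohibition.
\[ P_w\,\varphi \;\equiv\; \neg F\,\varphi \;\equiv\; \neg O\,\neg\varphi \]

% Strong permission: the permission itself is enacted (implicitly or
% explicitly) by a competent authority, and so grants a genuine choice.
\[ P_s\,\varphi \;\equiv\; E(P\,\varphi) \]

% Closure rule that makes the normative system universal:
% "whatever is not forbidden, is permitted".
\[ \neg F\,\varphi \;\rightarrow\; P_w\,\varphi \]
```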

    Representing legal rules in deontic logic


    Does the End Justify the Means? On the Moral Justification of Fairness-Aware Machine Learning

    Despite an abundance of fairness-aware machine learning (fair-ml) algorithms, the moral justification of how these algorithms enforce fairness metrics is largely unexplored. The goal of this paper is to elicit the moral implications of a fair-ml algorithm. To this end, we first consider the moral justification of the fairness metrics for which the algorithm optimizes. We present an extension of previous work to arrive at three propositions that can justify the fairness metrics. Unlike previous work, our extension highlights that the consequences of predicted outcomes are important for judging fairness. We draw from the extended framework and empirical ethics to identify moral implications of the fair-ml algorithm. We focus on the two optimization strategies inherent to the algorithm: group-specific decision thresholds and randomized decision thresholds. We argue that the justification of the algorithm can differ depending on one's assumptions about the (social) context in which the algorithm is applied - even if the associated fairness metric is the same. Finally, we sketch paths for future work towards a more complete evaluation of fair-ml algorithms, beyond their direct optimization objectives.
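
    A minimal Python sketch of the two optimization strategies the abstract names, group-specific decision thresholds and randomized decision thresholds; every function name and numeric value here is an illustrative assumption, not taken from the paper:

```python
import numpy as np

def group_specific_decision(score, group, thresholds):
    """Accept iff the score clears that group's own threshold."""
    return score >= thresholds[group]

def randomized_decision(score, group, thresholds, mix_prob, rng):
    """Randomize between a lower and an upper per-group threshold.

    With probability mix_prob[group] the lower threshold applies,
    otherwise the upper one; mixing lets a group's positive rate reach
    values between those of the two deterministic thresholds.
    """
    lo, hi = thresholds[group]
    t = lo if rng.random() < mix_prob[group] else hi
    return score >= t

# Illustrative usage with made-up scores, groups, and thresholds.
rng = np.random.default_rng(0)
print(group_specific_decision(0.62, "a", {"a": 0.6, "b": 0.5}))  # True
print(randomized_decision(0.55, "b",
                          {"a": (0.5, 0.7), "b": (0.4, 0.6)},
                          {"a": 0.3, "b": 0.5}, rng))  # depends on the draw
```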

    De robot die altijd raak schiet (The robot that always hits its target)

    No abstract available.

    Ethical issues in web data mining
