
    The Reasonableness Machine

    Automation might someday allow for the inexpensive creation of highly contextualized and effective laws. If that ever comes to pass, however, it will not be on a blank slate. Proponents will face the question of how to computerize bedrock aspects of our existing law, some of which are legal standards—norms that use evaluative, even moral, criteria. Conventional wisdom says that standards are difficult to translate into computer code because they do not present clear operational mechanisms to follow. If that wisdom holds, one could reasonably doubt that legal automation will ever get off the ground. Conventional wisdom, however, fails to account for the interpretive freedom that standards provide. Their murkiness makes them fertile ground for the growth of competing explanations of their legal meaning. Some of those readings might be more rule-like than others. Proponents of automation will likely be drawn to those rule-like interpretations, so long as they are compatible enough with existing law. This complex dynamic between computer-friendliness and legal interpretation makes it difficult for legislators to identify the variable and fixed costs of automation. This Article aims to shed light on this relationship by focusing our attention on a quintessential legal standard at the center of our legal system—the Reasonably Prudent Person Test. Here, I explain how automation proponents might be tempted by fringe, formulaic interpretations of the test, such as Averageness, because they bring comparatively low innovation costs. With time, however, technological advancement will likely drive down innovation costs, and mainstream interpretations, like Conventionalism, could find favor again. Regardless of the interpretation that proponents favor, though, an unavoidable fixed cost looms: by replacing the jurors who apply the test with a machine, they will eliminate a long-valued avenue for participatory and deliberative democracy.

    Centrality computation and community structure identification in document graphs

    In this thesis, we are interested in characterizing large collections of documents (using the links between them) in order to facilitate their use and exploitation by humans or by software tools. First, we addressed the problem of centrality computation in document graphs. We described the main existing centrality algorithms, focusing on the TKC (Tightly Knit Community) problem that affects most recent centrality measures. We then proposed three new centrality algorithms (MHITS, NHITS and DocRank) that tackle the TKC effect. The proposed algorithms were evaluated and compared to existing approaches using several graphs and evaluation measures; evaluation criteria were also proposed to quantify the TKC effect. Second, we investigated the problem of unsupervised document clustering. Specifically, we framed this clustering as a task of community structure identification (CSI) in document graphs. We described the main existing CSI approaches, distinguishing those based on a generative model from algorithmic or classical ones. We then proposed a generative model (SPCE), based on smoothing and on an appropriate initialization, for CSI in sparse graphs. The SPCE model was evaluated and validated by comparing it to other CSI approaches. Finally, we showed that the SPCE model can be extended to take into account both the links and the content of documents simultaneously.
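    For orientation, the sketch below implements the classical HITS iteration on a small document graph. This is the family of link-based centrality measures the abstract builds on, not the thesis's MHITS, NHITS, or DocRank variants, whose details are not given here; the graph, the networkx/numpy dependencies, and the function names are illustrative assumptions. The toy example shows how a tightly knit community of mutually linking documents can dominate the authority scores, which is the TKC effect the proposed algorithms aim to counter.

```python
# Minimal sketch of the classical HITS iteration on a directed document graph.
# Assumption: this is the baseline the thesis's variants modify; it is NOT
# an implementation of MHITS, NHITS, or DocRank.
import numpy as np
import networkx as nx

def hits_scores(graph, iterations=50):
    """Return (hub, authority) score dicts for a directed document graph."""
    nodes = list(graph.nodes())
    index = {n: i for i, n in enumerate(nodes)}
    # Adjacency matrix A: A[i, j] = 1 if document i links to document j.
    A = np.zeros((len(nodes), len(nodes)))
    for u, v in graph.edges():
        A[index[u], index[v]] = 1.0

    hubs = np.ones(len(nodes))
    auths = np.ones(len(nodes))
    for _ in range(iterations):
        auths = A.T @ hubs               # authority: pointed to by good hubs
        hubs = A @ auths                 # hub: points to good authorities
        auths /= np.linalg.norm(auths)   # normalize to avoid overflow
        hubs /= np.linalg.norm(hubs)

    return ({n: hubs[index[n]] for n in nodes},
            {n: auths[index[n]] for n in nodes})

# Toy graph: d1, d2, d3 form a tightly knit community that links among itself,
# while d4 only receives a single link from outside it. The community's
# documents dominate the authority ranking, illustrating the TKC effect.
g = nx.DiGraph([("d1", "d2"), ("d2", "d3"), ("d3", "d1"),
                ("d1", "d3"), ("d2", "d1"), ("d3", "d2"),
                ("d5", "d4")])
hub, auth = hits_scores(g)
print(sorted(auth.items(), key=lambda kv: -kv[1]))
```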