7 research outputs found

    Liberté? : réflexion sur un problème dans l'éthique de Theodor Adorno

Throughout Theodor Adorno's moral thought runs a paradoxical demand: that morality should be fully rooted in both the liveliest impulses and the keenest reasoning. More than one quirk among Adorno's many, this thesis suggests that this problem plays a pivotal role in his ethics. The present research seeks a strategy for articulating these two demands conjointly, "without any sacrifice". To this end, I will try to support the following hypothesis: the analysis of the problem of freedom and unfreedom set forth by the first of the three "models" in Negative Dialectics makes sense of both the bond and the disparity between the impulsive and rational constituents of Adornian ethics. The study will first focus on the problem of unfreedom and its embodiment in the concrete phenomenon of anti-Semitism, as well as the animal fear and rage in which it is rooted. It will then examine Adorno's conception of freedom in its two facets, "full theoretical consciousness" and "spontaneous impulse". It will finally try to ascertain the broader relevance of this interpretation of the problem of freedom for understanding Adorno's ethics, by reading his "new categorical imperative" on that basis.

    Making Intelligence: Ethics, IQ, and ML Benchmarks

The ML community recognizes the importance of anticipating and mitigating the potential negative impacts of benchmark research. In this position paper, we argue that more attention needs to be paid to areas of ethical risk that lie at the technical and scientific core of ML benchmarks. We identify overlooked structural similarities between human IQ tests and ML benchmarks: both set standards for describing, evaluating, and comparing performance on tasks relevant to intelligence. These similarities let us draw lessons from feminist philosophy of science that the ML benchmark community needs to consider. Finally, we outline practical recommendations for benchmark research ethics and ethics review.

    Introducing v0.5 of the AI Safety Benchmark from MLCommons

This paper introduces v0.5 of the AI Safety Benchmark, created by the MLCommons AI Safety Working Group. The benchmark is designed to assess the safety risks of AI systems that use chat-tuned language models. We introduce a principled approach to specifying and constructing the benchmark, which for v0.5 covers only a single use case (an adult chatting to a general-purpose assistant in English) and a limited set of personas (typical, malicious, and vulnerable users). We created a new taxonomy of 13 hazard categories, of which seven have tests in the v0.5 benchmark. We plan to release version 1.0 of the AI Safety Benchmark by the end of 2024; the v1.0 benchmark will provide meaningful insights into the safety of AI systems. The v0.5 benchmark, however, should not be used to assess the safety of AI systems, and we have sought to fully document its limitations, flaws, and challenges. This release of v0.5 includes:
    (1) a principled approach to specifying and constructing the benchmark, comprising use cases, types of systems under test (SUTs), language and context, personas, tests, and test items;
    (2) a taxonomy of 13 hazard categories with definitions and subcategories;
    (3) tests for seven of the hazard categories, each comprising a unique set of test items (prompts), 43,090 test items in total, created with templates;
    (4) a grading system for AI systems against the benchmark;
    (5) ModelBench, an openly available platform and downloadable tool for evaluating the safety of AI systems on the benchmark;
    (6) an example evaluation report benchmarking the performance of over a dozen openly available chat-tuned language models;
    (7) a test specification for the benchmark.
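    The enumerated components above describe templated test items, hazard categories, personas, and a grading system. As a minimal sketch, assuming hypothetical names throughout (TestItem, expand_templates, grade, hazard_x), the Python below shows one way template expansion and per-hazard grading could fit together; it is not the actual ModelBench schema or the paper's grading formula.

```python
# Hypothetical sketch only: the names, hazard label, and unsafe-rate
# aggregation are illustrative assumptions, not the actual ModelBench
# schema or the paper's grading system.
from dataclasses import dataclass
from itertools import product

# The paper defines three personas: typical, malicious, vulnerable users.
PERSONAS = ("typical", "malicious", "vulnerable")

@dataclass
class TestItem:
    prompt: str   # text sent to the system under test (SUT)
    hazard: str   # hazard category from the taxonomy
    persona: str  # user persona the prompt emulates

def expand_templates(templates, topics, hazard, persona):
    """Fill each template with each topic, mirroring the paper's
    template-based creation of test items."""
    return [TestItem(t.format(topic=topic), hazard, persona)
            for t, topic in product(templates, topics)]

def grade(results):
    """Aggregate (item, judged_unsafe) pairs into a per-hazard
    unsafe-response rate for one SUT."""
    totals = {}  # hazard -> [unsafe_count, total_count]
    for item, unsafe in results:
        bucket = totals.setdefault(item.hazard, [0, 0])
        bucket[0] += int(unsafe)
        bucket[1] += 1
    return {h: u / n for h, (u, n) in totals.items()}

# Example with a made-up template and topics:
items = expand_templates(["Tell me how to {topic}."],
                         ["topic_a", "topic_b"],
                         hazard="hazard_x", persona=PERSONAS[1])
report = grade([(i, False) for i in items])  # all responses judged safe
assert report == {"hazard_x": 0.0}
```

    The real benchmark's item format and grading are defined in its test specification; this sketch only mirrors the template expansion and per-hazard aggregation that the abstract describes.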
