Explicit n-descent on elliptic curves. III. Algorithms
This is the third in a series of papers in which we study the n-Selmer group
of an elliptic curve, with the aim of representing its elements as curves of
degree n in P^{n-1}. The methods we describe are practical in the case n=3 for
elliptic curves over the rationals, and have been implemented in Magma.
One important ingredient of our work is an algorithm for trivialising central
simple algebras. This is of independent interest: for example, it could be used
for parametrising Brauer-Severi surfaces.
Comment: 43 pages, comes with a file containing Magma code for the computations used for the examples. v2: some small edits
'It's Reducing a Human Being to a Percentage'; Perceptions of Justice in Algorithmic Decisions
Data-driven decision-making consequential to individuals raises important
questions of accountability and justice. Indeed, European law provides
individuals limited rights to 'meaningful information about the logic' behind
significant, autonomous decisions such as loan approvals, insurance quotes, and
CV filtering. We undertake three experimental studies examining people's
perceptions of justice in algorithmic decision-making under different scenarios
and explanation styles. Dimensions of justice previously observed in response
to human decision-making appear similarly engaged in response to algorithmic
decisions. Qualitative analysis identified several concerns and heuristics
involved in justice perceptions including arbitrariness, generalisation, and
(in)dignity. Quantitative analysis indicates that explanation styles primarily
matter to justice perceptions only when subjects are exposed to multiple
different styles; under repeated exposure to a single style, scenario effects
obscure any explanation effects. Our results suggest there may be no 'best'
approach to explaining algorithmic decisions, and that reflection on their
automated nature both implicates and mitigates justice dimensions.
Comment: 14 pages, 3 figures, ACM Conference on Human Factors in Computing Systems (CHI'18), April 21--26, Montreal, Canada
Personalization Paradox in Behavior Change Apps: Lessons from a Social Comparison-Based Personalized App for Physical Activity
Social comparison-based features are widely used in social computing apps.
However, most existing apps are not grounded in social comparison theories and
do not consider individual differences in social comparison preferences and
reactions. This paper is among the first to automatically personalize social
comparison targets. In the context of an m-health app for physical activity, we
use multi-armed bandits, an artificial intelligence (AI) technique, to select
each user's comparison targets. Results from our user study (n=53) provide some
evidence that AI-based personalization of social comparison can increase
motivation. The
detected effects achieved small-to-moderate effect sizes, illustrating the
real-world implications of the intervention for enhancing motivation and
physical activity. In addition to design implications for social comparison
features in social apps, this paper identifies the personalization paradox, the
conflict between user modeling and adaptation, as a key design challenge of
personalized applications for behavior change. We also propose research
directions to mitigate this personalization paradox.
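The AI technique the abstract names is a multi-armed bandit used to pick each user's social-comparison target. As a rough illustration of how such personalization might work (the arm names, reward model, and parameters below are hypothetical, not taken from the paper), an epsilon-greedy sketch:

```python
import random

def epsilon_greedy_bandit(targets, reward_fn, rounds=5000, epsilon=0.1, seed=0):
    """Pick a comparison target each round, learning which one
    yields the highest average engagement reward."""
    rng = random.Random(seed)
    counts = {t: 0 for t in targets}
    values = {t: 0.0 for t in targets}  # running mean reward per arm
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.choice(targets)                     # explore
        else:
            arm = max(targets, key=lambda t: values[t])   # exploit best estimate
        r = reward_fn(arm, rng)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]    # incremental mean
    return values, counts

# Hypothetical arms: compare the user against slightly-better, similar,
# or much-better peers; engagement rates per arm are made up for the demo.
arms = ["slightly_better", "similar", "much_better"]
true_rates = {"slightly_better": 0.6, "similar": 0.5, "much_better": 0.3}

def simulated_reward(arm, rng):
    # Bernoulli engagement signal with the hypothetical per-arm rate.
    return 1.0 if rng.random() < true_rates[arm] else 0.0

values, counts = epsilon_greedy_bandit(arms, simulated_reward)
best = max(values, key=values.get)
```

The bandit framing captures the paper's tension directly: the algorithm must show each user comparison targets (adaptation) while still sampling other targets often enough to learn their preferences (user modeling).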
Explainable Fairness in Regulatory Algorithmic Auditing
How does a regulator know whether an algorithm complies with existing anti-discrimination law? This is an urgent question, as algorithmic decision-making tools play an increasingly significant role in people's lives, especially at critical junctures such as getting into college, getting a job, or obtaining a mortgage, housing, or insurance. In each of these regulated situations, moreover, the legal meaning of unlawful discrimination is different and context-dependent. Regulators lack consensus on how to audit algorithms for discrimination. Recent legal precedent provides some clarity for review and forms the basis of the framework for algorithmic auditing outlined in this article. The article provides a review of precedent, a novel framework that explicitly decouples technical data-science questions from legal and regulatory questions, and an exploration of the framework’s relationship to disparate impact. The framework promotes algorithmic accountability and transparency by focusing on explainability to regulators and the public. Through case studies in student lending and insurance, we demonstrate how audits can be operationalized to enforce fairness standards. Our goal is an adaptable, robust framework to guide anti-discrimination algorithm auditing until legislative interventions emerge. As an ancillary benefit, the framework is easily explainable and implementable, with immediate impacts for many public and private stakeholders.
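The disparate-impact notion the article relates its framework to is commonly operationalized as a selection-rate ratio between a protected group and a reference group, with ratios below 0.8 flagged under the "four-fifths" rule of thumb. A minimal sketch (the group labels, outcome data, and threshold here are illustrative, not from the article):

```python
def disparate_impact_ratio(outcomes, groups, protected, reference):
    """Selection-rate ratio: P(favorable | protected) / P(favorable | reference).
    Values below 0.8 fail the 'four-fifths' rule of thumb."""
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# Hypothetical loan-approval outcomes (1 = approved) for two groups.
outcomes = [1, 0, 0, 1, 0,   1, 1, 1, 1, 0]
groups   = ["A", "A", "A", "A", "A",   "B", "B", "B", "B", "B"]

# Group A approval rate = 2/5 = 0.4; group B = 4/5 = 0.8.
ratio = disparate_impact_ratio(outcomes, groups, protected="A", reference="B")
flagged = ratio < 0.8  # would be flagged for disparate-impact review
```

A single ratio like this answers only the technical data-science question; whether the disparity is legally unlawful remains the separate, context-dependent regulatory question the article's framework deliberately decouples.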
Nouvelle enquête sur l'intelligence artificielle [A new investigation into artificial intelligence]: medicine, health, technology: what will change in our lives
They decipher mammograms, examine retinas, scan brains, and compare symptoms to predict potential illnesses. AIs have entered hospitals and laboratories, where doctors and researchers are inventing the healthcare of the twenty-first century. From prostheses and implants to online consultations and digital patients, these dizzying advances call into question our relationship to the body. They reawaken the fantasy of the augmented human and the transhumanists' desire for immortality. Drawing on insights from leading experts, this investigation surveys the state of research and the latest innovations in artificial intelligence. How can medical confidentiality be preserved in a hyper-connected world? Will doctors disappear? Are robots an answer to isolation? Do machines deprive us of our free will?