Consent to Targeted Advertising
Targeted advertising in digital markets involves multiple actors collecting, exchanging, and processing personal data for the purpose of capturing users’ attention in online environments. This ecosystem has given rise to considerable adverse effects on individuals and society, resulting from mass surveillance, the manipulation of choices and opinions, and the spread of addictive or fake messages. Against this background, this article critically discusses the regulation of consent in online targeted advertising. To this end, we review EU laws and proposals and consider the extent to which a requirement of informed consent may provide effective consumer protection. On the basis of such an analysis, we make suggestions for possible avenues that may be pursued
Algorithmic fairness through group parities? The case of COMPAS-SAPMOC
Machine learning classifiers are increasingly used to inform, or even make, decisions significantly affecting human lives. Fairness concerns have spawned a number of contributions aimed at both identifying and addressing unfairness in algorithmic decision-making. This paper critically discusses the adoption of group-parity criteria (e.g., demographic parity, equality of opportunity, treatment equality) as fairness standards. To this end, we evaluate the use of machine learning methods relative to different steps of the decision-making process: assigning a predictive score, linking a classification to the score, and adopting decisions based on the classification. Throughout our inquiry we use the COMPAS system, complemented by a radical simplification of it (our SAPMOC I and SAPMOC II models), as our running examples. Through these examples, we show how a system that is equally accurate for different groups may fail to comply with group-parity standards, owing to different base rates in the population. We discuss the general properties of the statistics determining the satisfaction of group-parity criteria and levels of accuracy. Using the distinction between scoring, classifying, and deciding, we argue that equalisation of classifications/decisions between groups can be achieved through group-dependent thresholding. We discuss contexts in which this approach may be meaningful and useful in pursuing policy objectives. We claim that the implementation of group-parity standards should be left to competent human decision-makers, under appropriate scrutiny, since it involves discretionary value-based political choices. Accordingly, predictive systems should be designed in such a way that relevant policy goals can be transparently implemented.
Our paper presents three main contributions: (1) it addresses a complex predictive system through the lens of simplified toy models; (2) it argues for selective policy interventions on the different steps of automated decision-making; (3) it points to the limited significance of statistical notions of fairness to achieve social goals
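The base-rate effect the abstract describes can be sketched numerically (the numbers below are hypothetical illustrations, not figures from the paper): a classifier that is equally accurate for two groups still produces different positive-classification rates when the groups' base rates differ, so demographic parity fails unless per-group operating points (group-dependent thresholds) are adopted.

```python
def positive_rate(base_rate, tpr, fpr):
    """Share of a group receiving a positive classification."""
    return tpr * base_rate + fpr * (1 - base_rate)

def accuracy(base_rate, tpr, tnr):
    """Overall accuracy for a group with the given base rate."""
    return tpr * base_rate + tnr * (1 - base_rate)

# Hypothetical groups: identical classifier quality, different base rates
tpr, tnr = 0.9, 0.9
fpr = 1 - tnr

acc_a = accuracy(0.5, tpr, tnr)        # group A, base rate 0.5 -> 0.9
acc_b = accuracy(0.2, tpr, tnr)        # group B, base rate 0.2 -> 0.9
pr_a = positive_rate(0.5, tpr, fpr)    # 0.50
pr_b = positive_rate(0.2, tpr, fpr)    # 0.26

# Equal accuracy, unequal positive rates: equalising pr_a and pr_b would
# require a different threshold (hence different tpr/fpr) for each group.
```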
E-Health: Criminal Liability and Automation
This research thesis investigates the issues of criminal liability arising from the use of highly automated and artificial intelligence systems in e-health.
The investigation frames the health system within a socio-technical perspective, paying specific attention to human-machine interaction, the level of automation involved, and the concepts of error and risk management.
Specific areas of interest are examined in depth: criminal liability for harm caused by defective medical devices; medical liability connected with the use of highly automated systems affected by defects; and, in particular, criminal liability linked to the use of artificial intelligence systems, together with the models developed by legal doctrine to regulate this phenomenon. The following models are analysed: the zoological model, the perpetration-through-another model, the natural-and-probable-consequences model, and the direct liability model. The thesis also examines whether an intelligent autonomous agent can satisfy the requirements of actus reus and mens rea, as necessary conditions for the attribution of criminal liability, when an AI engages in conduct that abstractly matches a criminal offence.
Liability profiles are analysed on the basis of real-world cases and scenarios. Finally, possible solutions and remedies are proposed, also in light of the theory of normative agents
No More Trade-Offs. GPT and Fully Informative Privacy Policies
The paper reports the results of an experiment aimed at testing to what extent ChatGPT 3.5 and 4 are able to answer questions regarding privacy policies designed in the new format that we propose. In a world of human-only interpreters, there was a trade-off between the comprehensiveness and the comprehensibility of privacy policies, leading to actual policies not containing enough information for users to learn anything meaningful. Having shown that GPT performs relatively well with the new format, we provide experimental evidence supporting our policy suggestion, namely that the law should require fully comprehensive privacy policies, even if this means they become less concise
Defeasible Systems in Legal Reasoning: A Comparative Assessment
Different formalisms for defeasible reasoning have been used to represent legal knowledge and to reason with it. In this work, we provide an overview of the following logic-based approaches to defeasible reasoning: Defeasible Logic, Answer Set Programming, ABA+, ASPIC+, and DeLP. We compare features of these approaches from three perspectives: the logical model (knowledge representation), the method (computational mechanisms), and the technology (available software). On this basis, we identify and apply criteria for assessing their suitability for legal applications. We discuss the different approaches through a legal running example
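The core idea shared by the formalisms compared in this work is that a defeasible conclusion holds only if every applicable counter-rule is defeated by a superior rule. A heavily simplified sketch of that mechanism (the rule names, literals, and superiority relation below are invented for illustration, and the logic is a much-reduced variant of defeasible inference, not any of the cited systems):

```python
# Each defeasible rule: (name, antecedents, conclusion); "~p" is the negation of p
rules = [
    ("r1", {"contract_signed"}, "obligation"),
    ("r2", {"contract_signed", "party_minor"}, "~obligation"),
]
# Superiority relation: the more specific rule r2 defeats r1
superior = {("r2", "r1")}

def conclude(facts, literal):
    """Defeasibly derive `literal`: some applicable rule supports it, and every
    applicable rule for the opposite conclusion is defeated by a supporting rule."""
    neg = literal[1:] if literal.startswith("~") else "~" + literal
    pro = [n for n, ante, head in rules if head == literal and ante <= facts]
    con = [n for n, ante, head in rules if head == neg and ante <= facts]
    return bool(pro) and all(any((p, c) in superior for p in pro) for c in con)
```

With only `contract_signed` as a fact, `obligation` is derivable; adding `party_minor` makes r2 applicable, which defeats r1 and flips the conclusion to `~obligation`.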
Unsupervised Factor Extraction from Pretrial Detention Decisions by Italian and Brazilian Supreme Courts
Pretrial detention is a debated and controversial measure since it is an exception to the principle of the presumption of innocence. To determine whether and to what extent legal systems make excessive use of pretrial detention, an empirical analysis of judicial practice is needed. The paper presents some preliminary results of experimental research aimed at identifying the relevant factors on the basis of which Italian and Brazilian Supreme Courts impose the measure. To analyze and extract the relevant predictive features, we rely on unsupervised learning approaches, in particular association and clustering methods. As a result, we found common factors between the two legal systems in terms of crime, location, grounds for appeal, and judge's reasoning
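The association-mining step can be illustrated as frequent co-occurrence counting over decisions tagged with factors. The decisions and factor names below are toy data invented for illustration, not the courts' actual factors:

```python
from itertools import combinations
from collections import Counter

# Toy decisions, each tagged with hypothetical factors
decisions = [
    {"drug_crime", "flight_risk", "prior_record"},
    {"drug_crime", "flight_risk"},
    {"violent_crime", "prior_record"},
    {"drug_crime", "flight_risk", "weak_grounds"},
]

def frequent_pairs(transactions, min_support=0.5):
    """Return factor pairs co-occurring in at least min_support of decisions."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            counts[pair] += 1
    n = len(transactions)
    return {p: c / n for p, c in counts.items() if c / n >= min_support}
```

On this toy data only the pair ("drug_crime", "flight_risk") clears a 0.5 support threshold, appearing in three of four decisions.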
Combining WordNet and Word Embeddings in Data Augmentation for Legal Texts
Creating balanced labeled textual corpora for complex tasks, like legal analysis, is a challenging and expensive process that often requires the collaboration of domain experts. To address this problem, we propose a data augmentation method based on the combination of GloVe word embeddings and the WordNet ontology. We present an example of application in the legal domain, specifically on decisions of the Court of Justice of the European Union. Our evaluation with human experts confirms that our method is more robust than the alternatives
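The combination the abstract describes can be sketched as follows: take candidate synonyms from an ontology, then keep only those whose embedding is close to the original word, filtering out sense mismatches. The synonym map, vectors, and threshold below are toy stand-ins for WordNet and GloVe, invented for illustration:

```python
import math

# Toy stand-ins (assumptions): a WordNet-style synonym map and GloVe-style vectors
SYNONYMS = {"court": ["tribunal", "courtroom", "romance"]}
VECTORS = {
    "court":     [0.9, 0.1, 0.0],
    "tribunal":  [0.85, 0.15, 0.05],
    "courtroom": [0.8, 0.2, 0.1],
    "romance":   [0.0, 0.1, 0.9],  # a WordNet sense unrelated in this context
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def augment(word, threshold=0.7):
    """Keep only ontology synonyms whose embedding stays close to the original."""
    return [s for s in SYNONYMS.get(word, [])
            if s in VECTORS and cosine(VECTORS[word], VECTORS[s]) >= threshold]
```

Here "romance" (a valid WordNet sense of "court" as a verb) is rejected by the embedding filter, while "tribunal" and "courtroom" survive as replacements.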
Argumentation and Defeasible Reasoning in the Law
Different formalisms for defeasible reasoning have been used to represent knowledge and reason in the legal field. In this work, we provide an overview of the following logic-based approaches to defeasible reasoning: defeasible logic, Answer Set Programming, ABA+, ASPIC+, and DeLP. We compare features of these approaches under three perspectives: the logical model (knowledge representation), the method (computational mechanisms), and the technology (available software resources). On top of that, two real examples in the legal domain are designed and implemented in ASPIC+ to showcase the benefit of an argumentation approach in real-world domains. The CrossJustice and Interlex projects are taken as a testbed, and experiments are conducted with the Arg2P technology
- …