Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems
Automated decision making is becoming the norm across large parts of society, which raises
interesting liability challenges when human control over technical systems becomes increasingly
limited. This article defines "quasi-automation" as the inclusion of humans as a basic rubber-stamping
mechanism in an otherwise completely automated decision-making system. Three cases of quasi-
automation are examined, where human agency in decision making is currently debatable: self-
driving cars, border searches based on passenger name records, and content moderation on social
media. While there are specific regulatory mechanisms for purely automated decision making, these
mechanisms do not apply if human beings are merely rubber-stamping automated decisions.
More broadly, most regulatory mechanisms follow a pattern of binary liability in attempting to
regulate human or machine agency, rather than looking to regulate both. This results in regulatory
gray areas where the regulatory mechanisms do not apply, harming human rights by preventing
meaningful liability for socio-technical decision making. The article concludes by proposing criteria
to ensure meaningful agency when humans are included in automated decision-making systems,
and relates this to the ongoing debate on enabling human rights in Internet infrastructure.
Data Mining and Predictive Policing
This paper focuses on the operation and use of predictive policing software that generates spatial and temporal hotspots. A literature review evaluates previous work on topics branching from predictive policing. The paper dissects two crime datasets, for San Francisco, California and Chicago, Illinois, and provides an in-depth comparison between them using both statistical analysis and graphing tools. It then applies the Apriori algorithm to reinforce the formation of possible hotspots identified by an actual predictive policing software package. To further the analysis, the demographics targeted in the study were evaluated to create a snapshot of the factors that have contributed to the safety of the neighborhoods. The results of this study can be used to create solutions for long-term crime reduction by adding green spaces and community planning in areas with high crime rates and heavy environmental neglect.
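The Apriori algorithm mentioned in this abstract mines itemsets of attributes that frequently co-occur across records. The paper does not publish its code, so the following is only a minimal sketch of the technique on hypothetical incident data; the attribute names, dataset, and support threshold are all illustrative assumptions, not the paper's actual setup.

```python
# Minimal Apriori implementation (standard library only). Incident records
# are modelled as sets of categorical attributes; attribute combinations
# present in at least `min_support` of all records are "frequent" and can
# hint at recurring crime patterns.

def support(itemset, transactions):
    """Fraction of transactions that contain every item in `itemset`."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def apriori(transactions, min_support):
    """Return a dict mapping each frequent itemset to its support."""
    transactions = [frozenset(t) for t in transactions]
    items = {frozenset([i]) for t in transactions for i in t}
    current = {s for s in items if support(s, transactions) >= min_support}
    frequent = {}
    k = 1
    while current:
        frequent.update({s: support(s, transactions) for s in current})
        k += 1
        # Join step: every union of two frequent k-itemsets that has
        # exactly k+1 elements is a candidate; keep only the frequent ones.
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        current = {c for c in candidates if support(c, transactions) >= min_support}
    return frequent

# Hypothetical toy incidents (attribute values are illustrative only).
incidents = [
    {"theft", "downtown", "night"},
    {"theft", "downtown", "night"},
    {"assault", "downtown", "night"},
    {"theft", "suburb", "day"},
]
frequent = apriori(incidents, min_support=0.5)
# {"downtown", "night"} co-occurs in 3 of 4 incidents -> support 0.75
```

Each frequent itemset such as {"downtown", "night"} marks a candidate spatio-temporal pattern; in the paper's setting these would be checked against the hotspots produced by the policing software.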
Data criticality
The data moment, we argue, is not a single event, but a multiplicity of encounters that reveal what we call "data criticality". Data criticality draws our attention to those moments of deciding whether and how data will exist, thus rendering data critically relevant to a societal context and imbuing data with "liveliness" and agency. These encounters, we argue, also require our critical engagement. First, we develop and theorize our argument about data criticality. Second, by using predictive policing as an example, we present six moments of data criticality. A description of how data is imagined, generated, stored, selected, processed, and reused invites our reflections about data criticality within a broader range of data practices.
What is wrong about Robocops as consultants? A technology-centric critique of predictive policing
Fighting crime has historically been a field that drives technological innovation, and it can serve as an example of different governance styles in societies. Predictive policing is one of the recent innovations that covers technical trends such as machine learning, preventive crime fighting strategies, and actual policing in cities. However, it seems that a combination of exaggerated hopes produced by technology evangelists, media hype, and ignorance of the actual problems of the technology may have (over-)boosted sales of software that supports policing by predicting offenders and crime areas. In this paper we analyse currently used predictive policing software packages with respect to common problems of data mining, and describe challenges that arise in the context of their socio-technical application.
Resolving identity and grouping digital evidence on suspects using face recognition technologies and a system of software intelligent agents based on non-axiomatic reasoning: doctoral dissertation
The work of criminal police in modern society is characterized by the proliferation of data
and information to be processed, greater demands for restrictions on the handling of personal data,
increased public monitoring, and higher expectations of efficiency in detecting perpetrators, yet also
by a persistent lack of resources, both human and material. One of the more complex tasks is
resolving an identity whose change is intended to conceal criminal activities, or the perpetrator himself, who is on the run.
In order to resolve the identity, it is necessary to group and present all available evidence
related to specific persons. The thesis proposes a clustering approach by comparing pairs of face
feature vectors extracted from images created in unconstrained conditions and based on reasoning
using non-axiomatic logic and graphs. Face clusters will be the central points around which data
from various police reports will be grouped. A system model has also been proposed in which
software agents play a significant role, primarily in connecting the nodes of the distributed
environment formed in practice by police information systems.
The clustering approach was experimentally tested with six different face image databases
characterized by the fact that they were created in a way that simulates unconstrained conditions.
The obtained results of the proposed solution are compared with other state-of-the-art methods. The
results showed that the approach gives similar but mostly better results than the others. What gives a
notable advantage over other methods is the possibility of using mechanisms from non-axiomatic
logic such as revision and deduction, which can be used to acquire new knowledge based on
information from different system nodes, or in the local knowledge base, respectively.
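The core idea of the thesis's clustering approach, comparing pairs of face feature vectors and grouping faces through a graph, can be sketched in a few lines. Note that this is a simplified stand-in: plain cosine similarity with a fixed threshold and union-find connected components replaces the thesis's non-axiomatic-logic reasoning layer, and the embeddings and threshold below are illustrative assumptions.

```python
# Sketch of pairwise face clustering: faces whose embedding vectors are
# sufficiently similar are linked, and connected components of the
# resulting graph become identity clusters (union-find). The thesis's
# non-axiomatic reasoning is NOT modelled here; a cosine-similarity
# threshold stands in for it.
import math
from itertools import combinations

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def cluster_faces(embeddings, threshold=0.8):
    """Group embedding indices into clusters of presumed-same identities."""
    parent = list(range(len(embeddings)))

    def find(i):  # find the root of i's cluster, with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Link every pair whose similarity clears the threshold.
    for i, j in combinations(range(len(embeddings)), 2):
        if cosine(embeddings[i], embeddings[j]) >= threshold:
            parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(embeddings)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

In the system model described above, each resulting cluster would act as the central point around which data from different police reports is grouped.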
Big Data as a Technology of Power
The growing importance of big data in contemporary society raises significant and urgent ethical questions. In the academic literature and in the media, the dominant response to many of these ethical questions is to re-examine the role and importance of privacy protections, but I argue that it is far more fruitful to investigate the relationship between power and big data. As algorithmic processes are increasingly used in decision-making processes, it is crucial that we understand the ways in which big data can be used as a technology of power. Only then can we properly understand the ways in which the use of big data impacts on and reorganises society, and go on to develop effective, tailored protections for individuals against harm from the use of big data. First, I show that the rise of big data highlights the limits of privacy protections, as big data-based analytics allow for personal information to be inferred in ways that circumvent privacy protections and problematise the category of personal information. In order to properly protect people from the potential harms that can arise from the use of big data in decision making, I argue that we must also examine the relationship between big data and power. In this thesis, I will present an argument for a pluralistic understanding of power, and a lens through which we can identify the kinds of power being exercised in the contexts we are investigating. Power is best understood as an umbrella term that refers to a diverse range of phenomena across an equally diverse range of domains or contexts. We can use this approach to examine the central features of an exercise of power to identify the relevant theoretical accounts of power to draw on in understanding the modes of power present in a context.
In Chapter 4, I will demonstrate the value of this approach by using it to analyse four contexts where big data is used as a technology of power, showing that we cannot use a single theoretical understanding of power across all exercises of power. Following this, I examine the impacts of big data on the operation of power. While many in the literature see big data as necessitating the development of new theoretical understandings of power, I argue that there are important historical continuities in power. Big data can be picked up and used as part of existing kinds of power just as any new technology can, and while this may change the efficiency, range, and effectiveness of exercises of power, it does not change their fundamental nature. However, there are impacts on the operation of power that are unique to big data, and one impact I consider here is that the inferential capabilities of big data shift power from acting on human subjects and towards acting on data doubles (fragmentary digital representations of people). This leads to significant ethical problems with ensuring that power is exercised accountably. Finally, I will demonstrate these problems in Chapter 7 through examining four more contexts in which big data is used as a technology of power, showing how the shift to the data double as the subject of power undermines the effectiveness of accountability as a check on the abuse of power.
Rule of law and human rights issues in social media content moderation
This thesis explores the content moderation process at social media companies. This process is divided into three distinct stages: Creation (the production of terms and conditions), Enforcement (the enforcement of those rules), and Response (the use of both internal and external methods of appeal to enact change). It explains how content moderation occurs and identifies a number of serious issues for both human rights and the rule of law in the current approach. It also proposes a variety of solutions for both small-scale and broader reform and argues for a regulatory approach grounded in procedural rule of law principles and mandatory human rights due diligence.
Legitimität, Sicherheit, Autonomie: Eine philosophische Analyse der EU-Sicherheitspolitik im Kontext der Digitalisierung
This book examines the current dynamic and momentous development of European security policy. In doing so, it provides an important and original contribution both to practical philosophy and to the fields of security studies and European studies. Using concrete analyses and by offering possible solutions to certain problems, the book develops an approach that is embedded in reality and which, at the same time, insists on theory and normativity. It focuses on the characteristics of new security technologies and ways in which security is understood, as well as their influence on the 'Copernican Revolution' of the modern age, through which individuals and the protection of their fundamental rights have become the focus of political legitimation.