Paradoxes in Fair Computer-Aided Decision Making
Computer-aided decision making--where a human decision-maker is aided by a
computational classifier in making a decision--is becoming increasingly
prevalent. For instance, judges in at least nine states make use of algorithmic
tools meant to determine "recidivism risk scores" for criminal defendants in
sentencing, parole, or bail decisions. A subject of much recent debate is
whether such algorithmic tools are "fair" in the sense that they do not
discriminate against certain groups (e.g., races) of people.
Our main result shows that for "non-trivial" computer-aided decision making,
either the classifier must be discriminatory, or a rational decision-maker
using the output of the classifier is forced to be discriminatory. We further
provide a complete characterization of situations where fair computer-aided
decision making is possible.
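The paper's impossibility result is formal, but a related and well-known arithmetic tension (in the style of Chouldechova's analysis of recidivism scores, not this paper's own construction) can be sketched numerically: if a risk score has equal positive predictive value and equal sensitivity across two groups whose base rates differ, their false-positive rates cannot match. The function name below is illustrative.

```python
def implied_fpr(prevalence: float, sensitivity: float, ppv: float) -> float:
    # From PPV = p*s / (p*s + (1-p)*f), solve for the false-positive rate f.
    p, s = prevalence, sensitivity
    return p * s * (1 - ppv) / ((1 - p) * ppv)

# Two groups with the same score quality (PPV 0.7, sensitivity 0.8)
# but different base rates of the outcome:
fpr_a = implied_fpr(0.3, 0.8, 0.7)  # base rate 30%
fpr_b = implied_fpr(0.6, 0.8, 0.7)  # base rate 60%
# fpr_b exceeds fpr_a: equalizing precision and recall across groups
# with unequal base rates forces unequal false-positive rates.
```

This is the kind of "non-trivial" regime the abstract refers to: once base rates differ, some fairness criterion must give way, either in the classifier or in the decision-maker using it.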
Algorithmic Statistics
While Kolmogorov complexity is the accepted absolute measure of information
content of an individual finite object, a similarly absolute notion is needed
for the relation between an individual data sample and an individual model
summarizing the information in the data, for example, a finite set (or
probability distribution) where the data sample typically came from. The
statistical theory based on such relations between individual objects can be
called algorithmic statistics, in contrast to classical statistical theory that
deals with relations between probabilistic ensembles. We develop the
algorithmic theory of statistic, sufficient statistic, and minimal sufficient
statistic. This theory is based on two-part codes consisting of the code for
the statistic (the model summarizing the regularity, the meaningful
information, in the data) and the model-to-data code. In contrast to the
situation in probabilistic statistical theory, the algorithmic relation of
(minimal) sufficiency is an absolute relation between the individual model and
the individual data sample. We distinguish implicit and explicit descriptions
of the models. We give characterizations of algorithmic (Kolmogorov) minimal
sufficient statistic for all data samples for both description modes--in the
explicit mode under some constraints. We also strengthen and elaborate earlier
results on the "Kolmogorov structure function" and "absolutely
non-stochastic objects"--those rare objects for which the simplest models that
summarize their relevant information (minimal sufficient statistics) are at
least as complex as the objects themselves. We demonstrate a close relation
between the probabilistic notions and the algorithmic ones.
Comment: LaTeX, 22 pages, 1 figure, with correction to the published journal version.
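The two-part code at the heart of the abstract can be made concrete with a computable stand-in: true Kolmogorov complexity is uncomputable, so the sketch below uses zlib compressed length as a crude proxy. The names and the particular model family are illustrative only.

```python
import math
import zlib

def approx_K(data: bytes) -> int:
    # Compressed length in bits: a crude, computable stand-in for the
    # (uncomputable) Kolmogorov complexity K(data).
    return 8 * len(zlib.compress(data, 9))

# Two-part code for x = pattern * N: first describe the model S -- here the
# finite set {pattern * k : 1 <= k <= N} -- then give the index of x within S.
pattern = b"ab"
N = 500
x = pattern * N

one_part = approx_K(x)  # direct description of the data sample
two_part = approx_K(pattern) + math.ceil(math.log2(N))  # code for the model
                                                        # + log2|S| index bits
```

The first term of `two_part` captures the "meaningful information" (the regularity), the second the model-to-data code; a good statistic keeps the first term small without inflating the second.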
Shannon Information and Kolmogorov Complexity
We compare the elementary theories of Shannon information and Kolmogorov
complexity, the extent to which they have a common purpose, and where they are
fundamentally different. We discuss and relate the basic notions of both
theories: Shannon entropy versus Kolmogorov complexity, the relation of both to
universal coding, Shannon mutual information versus Kolmogorov (`algorithmic')
mutual information, probabilistic sufficient statistic versus algorithmic
sufficient statistic (related to lossy compression in the Shannon theory versus
meaningful information in the Kolmogorov theory), and rate distortion theory
versus Kolmogorov's structure function. Part of the material has appeared in
print before, scattered through various publications, but this is the first
comprehensive systematic comparison. The last mentioned relations are new.
Comment: Survey, LaTeX, 54 pages, 3 figures. Submitted to IEEE Transactions on Information Theory.
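One contrast the survey draws can be shown in a few lines: Shannon entropy is a property of an ensemble (here estimated from the symbol histogram), while Kolmogorov complexity is a property of the individual string (here upper-bounded by a general-purpose compressor). A periodic string separates the two. Function names are illustrative.

```python
import math
import zlib
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    # Empirical order-0 Shannon entropy: the optimal rate for a memoryless
    # source with this symbol histogram.
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def compressed_bits_per_byte(data: bytes) -> float:
    # Compressed size per byte: a computable upper bound in the spirit of
    # the Kolmogorov complexity of this individual string.
    return 8 * len(zlib.compress(data, 9)) / len(data)

periodic = b"abcd" * 1000  # flat histogram (2 bits/byte) yet trivially regular
h = entropy_bits_per_byte(periodic)   # 2.0: the histogram sees pure "noise"
k = compressed_bits_per_byte(periodic)  # far below 2: the string is simple
```

A memoryless Shannon code cannot beat 2 bits per symbol here, yet the string's algorithmic description is tiny; this is the ensemble-versus-individual gap the survey systematizes.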
Equations for Hereditary Substitution in Leivant's Predicative System F: A Case Study
This paper presents a case study of formalizing a normalization proof for
Leivant's Predicative System F using the Equations package. Leivant's
Predicative System F is a stratified version of System F, where type
quantification is annotated with kinds representing universe levels. A weaker
variant of this system was studied by Stump & Eades, employing the hereditary
substitution method to show normalization. We improve on this result by showing
normalization for Leivant's original system using hereditary substitutions and
a novel multiset ordering on types. Our development is done in the Coq proof
assistant using the Equations package, which provides an interface to define
dependently-typed programs with well-founded recursion and full dependent
pattern-matching. Equations allows us to define explicitly the hereditary
substitution function, clarifying its algorithmic behavior in presence of term
and type substitutions. From this definition, consistency can easily be
derived. The algorithmic nature of our development is crucial to reflect
languages with type quantification, enlarging the class of languages on which
reflection methods can be used in the proof assistant.
Comment: In Proceedings LFMTP 2015, arXiv:1507.07597. www: http://equations-fpred.gforge.inria.fr
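The paper's development targets Leivant's Predicative System F in Coq; the core idea of hereditary substitution is visible already in the simply typed case, sketched below in Python. Substitution re-contracts any redex it creates, and the recursive contraction happens at a strictly smaller type, which is what makes the function terminating by construction. Names are illustrative, and capture is side-stepped by assuming all bound names are distinct.

```python
from dataclasses import dataclass

# Simple types
@dataclass(frozen=True)
class Base:
    pass

@dataclass(frozen=True)
class Arrow:
    src: object
    dst: object

# Beta-normal terms (Barendregt convention: all bound names distinct)
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def hsub(t, x, u, a):
    """Hereditarily substitute the normal term u (of type a) for x in the
    normal term t. Returns (result, ty): ty is the result's type when its
    head came from u (so a newly created redex can be contracted at a
    strictly smaller type), and None otherwise."""
    if isinstance(t, Var):
        return (u, a) if t.name == x else (t, None)
    if isinstance(t, Lam):
        body, _ = hsub(t.body, x, u, a)
        return Lam(t.param, body), None
    if isinstance(t, App):
        f, fty = hsub(t.fun, x, u, a)
        arg, _ = hsub(t.arg, x, u, a)
        if isinstance(f, Lam) and isinstance(fty, Arrow):
            # Contract the redex created by the substitution; the recursive
            # call is at type fty.src, structurally smaller than a.
            reduced, _ = hsub(f.body, f.param, arg, fty.src)
            return reduced, fty.dst
        return App(f, arg), None
    raise TypeError(f"not a term: {t!r}")

# (x z)[x := \y. y] reduces hereditarily to z, staying in normal form.
t = App(Var("x"), Var("z"))
result, _ = hsub(t, "x", Lam("y", Var("y")), Arrow(Base(), Base()))
```

The explicit type argument plays the role of the kind annotations in the predicative system: it is the measure that justifies well-founded recursion, which is exactly what the Equations package lets the authors state directly in Coq.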
Case board, traces, & chicanes: Diagrams for an archaeology of algorithmic prediction through critical design practice
This PhD thesis utilises diagrams as a language for research and design practice to critically investigate algorithmic prediction. As a tool for practice-based research, the language of diagrams is presented as a way to
read algorithmic prediction as a set of intricate computational geometries, and to write it through critical practice immersed in the very materials in question: data and code. From a position rooted in graphic and interaction design, the research uses diagrams to gain purchase on algorithmic prediction, making it available for examination, experimentation, and critique. The project is framed by media archaeology, used here as a methodology through which both the technical and historical "depths" of algorithmic systems are excavated.
My main research question asks:
How can diagrams be used as a language to critically investigate algorithmic prediction through design practice?
This thesis presents two secondary questions for critical examination, asking:
Through which mechanisms does thinking/writing/designing in diagrammatic terms inform research and practice focused on algorithmic prediction?
As algorithmic systems claim to produce objective knowledge, how can diagrams be used as instruments for speculative and/or conjectural knowledge production?
I contextualise my research by establishing three registers of relations between diagrams and algorithmic prediction. These are identified as: Data Diagrams to describe the algorithmic forms and processes through which data are turned into predictions; Control Diagrams to afford critical perspectives on algorithmic prediction, framing the latter as an apparatus of prescription and control; and Speculative Diagrams to open up opportunities for reclaiming the generative potential of computation. These categories form the scaffolding for the three practice-oriented chapters where I evidence a range of meaningful ways to investigate algorithmic prediction through diagrams.
This includes the 'case board', where I unpack some of the historical genealogies of algorithmic prediction. A purpose-built graph application materialises broader reflections about how such genealogies might be conceptualised, and facilitates a visual and subjective mode of knowledge production. I then move to producing 'traces', namely probing the output of an algorithmic prediction system, in this case YouTube recommendations. Traces, and the purpose-built instruments used to visualise them, interrogate both the mechanisms of algorithmic capture and claims to make these mechanisms transparent through data visualisations. Finally, I produce algorithmic predictions and examine the diagrammatic "tricks," or 'chicanes', that this involves. I revisit a historical prototype for algorithmic prediction, the almanac publication, and use it to question the boundaries between data science and divination. This is materialised through a new version of the almanac - an automated publication where algorithmic processes are used to produce divinatory predictions.
My original contribution to knowledge is an approach to practice-based research which draws from media archaeology and focuses on diagrams to investigate algorithmic prediction through design practice. I demonstrate to researchers and practitioners with interests in algorithmic systems, prediction, and/or speculation, that diagrams can be used as a language to engage critically with these themes.
Efficient Micro-Mobility using Intra-domain Multicast-based Mechanisms (M&M)
One of the most important metrics in the design of IP mobility protocols is
the handover performance. The current Mobile IP (MIP) standard has been shown
to exhibit poor handover performance. Most other work attempts to modify MIP to
slightly improve its efficiency, while others propose complex techniques to
replace MIP. Rather than taking these approaches, we instead propose a new
architecture for providing efficient and smooth handover, while being able to
co-exist and inter-operate with other technologies. Specifically, we propose an
intra-domain multicast-based mobility architecture, where a visiting mobile is
assigned a multicast address to use while moving within a domain. Efficient
handover is achieved using standard multicast join/prune mechanisms. Two
approaches are proposed and contrasted. The first introduces the concept of
proxy-based mobility, while the other uses algorithmic mapping to obtain the
multicast address of visiting mobiles. We show that the algorithmic mapping
approach has several advantages over the proxy approach, and provide mechanisms
to support it. Network simulation (using NS-2) is used to evaluate our scheme
and compare it to other routing-based micro-mobility schemes - CIP and HAWAII.
The proactive handover results show that both M&M and CIP exhibit low handoff
delay and packet reordering depth as compared to HAWAII. The reason for M&M's
comparable performance with CIP is that both use bi-cast in proactive handover.
M&M, however, handles multiple border routers in a domain, where CIP fails.
We also provide a handover algorithm leveraging the proactive path setup
capability of M&M, which is expected to outperform CIP in the case of reactive
handover.
Comment: 12 pages, 11 figures.
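The abstract's "algorithmic mapping" approach derives a visiting mobile's multicast address deterministically rather than through per-mobile proxy state. The paper's concrete mapping function is not reproduced here; the sketch below shows one plausible stateless scheme, hashing the mobile's unicast home address into the 28 free bits of the IPv4 multicast range so that any router in the domain can recompute the same group address.

```python
import hashlib
import ipaddress

def map_to_multicast(home_addr: str) -> str:
    # Hypothetical mapping (the paper's concrete function may differ): hash
    # the mobile's unicast home address and embed 28 bits of the digest into
    # the IPv4 multicast range 224.0.0.0/4. The result is a stateless,
    # per-mobile group address that every router can derive independently.
    digest = hashlib.sha256(ipaddress.IPv4Address(home_addr).packed).digest()
    low28 = int.from_bytes(digest[:4], "big") & 0x0FFFFFFF
    return str(ipaddress.IPv4Address(0xE0000000 | low28))

group = map_to_multicast("192.0.2.17")
```

Because the mapping is a pure function of the home address, handover reduces to standard multicast join/prune at the new access router, with no proxy needed to track which group belongs to which mobile.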