Algorithmic Jim Crow
This Article contends that current immigration- and security-related vetting protocols risk promulgating an algorithmically driven form of Jim Crow. Under the “separate but equal” discrimination of a historic Jim Crow regime, state laws required mandatory separation and discrimination on the front end, while purportedly establishing equality on the back end. In contrast, an Algorithmic Jim Crow regime allows for “equal but separate” discrimination. Under Algorithmic Jim Crow, equal vetting and database screening of all citizens and noncitizens will make it appear that fairness and equality principles are preserved on the front end. Algorithmic Jim Crow, however, will enable discrimination on the back end in the form of designing, interpreting, and acting upon vetting and screening systems in ways that result in a disparate impact.
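The back-end mechanism the Article describes can be made concrete with a toy simulation: a single screening threshold applied uniformly to everyone still flags groups at very different rates whenever the upstream risk scores correlate with group membership. The sketch below is purely illustrative; the group names, score distributions, and threshold are all invented.

```python
import random

# Purely hypothetical illustration: one "neutral" threshold applied uniformly
# to everyone can still flag groups at very different rates when the upstream
# risk scores correlate with group membership. All numbers are invented.
random.seed(0)

def risk_score(group_mean):
    # Toy score clamped to [0, 1]; group_mean shifts the whole distribution.
    return min(1.0, max(0.0, random.gauss(group_mean, 0.15)))

THRESHOLD = 0.6  # identical for every person: "equal" on the front end

for group, mean in [("group_a", 0.40), ("group_b", 0.55)]:
    scores = [risk_score(mean) for _ in range(10_000)]
    flagged = sum(s >= THRESHOLD for s in scores) / len(scores)
    print(f"{group}: flagged {flagged:.1%} under the uniform threshold")
```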
A dual process account of creative thinking
This article explicates the potential role played by type 1 thinking (automatic, fast) and type 2 thinking (effortful, logical) in creative thinking. The relevance of Evans's (2007) models of conflict of dual processes in thinking is discussed with regard to creative thinking. The role played by type 1 thinking and type 2 thinking during the different stages of creativity (problem finding and conceptualization, incubation, illumination, verification and dissemination) is discussed. It is proposed that although both types of thinking are active in creativity, the extent to which they are active and the nature of their contribution to creativity will vary between stages of the creative process. Directions for future research to test this proposal are outlined; differing methodologies and the investigation of different stages of creative thinking are discussed.
Rational Fair Consensus in the GOSSIP Model
The \emph{rational fair consensus problem} can be informally defined as
follows. Consider a network of $n$ (selfish) \emph{rational agents}, each of
them initially supporting a \emph{color} chosen from a finite set $\Sigma$.
The goal is to design a protocol that leads the network to a stable
monochromatic configuration (i.e. a consensus) such that the probability that
the winning color is $c$ is equal to the fraction of the agents that initially
support $c$, for any $c \in \Sigma$. Furthermore, this fairness property must
be guaranteed (with high probability) even in the presence of any fixed
\emph{coalition} of rational agents that may deviate from the protocol in
order to increase the winning probability of their supported colors. A
protocol having this property in the presence of coalitions of size at most
$t$ is said to be a \emph{whp\,$t$-strong equilibrium}. We investigate, for
the first time,
the rational fair consensus problem in the GOSSIP communication model where,
at every round, every agent can actively contact at most one neighbor via a
\emph{push-pull} operation. We provide a randomized GOSSIP protocol that,
starting from any initial color configuration of the complete graph, achieves
rational fair consensus within $O(\log n)$ rounds using messages of
$O(\log n)$ size, w.h.p. In more detail, we prove that our protocol is a
whp\,$t$-strong equilibrium for any $t = o(n/\log n)$ and, moreover, it
tolerates worst-case permanent faults provided that the number of non-faulty
agents is $\Omega(n)$. As far as we know, our protocol is the first solution
which avoids any all-to-all communication, thus resulting in $o(n^2)$ message
complexity.
Comment: Accepted at IPDPS'1
Towards Efficient Verification of Population Protocols
Population protocols are a well established model of computation by
anonymous, identical finite state agents. A protocol is well-specified if from
every initial configuration, all fair executions reach a common consensus. The
central verification question for population protocols is the
well-specification problem: deciding if a given protocol is well-specified.
Esparza et al. have recently shown that this problem is decidable, but with
very high complexity: it is at least as hard as the Petri net reachability
problem, which is EXPSPACE-hard, and for which only algorithms of non-primitive
recursive complexity are currently known.
In this paper we introduce the class WS3 of well-specified strongly-silent
protocols and we prove that it is suitable for automatic verification. More
precisely, we show that WS3 has the same computational power as general
well-specified protocols, and captures standard protocols from the literature.
Moreover, we show that the membership problem for WS3 reduces to solving
boolean combinations of linear constraints over N. This allowed us to develop
the first software able to automatically prove well-specification for all of
the infinitely many possible inputs.
Comment: 29 pages, 1 figure
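For intuition about the model (separate from the WS3 membership check itself, which the paper reduces to linear constraints over N), one can simulate a textbook well-specified protocol: the classic four-state exact-majority protocol, run under a uniformly random scheduler, which is fair with probability 1. The sketch below is that standard example, not code from the paper:

```python
import random
from collections import Counter

# Standard 4-state exact-majority population protocol (A, B strong; a, b
# weak), simulated under a uniformly random scheduler (fair with prob. 1).
# A sanity-check simulation only -- not the paper's WS3 membership test.
RULES = {
    ("A", "B"): ("a", "b"),  # opposing strong agents cancel to weak
    ("B", "A"): ("b", "a"),
    ("A", "b"): ("A", "a"),  # a strong agent converts weak to its side
    ("b", "A"): ("a", "A"),
    ("B", "a"): ("B", "b"),
    ("a", "B"): ("b", "B"),
}

def run(config, steps=200_000):
    agents = list(config)
    for _ in range(steps):
        i, j = random.sample(range(len(agents)), 2)  # random interacting pair
        new = RULES.get((agents[i], agents[j]))
        if new:
            agents[i], agents[j] = new
    return Counter(agents)

print(run(["A"] * 6 + ["B"] * 4))  # expected consensus: only A/a remain
```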
Toward Open-Set Face Recognition
Much research has been conducted on both face identification and face
verification, with greater focus on the latter. Research on face identification
has mostly focused on using closed-set protocols, which assume that all probe
images used in evaluation contain identities of subjects that are enrolled in
the gallery. Real systems, however, where only a fraction of probe sample
identities are enrolled in the gallery, cannot make this closed-set assumption.
Instead, they must assume an open set of probe samples and be able to
reject/ignore those that correspond to unknown identities. In this paper, we
address the widespread misconception that thresholding verification-like scores
is a good way to solve the open-set face identification problem, by formulating
an open-set face identification protocol and evaluating different strategies
for assessing similarity. Our open-set identification protocol is based on the
canonical Labeled Faces in the Wild (LFW) dataset. In addition to the known
identities, we introduce the concepts of known unknowns (known, but
uninteresting persons) and unknown unknowns (people never seen before) to the
biometric community. We compare three algorithms for assessing similarity in a
deep feature space under an open-set protocol: thresholded verification-like
scores, linear discriminant analysis (LDA) scores, and extreme value machine
(EVM) probabilities. Our findings suggest that thresholding EVM probabilities,
which are open-set by design, outperforms thresholding verification-like
scores.
Comment: Accepted for Publication in CVPR 2017 Biometrics Workshop
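The baseline strategy the paper argues against is easy to state in code: assign each probe the gallery identity with the highest similarity, and reject it as unknown when that score falls below a threshold. The sketch below uses random vectors as stand-ins for deep features and plain cosine similarity; the real evaluation uses face embeddings, and the proposed alternative thresholds EVM probabilities rather than raw scores.

```python
import numpy as np

# Minimal open-set identification sketch with random stand-in "deep features"
# (a real system would use embeddings from a face network). A probe gets the
# gallery identity with the highest cosine similarity, but is rejected as
# "unknown" when that score falls below a threshold -- the baseline strategy
# the paper compares against EVM probabilities.
rng = np.random.default_rng(0)

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

gallery = normalize(rng.normal(size=(5, 128)))    # 5 enrolled identities
labels = [f"id_{i}" for i in range(5)]

def identify(probe, threshold=0.3):
    scores = gallery @ normalize(probe)           # cosine similarities
    best = int(np.argmax(scores))
    return labels[best] if scores[best] >= threshold else "unknown"

known_probe = normalize(gallery[2] + 0.3 * normalize(rng.normal(size=128)))
unknown_probe = normalize(rng.normal(size=128))   # never-enrolled identity
print(identify(known_probe), identify(unknown_probe))
```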
Synthesis of Parametric Programs using Genetic Programming and Model Checking
Formal methods apply algorithms based on mathematical principles to enhance
the reliability of systems. It is natural to try to progress from verifying,
model checking, or testing a system against its formal specification to
constructing it automatically. Classical algorithmic
synthesis theory provides interesting algorithms but also alarmingly high
complexity and undecidability results. The use of genetic programming, in
combination with model checking and testing, provides a powerful heuristic to
synthesize programs. The method is not completely automatic: it is fine-tuned
by a user who sets up the specification and parameters, and it is not
guaranteed to always succeed or to converge to a solution that satisfies all
the required properties. However, we have applied it successfully to quite
nontrivial examples and managed to find solutions to hard programming
challenges, as well as to improve and to correct code. We describe here several
versions of our method for synthesizing sequential and concurrent systems.
Comment: In Proceedings INFINITY 2013, arXiv:1402.661
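To make the loop concrete, here is a toy version of the approach under strong simplifications: candidate programs are small arithmetic expression trees, and the "specification check" is plain testing against a target function on sampled inputs, standing in for the model checking of temporal properties that the actual method performs. Everything here (the expression grammar, the target spec, the GP parameters) is invented for illustration:

```python
import random

# Toy sketch of the synthesis loop: evolve small arithmetic expression trees
# toward a specification, scoring candidates by the fraction of spec tests
# they pass. The "spec check" here is testing, not model checking.
random.seed(1)
OPS = ["+", "-", "*"]

def random_expr(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", "y", "1", "2"])   # terminal symbols
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(e, x, y):
    if e == "x": return x
    if e == "y": return y
    if isinstance(e, str): return int(e)
    op, a, b = e
    a, b = evaluate(a, x, y), evaluate(b, x, y)
    return a + b if op == "+" else (a - b if op == "-" else a * b)

def fitness(e):
    # Hypothetical spec: f(x, y) == x*x + y, checked by testing, not proof.
    tests = [(x, y) for x in range(-3, 4) for y in range(-3, 4)]
    return sum(evaluate(e, x, y) == x * x + y for x, y in tests) / len(tests)

def mutate(e):
    if random.random() < 0.3 or isinstance(e, str):
        return random_expr(2)                        # replace a whole subtree
    op, a, b = e
    if random.random() < 0.5:
        return (op, mutate(a), b)                    # recurse into left child
    return (op, a, mutate(b))                        # recurse into right child

population = [random_expr() for _ in range(200)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)       # elitist selection
    if fitness(population[0]) == 1.0:
        break                                        # spec fully satisfied
    parents = population[:50]
    population = parents + [mutate(random.choice(parents)) for _ in range(150)]

print(fitness(population[0]), population[0])         # best score and program
```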