Coding Solutions for the Secure Biometric Storage Problem
The paper studies the problem of securely storing biometric passwords, such
as fingerprints and irises. Using coding theory, Juels and Wattenberg derived
in 1999 a scheme in which similar input strings are accepted as the same
biometric while, at the same time, nothing can be learned from the stored
data. They called their scheme a "fuzzy commitment scheme". In this paper we
revisit the solution of Juels and Wattenberg and provide answers to two
important questions: What type of error-correcting codes should be used, and
what happens if biometric templates are not uniformly distributed, i.e., the
biometric data come with redundancy? Answering the first question will lead us
to the search for low-rate large-minimum distance error-correcting codes which
come with efficient decoding algorithms up to the designed distance. In order
to answer the second question we relate the required rate to a quantity
connected to the "entropy" of the string, trying to estimate a sort of
"capacity", in the flavor of a converse to Shannon's noisy coding theorem.
Finally, we deal with side problems arising in a practical implementation and
propose a possible solution to the main one that, as far as we know, seems so
far to have prevented real-life applications of the fuzzy scheme.
Comment: the final version appeared in Proceedings of the Information Theory
Workshop (ITW) 2010, IEEE copyright
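The fuzzy commitment idea in the abstract above can be sketched in a few lines. This is a minimal toy illustration, assuming a 3x repetition code and SHA-256 as the hash; both are illustrative choices, not the low-rate, large-minimum-distance codes the paper advocates:

```python
import hashlib
import secrets

# Toy parameters (illustrative only).
N = 12          # length of the biometric template, in bits
K = N // 3      # message bits for the rate-1/3 repetition code

def encode(msg_bits):
    """Repetition-code encoder: each message bit is repeated 3 times."""
    return [b for b in msg_bits for _ in range(3)]

def decode(bits):
    """Majority-vote decoder; corrects up to 1 error per 3-bit block."""
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def commit(template):
    """Fuzzy commitment: store hash(c) plus the offset template XOR c."""
    msg = [secrets.randbelow(2) for _ in range(K)]
    c = encode(msg)
    offset = [t ^ ci for t, ci in zip(template, c)]
    return hashlib.sha256(bytes(c)).hexdigest(), offset

def verify(digest, offset, template2):
    """Accept iff the noisy template decodes back to the committed codeword."""
    noisy_c = [t ^ o for t, o in zip(template2, offset)]
    c2 = encode(decode(noisy_c))
    return hashlib.sha256(bytes(c2)).hexdigest() == digest

template = [secrets.randbelow(2) for _ in range(N)]
digest, offset = commit(template)
noisy = template[:]
noisy[0] ^= 1                     # one bit flips between two readings
assert verify(digest, offset, noisy)
```

Note that the stored pair (hash, offset) hides the template only to the extent that the template is close to uniform, which is exactly the paper's second question.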
Building a Multimodal, Trust-Based E-Voting System
This paper addresses the issues of voter identification and authentication, voter participation, and trust in the electoral system. A multimodal/hybrid identification and authentication scheme is proposed which captures what a voter knows – a PIN, what he has – a smartcard, and what he is – biometrics. Massive participation of voters in and out of the country of origin is enhanced through an integrated channel (kiosk and internet voting). A multi-trust voting system is built based on a service-oriented architecture. The Microsoft Visual C#.Net, ASP.Net and Microsoft SQL Server 2005 Express Edition components of Microsoft Visual Studio 2008 were used to realize the Windows- and Web-based solutions for the electronic voting system.
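The three-factor check the abstract describes (what the voter knows, has, and is) can be sketched as follows; the card registry, PIN, and biometric threshold are all hypothetical names invented for illustration:

```python
import hashlib

# Hypothetical registry keyed by smartcard ID; in a real deployment this
# would live behind the service-oriented architecture the paper describes.
REGISTERED = {
    "card-001": {"pin_hash": hashlib.sha256(b"4321").hexdigest()},
}

def authenticate(card_id, pin, biometric_score, threshold=0.8):
    """All three factors must pass before a ballot is issued."""
    record = REGISTERED.get(card_id)            # factor 1: smartcard (has)
    if record is None:
        return False
    if hashlib.sha256(pin.encode()).hexdigest() != record["pin_hash"]:
        return False                            # factor 2: PIN (knows)
    return biometric_score >= threshold         # factor 3: biometric (is)

assert authenticate("card-001", "4321", 0.91)
assert not authenticate("card-001", "0000", 0.91)
```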
Toward Open-Set Face Recognition
Much research has been conducted on both face identification and face
verification, with greater focus on the latter. Research on face identification
has mostly focused on using closed-set protocols, which assume that all probe
images used in evaluation contain identities of subjects that are enrolled in
the gallery. Real systems, however, where only a fraction of probe sample
identities are enrolled in the gallery, cannot make this closed-set assumption.
Instead, they must assume an open set of probe samples and be able to
reject/ignore those that correspond to unknown identities. In this paper, we
address the widespread misconception that thresholding verification-like scores
is a good way to solve the open-set face identification problem, by formulating
an open-set face identification protocol and evaluating different strategies
for assessing similarity. Our open-set identification protocol is based on the
canonical Labeled Faces in the Wild (LFW) dataset. In addition to the known
identities, we introduce to the biometric community the concepts of known
unknowns (known but uninteresting persons) and unknown unknowns (people never
seen before). We compare three algorithms for assessing similarity in a
deep feature space under an open-set protocol: thresholded verification-like
scores, linear discriminant analysis (LDA) scores, and extreme value machine
(EVM) probabilities. Our findings suggest that thresholding EVM probabilities,
which are open-set by design, outperforms thresholding verification-like
scores.
Comment: Accepted for publication in the CVPR 2017 Biometrics Workshop
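The baseline strategy the paper argues against, thresholding verification-like scores, can be sketched as follows; the gallery vectors, feature dimension, and threshold are illustrative, not the paper's setup:

```python
import numpy as np

# Hypothetical toy gallery of "deep features" (3-dimensional for clarity).
gallery = {
    "alice": np.array([1.0, 0.0, 0.0]),
    "bob":   np.array([0.0, 1.0, 0.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, threshold=0.5):
    """Thresholded verification-like open-set identification: return the
    best-matching enrolled identity, or None (reject) when no gallery
    score clears the threshold."""
    name, score = max(((n, cosine(probe, g)) for n, g in gallery.items()),
                      key=lambda t: t[1])
    return name if score >= threshold else None

print(identify(np.array([0.9, 0.1, 0.0])))   # close to alice's template
print(identify(np.array([0.0, 0.0, 1.0])))   # unknown identity, rejected
```

The single global threshold is exactly the weak point the paper probes: scores for known unknowns and unknown unknowns need not fall below any one cutoff, which motivates open-set-by-design models such as the EVM.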
Partially Identified Prevalence Estimation under Misclassification using the Kappa Coefficient
We discuss a new strategy for prevalence estimation in the presence of misclassification. Our method is applicable when misclassification probabilities are unknown but independent replicate measurements are available. These yield the kappa coefficient, which indicates the agreement between the two measurements. From this information, a direct correction for misclassification is not feasible due to non-identifiability. However, it is possible to derive estimation intervals relying on the concept of partial identification. These intervals give interesting insights into possible bias due to misclassification. Furthermore, confidence intervals can be constructed. Our method is illustrated in several theoretical scenarios and in an example from oral health, where prevalence estimation of caries in children is the issue.
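The kappa coefficient obtained from two replicate measurements is κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance. A minimal sketch with invented counts (the paper's partial-identification intervals build on this quantity but are not reproduced here):

```python
import numpy as np

# Hypothetical 2x2 agreement table for two replicate caries measurements;
# rows index measurement 1, columns index measurement 2. Counts invented.
table = np.array([[40.0, 5.0],
                  [7.0, 48.0]])

n = table.sum()
p_o = np.trace(table) / n              # observed agreement: (40 + 48) / 100
row = table.sum(axis=1) / n            # marginal distribution, measurement 1
col = table.sum(axis=0) / n            # marginal distribution, measurement 2
p_e = float(row @ col)                 # agreement expected by chance
kappa = (p_o - p_e) / (1 - p_e)
print(kappa)                           # roughly 0.76 for these counts
```

κ alone does not identify the misclassification probabilities, which is why the paper works with interval (partially identified) estimates rather than a point correction.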
The efficiency factorization multiplier for the Watson efficiency in partitioned linear models: some examples and a literature review
We consider partitioned linear models where the model matrix X = (X1 : X2) has
full column rank, and concentrate on the special case where X1'X2 = 0, when we say
that the model is orthogonally partitioned. We assume that the underlying covariance
matrix is positive definite and introduce the efficiency factorization multiplier which
relates the total Watson efficiency of ordinary least squares to the product of the
two subset Watson efficiencies. We illustrate our findings with several examples and
present a literature review.
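The total Watson efficiency of OLS is φ = det(X'X)² / (det(X'VX) · det(X'V⁻¹X)), which lies in (0, 1]. A minimal numeric sketch of the factorization multiplier, reading each subset efficiency as the Watson efficiency of the standalone model with design Xi (one plausible reading; the paper's subset definition may differ):

```python
import numpy as np

def watson_efficiency(X, V):
    """Watson efficiency of OLS relative to GLS:
       phi = det(X'X)^2 / (det(X'VX) * det(X'V^{-1}X))."""
    return (np.linalg.det(X.T @ X) ** 2
            / (np.linalg.det(X.T @ V @ X)
               * np.linalg.det(X.T @ np.linalg.inv(V) @ X)))

# Orthogonally partitioned toy design: X = (X1 : X2) with X1'X2 = 0.
X1 = np.array([[1.0], [1.0], [0.0], [0.0]])
X2 = np.array([[0.0], [0.0], [1.0], [1.0]])
X = np.hstack([X1, X2])

# A positive-definite covariance that does not respect the partition.
V = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 0.5, 0.0],
              [0.0, 0.5, 3.0, 0.0],
              [0.0, 0.0, 0.0, 4.0]])

phi_total = watson_efficiency(X, V)
phi_1 = watson_efficiency(X1, V)
phi_2 = watson_efficiency(X2, V)
multiplier = phi_total / (phi_1 * phi_2)   # slightly above 1 here
```

With a V that is block-diagonal with respect to the partition, the multiplier collapses to 1; the off-diagonal 0.5 above is what makes it deviate.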
ILR Faculty Publications 2015-2016
The production of scholarly research continues to be one of the primary missions of the ILR School. During a typical academic year, ILR faculty members published or had accepted for publication over 25 books, edited volumes, and monographs, 170 articles and chapters in edited volumes, and numerous book reviews. In addition, a large number of manuscripts were submitted for publication, presented at professional association meetings, or circulated in working paper form. Our faculty's research continues to find its way into the very best industrial relations, social science, and statistics journals.
Race: the difference that makes a difference
During the last two decades, critical enquiry into the nature of race has begun to enter the philosophical mainstream. The same period has also witnessed the emergence of an increasingly visible discourse about the nature of information within a diverse range of popular and academic settings. What is yet to emerge, however, is engagement at the interface of the two disciplines – critical race theory and the philosophy of information. In this paper, I shall attempt to contribute towards the emergence of such a field of enquiry by using a reflexive hermeneutic (or interpretative) approach to analyze the concept of race from an information-theoretical perspective, while reflexively analyzing the concept of information from a critical race-theoretical perspective. In order to facilitate a more concrete enquiry, the concept of information formulated by cyberneticist Gregory Bateson and the concept of race formulated by philosopher Charles W. Mills will be placed at the centre of analysis. Crucially, both concepts can be shown to have a connection to the critical philosophy of Immanuel Kant, thereby justifying their selection as topics of examination on critical reflexive hermeneutic grounds.
Graphical Log-linear Models: Fundamental Concepts and Applications
We present a comprehensive study of graphical log-linear models for
contingency tables. High dimensional contingency tables arise in many areas
such as computational biology, collection of survey and census data and others.
Analysis of contingency tables involving several factors or categorical
variables is very hard. To determine interactions among various factors,
graphical and decomposable log-linear models are preferred. First, we explore
connections between the conditional independence in probability and graphs;
thereafter we provide a few illustrations to describe how graphical log-linear
models are useful for interpreting the conditional independences between
factors. We also discuss the problems of estimation and model selection in
decomposable models.
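The connection between conditional independence and graphs can be illustrated with a toy 2x2x2 table built so that X and Y are independent given Z (graph X - Z - Y); all probabilities and counts are invented:

```python
import numpy as np

# Within each Z-slice the cell counts are an outer product of the X- and
# Y-margins, so X is independent of Y given Z by construction.
px_given_z = np.array([[0.8, 0.2],    # P(X | Z=0) and P(X | Z=1)
                       [0.3, 0.7]])
py_given_z = np.array([[0.9, 0.1],
                       [0.4, 0.6]])
pz = np.array([0.5, 0.5])

# Expected counts for 1000 observations, indexed (z, x, y).
table = np.einsum('z,zx,zy->zxy', pz, px_given_z, py_given_z) * 1000

def odds_ratio(t2x2):
    return (t2x2[0, 0] * t2x2[1, 1]) / (t2x2[0, 1] * t2x2[1, 0])

# Conditional odds ratios are exactly 1 in each Z-slice ...
print(odds_ratio(table[0]), odds_ratio(table[1]))
# ... but collapsing over Z induces a marginal X-Y association.
print(odds_ratio(table.sum(axis=0)))
```

This is the kind of structure a graphical log-linear model makes visible: the missing X-Y edge encodes the conditional independence, while the collapsed table, which drops Z, shows a spurious association.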