
    Quantum surveillance and 'shared secrets'. A biometric step too far? CEPS Liberty and Security in Europe, July 2010

    It is no longer sensible to regard biometrics as having neutral socio-economic, legal and political impacts. Newer-generation biometrics are fluid and include behavioural and emotional data that can be combined with other data. A range of issues therefore needs to be reviewed in light of the increasing privatisation of ‘security’, which escapes effective, democratic parliamentary and regulatory control and oversight at national, international and EU levels, argues Juliet Lodge, Professor and co-Director of the Jean Monnet European Centre of Excellence at the University of Leeds, UK.

    Biometric Authentication System on Mobile Personal Devices

    We propose a secure, robust, and low-cost biometric authentication system on the mobile personal device for the personal network. The system consists of five key modules: 1) face detection; 2) face registration; 3) illumination normalization; 4) face verification; and 5) information fusion. For the complicated face-authentication task on devices with limited resources, the emphasis is largely on the reliability and applicability of the system, and both theoretical and practical considerations are taken into account. The final system achieves an equal error rate of 2% under challenging testing protocols. The low hardware and software cost makes the system well suited to a wide range of security applications.
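
    As a rough illustration of how such a five-stage pipeline fits together, here is a minimal Python sketch; the class, module interfaces and threshold handling are hypothetical stand-ins, not the paper's implementation:

```python
# Illustrative five-module face-authentication pipeline (hypothetical
# interfaces; the five stages mirror the modules listed in the abstract).
from dataclasses import dataclass

import numpy as np


@dataclass
class AuthResult:
    accepted: bool
    score: float


class FaceAuthPipeline:
    """Chains detection, registration, normalization, verification, fusion."""

    def __init__(self, detector, registrar, normalizer, verifier, fusion,
                 threshold=0.5):
        self.detector = detector      # 1) face detection
        self.registrar = registrar    # 2) face registration (alignment)
        self.normalizer = normalizer  # 3) illumination normalization
        self.verifier = verifier      # 4) face verification (match score)
        self.fusion = fusion          # 5) information fusion across frames
        self.threshold = threshold

    def authenticate(self, frames, enrolled_template) -> AuthResult:
        scores = []
        for frame in frames:
            face = self.detector(frame)            # locate the face region
            if face is None:
                continue                           # no face in this frame
            aligned = self.registrar(face)         # align to a canonical pose
            normalized = self.normalizer(aligned)  # compensate illumination
            scores.append(self.verifier(normalized, enrolled_template))
        if not scores:
            return AuthResult(False, 0.0)
        fused = float(self.fusion(np.asarray(scores)))  # combine frame scores
        return AuthResult(fused >= self.threshold, fused)
```

    In a deployed system the acceptance threshold would be tuned on held-out data so that false-accept and false-reject rates coincide, which is the operating point the reported 2% equal error rate describes.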

    An evaluation of a three-modal hand-based database to forensic-based gender recognition

    In recent years, behavioural soft biometrics have been widely used to improve the performance of biometric systems. Information such as gender, age and ethnicity can be obtained from more than one behavioural modality. In this paper, we propose a multimodal hand-based behavioural database for gender recognition, and our goal is to evaluate its performance. The experiment was conducted with 76 users, from whom keyboard dynamics, touchscreen dynamics and handwritten signature data were collected. Our approach compares one-modal and two-modal subsets of the biometric data with the full multimodal database. Traditional and new classifiers were used, together with the Kruskal-Wallis statistical test, to analyse the accuracy obtained on each database. The results showed that the multimodal database outperforms the other databases.
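
    The statistical comparison step can be pictured with a short Python sketch using SciPy's Kruskal-Wallis H-test; the per-fold accuracy arrays below are placeholders for illustration, not the paper's results:

```python
# Hypothetical sketch: comparing per-fold accuracies of one-modal,
# two-modal, and multimodal setups with the Kruskal-Wallis H-test.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

# Placeholder per-fold accuracies; in the paper these would come from
# cross-validated classifier runs on the 76-user database.
acc_onemodal = rng.normal(0.78, 0.03, size=10)
acc_twomodal = rng.normal(0.83, 0.03, size=10)
acc_multimodal = rng.normal(0.88, 0.03, size=10)

h_stat, p_value = kruskal(acc_onemodal, acc_twomodal, acc_multimodal)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Accuracy distributions differ significantly across databases.")
```

    The Kruskal-Wallis test is a reasonable choice here because it compares the accuracy distributions across the three setups without assuming they are normally distributed.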

    Confidence intervals of prediction accuracy measures for multivariable prediction models based on the bootstrap-based optimism correction methods

    In assessing the prediction accuracy of multivariable prediction models, optimism corrections are essential for preventing biased results. However, in most published clinical prediction model studies, the point estimates of the prediction accuracy measures are corrected by adequate bootstrap-based correction methods, but their confidence intervals are not; for example, DeLong's confidence interval is usually used for the C-statistic. These naive intervals do not adjust for the optimism bias and do not account for statistical variability in the estimation of the prediction model's parameters. Their coverage probabilities of the true value of the prediction accuracy measure can therefore fall seriously below the nominal level (e.g., 95%). In this article, we provide two generic bootstrap methods, namely (1) location-shifted bootstrap confidence intervals and (2) two-stage bootstrap confidence intervals, that can be applied to the standard bootstrap-based optimism correction methods, i.e., Harrell's bias correction and the 0.632 and 0.632+ methods. They can also be applied to various methods for prediction model development, including modern shrinkage methods such as ridge and lasso regression. In numerical evaluations by simulation, the proposed confidence intervals showed favourable coverage performance, whereas the current standard practices based on optimism-uncorrected methods showed serious undercoverage. To avoid erroneous results, optimism-uncorrected confidence intervals should not be used in practice; the adjusted methods are recommended instead. We also developed the R package predboot for implementing these methods (https://github.com/nomahi/predboot). The effectiveness of the proposed methods is illustrated via applications to the GUSTO-I clinical trial.
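
    Although the accompanying predboot package is written in R, the underlying optimism-correction idea can be sketched in Python; the logistic model, bootstrap count and function name below are illustrative assumptions, not the package's code:

```python
# Minimal sketch of Harrell's bootstrap optimism correction for the
# C-statistic (AUC). Illustration of the general technique only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score


def optimism_corrected_auc(X, y, n_boot=200, seed=0):
    """Return (corrected AUC, estimated optimism) for a logistic model.

    X, y are NumPy arrays; the logistic model stands in for whatever
    multivariable prediction model is being assessed.
    """
    rng = np.random.default_rng(seed)
    n = len(y)

    # Apparent performance: fit and evaluate on the same data (optimistic).
    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

    optimisms = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample with replacement
        if len(np.unique(y[idx])) < 2:          # skip degenerate resamples
            continue
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
        optimisms.append(auc_boot - auc_orig)   # optimism of this refit

    optimism = float(np.mean(optimisms))
    return apparent - optimism, optimism
```

    Broadly, the paper's location-shifted interval applies the same idea to interval estimation: a naive interval such as DeLong's is shifted by the estimated optimism so that it centres on the corrected point estimate.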

    Biometric surveillance in schools : cause for concern or case for curriculum?

    This article critically examines the draft consultation paper issued by the Scottish Government to local authorities on the use of biometric technologies in schools in September 2008 (see http://www.scotland.gov.uk/Publications/2008/09/08135019/0). Coming at a time when a number of schools are considering using biometric systems to register and confirm the identity of pupils in a range of settings (cashless catering systems, automated registration of pupils' arrival in school and school library automation), this guidance is undoubtedly welcome. The present focus seems to be on fingerprints, but as the guidance acknowledges, the debate may in future encompass iris prints, voice prints and facial recognition systems, which are already in use in non-educational settings. The article notes broader developments in school surveillance in Scotland and in the rest of the UK and argues that serious attention must be given to the educational considerations that arise. Schools must prepare pupils for life in the newly emergent 'surveillance society', not by uncritically habituating them to the surveillance systems installed in their schools, but by critically engaging them in thought about the way surveillance technologies work in the wider world, the various rationales given for them, and the implications, in terms of privacy, safety and inclusion, of being a 'surveilled subject'.

    Algorithmic Jim Crow

    This Article contends that current immigration- and security-related vetting protocols risk promulgating an algorithmically driven form of Jim Crow. Under the “separate but equal” discrimination of a historic Jim Crow regime, state laws required mandatory separation and discrimination on the front end, while purportedly establishing equality on the back end. In contrast, an Algorithmic Jim Crow regime allows for “equal but separate” discrimination. Under Algorithmic Jim Crow, equal vetting and database screening of all citizens and noncitizens will make it appear that fairness and equality principles are preserved on the front end. Algorithmic Jim Crow, however, will enable discrimination on the back end in the form of designing, interpreting, and acting upon vetting and screening systems in ways that result in a disparate impact.