GOTCHA Password Hackers!
We introduce GOTCHAs (Generating panOptic Turing Tests to Tell Computers and
Humans Apart) as a way of preventing automated offline dictionary attacks
against user selected passwords. A GOTCHA is a randomized puzzle generation
protocol, which involves interaction between a computer and a human.
Informally, a GOTCHA should satisfy two key properties: (1) The puzzles are
easy for the human to solve. (2) The puzzles are hard for a computer to solve
even if it has the random bits used by the computer to generate the final
puzzle --- unlike a CAPTCHA. Our main theorem demonstrates that GOTCHAs can be
used to mitigate the threat of offline dictionary attacks against passwords by
ensuring that a password cracker must receive constant feedback from a human
being while mounting an attack. Finally, we provide a candidate construction of
GOTCHAs based on Inkblot images. Our construction relies on the usability
assumption that users can recognize the phrases that they originally used to
describe each Inkblot image --- a much weaker usability assumption than
previous password systems based on Inkblots which required users to recall
their phrase exactly. We conduct a user study to evaluate the usability of our
GOTCHA construction. We also generate a GOTCHA challenge where we encourage
artificial intelligence and security researchers to try to crack several
passwords protected with our scheme.

Comment: 2013 ACM Workshop on Artificial Intelligence and Security (AISec)
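The human-in-the-loop property described above can be illustrated with a minimal sketch. This is not the paper's Inkblot construction: the fixed salt, the puzzle count, and modeling the human's solution as a permutation are illustrative assumptions. The point shown is that the human's puzzle solution is folded into the stored hash, so an offline cracker must enumerate candidate solutions for every single password guess.

```python
import hashlib
from itertools import permutations
from math import factorial

# Hedged sketch of the GOTCHA idea (NOT the paper's exact Inkblot scheme):
# the human's puzzle solution -- modeled here as a permutation matching
# labels to puzzles -- is hashed together with the password, so an offline
# dictionary attacker without a human must try every candidate solution
# for each password guess.

def h(salt, password, solution):
    # One hash evaluation over password guess + candidate puzzle solution.
    return hashlib.sha256(salt + password.encode() + bytes(solution)).hexdigest()

salt = b"fixed-salt"                        # per-account random salt in practice
stored = h(salt, "hunter2", (2, 0, 3, 1))   # human solved the puzzle as 2,0,3,1

# Legitimate login: the human re-solves the puzzle, one hash suffices.
assert h(salt, "hunter2", (2, 0, 3, 1)) == stored

# Offline attacker has no human: each guess costs up to n! evaluations.
def crack(guess):
    return any(h(salt, guess, p) == stored for p in permutations(range(4)))

print(factorial(4))      # 24x slowdown per guess with only 4 puzzles
print(crack("hunter2"))  # True, but only after enumerating solutions
```

With only four toy puzzles the slowdown factor is 24; the construction in the paper relies on the puzzles themselves being hard for a computer to solve at all, not merely numerous.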
Leveraging Geospatial Information to address Space Epidemiology through Multi-omics – Report of an Interdisciplinary Workshop
This article summarizes the proceedings of a workshop conducted at the
University of Missouri that addressed the use of multi-omics fused with
geospatial information to assess and improve the precision and environmental
analysis of indicators of crew space health. The workshop addressed the state
of the art of multi-omics research and practice and the potential future use of
multi-omics platforms in extreme environments. The workshop also focused on
potential new strategies for data collection, analysis, and fusion with
crosstalk with the field of environmental health, biosecurity, and radiation
safety, addressing gaps and shortfalls and potential new approaches to
enhancing astronaut health safety and security. Ultimately, the panel
proceedings resulted in a synthesis of new research and translational
opportunities to improve space and terrestrial epidemiology, including future
early disease prevention that employs new and expanded data sources enhanced by
the analytic precision of geospatial information and artificial intelligence
algorithms.

Comment: 9 pages, 1 figure
The More Secure, The Less Equally Usable: Gender and Ethnicity (Un)fairness of Deep Face Recognition along Security Thresholds
Face biometrics are playing a key role in making modern smart city
applications more secure and usable. Commonly, the recognition threshold of a
face recognition system is adjusted based on the degree of security for the
considered use case. For instance, the likelihood of a match can be decreased
by setting a high threshold when verifying a payment transaction. Prior
work in face recognition has unfortunately shown that error rates are usually
higher for certain demographic groups. These disparities have hence brought
into question the fairness of systems empowered with face biometrics. In this
paper, we investigate the extent to which disparities among demographic groups
change under different security levels. Our analysis includes ten face
recognition models, three security thresholds, and six demographic groups based
on gender and ethnicity. Experiments show that the higher the security of the
system is, the higher the disparities in usability among demographic groups
are. Compelling unfairness issues hence exist and urge countermeasures in
real-world high-stakes environments requiring severe security levels.

Comment: Accepted as a full paper at the 2nd International Workshop on Artificial Intelligence Methods for Smart Cities (AISC 2022)
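The security-threshold trade-off studied above can be illustrated with a minimal verification sketch, assuming embeddings compared by cosine similarity as the match score. The vectors and threshold values below are toy illustrations, not from the paper:

```python
import numpy as np

# Minimal sketch of threshold-based face verification (toy values; the
# embeddings and thresholds are illustrative assumptions).

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold):
    """Accept the claimed identity only if the similarity clears the
    threshold; a higher threshold means higher security, but also more
    false rejections of genuine users."""
    return cosine_similarity(probe, enrolled) >= threshold

enrolled = np.array([1.0, 0.0])
genuine  = np.array([0.9, 0.1])    # same person, slight variation
impostor = np.array([0.3, 0.95])   # different person

for thr in (0.5, 0.8, 0.995):      # low / medium / high security
    print(thr, verify(genuine, enrolled, thr), verify(impostor, enrolled, thr))
```

At the highest threshold even the genuine probe is rejected; the paper's finding is that this usability cost of raising security is not distributed evenly across demographic groups.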
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Deep neural networks (DNNs) are one of the most prominent technologies of our
time, as they achieve state-of-the-art performance in many machine learning
tasks, including but not limited to image classification, text mining, and
speech processing. However, recent research on DNNs has raised
ever-increasing concern about their robustness to adversarial examples, especially
for security-critical tasks such as traffic sign identification for autonomous
driving. Studies have unveiled the vulnerability of a well-trained DNN by
demonstrating the ability to generate barely noticeable (to both humans and
machines) adversarial images that lead to misclassification. Furthermore,
researchers have shown that these adversarial images are highly transferable by
simply training and attacking a substitute model built upon the target model,
known as a black-box attack to DNNs.
Similar to the setting of training substitute models, in this paper we
propose an effective black-box attack that also only has access to the input
(images) and the output (confidence scores) of a targeted DNN. However,
different from leveraging attack transferability from substitute models, we
propose zeroth order optimization (ZOO) based attacks to directly estimate the
gradients of the targeted DNN for generating adversarial examples. We use
zeroth order stochastic coordinate descent along with dimension reduction,
hierarchical attack and importance sampling techniques to efficiently attack
black-box models. By exploiting zeroth order optimization, improved attacks to
the targeted DNN can be accomplished, sparing the need for training substitute
models and avoiding the loss in attack transferability. Experimental results on
MNIST, CIFAR10 and ImageNet show that the proposed ZOO attack is as effective
as the state-of-the-art white-box attack and significantly outperforms existing
black-box attacks via substitute models.

Comment: Accepted by 10th ACM Workshop on Artificial Intelligence and Security (AISEC) with the 24th ACM Conference on Computer and Communications Security (CCS)
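The gradient-estimation step at the heart of ZOO can be sketched with a symmetric difference quotient driving zeroth-order coordinate descent. This is a sketch under simplifying assumptions, not the paper's full method (no hierarchical attack or importance sampling), and the toy quadratic stands in for the attack loss queried on the target model's confidence scores:

```python
import numpy as np

# Hedged sketch of ZOO-style zeroth-order coordinate descent.
# `black_box_loss` is a toy stand-in for the attack loss; only its input
# and output are used, never its gradient -- mirroring the black-box setting.

def black_box_loss(x):
    # Toy quadratic with minimum at x = [1, -2, 3] (illustrative only).
    target = np.array([1.0, -2.0, 3.0])
    return float(np.sum((x - target) ** 2))

def zoo_coordinate_descent(loss, x0, h=1e-4, lr=0.1, steps=300, rng=None):
    """Estimate one coordinate's gradient per step with a symmetric
    difference quotient, then apply a coordinate descent update."""
    rng = rng or np.random.default_rng(0)
    x = x0.astype(float).copy()
    for _ in range(steps):
        i = rng.integers(len(x))       # random coordinate (the full method
                                       # uses importance sampling here)
        e = np.zeros_like(x)
        e[i] = h
        # Zeroth-order gradient estimate: (f(x + h*e_i) - f(x - h*e_i)) / 2h
        g = (loss(x + e) - loss(x - e)) / (2 * h)
        x[i] -= lr * g                 # coordinate descent update
    return x

x_adv = zoo_coordinate_descent(black_box_loss, np.zeros(3))
print(np.round(x_adv, 2))
```

Each step costs two queries to the black-box model, which is why the paper pairs this estimator with dimension reduction and importance sampling to keep the query count tractable on image-sized inputs.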
Barriers to the Adoption of Artificial Intelligence in Healthcare in India
Artificial Intelligence (AI) has the potential to transform healthcare in various ways. It can turn large amounts of patient data into actionable information, improve public health surveillance, accelerate health responses, and produce leaner, faster and more targeted research and development (Raghupathi and Raghupathi, 2014). This review examines evidence on the barriers to the adoption of Artificial Intelligence in healthcare in India. While the literature related to AI in healthcare in India – and on obstacles specifically – seems to comprise largely news reports, blog posts and conference and workshop proceedings, there are some academic studies on the topic. In addition, it is possible to draw from other literature on AI in healthcare in low-resource or low- and middle-income countries (LMICs), and from literature on the implementation of AI more generally in India. However, the literature does not allow for assessment of barriers across different stakeholders, aside from some mention of particular obstacles experienced by start-up companies. The key barriers to the adoption of AI in healthcare in India are: the substantial cost, initial investment and infrastructure; challenges to working with big data; trust issues and apprehension with new technologies; an inadequate framework to ensure privacy, security, quality and accuracy of AI solutions; regulatory weaknesses and uncertainties; and concerns over human job losses, which can contribute to lack of trust, alongside other inequality concerns.