1,564 research outputs found

    Password Based a Generalize Robust Security System Design Using Neural Network

    Get PDF
    Among the various means of resource protection, including biometrics, password-based systems are the simplest, most user-friendly, most cost-effective, and most widely used. However, they are highly sensitive to attacks. Most advanced password-based authentication methods encrypt the contents of the password before storing or transmitting it, but conventional cryptographic encryption methods have limitations of their own, generally in terms of either complexity or efficiency. Moreover, using passwords across many applications today forces users to rely on memory aids, which itself degrades the level of security. This paper presents a method that exploits an artificial neural network to provide a more secure means of authentication, one that is more efficient at providing authentication while remaining simple in design. Beyond protection, a step toward perfect security is taken by adding intruder detection to the protection system, made possible by analyzing several logical parameters associated with user activities. The result is a new way of designing a security system for any kind of resource, centered on a neural network with intrusion-detection capability, that addresses the shortcomings of present solutions.
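    As a concrete illustration of the idea in this abstract, the sketch below trains a small neural network to recognize valid (username, password) pairs, so that no password table is stored and verification is a single forward pass. This is a minimal toy under stated assumptions, not the paper's design: the byte encoding, network size, and negative-sampling scheme are all illustrative choices.

```python
# Toy sketch of neural-network password verification (illustrative only):
# train on valid (username, password) pairs plus random invalid ones, then
# authenticate by classifying the entered pair. Not the paper's exact scheme.
import numpy as np
from sklearn.neural_network import MLPClassifier

def encode(username, password, width=32):
    """Map a credential pair to a fixed-length vector of scaled byte values."""
    raw = (username + ":" + password).encode("utf-8")[:width]
    vec = np.zeros(width)
    vec[:len(raw)] = [b / 255.0 for b in raw]
    return vec

valid = [("alice", "s3cret!"), ("bob", "hunter2")]
X, y = [], []
rng = np.random.default_rng(0)
for u, p in valid:
    for _ in range(20):                      # oversample the valid pairs
        X.append(encode(u, p)); y.append(1)
    for _ in range(20):                      # random wrong passwords
        fake = "".join(chr(int(c)) for c in rng.integers(33, 127, size=8))
        X.append(encode(u, fake)); y.append(0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(np.array(X), np.array(y))

def authenticate(username, password):
    return clf.predict([encode(username, password)])[0] == 1

print(authenticate("alice", "s3cret!"))      # expected: True
print(authenticate("alice", "wrong"))        # expected: False
```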

    Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating

    Full text link
    In this work, we investigate the concept of biometric backdoors: a template poisoning attack on biometric systems that allows adversaries to stealthily and effortlessly impersonate users in the long term by exploiting the template update procedure. We show that such attacks can be carried out even by attackers with physical limitations (no digital access to the sensor) and zero knowledge of training data (they know neither decision boundaries nor the user template). Based on their own templates, adversaries craft several intermediate samples that incrementally bridge the distance between their own template and the legitimate user's. As these adversarial samples are added to the template, the attacker is eventually accepted alongside the legitimate user. To avoid detection, we design the attack to minimize the number of rejected samples. We design our method to cope with weak assumptions about the attacker, and we evaluate the effectiveness of this approach on state-of-the-art face recognition pipelines based on deep neural networks. We find that in scenarios where the deep network is known, adversaries can successfully carry out the attack in over 70% of cases with fewer than ten injection attempts. Even in black-box scenarios, we find that exploiting the transferability of adversarial samples from surrogate models can lead to successful attacks in around 15% of cases. Finally, we design a poisoning detection technique that leverages the consistent directionality of template updates in feature space to discriminate between legitimate and malicious updates. We evaluate this countermeasure against a set of intra-user variability factors which may present the same directionality characteristics, obtaining equal error rates for the detection between 7% and 14%, with over 99% of attacks detected after only two sample injections.
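    The core mechanics of the poisoning attack and of the directionality countermeasure can be pictured with a short numerical toy. Below, a template maintained as a running mean accepts any sample within a fixed distance, and the attacker injects samples that step from the current template toward their own embedding. The thresholds, dimensions, and update rule are assumptions for illustration, not the paper's evaluated face-recognition pipeline.

```python
# Toy model of unsupervised template poisoning: each injected sample sits
# just inside the acceptance threshold, dragging the running-mean template
# toward the attacker. All constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
dim, threshold, alpha = 128, 1.0, 0.3   # feature dim, accept radius, update rate

victim = rng.normal(size=dim)
direction = rng.normal(size=dim)
direction /= np.linalg.norm(direction)
attacker = victim + 3.0 * direction     # starts well outside the threshold

template, deltas, injections = victim.copy(), [], 0
while np.linalg.norm(attacker - template) > threshold:
    gap = attacker - template
    step = min(0.9 * threshold, np.linalg.norm(gap))
    sample = template + step * gap / np.linalg.norm(gap)    # always accepted
    new_template = (1 - alpha) * template + alpha * sample  # template update
    deltas.append(new_template - template)
    template = new_template
    injections += 1

print(f"attacker accepted after {injections} injections")

# Countermeasure intuition: malicious updates all point the same way, so the
# cosine similarity between consecutive template deltas stays near 1, while
# benign intra-user variation is far less directional.
cos = [a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
       for a, b in zip(deltas, deltas[1:])]
print(f"mean update-direction cosine: {np.mean(cos):.3f}")  # ~1.0 under attack
```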

    Automated Privacy Protection for Mobile Device Users and Bystanders in Public Spaces

    Get PDF
    As smartphones have gained popularity over recent years, they have provided users convenient access to services and integrated sensors that were previously only available through larger, stationary computing devices. This trend of ubiquitous mobile devices provides unparalleled convenience and productivity for users who wish to perform everyday actions such as taking photos, participating in social media, reading emails, or checking online banking transactions. However, the increasing use of mobile devices in public spaces has negative implications for users' own privacy and, in some cases, that of bystanders around them. Specifically, digital photography trends in public have negative implications for bystanders who can be captured inadvertently in users' photos. Those who are captured often have no knowledge of being photographed and no control over how photos of them are distributed. To address this growing issue, a novel system is proposed for protecting the privacy of bystanders captured in public photos. A fully automated approach to accurately distinguish the intended subjects from strangers is explored. A feature-based classification scheme utilizing entire photos is presented. Additionally, the privacy-minded case of only utilizing local face images, with no contextual information from the original image, is explored with a convolutional neural network-based classifier. Three methods of face anonymization are implemented and compared: black boxing, Gaussian blurring, and pose-tolerant face swapping (see the sketch after this abstract). To validate these methods, a comprehensive user survey is conducted to understand the difference in viability between them. Beyond photography, the privacy of mobile device users themselves can be impacted in public spaces, as visual eavesdropping or "shoulder surfing" attacks on device screens become feasible. Malicious individuals can easily glean personal data from smartphone and mobile device screens while they are accessed visually. To protect displayed user content, a novel, sensor-based visual eavesdropping detection scheme using integrated device cameras is proposed. To selectively obfuscate private content while an attacker is nearby, a dynamic scheme for detecting and hiding private content is also developed utilizing User-Interface-as-an-Image (UIaaI). A deep, convolutional object detection network is trained and utilized to identify sensitive content under this scheme. To allow users to customize the types of content to hide, dynamic training sample generation is introduced to retrain the content detection network with very few original UI samples. Web applications are also considered, with a Chrome browser extension that automates the detection and obfuscation of sensitive web page fields through HTML parsing and CSS injection.
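    Two of the three anonymization methods compared here, black boxing and Gaussian blurring, are simple to reproduce; the sketch below applies both to every detected face using OpenCV's stock Haar cascade. File names and the blur kernel size are placeholder choices, and the paper's pose-tolerant face swapping is omitted since it requires landmark alignment and a donor face.

```python
# Sketch of black-box and Gaussian-blur face anonymization with OpenCV.
# Detector choice and kernel size are illustrative, not from the paper.
import cv2

img = cv2.imread("photo.jpg")
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

boxed, blurred = img.copy(), img.copy()
for (x, y, w, h) in faces:
    boxed[y:y + h, x:x + w] = 0                                     # black boxing
    roi = blurred[y:y + h, x:x + w]
    blurred[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)  # blurring

cv2.imwrite("boxed.jpg", boxed)
cv2.imwrite("blurred.jpg", blurred)
```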

    Snoopy: Sniffing Your Smartwatch Passwords via Deep Sequence Learning

    Get PDF
    Demand for smartwatches has taken off in recent years, with new models that can run independently of smartphones and provide more useful features, making them first-class mobile platforms. One can access online banking or even make payments on a smartwatch without a paired phone. This makes smartwatches more attractive, and more vulnerable to malicious attacks, which to date have been largely overlooked. In this paper, we demonstrate Snoopy, a password extraction and inference system which is able to accurately infer passwords entered on Android/Apple watches within 20 attempts, just by eavesdropping on motion sensors. Snoopy uses a uniform framework to extract the segments of motion data captured while passwords are entered, and uses novel deep neural networks to infer the actual passwords. We evaluate the proposed Snoopy system in the real world with data from 362 participants and show that our system offers a ~3-fold improvement in the accuracy of inferring passwords compared to the state of the art, without consuming excessive energy or computational resources. We also show that Snoopy is very resilient to user and device heterogeneity: it can be trained on crowd-sourced motion data (e.g., via Amazon Mechanical Turk) and then used to attack passwords from a new user, even one wearing a different watch model. This paper shows that, in the wrong hands, Snoopy could cause serious leaks of sensitive information. By raising awareness, we invite the community and manufacturers to revisit the risks of continuous motion sensing on smart wearable devices.
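    The "deep sequence learning" at the heart of such an attack can be pictured as a recurrent classifier over windows of motion-sensor readings. The PyTorch sketch below maps a window of 6-axis samples (accelerometer plus gyroscope) to one of ten keypad digits; the architecture, layer sizes, and window length are assumptions, not Snoopy's published model.

```python
# Illustrative LSTM that classifies a motion window into a keypad digit.
# Channel count, hidden size, and window length are assumed values.
import torch
import torch.nn as nn

class MotionToKey(nn.Module):
    def __init__(self, channels=6, hidden=64, n_keys=10):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_keys)

    def forward(self, x):                # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # classify from the final time step

model = MotionToKey()
window = torch.randn(8, 100, 6)          # 8 synthetic windows, 100 samples each
logits = model(window)
print(logits.argmax(dim=1))              # one predicted digit per window
```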

    Synesthesia: Detecting Screen Content via Remote Acoustic Side Channels

    Full text link
    We show that subtle acoustic noises emanating from within computer screens can be used to detect the content displayed on them. This sound can be picked up by ordinary microphones built into webcams or screens, and is inadvertently transmitted to other parties, e.g., during a videoconference call or in archived recordings. It can also be recorded by a smartphone or "smart speaker" placed on a desk next to the screen, or from as far as 10 meters away using a parabolic microphone. Empirically demonstrating various attack scenarios, we show how this channel can be used for real-time detection of on-screen text, or of users' input on on-screen virtual keyboards. We also demonstrate how an attacker can analyze the audio received during a video call (e.g., on Google Hangouts) to infer whether the other side is browsing the web instead of watching the video call, and which website is displayed on their screen.
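    A natural first step in examining any acoustic screen side channel is to inspect the recording's spectrogram, where content-dependent emissions appear as narrow frequency bands. The sketch below computes one with SciPy and sums the energy in a high-frequency band; the file name and the band choice are loose assumptions for illustration, not values from the paper.

```python
# Spectrogram-based look at a recording made near a screen. The 10 kHz+
# band is an assumed region of interest, not a figure from the paper.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("near_screen.wav")
if audio.ndim > 1:                       # mix stereo down to mono
    audio = audio.mean(axis=1)

f, t, S = spectrogram(audio, fs=rate, nperseg=4096, noverlap=3072)

band = f > 10_000                        # keep only the high-frequency band
energy = S[band].sum(axis=0)
print("frames with strongest emission:", np.argsort(energy)[-5:])
```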