
    Password Cracking and Countermeasures in Computer Security: A Survey

    With the rapid development of internet technologies, social networks, and related areas, user authentication is becoming increasingly important for protecting users' data. Password authentication is one of the most widely used methods for authenticating legitimate users and defending against intruders. Many password cracking methods have been developed over the past years, and countermeasures against password cracking have been designed continuously; however, little survey work on password cracking research exists. This paper gives a brief review of password cracking methods, the important technologies behind password cracking, and the countermeasures against it, which are usually designed at two stages: the password design stage (e.g., user education, dynamic passwords, use of tokens, computer-generated passwords) and after the design (e.g., reactive password checking, proactive password checking, password encryption, access control). The main objective of this work is to offer novice IT security professionals and general audiences some knowledge about computer security and password cracking, and to promote the development of this area.
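    As an illustration of one of the countermeasures mentioned above, here is a minimal sketch of a proactive password checker in Python; the specific rules, the tiny stand-in wordlist, and the `check_password` helper are assumptions for demonstration, not taken from the survey.

```python
# Minimal sketch of a proactive password checker (illustration only; the
# rules and the tiny wordlist below are assumptions, not the survey's).
import re

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}  # stand-in wordlist

def check_password(candidate: str, min_length: int = 10) -> list[str]:
    """Return the reasons the candidate password is rejected (empty list = accepted)."""
    problems = []
    if len(candidate) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if candidate.lower() in COMMON_PASSWORDS:
        problems.append("appears in a common-password wordlist")
    if not re.search(r"[a-z]", candidate) or not re.search(r"[A-Z]", candidate):
        problems.append("missing upper- or lower-case letters")
    if not re.search(r"\d", candidate):
        problems.append("missing a digit")
    if not re.search(r"[^\w\s]", candidate):
        problems.append("missing a symbol")
    return problems

if __name__ == "__main__":
    for pw in ("letmein", "Tr0ub4dor&3"):
        issues = check_password(pw)
        print(pw, "->", "accepted" if not issues else issues)
```

    A checker of this kind runs at the password design stage; reactive checking, by contrast, audits already-stored passwords after the fact.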

    EyeSpot: leveraging gaze to protect private text content on mobile devices from shoulder surfing

    As mobile devices allow access to an increasing amount of private data, using them in public can potentially leak sensitive information through shoulder surfing. This includes personal private data (e.g., in chat conversations) and business-related content (e.g., in emails). Leaking the former might infringe on users’ privacy, while leaking the latter is considered a breach of the EU’s General Data Protection Regulation as of May 2018. This creates a need for systems that protect sensitive data in public. We introduce EyeSpot, a technique that displays content through a spot that follows the user’s gaze while hiding the rest of the screen from an observer’s view through overlaid masks. We explore different configurations for EyeSpot in a user study in terms of users’ reading speed, text comprehension, and perceived workload. While our system is a proof of concept, we identify crystallized masks as a promising design candidate for further evaluation with regard to the security of the system in a shoulder surfing scenario.
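    To make the masking idea concrete, below is a minimal, gaze-free sketch in Python: a character index stands in for the tracked gaze position, and everything outside the spot is replaced by a mask character. The `masked_view` function and its parameters are illustrative assumptions, not EyeSpot's actual implementation.

```python
# Minimal sketch of gaze-dependent text masking in the spirit of EyeSpot
# (illustration only; a character index stands in for real gaze tracking).

def masked_view(text: str, gaze_index: int, spot_radius: int = 12, mask_char: str = "#") -> str:
    """Reveal only the characters within `spot_radius` of the gaze position."""
    lo = max(0, gaze_index - spot_radius)
    hi = min(len(text), gaze_index + spot_radius + 1)
    return "".join(
        ch if lo <= i < hi or ch.isspace() else mask_char
        for i, ch in enumerate(text)
    )

if __name__ == "__main__":
    message = "Meeting moved to 14:00, salary review attached."
    print(masked_view(message, gaze_index=20))
```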

    A biologically motivated computational architecture inspired in the human immunological system to quantify abnormal behaviors to detect presence of intruders

    This article presents an intruder detection model based on an agent architecture that imitates the principal aspects of the human immunological system, such as the detection and elimination of antigens in the body. The model is based on the hypothesis that an intruder is a foreign element in the system, so mechanisms can exist that are able to detect its presence. For this goal we use intruder-recognizing agents (Lymphocytes-B), and macrophage agents (Lymphocytes-T) for alerting and reacting. The core of the system is the recognition of abnormal patterns of conduct by the Lymphocyte-B agents, which detect anomalies in user behaviour through a catalogue of metrics that allows us to quantify the user's conduct; statistical and data mining techniques are then applied to classify that conduct as intruder or normal behaviour. Our experiments suggest that both methods are complementary for this purpose, and that the approach is flexible and can be customized in practice to the needs of any particular system. 1st IFIP International Conference on Biologically Inspired Cooperative Computing (Biological Inspiration). Red de Universidades con Carreras en Informática (RedUNCI).
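    As a rough illustration of quantifying behaviour with a catalogue of metrics and classifying it statistically, the sketch below flags a session whose metrics deviate strongly from a per-user baseline; the metric names, baseline data, and z-score threshold are assumptions for demonstration, not the paper's agent-based design.

```python
# Minimal sketch of flagging abnormal user behaviour from a catalogue of metrics
# (illustration only; metrics, baseline values and threshold are assumptions).
from statistics import mean, stdev

# Baseline sessions for one user: (failed logins, commands per minute, MB downloaded)
BASELINE = [
    (0, 12.0, 5.2), (1, 10.5, 4.8), (0, 11.2, 6.0),
    (0, 13.1, 5.5), (1, 12.4, 4.9), (0, 11.8, 5.1),
]

def is_abnormal(session, baseline=BASELINE, threshold=3.0):
    """Flag a session in which any metric deviates more than `threshold` std devs from the baseline."""
    for i, value in enumerate(session):
        history = [s[i] for s in baseline]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(value - mu) / sigma > threshold:
            return True
    return False

if __name__ == "__main__":
    print(is_abnormal((0, 12.3, 5.4)))    # typical session -> False
    print(is_abnormal((9, 55.0, 480.0)))  # burst of failures and downloads -> True
```

    In the paper's terms, such a check would sit inside a Lymphocyte-B agent, with a Lymphocyte-T agent taking the alerting and reacting actions once a session is flagged.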

    Using Genetic Algorithms in Secured Business Intelligence Mobile Applications

    The paper assesses the use of genetic algorithms for training the neural networks used in secured Business Intelligence mobile applications. A comparison is made between the classic back-propagation method and genetic algorithm based training, and the design of both algorithms is presented. A comparative study is carried out to determine the better way of training neural networks in terms of time and memory usage. The results show that genetic algorithm based training offers better performance and memory usage than back-propagation and is suitable for implementation on mobile devices.
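    For concreteness, here is a minimal sketch of training a small neural network's weights with a genetic algorithm; the XOR task, the population size, and the operators are illustrative assumptions and do not reproduce the paper's Business Intelligence setting.

```python
# Minimal sketch of genetic-algorithm training of a tiny neural network
# (illustration only; task and hyperparameters are assumptions).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)              # XOR as a stand-in task

N_HIDDEN = 4
N_WEIGHTS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1   # W1, b1, W2, b2 flattened

def forward(w, x):
    W1 = w[:2 * N_HIDDEN].reshape(2, N_HIDDEN)
    b1 = w[2 * N_HIDDEN:3 * N_HIDDEN]
    W2 = w[3 * N_HIDDEN:4 * N_HIDDEN]
    b2 = w[-1]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # sigmoid output

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)        # higher is better

def evolve(pop_size=60, generations=200, mutation_scale=0.3):
    pop = rng.normal(size=(pop_size, N_WEIGHTS))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Tournament selection: keep the fitter of two random individuals per parent slot.
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(scores[a] > scores[b], a, b)]
        # Uniform crossover with a shuffled mate, then Gaussian mutation.
        mates = parents[rng.permutation(pop_size)]
        mask = rng.random((pop_size, N_WEIGHTS)) < 0.5
        children = np.where(mask, parents, mates)
        children += rng.normal(scale=mutation_scale, size=children.shape)
        # Elitism: carry the best individual of the previous generation unchanged.
        children[0] = pop[np.argmax(scores)]
        pop = children
    return pop[np.argmax([fitness(ind) for ind in pop])]

if __name__ == "__main__":
    w = evolve()
    print(np.round(forward(w, X), 2))  # should approach [0, 1, 1, 0]
```

    Unlike back-propagation, this search needs no gradients; the trade-off studied in the paper is whether the population-based search pays off in training time and memory on mobile hardware.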

    C-BRIG: A Network Architecture for Real-Time Information Exchange in Smart and Connected Campuses

    Education at all levels is one of the major drivers of sustainable development in any nation. The number of institutions of higher learning in Africa, and enrollment in them, has continued to increase significantly over the last few years. However, the needs of the knowledge society are yet to be met, owing to the challenges of overcrowding, infrastructure deficiencies, and inadequate access to international knowledge. In this paper, we develop a robust network architecture that enables real-time information exchange within and among connected campuses using interconnected local networks and mobile applications. The Campus Bridge Information Grid (C-BRIG) is a novel initiative that meets the information and social needs of staff, faculty, and students of tertiary institutions in developing nations. An expanded version of C-BRIG, named the i-Spider Information Grid (i-SPIG), will facilitate real-time information exchange among university campuses that are already on the C-BRIG platform. The proposed framework will help boost the productivity of every member of the university community and foster research collaborations and social interactions among institutions that are physically separated but digitally connected.

    An Experimental Study on the Role of Password Strength and Cognitive Load on Employee Productivity

    The proliferation of information systems (IS) over the past decades has increased the demand for system authentication. While the majority of system authentications are password-based, it is well documented that passwords have significant limitations. To address this issue, companies have been placing increased requirements on users to ensure their passwords are more complex and consequently stronger. In addition to meeting a certain complexity threshold, the password must also be changed on a regular basis. As the cognitive load on employees increases with complex, frequently changing passwords, they may have difficulty recalling them. The focus of this experimental study was therefore to determine the effects on users of raising the cognitive load of authentication, via increased password strength requirements, when accessing a system. The research uncovered the point at which raising the authentication strength of passwords becomes counterproductive through its impact on end-user performance. To investigate the effects of changing the cognitive load (via different password strengths) over time, a quasi-experiment was conducted. Data were collected on the number of failed operating system (OS) logon attempts, users' average logon times, average task completion times, and the number of requests for assistance (unlock & reset account), as well as on these relationships when controlled for computer experience, age, and gender. The quasi-experiment included two experimental groups (Groups A & B) and a control group (Group C), with a total of 72 participants across the three groups. Additionally, a pretest-posttest survey was administered before and after the quasi-experiment to assess whether users' perceptions of password use changed by participating in the study. The results indicated a significant difference between users' perceptions about passwords before and after the quasi-experiment. Multivariate Analysis of Variance (MANOVA) and Multivariate Analysis of Covariance (MANCOVA) tests were conducted, and the results revealed a significant difference in the number of failed logon attempts, average logon times, average task completion times, and number of requests for assistance between the three groups (two treatment groups & the control group). However, no significant differences were observed when controlling for computer experience, age, and gender. This research contributes to the body of knowledge and has implications for industry as well as for further study in the information systems domain: it gives insight into the point at which an increase in cognitive load (via different password strengths) becomes counterproductive to the organization by increasing the number of failed OS logon attempts, users' average logon times, average task completion times, and number of requests for assistance (unlock and reset account). Future studies may be conducted in industry, as results may differ from those obtained with college students.
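    As a sketch of how the multivariate tests in a study like this might be run, the example below applies a MANOVA over four outcome measures for three groups using statsmodels; the data are synthetic and the column names are assumptions, not the study's dataset.

```python
# Minimal sketch of a MANOVA over four outcome measures and three groups
# (illustration only; the data are synthetic and the column names assumed).
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
group = np.repeat(["A", "B", "C"], 24)   # two treatment groups + control, n = 72
df = pd.DataFrame({
    "group": group,
    "failed_logons": rng.poisson(np.repeat([5, 3, 2], 24)).astype(float),
    "logon_time": rng.normal(np.repeat([14.0, 11.0, 9.0], 24), 2.0),
    "task_time": rng.normal(np.repeat([320.0, 300.0, 290.0], 24), 25.0),
    "assist_requests": rng.poisson(np.repeat([2, 1, 1], 24)).astype(float),
})

# Wilks' lambda, Pillai's trace, etc. for the joint effect of group on all four outcomes.
fit = MANOVA.from_formula(
    "failed_logons + logon_time + task_time + assist_requests ~ group", data=df
)
print(fit.mv_test())
```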

    Face Processing & Frontal Face Verification

    In this report we first review important publications in the field of face recognition; geometric features, templates, Principal Component Analysis (PCA), pseudo-2D Hidden Markov Models, Elastic Graph Matching, as well as other approaches are covered; important issues, such as the effects of a change in illumination direction and the use of different face areas, are also covered. A new feature set (termed DCT-mod2) is then proposed; the feature set utilizes polynomial coefficients derived from 2D Discrete Cosine Transform (DCT) coefficients obtained from horizontally and vertically neighbouring blocks. Face authentication results on the VidTIMIT database suggest that the proposed feature set is superior (in terms of robustness to illumination changes and discrimination ability) to features extracted using four popular methods: PCA, PCA with histogram equalization pre-processing, 2D DCT and 2D Gabor wavelets; the results also suggest that histogram equalization pre-processing increases the error rate and offers no help against illumination changes. Moreover, the proposed feature set is over 80 times faster to compute than features based on 2D Gabor wavelets. Further experiments on the Weizmann database also show that the proposed approach is more robust than 2D Gabor wavelets and 2D DCT coefficients.
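    To illustrate the flavour of block-based DCT features with neighbour-derived deltas, here is a simplified sketch; the block size, coefficient selection, and delta scheme are assumptions for demonstration and do not reproduce the exact DCT-mod2 recipe.

```python
# Minimal sketch of block 2D-DCT features with neighbour-based delta coefficients
# (illustration only; coefficient ordering and counts are simplified assumptions).
import numpy as np
from scipy.fft import dctn

BLOCK = 8      # block size in pixels
N_COEFF = 15   # low-frequency DCT coefficients kept per block
N_DELTA = 3    # leading coefficients replaced by horizontal/vertical deltas

def block_dct(image):
    """Return a grid of per-block low-frequency DCT coefficient vectors."""
    h, w = image.shape
    rows, cols = h // BLOCK, w // BLOCK
    feats = np.zeros((rows, cols, N_COEFF))
    for r in range(rows):
        for c in range(cols):
            block = image[r * BLOCK:(r + 1) * BLOCK, c * BLOCK:(c + 1) * BLOCK]
            coeffs = dctn(block, norm="ortho")
            feats[r, c] = coeffs.flatten()[:N_COEFF]   # simplified: no zig-zag ordering
    return feats

def delta_features(feats):
    """Replace the first N_DELTA coefficients with deltas of neighbouring blocks."""
    rows, cols, _ = feats.shape
    out = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            base = feats[r, c, N_DELTA:]
            dh = feats[r, c + 1, :N_DELTA] - feats[r, c - 1, :N_DELTA]   # horizontal deltas
            dv = feats[r + 1, c, :N_DELTA] - feats[r - 1, c, :N_DELTA]   # vertical deltas
            out.append(np.concatenate([dh, dv, base]))
    return np.array(out)

if __name__ == "__main__":
    face = np.random.default_rng(0).random((64, 64))   # stand-in for a grey-scale face image
    print(delta_features(block_dct(face)).shape)       # (36, 18): 6x6 interior blocks, 18-dim vectors
```

    The intuition is that an additive illumination gradient mostly perturbs the low-order DCT coefficients of each block, so replacing them with differences between neighbouring blocks removes much of that variation while keeping local texture information.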