371,903 research outputs found

    Addressing The Human Factor In Information Systems Security

    In this paper the historically persistent mismatch between the information systems development and security paradigms is revisited. By considering human activity systems as a point of reference rather than a variable in information systems security, we investigate the necessity for a change in the information systems security agenda, accepting that a viable system would be more user-centric, accommodating and balancing human processes rather than expecting a one-sided change in the behaviour of the end user. This is done by drawing upon well-established information systems methodologies and research

    Guidelines to address the human factor in the South African National Research and Education Network beneficiary institutions

    Even if all the technical security solutions appropriate for an organisation’s network are implemented, for example, firewalls, antivirus programs and encryption, these solutions will serve no purpose if the human factor is neglected. The greatest challenge to network security is probably not the technological solutions that organisations invest in, but the human factor (non-technical solutions), which most organisations neglect. The human factor is often ignored even though humans are the most important resources of organisations: they perform all the physical tasks, configure and manage equipment, enter data, manage people and operate the systems and networks. The same people who manage and operate networks and systems have vulnerabilities; they are not perfect, and there will always be an element of error. In other words, humans make mistakes that could result in security vulnerabilities, and the exploitation of these vulnerabilities could in turn result in network security breaches. Human vulnerabilities are driven by many factors, including insufficient security education, training and awareness, a lack of security policies and procedures in the organisation, a limited attention span and negligence. Network security may thus be compromised by human vulnerability. In the context of this dissertation, both physical and technological controls should be implemented to ensure the security of the SANReN network. However, if the human factors are not adequately addressed, the network would remain vulnerable to risks posed by the human factor, which could threaten its security. Accordingly, the primary research objective of this study is to formulate guidelines that address the information security related human factors in the rolling out and continued management of the SANReN network. 
An analysis of existing policies and procedures governing the SANReN network was conducted, and it was determined that there are currently no guidelines addressing the human factor in the SANReN beneficiary institutions. Therefore, the aim of this study is to provide guidelines for addressing human factor threats in the SANReN beneficiary institutions

    How Attitude Toward the Behavior, Subjective Norm, and Perceived Behavioral Control Affects Information Security Behavior Intention

    The education sector is at high risk for information security (InfoSec) breaches and in need of improved security practices. Data protection cannot be achieved through technical means alone; the human behavior factor must also be addressed. Security education, training, and awareness (SETA) programs are an effective method of addressing human InfoSec behavior. Applying sociobehavioral theories to InfoSec research provides information to aid IT security program managers in developing improved SETA programs. The purpose of this correlational study was to examine, through the theoretical lens of the theory of planned behavior (TPB), how attitude toward the behavior (ATT), subjective norm (SN), and perceived behavioral control (PBC) affected the intention of computer end users in a K-12 environment to follow InfoSec policy. Data were collected from 165 K-12 school administrators in Northeast Georgia using an online survey instrument and analyzed with multiple linear regression and logistic regression. The TPB model accounted for 30.8% of the variance in intention to comply with InfoSec policies. SN was a significant predictor of intention in the model; ATT and PBC were not. These findings suggest that K-12 SETA programs can be improved by addressing normative beliefs of the individual. The application of improved SETA programs by IT security program managers that incorporate the findings and recommendations of this study may lead to greater information security in K-12 school systems. More secure school systems can contribute to social change through improved information protection as well as increased freedoms and privacy for employees, students, the organization, and the community
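    The regression analysis described above can be sketched in a few lines. This is a minimal illustration, not the study's analysis: the data are synthetic, with the SN effect deliberately dominant to echo the reported finding, and all coefficient values are assumptions.

```python
import numpy as np

# Synthetic stand-in for the survey data: ATT, SN, PBC scores for n = 165
# respondents (the study's sample size); all effect sizes are assumed.
rng = np.random.default_rng(0)
n = 165
att = rng.normal(4.0, 0.8, n)
sn = rng.normal(3.5, 0.9, n)
pbc = rng.normal(3.8, 0.7, n)
# Intention driven mainly by SN, mirroring the reported result.
intention = 0.1 * att + 0.6 * sn + 0.05 * pbc + rng.normal(0, 0.7, n)

# Ordinary least squares: design matrix with an intercept column.
X = np.column_stack([np.ones(n), att, sn, pbc])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)

# R^2: share of variance in intention explained by the three predictors.
residuals = intention - X @ beta
r_squared = 1 - np.sum(residuals**2) / np.sum((intention - intention.mean())**2)
print(f"coefficients (b0, ATT, SN, PBC): {np.round(beta, 3)}")
print(f"R^2: {r_squared:.3f}")
```

In the study itself the corresponding R^2 was 0.308, with only the SN coefficient reaching significance.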

    A Risk-Driven Investment Model for Analysing Human Factors in Information Security

    Information systems are of high importance in organisations because of the revolutionary industrial transformation undergone by digital and electronic platforms. A wide range of factors and issues forming the current business environments have created an unprecedented level of uncertainty and exposure to risks in all areas of strategic and operational activities in organisations, including IT management and information security. Subsequently, securing these systems, which keep assets safe, serves organisational objectives. The Information Security System (ISS) is a process that organisations can adopt to achieve information security goals. It has gained the attention of academics, businesses, governments, security and IT professionals in recent years. Like any other system, the ISS is highly dependent on human factors, as people are the primary concern of such systems and their roles should be taken into consideration. However, identifying, reasoning about and analysing human factors is a complex task. This is due to the fact that human factors are hugely subjective in nature and depend greatly on the specific organisational context. Every ISS development has unique demands both in terms of human factor specifications and organisational expectations. Developing an ISS often involves a notable proportion of risk due to the nature of technology and business demands; therefore, responding to these demands and technological challenges is critical. Furthermore, every business decision has inherent risk, and it is crucial to understand and make decisions based on the cost and potential value of that risk. Most research concentrates solely upon the role of human factors in information security without addressing interrelated issues such as risk, cost and return on investment in security. The central focus and novelty of this research is to develop a risk-driven investment model within the security system framework. 
This model supports the analysis and reasoning of human factors in the information system development process, contemplating risk, cost and return on investment on security controls. The model considers concepts from Requirements Engineering (RE), Secure Tropos and organisational context, and draws from the following theories and techniques: socio-technical theory, RE, SWOT analysis, the Delphi expert panel technique and Force Field Analysis (FFA). The findings underline that the roles of human factors in ISSs are not being fully recognised or embedded in organisations, and that there is a lack of formalisation of the main human factors in information security risk management processes. The study results confirm that a diverse level of understanding of human factors impacts security systems, and that security policies and guidelines do not reflect this reality. Moreover, information security has been perceived as being solely the domain of IT departments and not a collective responsibility, with the importance of the support of senior management ignored. A further key finding is the validation of all components of the Security Risk-Driven Investment Model (RIDIM). Model components were found to be iterative and interdependent, and the RIDIM model provides a significant opportunity to identify, assess and address these elements. Some elements of ISSs offered in this research can be used to evaluate the role of human factors in enterprise information security; the research therefore presents some aspects of computer science and information system features to introduce a solution for a business-oriented problem. The question of how to address the psychological dimensions of human factors related to information security would, however, be a rich topic of research on its own. 
The risk-driven investment model provides tangible methods and values of relevant variables that define the human factors, risk and return on investment that contribute to organisations’ information security systems. Such values and measures need to be interpreted in the context of organisational culture and the risk management model. Further research into the implementation of these measurements and evaluations for improving organisational risk management is required

    Model-based Evaluation: from Dependability Theory to Security

    How to quantify security is a classic question in the security community that until today has had no plausible answer. Unfortunately, current security evaluation models are often either quantitative but too specific (i.e., applicability is limited), or comprehensive (i.e., system-level) but qualitative. The importance of quantifying security cannot be overstated, but doing so is difficult and complex, for many reasons: the “physics” of the amount of security is ambiguous; the operational state is defined by two confronting parties; protecting and breaking systems is a cross-disciplinary mechanism; security is achieved through comparable security strength yet broken at the weakest link; and the human factor is unavoidable, among others. Thus, security engineers face great challenges in defending the principles of information security and privacy. This thesis addresses model-based system-level security quantification and argues that properly addressing the quantification problem of security first requires a paradigm shift in security modeling, addressing the problem at the abstraction level of what defines a computing system and failure model, before any system-level analysis can be established. Consequently, we present a candidate computing systems abstraction and failure model, then propose two failure-centric model-based quantification approaches, each including a bounding system model, performance measures, and evaluation techniques. The first approach addresses the problem considering the set of controls. To bound and build the logical network of a security system, we extend our original work on the Information Security Maturity Model (ISMM) with Reliability Block Diagrams (RBDs), state vectors, and structure functions from reliability engineering. We then present two different groups of evaluation methods. 
The first mainly addresses binary systems, by extending minimal path sets, minimal cut sets, and reliability analysis based on both random events and random variables. The second group addresses multi-state security systems with multiple performance measures, by extending Multi-state Systems (MSSs) representation and the Universal Generating Function (UGF) method. The second approach addresses the quantification problem when the two sets of a computing system, i.e., assets and controls, are considered. We adopt a graph-theoretic approach using Bayesian Networks (BNs) to build an asset-control graph as the candidate bounding system model, then demonstrate its application in a novel risk assessment method with various diagnosis and prediction inferences. This work, however, is multidisciplinary, involving foundations from many fields, including security engineering; maturity models; dependability theory, particularly reliability engineering; graph theory, particularly BNs; and probability and stochastic models
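The reliability-engineering view of controls described above can be illustrated with a toy structure function. This is a hypothetical sketch, not the thesis's model: the control layout and effectiveness probabilities below are assumed values, chosen only to show how series and parallel composition bound system-level protection.

```python
# Hypothetical RBD-style sketch: security controls as blocks whose joint
# "success" probability gives a system-level protection estimate.
# Assumed layout: two redundant firewalls in parallel, in series with an
# IDS, in series with two parallel authentication factors.

def parallel(*probs):
    """Probability that at least one redundant control holds."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def series(*probs):
    """Probability that every control in the chain holds."""
    r = 1.0
    for p in probs:
        r *= p
    return r

p_fw1, p_fw2 = 0.90, 0.85    # firewall effectiveness (assumed values)
p_ids = 0.95                 # intrusion detection (assumed)
p_pwd, p_token = 0.80, 0.92  # authentication factors (assumed)

system = series(parallel(p_fw1, p_fw2), p_ids, parallel(p_pwd, p_token))
print(f"system-level protection probability: {system:.4f}")  # ~0.9208
```

The "weakest link" observation from the abstract falls out directly: the series terms multiply, so the least effective non-redundant block (here the IDS chain position) caps the whole system.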

    Stakeholder involvement, motivation, responsibility, communication: How to design usable security in e-Science

    e-Science projects face a difficult challenge in providing access to valuable computational resources, data and software to large communities of distributed users. On the one hand, the raison d'etre of the projects is to encourage members of their research communities to use the resources provided. On the other hand, the threats to these resources from online attacks require robust and effective security to mitigate the risks faced. This raises two issues: ensuring that (1) the security mechanisms put in place are usable by the different users of the system, and (2) the security of the overall system satisfies the security needs of all its different stakeholders. A failure to address either of these issues can seriously jeopardise the success of e-Science projects. The aim of this paper is to firstly provide a detailed understanding of how these challenges can present themselves in practice in the development of e-Science applications. Secondly, this paper examines the steps that projects can undertake to ensure that security requirements are correctly identified, and security measures are usable by the intended research community. The research presented in this paper is based on four case studies of e-Science projects. Security design traditionally uses expert analysis of risks to the technology and deploys appropriate countermeasures to deal with them. However, these case studies highlight the importance of involving all stakeholders in the process of identifying security needs and designing secure and usable systems. For each case study, transcripts of the security analysis and design sessions were analysed to gain insight into the issues and factors that surround the design of usable security. The analysis concludes with a model explaining the relationships between the most important factors identified. 
This includes a detailed examination of the roles of responsibility, motivation and communication of stakeholders in the ongoing process of designing usable secure socio-technical systems such as e-Science. (C) 2007 Elsevier Ltd. All rights reserved

    Problems in increasing innovative sustainability of regional development

    The article provides a comparative analysis of innovative and technological development in Russia and other countries. The paper shows that the innovation sector of the Russian Federation lags behind most developed and developing countries: Russia has almost left the market of high technologies, the main expenditures on innovations are incurred in the sectors of low and medium technology industries, and the self-sufficiency of the Russian economy in a number of key types of manufacturing equipment is significantly below the threshold marks determined by national security requirements. The authors describe the differentiation of innovative development in the Russian regions. The study of Russian innovation space has revealed fairly intensive processes of science decay on the periphery, which causes serious problems for the spread of innovative impulses across the country. The article elaborates a methodology for comprehensive assessment of innovative security in the region and presents the relevant calculations for the regions of the Ural Federal District (UFD). It identifies the factors of innovative sustainability that are the most critical for these regions. The authors present a forecast and build long-term forecast trajectories for the level of innovative security in the UFD by using a modernized Hurst method. They define the main barriers to the innovative development of Russian regions. The article presents methodological approaches to substantiating the priority areas for building the innovative systems of regions by taking into account the characteristics of their science and manufacturing complexes. The authors propose a methodology to formally assess the priority of establishing in the region the centers of innovative activity aimed at supporting the competitiveness of industries with different levels of technology intensity. 
The paper presents the results of calculations regarding the priority of establishing centers of innovative activity aimed at supporting the development of industries with different levels of technology intensity, using the example of the UFD, one of the leading Russian regions in terms of innovation and production capacity. The article has been supported by the Russian Humanitarian Science Foundation, Project 14-02-00331 “Innovative and technology development of the region: assessment, forecasting and ways of progressing”
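The Hurst-method forecasting mentioned above rests on estimating the Hurst exponent of a time series. The article does not specify its "modernized" variant, so the following is only an illustrative sketch of the classic rescaled-range (R/S) estimate that such methods build on; the series and window sizes are synthetic assumptions.

```python
import numpy as np

def hurst_rs(series, window_sizes):
    """Estimate the Hurst exponent via rescaled-range (R/S) analysis:
    the slope of log(R/S) against log(window size)."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            chunk = series[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())  # cumulative deviation
            r = dev.max() - dev.min()              # range of deviations
            s = chunk.std()                        # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)  # slope estimates H
    return slope

# Synthetic uncorrelated series: H should come out near 0.5, while a
# trend-persistent indicator series would give H > 0.5.
rng = np.random.default_rng(42)
white_noise = rng.normal(size=4096)
h = hurst_rs(white_noise, [16, 32, 64, 128, 256])
print(f"estimated Hurst exponent: {h:.2f}")
```

In forecasting applications, H > 0.5 signals persistence (trends tend to continue), which is what makes the exponent useful for building long-term trajectories of an indicator such as a regional innovative-security level.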

    Frictionless Authentication Systems: Emerging Trends, Research Challenges and Opportunities

    Authentication and authorization are critical security layers to protect a wide range of online systems, services and content. However, the increased prevalence of wearable and mobile devices, the expectations of a frictionless experience and the diverse user environments will challenge the way users are authenticated. Consumers demand secure and privacy-aware access from any device, whenever and wherever they are, without any obstacles. This paper reviews emerging trends and challenges with frictionless authentication systems and identifies opportunities for further research related to the enrollment of users, the usability of authentication schemes, as well as security and privacy trade-offs of mobile and wearable continuous authentication systems. Comment: published at the 11th International Conference on Emerging Security Information, Systems and Technologies (SECURWARE 2017)