Cognitive load and subjective time pressure: How contextual factors impact the quality of cyber-security decision making
The quality of decision-making goes beyond simply considering outcomes: it is also determined by the suitability of the decision-making framework for the given circumstances, the probability of the possible outcomes, and the quality of the information being used. With contextual pressures such as cognitive load and time pressure threatening decision-making in cyber-security, how do people know whether they are making good decisions? This thesis aimed to examine the impact of cognitive load on the quality of cyber-security decision-making, and how such research could inform the development of tools and user-centric interventions to reduce risky cyber-security decisions. Spanning theoretical cognitive science approaches and applied cyberpsychology research, 10 novel studies were developed, supported by systematic literature reviewing, with data collected from over 2,000 participants. This work found that increases in task difficulty could increase insider threat when people are given the opportunity to act dishonestly, but that this risk could be reduced by increasing awareness of time pressure. Sources of subjective time pressure, such as time-urgency cues in emails, were found to increase susceptibility to cyber incidents, although the risk posed by such factors varies depending upon perceptions of risk probability and outcomes. Whilst measures of individual differences in subjective time pressure showed limited ability to predict safe cyber-security practices, other individual-difference predictors explained up to 43.5% of the variance in cyber-security behaviour. By indicating when and where risky decision-making results in maladaptive behaviour, this knowledge culminated in the creation of a new phishing-susceptibility tool, based upon Expected Utility Theory, which accurately explained 68.5% of behaviour. By highlighting risks in the overarching decision-making process, metacognitive interventions could be targeted to support high-quality cyber-security decision-making.
A new hope: human-centric cybersecurity research embedded within organizations
Humans are, and have long been, the weakest link in the cybersecurity chain (e.g., [1, 2, 3]). Not all systems are adequately protected, and even for those that are, individuals can still fall prey to cyber-attack attempts (e.g., phishing, malware, ransomware) that occasionally break through, and/or engage in other cyber-risky behaviors (e.g., not adequately securing devices) that put even the most secure systems at risk. Such susceptibility can be due to one or more factors, including individual differences, environmental factors, maladaptive behaviors, and influence techniques. This is particularly concerning at an organizational level, where the costs of a successful cyber-attack can be colossal (e.g., financial, safety, reputational). Cyber criminals intent on infiltrating organization accounts/networks to inflict damage, steal data, and/or make financial gains will continue to exploit these human vulnerabilities unless we act fast and do something about them. Is there any hope for human resistance? We argue that technological solutions alone, rooted in software and hardware, will not win this battle. The 'human' element of any digital system is as important to its enduring security posture as the technological elements. More research is needed to better understand human cybersecurity vulnerabilities within organizations. This will inform the development of methods (including those rooted in HCI) to decrease cyber-risky and enhance cyber-safe decisions and behaviors: to fight back, showing how humans, with the right support, can be the best line of cybersecurity defense. In this paper, we assert that in order to achieve the greatest positive impact from such research efforts, more human-centric cybersecurity research needs to be conducted with expert teams embedded within industrial organizations driving the research forward. This cannot be an issue addressed through laboratory-based research alone.
Industrial organizations need to move towards more holistic – human- and systems-centric – cybersecurity research and solutions that will create safer and more secure employees and organizations, working in harmony to better defend against cyber-attack attempts. One such example is the Airbus Accelerator in Human-Centric Cyber Security (H2CS), which is discussed as a case study within the current paper.