
    Cyber Situational Awareness and Cyber Curiosity Taxonomy for Understanding Susceptibility of Social Engineering Attacks in the Maritime Industry

    The maritime information system (IS) user has to be prepared to deal with a potential safety and environmental risk that can be caused by an unanticipated failure of a cyber system used onboard a vessel. A hacker leveraging a maritime IS user’s Cyber Curiosity can lead to a successful cyber-attack by enticing the user to click on a malicious Web link sent through an email and/or posted on a social media website. At worst, a successful cyber-attack can compromise the integrity of a ship’s cyber systems, potentially causing disruption or human harm. A lack of awareness of social engineering attacks can increase any organization’s susceptibility to a successful cyber-attack. The combination of limited cyber situational awareness (SA) of the social engineering attacks used against IS users and the user’s natural curiosity creates significant threats to organizations. The theoretical framework for this research study consists of four interrelated constructs and theories: social engineering, Cyber Curiosity, Cyber Situational Awareness, and activity theory. This study focused its investigation on two constructs, Cyber Situational Awareness and Cyber Curiosity, which reflect the user behavior and decision-making associated with being a victim of a social engineering cyber-attack. The study designed an interactive Web-based experiment to measure an IS user’s Cyber Situational Awareness and Cyber Curiosity in order to further understand the relationship between these two constructs in the context of cyber risk to organizations. Quantitative and qualitative analyses of data from the experiment, comprising 174 IS users (120 maritime & 54 shoreside), were used to empirically assess whether there are significant differences in maritime IS users’ levels of Cyber SA, Cyber Curiosity, and position in the developed Cyber Risk taxonomy when controlled for demographic indicators.
To ensure the validity and reliability of the proposed measures and the experimental procedures, a panel of nine subject matter experts (SMEs) reviewed the proposed measures/scores of Cyber SA and Cyber Curiosity. The SMEs’ responses were incorporated into the proposed measures and scores, as well as into the Web-based experiment. Furthermore, a pilot test of the Web-based experiment was conducted to assess the measures of Cyber SA and Cyber Curiosity. This research validated that the developed Cyber Risk taxonomy can be used to assess the susceptibility of an IS user to becoming a victim of a social engineering attack. Identifying how both Cyber SA and Cyber Curiosity can help predict susceptibility to a social engineering attack can benefit the IS research community. In addition, identifying factors that improve Cyber SA can reduce the likelihood of an IS user becoming a victim of a cyber-attack, thereby reducing risks to organizations. Discussions and implications for future research opportunities are provided to aid the maritime cybersecurity research and practice communities.
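A two-dimensional taxonomy over Cyber SA and Cyber Curiosity scores might be sketched as below. The score ranges, cutoffs, and quadrant labels here are illustrative assumptions, not the instrument or taxonomy used in the study:

```python
# Hypothetical sketch: placing a user in a 2x2 Cyber Risk taxonomy
# from measured Cyber SA and Cyber Curiosity scores (0-100 assumed).
def classify_cyber_risk(sa_score: float, curiosity_score: float,
                        sa_cutoff: float = 50.0,
                        curiosity_cutoff: float = 50.0) -> str:
    """Map two scores onto a quadrant of an assumed risk taxonomy."""
    high_sa = sa_score >= sa_cutoff
    high_curiosity = curiosity_score >= curiosity_cutoff
    if high_sa and not high_curiosity:
        return "lowest risk"       # aware and not easily enticed
    if high_sa and high_curiosity:
        return "moderate risk"     # aware but prone to explore links
    if not high_sa and not high_curiosity:
        return "elevated risk"     # unaware, though less likely to click
    return "highest risk"          # unaware and curious

print(classify_cyber_risk(72.0, 30.0))  # -> lowest risk
```

Demographic controls, as in the study, would then compare how users of different groups distribute across the four quadrants.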

    Federated Embedded Systems – a review of the literature in related fields

    This report is concerned with the vision of smart interconnected objects, a vision that has attracted much attention lately. The focus is on embedded, interconnected, open, and heterogeneous control systems, formally referred to as Federated Embedded Systems (FES). To place FES in context, a review of some related research directions is presented, covering concepts such as systems of systems, cyber-physical systems, ubiquitous computing, the internet of things, and multi-agent systems. Interestingly, the reviewed fields seem to overlap with each other in an increasing number of ways.

    Designing Systems for Risk Based Decision Making

    A common challenge security analysts face is making decisions about risk under uncertain conditions, especially when risk management procedures are inapplicable. Though analysts may rely on experience and intuition, these have sometimes proven insufficient, and risk and uncertainty may be magnified by personal, system, and environmental factors. Risk decision making is an integral part of security analysis, a role facilitated by system automation and design. However, designing for usable security has not sufficiently considered the implications of design for risk decision making, particularly in relation to security analysts. This research aims to address that gap by developing design recommendations for risk-based decision making.

    Reducing human error in cyber security using the Human Factors Analysis Classification System (HFACS).

    For several decades, researchers have stated that human error is a significant cause of information security breaches, yet it remains a major issue today. Quantifying the effects of security incidents is often difficult because studies frequently understate or overstate the costs involved. Human error has always been a cause of failure in many industries and professions, yet it is often overlooked or dismissed as inevitable. The problem is further exacerbated by the fact that the systems set up to keep networks secure are themselves managed by humans. Security breaches related to human error have several causes, such as poor situational awareness, lack of training, boredom, and lack of risk perception. Part of the problem is that people who usually make sound decisions offline make poor decisions online because of incorrect assumptions about how computer transactions operate. Human error can be unintentional, arising either from the incorrect execution of a plan (slips/lapses) or from correctly following an inadequate plan (mistakes). Whether intentional or unintentional, errors can lead to vulnerabilities and security breaches. Regardless, humans remain the weak link in interfacing with the machines they operate and in keeping information secure. These errors can have detrimental effects both physically and socially, and hackers exploit these weaknesses to gain unauthorized entry into computer systems. Security errors and violations, however, are not limited to users; administrators of systems are also at fault. Without an adequate level of awareness, many security techniques are likely to be misused or misinterpreted by users, rendering otherwise adequate security mechanisms useless. Corporations also contribute to information security loss because of the reactive management approaches they take to security incidents.
Undependable user interfaces can also play a role in security breaches due to flaws in their design. System design and human interaction both influence how often human error occurs, particularly when there is a mismatch between the system design and the person operating it. One major problem is that systems are designed for simplicity, which can lead an otherwise careful person to make bad security decisions. Human error is a complex and elusive security problem that has generally defied the creation of a structured and standardized classification scheme. While human error may never be completely eliminated from the tasks users perform, due to poor situational awareness or a lack of adequate training, the first step toward improving on the status quo is to establish a unified scheme for classifying such security errors. With this background, I intend to develop a tool to gather data and apply the Human Factors Analysis and Classification System (HFACS), a framework originally developed for aviation accidents, to determine whether latent organizational conditions led to the error. HFACS analyzes historical data to find common trends that identify areas an organization needs to address, with the goal of reducing the frequency of these errors.
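The trend analysis described above might look like the following minimal sketch, which tallies incidents by HFACS level. The four levels come from the HFACS framework itself; the incident records and field names are invented for illustration:

```python
from collections import Counter

# The four tiers of the HFACS framework (Shappell & Wiegmann).
HFACS_LEVELS = (
    "Unsafe Acts",
    "Preconditions for Unsafe Acts",
    "Unsafe Supervision",
    "Organizational Influences",
)

def tally_incidents(incidents):
    """Count how often each HFACS level appears across incident records."""
    counts = Counter()
    for incident in incidents:
        level = incident["hfacs_level"]
        if level not in HFACS_LEVELS:
            raise ValueError(f"unknown HFACS level: {level}")
        counts[level] += 1
    return counts

incidents = [
    {"id": 1, "hfacs_level": "Unsafe Acts"},                 # e.g. a slip/lapse
    {"id": 2, "hfacs_level": "Unsafe Acts"},                 # e.g. a mistake
    {"id": 3, "hfacs_level": "Organizational Influences"},   # e.g. reactive policy
]
print(tally_incidents(incidents).most_common(1))  # -> [('Unsafe Acts', 2)]
```

The most frequent levels point at where an organization should intervene first, which is the core idea behind applying HFACS to historical data.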

    Real-Time Cybersecurity Situation Awareness Through a User-Centered Network Security Visualization

    One of the most common problems among cybersecurity defenders is a lack of network visibility, leading to decreased situation awareness and overlooked indicators of compromise. This presents an opportunity for the use of information visualization in the field of cybersecurity. Prior research has applied visual analytics to computer network defense, which has led to the development of visualizations for a variety of use cases in the security field. However, many of these visualizations do not consider user needs and requirements, or they require predetermined user knowledge about the network to create the visuals, leading to low adoption in practice. With this in mind, I took a bottom-up, user-centered approach, using interviews to gather user-desired components for the design, development, and evaluation of a network security visualization tool called Riverside. I designed a visualization that attempts to balance a comprehensive view of an environment with details-on-demand. Riverside’s key contribution is a data-driven, dynamic view of a network’s security state over time, meant to supplement an analyst’s real-time situation awareness of their network. Riverside automatically partitions internal from external network components to visualize potential attack vectors across the entire environment. This research supports the need for further incorporation of users into the cybersecurity visualization development lifecycle. I call attention to key requirements for creating effective cybersecurity visualizations and to specific use cases where visualizations can augment operational cybersecurity capabilities.
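The internal/external partitioning step could be approximated as below. Treating "internal" as any RFC 1918 private address is a simplifying assumption for illustration; the actual tool may infer this from the observed network data instead:

```python
import ipaddress

def partition_hosts(addresses):
    """Split observed host addresses into internal and external sets.

    Assumption: "internal" means a private (RFC 1918) address.
    """
    internal, external = set(), set()
    for addr in addresses:
        ip = ipaddress.ip_address(addr)
        (internal if ip.is_private else external).add(addr)
    return internal, external

internal, external = partition_hosts(
    ["10.0.0.5", "192.168.1.20", "8.8.8.8", "93.184.216.34"]
)
print(sorted(internal))  # -> ['10.0.0.5', '192.168.1.20']
```

Once the two sets are known, flows crossing the boundary between them are the candidate attack vectors a visualization would surface.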

    Cyber Threat Observatory: Design and Evaluation of an Interactive Dashboard for Computer Emergency Response Teams

    Computer emergency response teams (CERTs) in the public sector provide preventive and reactive cybersecurity services for authorities, citizens, and enterprises. However, their tasks of monitoring, analyzing, and communicating threats to establish cyber situational awareness are becoming more complex due to the increasing volume of information disseminated through public channels. Besides the time-consuming data collection for incident handling and daily reporting, CERTs are often confronted with irrelevant, redundant, or non-credible information, exacerbating the time-critical prevention of and response to cyber threats. Thus, this design science research paper presents the user-centered design and evaluation of the Cyber Threat Observatory, an automated, cross-platform, real-time cybersecurity dashboard. Based on expert scenario-based walkthroughs and semi-structured interviews (N=12), it discusses six design implications: customizability and filtering, data source modularity, cross-platform interrelations, content assessment algorithms, integration with existing software, and export and communication capabilities.

    Evaluating Resilience of Cyber-Physical-Social Systems

    Nowadays, protecting the network is not the only security concern: websites and servers are becoming more popular as targets due to the ease with which they can be accessed compared to communication networks. Another threat in cyber-physical-social systems with human interaction is that they can be attacked and manipulated not only through technical hacking of networks, but also by manipulating people and stealing users’ credentials. Therefore, systems should be evaluated beyond cyber security, which means measuring their resilience as evidence that a system works properly under cyber-attacks or incidents. In this sense, cyber resilience is increasingly discussed and described as the capacity of a system to maintain state awareness for detecting cyber-attacks. All the tasks for making a system resilient should proactively maintain a safe level of operational normalcy through rapid system reconfiguration to detect attacks that would impact system performance. In this work, we broadly studied the new paradigm of cyber-physical-social systems and proposed a uniform definition of it. To overcome the complexity of evaluating cyber resilience, especially in these inhomogeneous systems, we proposed a framework that applies Attack Tree refinements and Hierarchical Timed Coloured Petri Nets to model intruder and defender behaviors and to evaluate the impact of each action on the behavior and performance of the system.
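The Attack Tree side of such a framework can be illustrated with a minimal evaluator: leaves carry the success probability of an atomic attack step, and AND/OR gates combine children assuming independent steps. The tree below (credential theft versus direct exploitation) and its probabilities are invented for illustration:

```python
from math import prod

def success_prob(node) -> float:
    """Recursively evaluate an attack tree of ('AND'|'OR', children) tuples.

    Leaves are floats giving the probability an atomic step succeeds;
    independence of steps is assumed.
    """
    if isinstance(node, float):
        return node                                     # leaf probability
    gate, children = node
    probs = [success_prob(c) for c in children]
    if gate == "AND":                                   # all steps must succeed
        return prod(probs)
    if gate == "OR":                                    # at least one succeeds
        return 1.0 - prod(1.0 - p for p in probs)
    raise ValueError(f"unknown gate: {gate}")

compromise = ("OR", [
    ("AND", [0.6, 0.5]),   # phish a user, then reuse the stolen credentials
    0.1,                   # exploit the server directly
])
print(round(success_prob(compromise), 3))  # -> 0.37
```

Refining a leaf into a subtree, or attaching timing and defender moves via the Petri net model, extends this basic evaluation without changing its structure.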

    System Design Considerations for Risk Perception

    The perception of risk is a driver for security analysts’ decision making. However, security analysts may hold conflicting views of a risk based on personal, system, and environmental factors. This difference in perception and opinion may impede effective decision making. In this paper, we propose a model that highlights the areas contributing to the perception of risk in a socio-technical environment and their implications for system design. We validate the model through a hypothetical scenario grounded in both the literature and empirical data.