
    e-Government Technical Security Controls Taxonomy for Information Assurance Contractors - A Relational Approach

    When project managers consider risks that may affect a project, they rarely consider risks associated with the use of information systems. The Federal Information Security Management Act (FISMA) of 2002 recognizes the importance of information security to the economic and national security of the United States. The requirements of FISMA are addressed using NIST Special Publication 800-53 Rev 3, which has improved the way organizations practice information assurance. NIST SP 800-53 Rev 3 takes a hierarchical approach to information assurance, which has resulted in the duplication, and subsequent withdrawal and merging, of fifteen security controls. In addition, the security controls are not associated with the appropriate information systems. The current security assessment model often wastes resources, since controls that are not applicable to an information system still have to be addressed. This research developed and tested the value of using an information system breakdown structure (ISBS) model for identifying project information system resources. It also assessed the value of using an e-Government Relational Technical Security Controls Model for mapping the ISBS to the applicable relational technical security controls. A questionnaire containing ninety-five items was developed and emailed to twenty-four information security contractors, of whom twenty-two returned completed questionnaires. The questionnaire assessed the value of using the ISBS and the relationships of the e-Government Relational Technical Security Controls model. A literature review and the opinions of industry experts were used to triangulate the research results and establish their validity. Cronbach's Alpha coefficients for the four sections of the questionnaire established its reliability. The results indicated that the ISBS model is an invaluable, customizable, living tool that should be used for identifying information system resources on projects. It can also be used for assigning responsibility for the different information systems and for security classification. The study also indicated that using the e-Government Relational Technical Security Controls provides a relational and fully integrated approach to information assurance while reducing the likelihood of duplicating security controls. This study could help project managers identify and mitigate risks associated with the use of information systems on projects.
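
    A minimal sketch of the relational idea described above: resources identified by an ISBS are mapped only to the control families that actually apply to them, so inapplicable controls never enter the assessment. The resource types, classifications, and control assignments below are illustrative assumptions, not the thesis's actual model; only the control family identifiers follow NIST SP 800-53 naming.

```python
# Hypothetical ISBS: project information system resources and metadata.
ISBS = {
    "web-portal":  {"type": "application", "classification": "moderate"},
    "citizen-db":  {"type": "database",    "classification": "high"},
    "edge-router": {"type": "network",     "classification": "moderate"},
}

# Relational mapping from resource type to applicable technical controls.
# Identifiers use NIST SP 800-53 conventions; the assignment is invented
# for illustration, not taken from the research.
CONTROLS_BY_TYPE = {
    "application": ["AC-3", "SI-10", "AU-2"],
    "database":    ["AC-3", "SC-28", "AU-2"],
    "network":     ["SC-7", "AC-4", "AU-2"],
}

def applicable_controls(isbs):
    """Return only the controls relevant to each resource in the ISBS."""
    return {name: CONTROLS_BY_TYPE[meta["type"]] for name, meta in isbs.items()}

for resource, controls in applicable_controls(ISBS).items():
    print(resource, "->", controls)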

    A Federated Architecture for Heuristics Packet Filtering in Cloud Networks

    The rapid expansion in networking has provided tremendous opportunities to access an unparalleled amount of information. Everyone connects to a network to gain access to and share this information. However, when someone connects to a public network, their private network and information become vulnerable to hackers and all kinds of security threats. Today, all networks need to be secured, and one of the best security policies is firewall implementation. Firewalls can be hardware or cloud based. Hardware-based firewalls offer the advantage of faster response time, whereas cloud-based firewalls are more flexible. In practice, the best form of firewall protection is a combination of the two. In this thesis, we implemented and configured a federated architecture using both firewalls: the Cisco ASA 5510 and the Vyatta VC6.6 cloud-based firewall. Performance evaluation of both firewalls was conducted and analyzed under two scenarios: a spike test and an endurance test. Throughputs were also compared and analyzed statistically. Different forms of packets were sent using JMeter, a specialized load-testing tool. After collecting and thoroughly analyzing the results, the thesis concludes by presenting a heuristic method by which packet filtering falls back to the cloud-based firewall when the hardware-based firewall becomes stressed and overloaded, thus allowing efficient packet flow and optimized performance. The results of this thesis can be used by information security analysts, students, organizations, and IT experts as a guide to implementing a secure network architecture that protects digital information.
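
    An illustrative sketch of the fallback heuristic described above: when the hardware firewall's load crosses a threshold, new flows are steered to the cloud firewall instead. The metric thresholds and names are assumptions for illustration, not values measured in the thesis.

```python
HARDWARE_FW = "asa-5510"      # faster response time, fixed capacity
CLOUD_FW = "vyatta-vc66"      # more flexible, used as overflow

CPU_THRESHOLD = 0.85          # assumed saturation point of the appliance
CONN_THRESHOLD = 9000         # assumed concurrent-connection ceiling

def choose_filter(cpu_load: float, active_connections: int) -> str:
    """Route packet filtering to the cloud firewall when the hardware
    appliance is stressed; otherwise prefer its faster response time."""
    if cpu_load >= CPU_THRESHOLD or active_connections >= CONN_THRESHOLD:
        return CLOUD_FW
    return HARDWARE_FW

# Under light load the hardware firewall handles filtering;
# under stress, filtering falls back to the cloud firewall.
assert choose_filter(0.40, 2000) == HARDWARE_FW
assert choose_filter(0.92, 2000) == CLOUD_FW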

    The global vulnerability discovery and disclosure system: a thematic system dynamics approach

    Vulnerabilities within software are the fundamental issue that provides both the means and the opportunity for malicious threat actors to compromise critical IT systems (Younis et al., 2016). Consequently, the reduction of vulnerabilities within software should be of paramount importance; however, it is argued that software development practitioners have historically failed to reduce the risks associated with software vulnerabilities. This failure is illustrated by the growth of software vulnerabilities over the past 20 years. This increase, which is both unprecedented and unwelcome, has led to an acknowledgement that novel and radical approaches are needed both to understand the vulnerability discovery and disclosure system (VDDS) and to mitigate software vulnerability centred risk (Bradbury, 2015; Marconato et al., 2012). The findings from this research show that whilst technological mitigations are vital, the social and economic features of the VDDS are of critical importance. For example, hitherto unknown systemic themes identified by this research are key and include: Perception of Punishment; Vendor Interactions; Disclosure Stance; Ethical Considerations; Economic Factors for Discovery and Disclosure; and Emergence of New Vulnerability Markets. Each theme uniquely impacts the system and, ultimately, the scale of vulnerability-based risks. Within the research, each theme within the VDDS is represented by several key variables which interact and shape the system, specifically: Vendor Sentiment; Vulnerability Removal Rate; Time to Fix; Market Share; Participants within the VDDS; Full and Coordinated Disclosure Ratio; and Participant Activity. Each variable is quantified and explored, defining both the parameter space and its progression over time. These variables are utilised within a system dynamics model to simulate differing policy strategies and assess the impact of those policies upon the VDDS. Three simulated vulnerability disclosure futures are hypothesised and presented, characterised as depletion, steady, and exponential, with each scenario dependent upon the parameter space of the key variables.
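
    A minimal system dynamics sketch, not the thesis's model: a single stock of undisclosed vulnerabilities driven by a discovery inflow and a removal outflow, integrated with Euler steps. The parameter values are placeholders; varying them reproduces the three qualitative futures named above (depletion when removal dominates, steady state, or exponential growth when discovery dominates).

```python
def simulate(years=20, dt=0.25, discovery_rate=120.0, removal_rate=0.3):
    """Euler integration of a one-stock vulnerability model."""
    stock = 500.0                        # initial undisclosed vulnerabilities
    history = []
    for _ in range(int(years / dt)):
        inflow = discovery_rate          # vulnerabilities discovered per year
        outflow = removal_rate * stock   # fraction removed (fixed) per year
        stock += (inflow - outflow) * dt
        history.append(stock)
    return history

# With these placeholder parameters the stock settles toward
# discovery_rate / removal_rate, i.e. a "steady" future.
print(round(simulate()[-1], 1))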

    Automated Security Analysis of Virtualized Infrastructures

    Virtualization enables the increased efficiency and elasticity of modern IT infrastructures, including Infrastructure as a Service. However, the operational complexity of virtualized infrastructures is high, due to their dynamics, multi-tenancy, and size. Misconfigurations and insider attacks carry significant operational and security risks, such as breaches in tenant isolation, which put both the infrastructure provider and tenants at risk. In this thesis we study the question of whether it is possible to model and analyze complex, scalable, and dynamic virtualized infrastructures with regard to user-defined security and operational policies in an automated way. We establish a new practical and automated security analysis framework for virtualized infrastructures. First, we propose a novel tool that automatically extracts the configuration of heterogeneous environments and builds a unified graph model of the configuration and topology. The tool is further extended with a monitoring component and a set of algorithms that translate system changes into graph model changes. The benefits of maintaining such a dynamic model are a reduction in the time needed to populate the model and the closing of the window for transient security violations. Our analysis is the first to lift static information flow analysis to the entire virtualized infrastructure, in order to detect isolation failures between tenants on all resources. The analysis is configurable with customized rules that reflect the different trust assumptions of the users. We apply and evaluate our analysis system on the production infrastructure of a global financial institution. For the information flow analysis of dynamic infrastructures, we propose the concept of dynamic rule-based information flow graphs and develop a set of algorithms that maintain such graphs for dynamic system models. We generalize the analysis of isolation properties and establish a new generic analysis platform for virtualized infrastructures that allows a diverse set of security and operational policies to be expressed in a formal language. The policy requirements are studied in a case study with a cloud service provider. We are the first to employ a variety of theorem provers and model checkers to verify the state of a virtualized infrastructure against its policies. Additionally, we analyze dynamic behavior such as VM migrations. For the analysis of dynamic infrastructures we pursue both a reactive and a proactive approach. A reactive analysis system is developed that reduces the time between a system change and the analysis result. The system monitors the infrastructure for changes and employs dynamic information flow graphs to verify, for instance, tenant isolation. For the proactive analysis we propose a new model, the Operations Transition Model, which captures the changes of operations in the virtualized infrastructure as graph transformations. We build a novel analysis system on this model that performs automated run-time analysis of operations and also offers change planning. The Operations Transition Model forms the basis for further research in model checking of virtualized infrastructures.
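
    A hedged sketch of the core isolation check on a graph model: build a directed graph of information flow edges between infrastructure resources and flag a failure whenever a resource of one tenant can reach a resource of another. The topology, node names, and tenant labels are invented for illustration; the thesis's actual model and rule engine are far richer.

```python
from collections import deque

# Toy unified graph model: directed information flow edges.
edges = {
    "vm-tenantA": ["vswitch-1"],
    "vswitch-1":  ["vm-tenantB"],   # misconfiguration: shared vswitch
    "vm-tenantB": [],
}
tenant_of = {"vm-tenantA": "A", "vm-tenantB": "B"}

def reachable(start):
    """Breadth-first traversal of the information flow graph."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in edges.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def isolation_failures():
    """Report pairs where one tenant's resource can reach another's."""
    failures = []
    for node, tenant in tenant_of.items():
        for other in reachable(node):
            if tenant_of.get(other, tenant) != tenant:
                failures.append((node, other))
    return failures

print(isolation_failures())    # [('vm-tenantA', 'vm-tenantB')]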

    Information Security for BYOD in ABB

    BYOD (Bring Your Own Device) is a future policy in companies that is set to replace the old UWYT (Use What You Are Told) way of thinking. This new policy raises many issues, both security-wise and policy-wise, that need to be solved before it can be fully implemented in larger companies. Thanks to the large interest in the subject, many companies have already come up with solutions to these issues and started to use a BYOD policy. The main target of this Master's Thesis, "Information Security for BYOD in ABB", was to create a working information security system for future BYOD policy use in ABB. For the thesis we used six test users with different portable devices and statuses, and tried to create a policy that fits well with their jobs and fulfills the security requirements of ABB. We also briefly discuss cloud computing and why it should be included in the final solution for the BYOD security plan.

    Protecting Against Compromised Controllers in Software Defined Networks Using an Efficient Byzantine Fault Preventing Control Plane

    Software Defined Networking (SDN) is a modern approach to computer networks that involves the separation of the control and forwarding planes. Using this approach, control is achieved through the use of an SDN controller, which enables the delivery of far more intelligent, efficient and resilient networks. Whilst the use of an SDN controller offers many potential benefits, the centralisation of network control introduces a single point of failure: if the SDN controller develops a fault, or is under attack, the network can be severely disrupted. From a security perspective, the SDN controller represents a tempting target for an attacker: if the attacker can gain control over the controller, they can act as a malicious insider, gaining control over the operation of the whole network. The actions of a compromised SDN controller can be seen as an occurrence of Byzantine (or arbitrary) faults. By introducing a Byzantine fault tolerant (BFT) element to the control plane, insider attacks can be prevented. This thesis explores the impact of a compromised SDN controller and provides a defence called SDBFT: Software Defined Byzantine Fault prevenTing control. I reduce fault tolerance to fault prevention, that is, fault detection with recovery. SDBFT prevents a compromised SDN controller from performing malicious actions in a network. Within this thesis, I first analyse and demonstrate a number of attacks that can be performed from a compromised controller, including an exploration of the impact of such attacks on a real-world scenario involving Industrial Control Systems (ICS). I then propose, implement and evaluate the SDBFT system, using novel algorithms that are able to protect against faulty controllers. I demonstrate through extensive experimentation that the SDBFT system far outperforms approaches built upon a traditional BFT model, and represents only a modest reduction in controller performance compared to the traditional SDN architecture.
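
    An illustrative sketch of the fault-preventing intuition, not the SDBFT algorithms themselves: a flow rule is only installed if a majority of controller replicas independently propose the same rule, so a single compromised controller cannot push malicious state into the switches. The rule strings and replica setup are assumptions for illustration.

```python
from collections import Counter

def majority_rule(proposals):
    """proposals: one proposed flow rule per controller replica.
    Returns the agreed rule, or None if no majority exists
    (a suspected fault, triggering detection and recovery)."""
    rule, votes = Counter(proposals).most_common(1)[0]
    return rule if votes > len(proposals) // 2 else None

honest = "in_port=1,actions=output:2"
malicious = "in_port=1,actions=output:CONTROLLER"  # traffic diversion attempt

print(majority_rule([honest, honest, malicious]))  # honest rule wins
print(majority_rule([honest, malicious, "drop"]))  # no majority -> None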

    Information technology social engineering: an academic definition and study of social engineering - analyzing the human firewall

    People have knowledge and people control knowledge; whether through a computer, papers, or memory, people are ultimately in charge, and people are a hole in security. In order to fully understand security, people must be understood, specifically people's relationship with information technology networks. The most common attack against people on information technology networks is called 'social engineering.' When social engineering is explored, many psychological concepts arise, including Neuro-Linguistic Programming and even historical parallels with the Nazi government. Exploring these ideas through the lens of information technology networks helps define and organize the problem of social engineering. If the problem of social engineering across information technology networks can be understood, solutions can eventually exist which increase the security of knowledge and eliminate the hole people create.

    A General Methodology to Optimize and Benchmark Edge Devices

    The explosion of Internet of Things (IoT), embedded and "smart" devices has also seen the addition of "general purpose" single-board computers, also referred to as "edge devices." Determining whether one of these generic devices meets the needs of a given new task, however, can be challenging. Software written generically to be portable or plug-and-play may be too bloated to work properly without significant modification, due to the much tighter hardware resources. Previous work in this area has focused on micro- or chip-level benchmarking, which is mainly useful for chip designers or low-level system integrators. A higher, macro-level method is needed not only to observe the behavior of these devices under load but also to ensure they are appropriately configured for the new task, especially as they begin to be integrated on platforms with a higher cost of failure, such as self-driving cars or drones. In this research we propose a macro-level methodology that iteratively benchmarks and optimizes specific workloads on edge devices. With automation provided by Ansible, a multi-stage 2^k full factorial experiment and a robust analysis process ensure the test workload maximizes the use of available resources before a final benchmark score is established. Framing the validation tests with a family of network security monitoring applications provides an end-to-end scenario that fully exercises and validates the developed process. This also provides an additional vector for future research in the realm of network security. The analysis of the results shows the developed process met its original design goals and intentions, with the added finding that the latest edge devices, such as the XAVIER, TX2 and RPi4, can easily perform as edge network sensors.
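
    A short sketch of generating a 2^k full factorial design like the one named in the methodology above: every combination of k factors at two levels becomes one experimental run. The factor names and levels are illustrative placeholders, not the thesis's actual configuration parameters.

```python
from itertools import product

# Three assumed tuning factors, each at a low and a high level (k = 3).
factors = {
    "cpu_governor":      ("powersave", "performance"),
    "worker_threads":    (2, 4),
    "capture_buffer_mb": (64, 256),
}

# Full factorial design: 2^3 = 8 runs covering every level combination.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for i, run in enumerate(runs, 1):
    print(i, run)

# Each run would then be pushed to the device (e.g., via an Ansible
# playbook) and the workload benchmarked under that configuration.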

    Measuring the accuracy of software vulnerability assessments: experiments with students and professionals

    Assessing the risks of software vulnerabilities is a key process in software development and security management. This assessment requires considering multiple factors (technical features, operational environment, involved assets, status of the vulnerability lifecycle, etc.) and may depend on the assessor's knowledge and skills. In this work, we tackle an important part of this problem by measuring the accuracy of technical vulnerability assessments made by assessors with different levels and types of knowledge. We report an experiment comparing how accurately students with different technical educations and security professionals are able to assess the severity of software vulnerabilities with the Common Vulnerability Scoring System (v3) industry methodology. Our results could be useful for increasing awareness of the intrinsic subtleties of vulnerability risk assessment and possibly for better compliance with regulations. With respect to academic education, professional training and human resources selection, our work suggests that measuring the effects of knowledge and expertise on the accuracy of software security assessments is feasible, albeit not easy.
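
    A minimal sketch of one way such accuracy could be quantified: the mean absolute error of each group's CVSS v3 base scores against reference scores for the same vulnerabilities. The CVE labels and all score values are invented for illustration; the paper's actual measurement protocol is not reproduced here.

```python
# Reference CVSS v3 base scores for the assessed vulnerabilities (invented).
reference = {"CVE-A": 9.8, "CVE-B": 5.3, "CVE-C": 7.5}

# Hypothetical group means for the same vulnerabilities.
students      = {"CVE-A": 8.1, "CVE-B": 6.4, "CVE-C": 7.5}
professionals = {"CVE-A": 9.8, "CVE-B": 4.9, "CVE-C": 6.8}

def mean_absolute_error(assessed, reference):
    """Average absolute deviation from the reference scores."""
    errors = [abs(assessed[cve] - reference[cve]) for cve in reference]
    return sum(errors) / len(errors)

for group, scores in [("students", students), ("professionals", professionals)]:
    print(group, round(mean_absolute_error(scores, reference), 2))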