11 research outputs found

    A Systems Engineering Methodology for Wide Area Network Selection using an Analytical Hierarchy Process

    In this paper, we apply a systems engineering methodology to select the most appropriate wide area network (WAN) media suite, according to organizational technical requirements, using an Analytic Hierarchy Process (AHP). AHP is a mathematical decision-modeling tool that uses decomposition, determination, and synthesis to solve complex engineering decision problems. By integrating quantitative and qualitative analysis, AHP can model engineering decision processes that are difficult to describe purely quantitatively. We formulate and apply AHP to a hypothetical case study in order to examine its feasibility for the WAN media selection problem. The results indicate that our model can improve the decision-making process by evaluating and comparing all alternative WANs, showing that AHP can assist an organization in choosing the most effective solution for its demands. AHP also conserves resources from several perspectives, yielding high-performance, economical, and high-quality solutions.
    Keywords: Analytical Hierarchy Process, Wide Area Network, AHP Consistency, WAN alternatives
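    The priority-derivation core of AHP can be sketched in a few lines. The code below is a minimal illustration, not the paper's model: the three WAN criteria and their pairwise judgment values are hypothetical, the weights are approximated with the geometric-mean method rather than the exact principal eigenvector, and Saaty's consistency ratio is checked against the usual 0.1 threshold.

```python
# Minimal AHP sketch with hypothetical criteria and judgments (not from the paper).
import math

# Pairwise judgments for three hypothetical WAN criteria: cost, bandwidth, reliability.
# A[i][j] = how strongly criterion i is preferred over criterion j (Saaty's 1-9 scale).
A = [
    [1.0, 3.0, 0.5],
    [1 / 3, 1.0, 0.25],
    [2.0, 4.0, 1.0],
]
n = len(A)

# Geometric-mean approximation: w_i proportional to (prod_j A[i][j])^(1/n), normalized.
geo = [math.prod(row) ** (1.0 / n) for row in A]
total = sum(geo)
weights = [g / total for g in geo]

# Consistency check: lambda_max is estimated as the mean of (A.w)_i / w_i,
# then CI = (lambda_max - n) / (n - 1) and CR = CI / RI.
Aw = [sum(A[i][j] * weights[j] for j in range(n)) for i in range(n)]
lambda_max = sum(Aw[i] / weights[i] for i in range(n)) / n
CI = (lambda_max - n) / (n - 1)
RI = 0.58  # Saaty's random index for n = 3
CR = CI / RI

print("weights:", [round(w, 3) for w in weights])
print("consistency ratio:", round(CR, 3))
```

    A consistency ratio below 0.1 indicates that the pairwise judgments are coherent enough to trust the derived weights; otherwise the decision maker revisits the comparison matrix.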

    Proactive risk management in information systems

    Managing security risks is one of the major challenges in modern information systems. Threats often arrive via the World Wide Web and are therefore difficult to predict, so attackers can always be a step ahead of us, and a reactive approach based on known security incidents is not sufficient. A much higher security level can be achieved by actively detecting and neutralizing software vulnerabilities. When a large number of vulnerabilities are present in a system, they have to be prioritized for removal according to their severity. With a proactive approach, in which we foresee which vulnerabilities are more likely to be exploited in practice, the highest level of security can be assured. The widely used prioritization policy based on the CVSS (Common Vulnerability Scoring System) score is frequently criticised for its poor effectiveness; the main reason is that the CVSS score alone is not a good predictor of vulnerability exploitation in the wild. One of the key challenges in this area is therefore to identify indicators of exploitation. Since the exploitation of a vulnerability is fundamentally a human threat, it is reasonable to take into account the characteristics of typical attackers, and we propose several prioritization methods that do so. Such methods have to be compared according to their effectiveness in risk mitigation, and to this end we have developed a valuation model that allows such comparisons. The proposed methods, which take human threats into account, were compared with the most popular existing methods using vulnerability data from publicly available databases. Experimental results show that methods which take into account the characteristics of attackers are generally more effective than existing methods. This effectiveness was also confirmed in several real information systems in practice.
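    The idea of augmenting CVSS with attacker-oriented indicators can be illustrated with a toy prioritization function. This is a hypothetical sketch, not the authors' valuation model: the indicator fields, weights, and CVE identifiers below are invented for illustration only.

```python
# Hypothetical sketch: rank vulnerabilities by combining the CVSS base score
# with simple attacker-oriented indicators, so that likely-to-be-exploited
# issues rise above merely severe ones. Weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float            # CVSS base score, 0.0-10.0
    exploit_public: bool   # is a proof-of-concept exploit published?
    low_skill: bool        # exploitable without specialist skills?

def priority(v: Vuln) -> float:
    # Normalized severity plus bonuses for attacker-relevant indicators.
    score = v.cvss / 10.0
    if v.exploit_public:
        score += 0.4
    if v.low_skill:
        score += 0.2
    return score

backlog = [
    Vuln("CVE-A", cvss=9.8, exploit_public=False, low_skill=False),
    Vuln("CVE-B", cvss=7.5, exploit_public=True,  low_skill=True),
    Vuln("CVE-C", cvss=5.0, exploit_public=True,  low_skill=False),
]
for v in sorted(backlog, key=priority, reverse=True):
    print(v.cve_id, round(priority(v), 2))
```

    Under pure CVSS ordering, CVE-A would be patched first; with the attacker-aware bonuses, the lower-severity but publicly exploitable CVE-B overtakes it, which is the kind of reordering the proposed methods aim for.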


    A Hybrid Graph Neural Network Approach for Detecting PHP Vulnerabilities

    This paper presents DeepTective, a deep learning approach to detecting vulnerabilities in PHP source code. Our approach implements a novel hybrid technique that combines Gated Recurrent Units and Graph Convolutional Networks to detect SQLi, XSS, and OSCI vulnerabilities, leveraging both syntactic and semantic information. We evaluate DeepTective and compare it to the state of the art on an established synthetic dataset and on a novel real-world dataset collected from GitHub. Experimental results show that DeepTective achieves near-perfect classification on the synthetic dataset and an F1 score of 88.12% on the realistic dataset, outperforming related approaches. We validate DeepTective in the wild by discovering 4 novel vulnerabilities in established WordPress plugins.
    Comment: A poster version of this paper appeared as https://doi.org/10.1145/3412841.344213
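    The graph-convolutional half of such a hybrid model amounts to repeatedly mixing each node's features with those of its structural neighbours. The sketch below is a single mean-aggregation GCN step on a toy four-node tree in plain Python; the node features, edges, and weight matrix are made up, and DeepTective's actual architecture (GRU layers, training, PHP parsing) is not reproduced here.

```python
# Illustrative sketch only (not DeepTective's architecture): one graph
# convolution step, H' = ReLU(D^-1 (A + I) H W), on a tiny toy syntax tree,
# showing how neighbour features mix before classification. All values are toy data.

edges = [(0, 1), (1, 2), (1, 3)]  # parent-child edges in a toy tree
n = 4

# Adjacency with self-loops (A + I).
A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
for u, v in edges:
    A[u][v] = A[v][u] = 1.0

H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]  # toy node features
W = [[0.6, -0.2], [0.1, 0.9]]                          # toy weight matrix

def gcn_step(A, H, W):
    # Mean-aggregate neighbour features (row-normalized adjacency),
    # then apply the linear transform and a ReLU non-linearity.
    out = []
    for i in range(len(A)):
        deg = sum(A[i])
        agg = [sum(A[i][j] * H[j][k] for j in range(len(H))) / deg
               for k in range(len(H[0]))]
        row = [max(0.0, sum(agg[k] * W[k][c] for k in range(len(W))))
               for c in range(len(W[0]))]
        out.append(row)
    return out

H1 = gcn_step(A, H, W)
print([[round(x, 3) for x in row] for row in H1])
```

    Stacking such steps lets information flow along syntactic edges, which is what allows graph-based models to pick up semantic patterns (e.g. tainted data reaching a sink) that a pure token sequence would miss.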

    Enhancing Trust – A Unified Meta-Model for Software Security Vulnerability Analysis

    Over the last decade, a globalization of the software industry has taken place, facilitating the sharing and reuse of code across existing project boundaries. At the same time, such global reuse introduces new challenges to the Software Engineering community, with not only code implementations being shared across systems but also any vulnerabilities they are exposed to. Hence, vulnerabilities found in APIs no longer affect only individual projects but can spread across projects and even global software ecosystem borders. Tracing such vulnerabilities on a global scale is an inherently difficult task, with many of the resources required for the analysis not only growing at unprecedented rates but also being spread across heterogeneous sources, and software developers struggle to identify and locate the data needed to take full advantage of these resources. The Semantic Web and its supporting technology stack have been widely promoted to model, integrate, and support interoperability among heterogeneous data sources. This dissertation introduces the following major contributions to address these challenges: (1) It provides a literature review of the use of software vulnerability databases (SVDBs) in the Software Engineering community. (2) Based on findings from this literature review, we present SEVONT, a Semantic Web based modeling approach to support a formal and semi-automated approach for unifying vulnerability information resources. SEVONT introduces a multi-layer knowledge model which not only provides a unified knowledge representation, but also captures software vulnerability information at different abstraction levels to allow for seamless integration, analysis, and reuse of the modeled knowledge. The modeling approach takes advantage of Formal Concept Analysis (FCA) to guide knowledge engineers in identifying reusable knowledge concepts and modeling them. (3) A Security Vulnerability Analysis Framework (SV-AF) is introduced, which is an instantiation of the SEVONT knowledge model to support evidence-based vulnerability detection. The framework integrates vulnerability ontologies (and data) with existing Software Engineering ontologies, allowing the use of Semantic Web reasoning services to trace and assess the impact of security vulnerabilities across project boundaries. Several case studies are presented to illustrate the applicability and flexibility of our modeling approach, demonstrating that it can not only unify heterogeneous vulnerability data sources but also enable new types of vulnerability analysis.
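    The cross-project tracing that SV-AF performs with Semantic Web reasoning can be approximated, for intuition only, by a transitive closure over dependency triples. The project names, CVE identifier, and predicate names below are invented, and a hand-rolled Python set of triples stands in for an RDF store and OWL reasoner.

```python
# Toy sketch of cross-project vulnerability tracing. SEVONT/SV-AF use RDF/OWL
# ontologies and Semantic Web reasoners; this hand-rolled store only mimics
# the inference that a dependency chain propagates a library's vulnerability.
triples = {
    ("libfoo", "hasVulnerability", "CVE-X"),
    ("appA", "dependsOn", "libfoo"),
    ("appB", "dependsOn", "appA"),
}

def depends_transitively(project: str, library: str) -> bool:
    # Transitive closure over dependsOn, akin to what a reasoner would infer
    # from a transitive object property.
    seen, stack = set(), [project]
    while stack:
        p = stack.pop()
        for s, pred, o in triples:
            if s == p and pred == "dependsOn" and o not in seen:
                if o == library:
                    return True
                seen.add(o)
                stack.append(o)
    return False

def affected_projects(cve: str) -> list[str]:
    # Every project that (transitively) depends on a library carrying the CVE.
    vulnerable = {s for s, p, o in triples if p == "hasVulnerability" and o == cve}
    projects = {s for s, p, _ in triples if p == "dependsOn"}
    return sorted(p for p in projects
                  if any(depends_transitively(p, lib) for lib in vulnerable))

print(affected_projects("CVE-X"))
```

    Here appA is affected directly and appB only through the dependency chain, which is exactly the kind of beyond-project-boundary impact that an ad hoc per-project scan would miss.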

    Cyber Security and Critical Infrastructures 2nd Volume

    The second volume of the book contains the manuscripts that were accepted for publication in the MDPI Special Topic "Cyber Security and Critical Infrastructure" after a rigorous peer-review process. Authors from academia, government, and industry contributed innovative solutions, consistent with the interdisciplinary nature of cybersecurity. The book contains 16 articles: an editorial that explains the current challenges, innovative solutions, and real-world experiences involving critical infrastructure, and 15 original papers that present state-of-the-art solutions to attacks on critical systems.

    Nordic LifeWatch cooperation, final report: A joint initiative from Denmark, Iceland, Finland, Norway and Sweden

    The main goal of the present report is to outline the possibilities for enhanced cooperation between the Nordic countries within eScience and biodiversity. LifeWatch is one of several ESFRI projects that aim to establish eInfrastructures and databases in the field of biodiversity and ecosystem research. Similarities between the Nordic countries are extensive in relation to a number of biodiversity-related issues: most species are common to the Nordic countries, and the different countries frequently address the same challenges concerning biodiversity and ecosystem services. The present report has been developed by establishing a Nordic LifeWatch network with delegates from each of the Nordic countries. The report was written jointly by the delegates, and the work was organized in working groups on the following themes: strategic issues, technical development, legal framework, and communication. Written during two workshops, Skype meetings, and email exchanges, the report discusses the following main issues:
    * Scientific needs for improved access to biodiversity data and advanced eScience research infrastructure in the Nordic countries.
    * Future challenges and priorities facing the international biodiversity research community.
    * Scientific potential of openly accessible biodiversity and environmental data for individual researchers and institutions.
    * Spin-off effects of open access for the general public.
    * An internationally standardized Nordic metadata inventory.
    * The legal framework and challenges associated with environmental, climate, and biodiversity data sharing, communication, training, and scientific needs.
    * Finally, some strategic steps towards realizing a Nordic LifeWatch construction and operational phase.
    Easy access to open data on biodiversity and the environment is crucial for many researchers and research institutions, as well as for environmental administration. Easy access to data from different fields of science creates an environment for new scientific ideas to emerge. This potential for generating new, interdisciplinary approaches to pre-existing problems is one of the key features of open-access data platforms that unify diverse data sources. Interdisciplinary elements, access to data over larger gradients, and compatible eSystems and eTools to handle large amounts of data are extremely important and, if further developed, represent significant steps towards analysis of the biological effects of climate change and human impact, and towards the development of operational ecosystem service assessment techniques. It is concluded that significant benefits regarding scientific potential, technical development, and financial investment can be obtained by constructing a common Nordic LifeWatch eInfrastructure. Several steps concerning the organization and funding of a future Nordic LifeWatch are discussed, and an action plan towards 2020 is suggested. To analyze the potential of a future Nordic LifeWatch in detail, our main conclusion is to arrange a Nordic LifeWatch conference as soon as possible, involving the Nordic research councils, scientists, and relevant stakeholders. The national delegates from the participating countries are prepared to present details from the report and developments so far as a basis for the further development of Nordic LifeWatch. The present work was financed by NordForsk and by in-kind contributions from the participating institutions.