
    Automated CVE Analysis for Threat Prioritization and Impact Prediction

    Common Vulnerabilities and Exposures (CVE) records are pivotal information for proactive cybersecurity measures, including service patching and security hardening. However, CVEs typically offer low-level, product-oriented descriptions of publicly disclosed vulnerabilities, often lacking the attack-semantic information required for comprehensive weakness characterization and threat-impact estimation. This insight is essential for CVE prioritization and the identification of potential countermeasures, particularly when dealing with large numbers of CVEs. Current industry practice involves manually evaluating CVEs to assess their attack severity using the Common Vulnerability Scoring System (CVSS) and mapping them to Common Weakness Enumeration (CWE) entries for potential mitigation identification. This manual analysis is a major bottleneck in the vulnerability analysis process, slowing proactive cybersecurity efforts and introducing the potential for inaccuracies due to human error. In this research, we introduce a novel predictive model and tool, CVEDrill, which revolutionizes CVE analysis and threat prioritization. CVEDrill accurately estimates the CVSS vector for precise threat mitigation and priority ranking, and seamlessly automates the classification of CVEs into the appropriate CWE hierarchy classes. By harnessing CVEDrill, organizations can implement cybersecurity countermeasures with accuracy and timeliness that surpass the capabilities of state-of-the-art tools such as ChatGPT.
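    As a rough illustration of the kind of text-to-CVSS classification described above, the sketch below trains a TF-IDF plus logistic-regression classifier to predict a single CVSS base-metric component (Attack Vector) from a CVE description. The inline training pairs are invented for illustration; the paper's actual model, features, and training corpus are not reproduced here.

```python
# Minimal sketch of CVE-description classification in the spirit of CVEDrill:
# predict one CVSS base-metric component (Attack Vector) from free text.
# The tiny inline dataset is illustrative only; a real system would train on
# the full NVD corpus and predict every CVSS component plus a CWE class.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical (description, Attack Vector label) pairs.
train_texts = [
    "Remote attackers can execute arbitrary code via a crafted HTTP request.",
    "A local user can escalate privileges through a race condition in the driver.",
    "Improper input validation allows network-based denial of service.",
    "Physically proximate attackers can bypass the lock screen.",
]
train_labels = ["NETWORK", "LOCAL", "NETWORK", "PHYSICAL"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

cve_text = "Buffer overflow reachable by sending malformed packets to the service."
print(model.predict([cve_text])[0])  # likely NETWORK, given the token overlap
```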

    EPOS Security & GDPR Compliance

    Since May 2018, companies have been required to comply with the General Data Protection Regulation (GDPR). Many companies therefore had to change how they collect and process EU citizens' data. The compliance process can be expensive: more specialized staff are needed, who must study the regulation and then implement the required changes in IT applications and infrastructure. This work is part of the European Plate Observing System (EPOS) project, which allows earth-science data from research institutes across Europe to be shared and used. The data are stored in a database and in several file systems, and web services are provided for data mining and control. EPOS is a complex distributed system, so it is important to guarantee not only its security but also its GDPR compliance. We identified the need to automate and facilitate this compliance-verification process, in particular the need for a tool capable of analyzing web applications. Such a tool can give companies an easier and faster way to check their degree of GDPR compliance so they can assess and implement any necessary changes. To this end we developed PADRES (PrivAcy, Data REgulation and Security), which organizes the main points of the GDPR by principle in the form of a checklist that is answered manually. Because privacy and security complement each other, the tool also searches for vulnerabilities in web applications: by integrating the open-source tools Network Mapper (NMAP) and Zed Attack Proxy (ZAP), together with a cookie analyzer, it tests applications against the most frequent vulnerabilities according to the OWASP Top 10. Finally, a report is generated containing the collected information together with suggestions derived from the checklist answers. Applying the tool to EPOS, most GDPR-related points were found to be compliant, and suggestions were generated for the remainder to help improve both the level of compliance and general data management. In the vulnerability exploitation phase, a few findings were classified as high risk, but most were classified as medium risk.
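    A minimal sketch of the kind of checks PADRES automates, assuming a locally installed nmap binary and a placeholder target URL; the ZAP integration and report generation are omitted, and only a simple cookie analysis (flagging missing Secure/HttpOnly attributes) is shown.

```python
# Sketch of two PADRES-style checks: an NMAP service scan and a cookie
# analyzer. The target is a placeholder; nmap must be installed and the
# scan must be authorized for the host in question.
import subprocess
import requests

target = "https://example.org"  # placeholder target URL
host = "example.org"

# Service/version detection via the nmap CLI.
scan = subprocess.run(["nmap", "-sV", host], capture_output=True, text=True)
print(scan.stdout)

# Cookie analyzer: report cookies missing security-relevant attributes.
resp = requests.get(target, timeout=10)
for cookie in resp.cookies:
    problems = []
    if not cookie.secure:
        problems.append("missing Secure")
    if not cookie.has_nonstandard_attr("HttpOnly"):
        problems.append("missing HttpOnly")
    if problems:
        print(f"{cookie.name}: {', '.join(problems)}")
```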

    Robustness of buffer allocation in multi-product multi-batch deterministic flow lines

    Ankara: Department of Management and Graduate School of Business Administration, Bilkent University, 1993. Thesis (Master's), Bilkent University, 1993. Includes bibliographical references (leaves 83-87).
    Flow lines in industry today rarely produce a single end product. Demand is stochastic: there are various demand scenarios at hand that the flow line must satisfy, so the line's performance should not be overly sensitive to demand changes. The aim of this study is to develop buffer allocation guidelines to help flow line designers.
    Kurucu, A. Akın. M.S.
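    To make the robustness question concrete, here is a minimal sketch (not from the thesis; all processing times, scenarios, and buffer sizes are invented) that evaluates a buffer allocation for a two-machine deterministic line under two demand scenarios, using the standard tandem-line recursion with blocking-after-service, and compares worst-case makespan across buffer sizes.

```python
# Evaluate a buffer allocation for M1 -> buffer -> M2. Downstream of M1 can
# hold buffer_cap jobs in the buffer plus one on M2; job i may leave M1 only
# once job i-(buffer_cap+1) has left M2 (blocking after service).
def makespan(t1, t2, buffer_cap):
    n = len(t1)
    d1 = [0.0] * n  # departure time of job i from machine 1
    d2 = [0.0] * n  # departure time of job i from machine 2
    for i in range(n):
        finish = (d1[i - 1] if i > 0 else 0.0) + t1[i]
        k = i - buffer_cap - 1
        block = d2[k] if k >= 0 else 0.0  # wait for downstream space
        d1[i] = max(finish, block)
        d2[i] = max(d1[i], d2[i - 1] if i > 0 else 0.0) + t2[i]
    return d2[-1]

# Two illustrative demand scenarios (product mixes with different
# processing-time profiles); robustness = good worst case across both.
scenarios = {
    "mix A": ([1, 1, 5] * 20, [3] * 60),
    "mix B": ([3] * 60, [1, 5, 1] * 20),
}
for b in (1, 3, 5):
    spans = [makespan(t1, t2, b) for t1, t2 in scenarios.values()]
    print(f"buffer={b}: worst-case makespan {max(spans):.1f}")
```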

    Multi-project scheduling under mode duration uncertainties

    In this study, we investigate the multi-mode multi-project resource-constrained project scheduling problem under uncertainty. We assume a multi-objective setting with two objectives: minimizing the multi-project makespan and minimizing the total sum of absolute deviations of the scheduled starting times of activities from their earliest starting times found through simulation. We develop two multi-objective genetic algorithm (MOGA) solution approaches. The first, called decomposition MOGA, decomposes the problem into two stages; the other, called holistic MOGA, combines all activities of each project into one large network and, as a benchmark, does not require that the activities of a project be scheduled consecutively. Decomposition MOGA starts with an initial two-stage decomposition step in which each project is reduced to a single macro-activity by systematically using artificial budget values and expected project durations. Generated macro-activities may have one or more processing modes, called macro-modes. Deterministic macro-modes are transformed into random variables by generating disruption cases via simulation. For the fitness computation of each MOGA, two similar two-stage heuristics are developed. In both heuristics, a minimum target makespan over all projects is determined; in the second stage, a minimum total sum of absolute deviations model is solved to find solution-robust starting times of activities for each project. The objective value of this model is taken as the second objective of the MOGAs. Computational studies measuring the performance of the two proposed solution approaches are performed on different datasets under different parameter settings. When the non-dominated solutions of each approach are combined into a final population, overall results show that a larger share of these solutions is generated by decomposition MOGA. Additionally, the computational effort required by decomposition MOGA is, as expected, much less than that of the holistic approach.
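    A minimal sketch of the two objectives described above, together with the Pareto-dominance test any MOGA needs in order to rank solutions; the schedule triples are hypothetical, and the paper's actual chromosome encoding and two-stage heuristics are not reproduced.

```python
# Two objectives for a candidate schedule: multi-project makespan and the
# total absolute deviation of scheduled starts from earliest starts.
def objectives(schedule):
    """schedule: list of (start, duration, earliest_start) per activity."""
    makespan = max(s + d for s, d, _ in schedule)
    deviation = sum(abs(s - es) for s, _, es in schedule)
    return makespan, deviation

def dominates(f, g):
    """True if objective vector f Pareto-dominates g (minimization)."""
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

# Hypothetical three-activity solutions.
sol_a = [(0, 4, 0), (4, 3, 2), (7, 5, 5)]
sol_b = [(0, 4, 0), (5, 3, 2), (8, 5, 5)]
fa, fb = objectives(sol_a), objectives(sol_b)
print(fa, fb, dominates(fa, fb))  # (12, 4) (13, 6) True
```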

    Allocation of Ground Handling Resources at Copenhagen Airport


    A Feature-based Configurator for CAM


    KARTAL: Web Application Vulnerability Hunting Using Large Language Models

    Broken Access Control is the most serious web application security risk, as published by the Open Worldwide Application Security Project (OWASP). This category contains highly complex vulnerabilities such as Broken Object Level Authorization (BOLA) and Exposure of Sensitive Information. Finding such critical vulnerabilities in large software systems requires intelligent, automated tools. State-of-the-art (SOTA) research, including hybrid application security testing tools, algorithmic brute-forcers, and artificial intelligence, has shown great promise in detection. Nevertheless, there is a gap in research on reliably identifying logical and context-dependent Broken Access Control vulnerabilities. We propose KARTAL, a novel method for web application vulnerability detection using a Large Language Model (LLM). It consists of three components: a Fuzzer, a Prompter, and a Detector. The Fuzzer is responsible for methodically collecting application behaviour. The Prompter processes the data from the Fuzzer and formulates a prompt. The Detector uses an LLM that we have fine-tuned for detecting vulnerabilities. In this study, we investigate the performance, key factors, and limitations of the proposed method. We experiment with fine-tuning three types of decoder-only pre-trained transformers for detecting two sophisticated vulnerabilities. Our best model attained an accuracy of 87.19% with an F1 score of 0.82. Using hardware acceleration on a consumer-grade laptop, our fastest model can make up to 539 predictions per second. Experiments varying the training sample size demonstrated the strong learning capabilities of our model: every 400 samples added to training yielded an average MCC score improvement of 19.58%. Furthermore, the dynamic properties of KARTAL enable inference-time adaptation to the application domain, resulting in reduced false positives.
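    A minimal sketch of what a Prompter stage might look like, serializing fuzzer-observed HTTP exchanges into a single prompt for a classifier; the field layout, labels, and Exchange structure are hypothetical, and the paper's actual prompt format and fine-tuned Detector are not reproduced here.

```python
# Hypothetical Prompter: turn fuzzer observations into a classification
# prompt. A BOLA probe is suspicious when a low-privilege role receives the
# same object an admin saw; the Detector would emit the final label.
from dataclasses import dataclass

@dataclass
class Exchange:
    role: str         # authenticated role the Fuzzer used
    request: str      # method, path, and salient parameters
    status: int
    body_excerpt: str

def build_prompt(target: Exchange, context: list[Exchange]) -> str:
    lines = [
        "Decide whether the TARGET exchange indicates Broken Access Control.",
        "CONTEXT:",
    ]
    for ex in context:
        lines.append(f"- [{ex.role}] {ex.request} -> {ex.status}: {ex.body_excerpt}")
    lines.append(
        f"TARGET: [{target.role}] {target.request} -> {target.status}: {target.body_excerpt}"
    )
    lines.append("LABEL:")
    return "\n".join(lines)

ctx = [Exchange("admin", "GET /api/users/7", 200, '{"email": ...}')]
probe = Exchange("guest", "GET /api/users/7", 200, '{"email": ...}')
print(build_prompt(probe, ctx))
```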

    Mixed integer programming on transputers

    Mixed Integer Programming (MIP) problems occur in many industries, and their practical solution can be challenging in terms of both time and effort. Although faster computer hardware has allowed more MIP problems to be solved in reasonable times, there will come a point when the hardware cannot be made any faster. One way of improving the solution times of MIP problems without further speeding up the hardware is to improve the effectiveness of the solution algorithm used. The advent of accessible parallel processing technology and techniques provides the opportunity to exploit any parallelism within MIP solving algorithms in order to accelerate the solution of MIP problems. Many of the MIP solving algorithms in the literature contain a degree of exploitable parallelism. Several algorithms were considered as candidates for parallelisation within the constraints imposed by the currently available parallel hardware and techniques. A parallel Branch and Bound algorithm was designed for, and implemented on, an array of transputers hosted by a PC. The parallel algorithm was designed to operate as a process farm, with a master passing work to various slave processors. A message-passing harness was developed to allow full control of the slaves and the work sent to them. The effects of various node selection techniques were studied and a default node selection strategy was chosen for the parallel algorithm. The parallel algorithm was also designed to take full advantage of the structure of MIP problems formulated using global entities such as general integers and special ordered sets. The presence of parallel processors makes practicable the idea of performing more than two branches on an unsatisfied global entity. Experiments were carried out using multiway branching strategies, and a default branching strategy was chosen for appropriate types of MIP problem.
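    A minimal sketch of the process-farm idea on modern hardware: a master keeps the node pool and incumbent and farms node evaluation out to worker processes in waves. The 0/1 knapsack instance and the crude additive bound are illustrative stand-ins for a real MIP and for the thesis's transputer message-passing harness.

```python
# Process-farm Branch and Bound: master dispatches node evaluations to a
# pool of slave processes, collects incumbents and children, and prunes.
from multiprocessing import Pool

VALUES = [60, 100, 120, 75]
WEIGHTS = [10, 20, 30, 15]
CAPACITY = 50

def expand(node):
    """Slave task: bound a node and branch on the next item (take / skip)."""
    level, value, weight = node
    if weight > CAPACITY:
        return None, []                      # infeasible subtree
    if level == len(VALUES):
        return (value, node), []             # leaf: candidate incumbent
    bound = value + sum(VALUES[level:])      # crude optimistic bound
    children = [(level + 1, value + VALUES[level], weight + WEIGHTS[level]),
                (level + 1, value, weight)]
    return None, [(bound, c) for c in children]

if __name__ == "__main__":
    best, frontier = 0, [(0, 0, 0)]
    with Pool(4) as pool:                    # the "farm" of slave processes
        while frontier:
            results = pool.map(expand, frontier)   # master dispatches work
            frontier = []
            for incumbent, children in results:
                if incumbent and incumbent[0] > best:
                    best = incumbent[0]
                frontier += [c for b, c in children if b > best]  # prune
    print("optimal value:", best)            # 235 for this instance
```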