
    Time for Addressing Software Security Issues: Prediction Models and Impacting Factors

    Finding and fixing software vulnerabilities has become a major struggle for most software development companies. While generally unavoidable, such fixing efforts are a major cost factor, which is why companies have a vital interest in focusing their secure software development activities so that they obtain an optimal return on this investment. In this paper, we quantitatively investigate the major factors that impact the time it takes to fix a given security issue, based on data collected automatically within SAP’s secure development process, and we show how the issue fix time could be used to monitor the fixing process. We use three machine learning methods and evaluate their power in predicting the time to fix issues. Interestingly, the models indicate that the vulnerability type has a less dominant impact on issue fix time than previously believed. The time it takes to fix an issue instead appears much more related to the component in which the potential vulnerability resides, the project related to the issue, the development groups that address the issue, and the closeness of the software release date. This indicates that the software structure, the fixing processes, and the development groups are the dominant factors that impact the time spent to address security issues. SAP can use the models to implement a continuous improvement of its secure software development process and to measure the impact of individual improvements. The development teams at SAP develop different types of software, adopt different internal development processes, use different programming languages and platforms, and are located in different cities and countries. Other organizations may use the results, with precaution, and become learning organizations.
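
    The abstract does not give the concrete features or learning algorithms used at SAP; purely as an illustrative sketch of the kind of fix-time prediction it describes, one could train a regression model on categorical process features plus time to release. All field names, values, and the choice of a random forest below are assumptions, not the paper's setup.

    # Hedged sketch: predicting security-issue fix time from process features.
    # Feature names, data, and the model choice are illustrative assumptions only.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    # Hypothetical issue records; a real dataset would come from the issue tracker.
    issues = pd.DataFrame({
        "component":       ["ui", "core", "auth", "core", "ui"],
        "project":         ["A", "A", "B", "B", "C"],
        "dev_group":       ["g1", "g2", "g2", "g3", "g1"],
        "vuln_type":       ["xss", "sqli", "xss", "csrf", "sqli"],
        "days_to_release": [120, 30, 60, 10, 90],
        "fix_time_days":   [14, 35, 21, 40, 12],   # prediction target
    })

    features = ["component", "project", "dev_group", "vuln_type", "days_to_release"]
    pre = ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"),
          ["component", "project", "dev_group", "vuln_type"])],
        remainder="passthrough",
    )
    model = Pipeline([("pre", pre), ("reg", RandomForestRegressor(random_state=0))])

    # Cross-validated error gives a first impression of predictive power.
    scores = cross_val_score(model, issues[features], issues["fix_time_days"],
                             cv=2, scoring="neg_mean_absolute_error")
    print("MAE per fold:", -scores)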

    A Cautionary Tale: On the Role of Reference Data in Empirical Privacy Defenses

    Within the realm of privacy-preserving machine learning, empirical privacy defenses have been proposed as a solution to achieve satisfactory levels of training data privacy without a significant drop in model utility. Most existing defenses against membership inference attacks assume access to reference data, defined as an additional dataset coming from the same (or a similar) underlying distribution as the training data. Despite the common use of reference data, previous works are notably reticent about defining and evaluating reference data privacy. As gains in model utility and/or training data privacy may come at the expense of reference data privacy, it is essential that all three aspects are duly considered. In this paper, we first examine the availability of reference data and its privacy treatment in previous works and demonstrate its necessity for fairly comparing defenses. Second, we propose a baseline defense that enables the utility-privacy trade-off with respect to both training and reference data to be easily understood. Our method is formulated as an empirical risk minimization with a constraint on the generalization error, which, in practice, can be evaluated as a weighted empirical risk minimization (WERM) over the training and reference datasets. Although we conceived of WERM as a simple baseline, our experiments show that, surprisingly, it outperforms the most well-studied and currently state-of-the-art empirical privacy defenses using reference data for nearly all relative privacy levels of reference and training data. Our investigation also reveals that these existing methods are unable to effectively trade off reference data privacy for model utility and/or training data privacy. Overall, our work highlights the need for a proper evaluation of the triad of model utility, training data privacy, and reference data privacy when comparing privacy defenses.
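
    The precise objective is defined in the paper; one plausible reading of a weighted empirical risk minimization (WERM) over a training set T and a reference set R, with the weight acting as the privacy trade-off knob, is:

    \min_{\theta}\; w \cdot \frac{1}{|T|} \sum_{z \in T} \ell(\theta; z) \;+\; (1 - w) \cdot \frac{1}{|R|} \sum_{z \in R} \ell(\theta; z), \qquad w \in [0, 1]

    Here \ell is the per-example loss, and a smaller w leans the fit toward the reference data, protecting training-data membership at the possible expense of reference data privacy. The constraint-based formulation and the mapping from the generalization-error constraint to this weight are specified in the paper itself.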

    Quantification of the performance of iterative and non-iterative computational methods of locating partial discharges using RF measurement techniques

    Partial discharge (PD) is an electrical discharge phenomenon that occurs when the insulation material of high voltage equipment is subjected to high electric field stress. Its occurrence can be an indication of incipient failure within power equipment such as power transformers, underground transmission cables or switchgear. Radio frequency measurement methods can be used to detect and locate discharge sources by measuring the propagated electromagnetic wave arising as a result of ionic charge acceleration. An array of at least four receiving antennas may be employed to detect any radiated discharge signals, and the three-dimensional position of the discharge source can then be calculated using different algorithms. These algorithms fall into two categories: iterative or non-iterative. This paper evaluates, through simulation, the location performance of an iterative method (the standard least squares method) and a non-iterative method (the Bancroft algorithm). Simulations were carried out using (i) a "Y"-shaped antenna array and (ii) a square-shaped antenna array, each consisting of four antennas. The results show that PD location accuracy is influenced by the algorithm's error bound, the number of iterations and the initial values for the iterative algorithms, as well as by the antenna arrangement for both the non-iterative and iterative algorithms. Furthermore, this research proposes a novel approach for selecting adequate error bounds and numbers of iterations using the results of the non-iterative method, thus resolving some of the iterative method's dependencies.
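
    The paper's simulation settings are not reproduced here; as a rough sketch of the iterative branch it evaluates, the following least-squares multilateration recovers a source position from time differences of arrival at a four-antenna square array. The geometry, solver (scipy.optimize.least_squares) and tolerances are illustrative assumptions; as the abstract notes, the result depends on the initial guess and stopping criteria.

    # Hedged sketch: iterative least-squares TDOA localization with four antennas.
    # Geometry and solver choice are illustrative, not the paper's exact setup.
    import numpy as np
    from scipy.optimize import least_squares

    C = 3e8  # propagation speed of the radiated wave (m/s)

    # Square antenna array (one of the two layouts mentioned), coordinates in metres.
    antennas = np.array([[0.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [1.0, 1.0, 0.0],
                         [0.0, 1.0, 0.0]])

    true_source = np.array([0.4, 0.7, 2.0])

    # Time differences of arrival relative to antenna 0 (noise-free for simplicity).
    dists = np.linalg.norm(antennas - true_source, axis=1)
    tdoa = (dists - dists[0]) / C

    def residuals(p):
        d = np.linalg.norm(antennas - p, axis=1)
        return (d - d[0]) / C - tdoa

    # Iterative solution; accuracy depends on the initial guess and tolerances,
    # which are exactly the dependencies the paper analyses.
    sol = least_squares(residuals, x0=np.array([0.5, 0.5, 1.0]), xtol=1e-12)
    print("estimated source position:", sol.x)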

    Integration of security standards in DevOps pipelines: An industry case study

    In the last decade, companies have adopted DevOps as a fast path to deliver software products according to customer expectations, with well-aligned teams and in continuous cycles. As a basic practice, DevOps relies on pipelines that simulate factory swim-lanes. The more automation in the pipeline, the shorter the lead time is expected to be. However, applying DevOps is challenging, particularly for industrial control systems (ICS) that support critical infrastructures and must comply with rigorous requirements from security regulations and standards. Current research on security-compliant DevOps leaves open gaps for this particular domain and, more generally, for the systematic application of security standards. In this paper, we present a systematic approach to integrate standard-based security activities into DevOps pipelines and highlight their automation potential. Our intention is to share our experiences and help practitioners overcome the trade-off between adding security activities to the development process and keeping a short lead time. We evaluated our approach at a large industrial company against the IEC 62443-4-1 security standard that regulates ICS. The results strengthen our confidence in the usefulness of our approach and artefacts, and in their ability to support practitioners in achieving security compliance while preserving agility, including short lead times.

    Optimization of the aptamers’ immobilization conditions for maximizing the response of a dual-aptasensor for cancer biomarker detection

    Osteopontin (OPN) is a protein that is present in several body fluids and has been reported as a possible cancer biomarker, with its overexpression associated with tumour progression and metastasis [1,2]. A simple and sensitive method that allows the simultaneous detection of single or multiple cancer biomarkers is envisaged and may be an important tool in cancer diagnosis. In this work, two bioreceptors specific for OPN, an RNA aptamer (OPN-R3) previously described by Mi and co-workers [3] and a DNA aptamer (C10K2) developed by our research group, were biotinylated and immobilized on a dual screen-printed gold electrode through streptavidin-biotin interaction. The voltammetric signals generated by the dual-aptasensor array, after the formation of the aptamer-protein complexes, were monitored using cyclic voltammetry (CV) and square-wave voltammetry (SWV), with [Fe(CN)6]3−/4− as a redox probe. The optimal immobilization conditions for the dual-aptasensor array were established by response surface methodology. The maximum voltammetric response was obtained for a 0.5 μM aptamer concentration after 20 min of aptamer immobilization and 30 min of aptamer-OPN interaction time at an incubation temperature of 4 °C. The satisfactory preliminary results obtained, although requiring further confirmation with synthetic and real human samples, indicate that the proposed electrochemical dual-aptasensor array could be a simple and sensitive tool for the detection of OPN, as well as of other potential cancer biomarkers, and may therefore be applied in the future for cancer disease monitoring. This work was also financially supported by Project POCI-01-0145-FEDER-006984 (Associate Laboratory LSRE-LCM), funded by FEDER through COMPETE2020 and by national funds through FCT, Portugal. S. Meirinho also acknowledges the research grant provided by Project UID/EQU/50020/2013.
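
    The abstract reports the optimum found by response surface methodology without the underlying design; a minimal sketch of the RSM idea, fitting a second-order model to hypothetical voltammetric responses and locating its maximum, could look as follows. All design points, responses, and factor ranges are made up for illustration and do not reproduce the reported optimum.

    # Hedged sketch of response surface methodology (RSM): fit a second-order model
    # to voltammetric responses and locate the factor settings that maximise it.
    import numpy as np

    # Factors: aptamer concentration (uM), immobilization time (min), interaction time (min)
    X = np.array([
        [0.25, 10, 15], [0.25, 30, 45], [0.75, 10, 45], [0.75, 30, 15],
        [0.50, 20, 30], [0.50, 20, 30], [0.25, 20, 30], [0.75, 20, 30],
        [0.50, 10, 30], [0.50, 30, 30], [0.50, 20, 15], [0.50, 20, 45],
    ])
    y = np.array([2.1, 2.8, 2.6, 2.5, 3.4, 3.3, 2.9, 3.0, 2.7, 3.1, 2.8, 3.2])  # peak current (hypothetical)

    def quad_terms(x):
        a, b, c = x
        return [1, a, b, c, a*b, a*c, b*c, a*a, b*b, c*c]

    A = np.array([quad_terms(x) for x in X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # second-order response surface

    # Evaluate the fitted surface on a grid and pick the maximum (crude but robust).
    grid = [(a, b, c)
            for a in np.linspace(0.25, 0.75, 11)
            for b in np.linspace(10, 30, 11)
            for c in np.linspace(15, 45, 11)]
    best = max(grid, key=lambda x: np.dot(quad_terms(x), coef))
    print("predicted optimum (conc uM, immob min, interaction min):", best)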

    CARS – A Spatio-Temporal BDI Recommender System: Time, Space and Uncertainty

    Agent-based recommender systems have been exploited in recent years to provide informative suggestions to users, showing the advantage of exploiting components like beliefs, goals and trust in the computation of recommendations. However, many real-world scenarios, such as traffic, require the additional ability to represent and reason about spatial and temporal knowledge, taking into account also its vague connotation. This paper tackles this challenge and introduces CARS, a spatio-temporal agent-based recommender system based on the Belief-Desire-Intention (BDI) architecture. Our approach extends the BDI model with spatial and temporal information to represent and reason about the dynamics of fuzzy beliefs and desires. An experimental evaluation of spatio-temporal reasoning in the traffic domain is carried out using the NetLogo platform, showing the improvements our recommender system introduces to support agents in achieving their goals.
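
    The abstract does not detail how fuzzy spatio-temporal beliefs are encoded; one toy way to represent a belief whose degree decays with distance and elapsed time, loosely in the spirit described, is sketched below. The class, membership functions, and traffic example are assumptions, not the CARS formalization.

    # Hedged sketch: a fuzzy spatio-temporal belief for a BDI-style agent.
    from dataclasses import dataclass
    import math

    @dataclass
    class FuzzyBelief:
        proposition: str          # e.g. "congestion on road R12"
        location: tuple           # (x, y) reference point of the belief
        observed_at: float        # time of observation (minutes)
        spatial_radius: float     # distance (m) at which confidence halves
        temporal_halflife: float  # minutes after which confidence halves

        def degree(self, query_location, query_time):
            """Fuzzy degree of belief for a query point in space and time."""
            dx = query_location[0] - self.location[0]
            dy = query_location[1] - self.location[1]
            spatial = 0.5 ** (math.hypot(dx, dy) / self.spatial_radius)
            temporal = 0.5 ** (max(0.0, query_time - self.observed_at) / self.temporal_halflife)
            return spatial * temporal  # in [0, 1]

    belief = FuzzyBelief("congestion on road R12", (100.0, 250.0), 0.0, 300.0, 15.0)
    print(belief.degree((180.0, 250.0), 10.0))  # belief decays with distance and time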

    Myths and Facts About Static Application Security Testing Tools: An Action Research at Telenor Digital

    It is claimed that integrating agile and security in practice is challenging. There is the notion that security is a heavy process, requires expertise, and consumes developers’ time. These contrast with the agile vision. Regardless of these challenges, it is important for organizations to address security within their agile processes, since critical assets must be protected against attacks. One way is to integrate tools that can help to identify security weaknesses during implementation and suggest methods to refactor them. We used quantitative and qualitative approaches to investigate the efficiency of the tools and what they mean to the actual users (i.e., developers) at Telenor Digital. Our findings, although not surprising, show that several barriers exist both in terms of the tools’ performance and the developers’ perceptions. We suggest practical ways for improvement.

    On the Security Cost of Using a Free and Open Source Component in a Proprietary Product

    The work presented in this paper is motivated by the need to estimate the security effort of consuming Free and Open Source Software (FOSS) components within the proprietary software supply chain of a large European software vendor. To this end, we have identified three different cost models: centralized (the company checks each component and propagates changes to the different product groups), distributed (each product group is in charge of evaluating and fixing its consumed FOSS components), and hybrid (only the least used components are checked individually by each development team). We investigated publicly available factors (e.g., development activity such as commits, code size, or the fraction of code written in different programming languages) to identify which ones have the greatest impact on the security effort of using a FOSS component in a larger software product.
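
    The abstract names the three cost models without their formulas; a minimal sketch, assuming a per-component audit cost and a per-product propagation or re-audit cost, could compare them as follows. The functions and numbers are hypothetical, not the paper's actual cost models.

    # Hedged sketch: comparing the three FOSS security-effort cost models named above.
    def centralized(components, audit_cost, products_using, propagate_cost):
        # One central team audits each component once, then propagates to every product.
        return sum(audit_cost[c] + propagate_cost * products_using[c] for c in components)

    def distributed(components, audit_cost, products_using):
        # Every product group audits every component it consumes.
        return sum(audit_cost[c] * products_using[c] for c in components)

    def hybrid(components, audit_cost, products_using, propagate_cost, threshold):
        # Widely used components are handled centrally; rarely used ones per product.
        total = 0
        for c in components:
            if products_using[c] >= threshold:
                total += audit_cost[c] + propagate_cost * products_using[c]
            else:
                total += audit_cost[c] * products_using[c]
        return total

    components = ["openssl", "zlib", "libxml2"]
    audit_cost = {"openssl": 10, "zlib": 2, "libxml2": 5}       # person-days, hypothetical
    products_using = {"openssl": 40, "zlib": 25, "libxml2": 3}
    print(centralized(components, audit_cost, products_using, 0.2))
    print(distributed(components, audit_cost, products_using))
    print(hybrid(components, audit_cost, products_using, 0.2, 10))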

    Serum osteoprotegerin level, carotid-femoral pulse wave velocity and cardiovascular survival in haemodialysis patients

    BACKGROUND: Osteoprotegerin (OPG) is a marker and regulator of arterial calcification, and it is related to cardiovascular survival in haemodialysis patients. The link between OPG and aortic stiffening, a consequence of arterial calcification, has not been previously evaluated in this population, and it is not known whether OPG-related mortality risk is mediated by arterial stiffening. METHODS: At baseline, OPG and aortic pulse wave velocity (PWV) were measured in 98 chronic haemodialysis patients who were followed for a median of 24 months. The relationship between OPG and PWV was assessed by multivariate linear regression. The role of PWV in mediating OPG-related cardiovascular mortality was evaluated by including both OPG and PWV in the same survival model. RESULTS: At baseline, mean (standard deviation) PWV was 11.2 (3.3) m/s and median (interquartile range) OPG was 11.1 (7.5-15.9) pmol/L. There was a strong, positive, linear relationship between PWV and lnOPG (P = 0.009, model R² = 0.540), independent of covariates. During follow-up, 23 patients died of cardiovascular causes. In separate univariate survival models, both PWV and lnOPG were related to cardiovascular mortality [hazard ratios 1.31 (1.14-1.50) and 8.96 (3.07-26.16), respectively]. When both PWV and lnOPG were entered into the same model, only lnOPG remained significantly associated with cardiovascular mortality [hazard ratios 1.11 (0.93-1.33) and 7.18 (1.89-27.25), respectively]. CONCLUSION: In haemodialysis patients, OPG is strongly related to PWV, and OPG-related cardiovascular mortality risk is, in part, mediated by increased PWV.
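
    The survival analysis is only summarised above; a minimal sketch of the mediation check described, fitting Cox proportional hazards models with lnOPG alone, PWV alone, and both together using the lifelines package, might look like this. The simulated data frame and column names are assumptions, not the study data.

    # Hedged sketch: Cox models for OPG, PWV and cardiovascular mortality.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 98
    df = pd.DataFrame({
        "ln_opg":   rng.normal(2.4, 0.4, n),    # ln of OPG (pmol/L), simulated
        "pwv":      rng.normal(11.2, 3.3, n),   # aortic pulse wave velocity (m/s), simulated
        "months":   rng.uniform(1, 24, n),      # follow-up time
        "cv_death": rng.integers(0, 2, n),      # event indicator
    })

    # Univariate models, then a joint model to see whether PWV attenuates the OPG effect.
    for covariates in (["ln_opg"], ["pwv"], ["ln_opg", "pwv"]):
        cph = CoxPHFitter()
        cph.fit(df[covariates + ["months", "cv_death"]],
                duration_col="months", event_col="cv_death")
        print(covariates, cph.hazard_ratios_.round(2).to_dict())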

    Mapping quantitative trait loci (QTL) in sheep. II. Meta-assembly and identification of novel QTL for milk production traits in sheep

    An (Awassi × Merino) × Merino backcross family of 172 ewes was used to map quantitative trait loci (QTL) for different milk production traits on a framework map of 200 loci across all autosomes. Of five previously proposed mathematical models describing lactation curves, the Wood model was considered the most appropriate due to its simplicity and its ability to determine ovine lactation curve characteristics. Derived milk traits for milk, fat, protein and lactose yield, as well as percentage composition and somatic cell score, were used in single- and two-QTL approaches using maximum likelihood estimation and regression analysis. A total of 15 significant (P < 0.01) and an additional 25 suggestive (P < 0.05) QTL were detected across both single-QTL methods and all traits. In preparation for a meta-analysis, all QTL results were compared with a meta-assembly of QTL for milk production traits in dairy ewes from various public domain sources, which can be found on the ReproGen ovine gbrowser http://crcidp.vetsci.usyd.edu.au/cgi-bin/gbrowse/oaries_genome/. Many of the QTL for milk production traits have been reported on chromosomes 1, 3, 6, 16 and 20. Those on chromosomes 3 and 20 are in strong agreement with the results reported here. In addition, novel QTL were found on chromosomes 7, 8, 9, 14, 22 and 24. In a cross-species comparison, we extended the meta-assembly by comparing QTL regions of sheep and cattle, which provided strong evidence for the conservation of synteny of QTL regions for milk, fat, protein and somatic cell score between cattle and sheep.
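
    For reference, the Wood model mentioned above is the standard incomplete-gamma lactation curve (the paper may use a slightly different parameterisation):

    y(t) = a\,t^{b}\,e^{-ct}

    where y(t) is the daily yield at time t after lambing, a scales overall production, b governs the rise to peak and c the rate of decline; the curve peaks at t = b/c with yield a\,(b/c)^{b}e^{-b}.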