
    New Approaches to Software Security Metrics and Measurements

    Meaningful metrics and methods for measuring software security would greatly improve the security of software ecosystems. Such means would make security an observable attribute, helping users make informed choices and allowing vendors to ‘charge’ for it, thus providing strong incentives for more security investment. This dissertation presents three empirical measurement studies introducing new approaches to measuring aspects of software security, focusing on Free/Libre and Open Source Software (FLOSS). First, to revisit the fundamental question of whether software is maturing over time, we study the vulnerability rate of packages in stable releases of the Debian GNU/Linux software distribution. Measuring the vulnerability rate through the lens of Debian stable (a) provides a natural time frame to test for maturing behavior, (b) reduces noise and bias in the data (only CVEs with a Debian Security Advisory are counted), and (c) provides a best-case assessment of maturity (as the Debian release cycle is rather conservative). Overall, our results do not support the hypothesis that software in Debian is maturing over time, suggesting that vulnerability finding-and-fixing does not scale and that more effort should be invested in significantly reducing the rate at which vulnerabilities are introduced, e.g. via ‘security by design’ approaches such as memory-safe programming languages. Second, to gain insights beyond the number of reported vulnerabilities, we study how long vulnerabilities remain in the code of popular FLOSS projects (i.e. their lifetimes). We provide the first, to the best of our knowledge, method for automatically estimating the mean lifetime of a set of vulnerabilities based on information in vulnerability-fixing commits. Using this method, we study the lifetimes of ~6 000 CVEs in 11 popular FLOSS projects. Among a number of findings, we identify two quantities of particular interest for software security metrics: (a) the spread between mean vulnerability lifetime and mean code age at the time of fix, and (b) the rate of change of that spread. Third, to gain insights into the important human aspect of the vulnerability finding process, we study the characteristics of vulnerability reporters for 4 popular FLOSS projects. We provide the first, to the best of our knowledge, method to create a large dataset of vulnerability reporters (>2 000 reporters for >4 500 CVEs) by combining information from a number of publicly available online sources. We then analyze the dataset and identify a number of quantities that, suitably combined, can provide indications regarding the health of a project’s vulnerability finding ecosystem. Overall, we show that measurement studies carefully designed to target crucial aspects of the software security ecosystem can provide valuable insights regarding the ‘quality of security’ of software. However, the road to good security metrics is still long: new approaches covering other important aspects of the process are needed, and the approaches introduced in this dissertation should be further developed and improved.
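
    To make the lifetime computation concrete, below is a minimal sketch assuming each vulnerability's introduction date has already been estimated (in practice, e.g., by dating the lines a fixing commit modifies with `git blame`) and that a mean code age at fix time is available. All dates and ages are invented for illustration; the dissertation's exact estimation procedure may differ.

```python
from datetime import date
from statistics import mean

# Hypothetical per-CVE records: (estimated introduction date,
# date of the vulnerability-fixing commit, mean age in days of the
# project's code at fix time). All values are illustrative.
records = [
    (date(2015, 3, 1), date(2019, 6, 15), 900.0),
    (date(2017, 1, 10), date(2020, 2, 20), 1000.0),
    (date(2014, 7, 4), date(2018, 11, 2), 850.0),
]

# A vulnerability's lifetime is the time between its (estimated)
# introduction and its fix.
lifetimes = [(fixed - introduced).days for introduced, fixed, _ in records]
mean_lifetime = mean(lifetimes)

# The 'spread' of interest: mean code age at fix time minus mean
# vulnerability lifetime. Tracking this spread across releases yields
# its rate of change, the second quantity highlighted above.
mean_code_age = mean(age for _, _, age in records)
spread = mean_code_age - mean_lifetime

print(f"mean lifetime:        {mean_lifetime:.0f} days")
print(f"mean code age at fix: {mean_code_age:.0f} days")
print(f"spread:               {spread:.0f} days")
```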

    Big Data: Learning, Analytics, and Applications

    With the rise of autonomous systems, the automation of fault detection and localization becomes critical to their reliability. An automated strategy that can provide a ranked list of faulty modules or files, ordered by how likely they are to contain the root cause of a problem, would help automate bug localization. Learning from the history of previously located bugs in general, and extracting the dependencies between these bugs in particular, helps in building models that accurately localize newly detected bugs. In this study, we propose a novel fault localization solution based on a learning-to-rank strategy, using the history of previously localized bugs and their dependencies as features, to rank files by their likelihood of containing the root cause of a bug. The evaluation of our approach has shown its effectiveness in localizing dependent bugs.
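
    As an illustration of the learning-to-rank idea, the sketch below trains a pointwise ranker on hypothetical per-file features (past bug count and dependency links to previously faulty files) and ranks candidate files by their predicted probability of containing the root cause. scikit-learn's LogisticRegression is used as a stand-in model; the study's actual features and ranking algorithm are not reproduced here.

```python
# A minimal pointwise learning-to-rank sketch for fault localization.
# Feature values and file names are hypothetical stand-ins for the
# bug-history and bug-dependency features described above.
from sklearn.linear_model import LogisticRegression

# Training rows, one per file: [past_bug_count, dep_links_to_faulty_files].
# Label 1 means the file contained the root cause of a past bug.
X_train = [[5, 3], [0, 0], [2, 4], [1, 0], [7, 6], [0, 1]]
y_train = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X_train, y_train)

# For a newly detected bug, score each candidate file and rank by
# the predicted probability of it containing the root cause.
candidates = {
    "core/io.c": [4, 2],
    "ui/menu.c": [0, 0],
    "net/sock.c": [3, 5],
}
ranked = sorted(
    candidates,
    key=lambda path: model.predict_proba([candidates[path]])[0][1],
    reverse=True,
)
for path in ranked:
    prob = model.predict_proba([candidates[path]])[0][1]
    print(f"{prob:.2f}  {path}")
```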

    Characterizing and Diagnosing Architectural Degeneration of Software Systems from Defect Perspective

    The architecture of a software system is known to degrade as the system evolves over time due to change upon change, a phenomenon termed architectural degeneration. Previous research has focused largely on structural deviations of an architecture from its baseline. However, another angle from which to observe architectural degeneration is software defects, especially those that are architecturally related. This angle has not been scientifically explored until now. Here, we ask two questions: (1) What do defects indicate about architectural degeneration? and (2) How can architectural degeneration be diagnosed from the defect perspective? To answer question (1), we conducted an exploratory case study analyzing defect data over six releases of a large legacy system (approximately 20 million source lines of code, over 20 years old). The relevant defects are those that span multiple components in the system, called multiple-component defects (MCDs). This case study found that MCDs require more changes to fix and are more persistent across development phases and releases than other types of defects. To answer question (2), we developed an approach from the defect perspective, called Diagnosing Architectural Degeneration (DAD), and validated it in another, confirmatory, case study involving three releases of a commercial system (over 1.5 million source lines of code, over 13 years old). This case study found that components of the system tend to have a persistent impact on architectural degeneration over releases; in particular, the impact of a few components is substantially greater than that of the others. These results are new and add to the current knowledge on architectural degeneration. The key conclusions are: (i) analysis of MCDs is a viable approach to characterizing architectural degeneration; and (ii) a method such as DAD can be developed for diagnosing architectural degeneration.
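
    A minimal sketch of the MCD notion follows: a defect is a multiple-component defect when the files changed by its fix map to more than one architectural component. The file-to-component map and defect data below are invented, and DAD itself involves more than this, but counting per-component MCD participation hints at how a defect-based degeneration indicator can be built.

```python
from collections import Counter

# Illustrative mapping from source files to architectural components.
file_to_component = {
    "parser/lex.c": "parser",
    "parser/ast.c": "parser",
    "codegen/emit.c": "codegen",
    "runtime/gc.c": "runtime",
}

# Illustrative defects and the files their fixes touched.
defect_files = {
    "DEF-101": ["parser/lex.c", "parser/ast.c"],    # one component
    "DEF-102": ["parser/ast.c", "codegen/emit.c"],  # MCD
    "DEF-103": ["codegen/emit.c", "runtime/gc.c"],  # MCD
}

def components_of(files):
    """Set of components touched by a list of files."""
    return {file_to_component[f] for f in files}

# A multiple-component defect (MCD) spans more than one component.
mcds = [d for d, files in defect_files.items()
        if len(components_of(files)) > 1]
print("MCDs:", mcds)

# Per-component MCD participation: components that repeatedly appear
# in MCDs are candidates for driving architectural degeneration.
impact = Counter(c for d in mcds for c in components_of(defect_files[d]))
print("MCD participation per component:", dict(impact))
```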

    Open Source Licensing in Mixed Markets, or Why Open Source Software Does Not Succeed

    The rivalry between developers of open source and proprietary software encourages open source developers to court users and respond to their needs. If an open source developer wants to promote her own open source standard and solutions, she may choose liberal license terms such as those of the Berkeley Software Distribution, since proprietary developers will then find it easier to adopt her standard in their products. If she wants to promote the use of open source software per se, she may use more restrictive license terms such as the General Public License to discourage proprietary appropriation of her effort. I show that open source software that comes late into a market will be less likely than more innovative open source software to be compatible with proprietary software, but more likely to be made accessible to inexperienced users.

    Anales del XIII Congreso Argentino de Ciencias de la Computación (CACIC)

    Contents: Computer architectures; Embedded systems; Service-oriented architectures (SOA); Communication networks; Heterogeneous networks; Advanced networks; Wireless networks; Mobile networks; Active networks; Network and service administration and monitoring; Quality of Service (QoS, SLAs); Information security, authentication, and privacy; Infrastructure for digital signatures and digital certificates; Vulnerability analysis and detection; Operating systems; P2P systems; Middleware; Grid infrastructure; Integration services (Web Services or .NET). Red de Universidades con Carreras en Informática (RedUNCI).

    Cardiovascular risk prediction: how useful are web-based tools and do risk representation formats matter?

    Cardiovascular risk prediction tools are becoming increasingly available on the web for people to use at home. However, research into the most effective ways of communicating cardiovascular risk has been limited. This thesis examined how well web-based cardiovascular risk prediction tools present cardiovascular risk and encourage risk reduction. Variation was found in both the quality of the risk communication and the number of features incorporated into the tools to facilitate decisions about lifestyle change and treatment. Additionally, past literature on the effectiveness of cardiovascular risk representation formats was systematically reviewed. This highlighted the need for more methodologically sound studies, using actual risk assessment rather than hypothetical risk scenarios. This thesis also described a four-armed web-based randomised controlled trial (RCT) conducted to examine the effects of different cardiovascular risk representation formats on patient-based outcomes. It comprised a cardiovascular risk formatter that presented risk in one of three formats: bar graph, pictogram, and metonym (e.g. an image depicting the seriousness of having a myocardial infarction). There were two control groups to examine the Hawthorne effect. In total, 903 respondents took part in the trial. The most successful recruitment methods were web-based, including staff electronic noticeboards and social networking sites. The RCT found that viewing cardiovascular risk significantly reduces negative emotions in the 'worried well', thus helping to correct inaccurate risk perceptions. There were no main effects of risk representation format, suggesting that the way risk is presented had little influence on the recruited population in terms of motivating behaviour change, facilitating understanding of risk information, or altering emotion. However, a type II error is possible, as the sample was unrepresentative: highly educated and biased towards those at low cardiovascular risk. Further research is needed to reach target audiences and engage those who would benefit the most from using risk assessment tools.