
    Danger is My Middle Name: Experimenting with SSL Vulnerabilities in Android Apps

    This paper presents a measurement study of information leakage and SSL vulnerabilities in popular Android apps. We perform static and dynamic analysis on 100 apps, downloaded at least 10M times, that request full network access. Our experiments show that, although prior work has drawn a lot of attention to SSL implementations on mobile platforms, several popular apps (32/100) accept all certificates and all hostnames, and four actually transmit sensitive data unencrypted. We set up an experimental testbed simulating man-in-the-middle attacks and find that many apps (up to 91% when the adversary has a certificate installed on the victim's device) are vulnerable, allowing the attacker to access sensitive information, including credentials, files, personal details, and credit card numbers. Finally, we provide a few recommendations to app developers and highlight several open research problems. Comment: A preliminary version of this paper appears in the Proceedings of ACM WiSec 2015. This is the full version.
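
    The "accept all certificates and all hostnames" flaw the authors measure usually boils down to a custom TrustManager and HostnameVerifier that validate nothing. A minimal Java sketch of the anti-pattern, written for illustration only and not taken from any of the studied apps:

        import java.security.SecureRandom;
        import java.security.cert.X509Certificate;
        import javax.net.ssl.HttpsURLConnection;
        import javax.net.ssl.SSLContext;
        import javax.net.ssl.TrustManager;
        import javax.net.ssl.X509TrustManager;

        // Anti-pattern: disables all TLS validation. Any HTTPS connection made
        // after this setup can be intercepted by a man-in-the-middle attacker.
        public class InsecureSsl {

            // A TrustManager that accepts every certificate chain unchecked.
            static final X509TrustManager TRUST_ALL = new X509TrustManager() {
                public void checkClientTrusted(X509Certificate[] chain, String authType) {}
                public void checkServerTrusted(X509Certificate[] chain, String authType) {}
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
            };

            public static void main(String[] args) throws Exception {
                SSLContext ctx = SSLContext.getInstance("TLS");
                ctx.init(null, new TrustManager[] { TRUST_ALL }, new SecureRandom());
                HttpsURLConnection.setDefaultSSLSocketFactory(ctx.getSocketFactory());
                // A HostnameVerifier that accepts every hostname, valid or not.
                HttpsURLConnection.setDefaultHostnameVerifier((hostname, session) -> true);
            }
        }

    Both overrides together are exactly what lets the man-in-the-middle testbed described above succeed.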

    Investigating Automatic Static Analysis Results to Identify Quality Problems: an Inductive Study

    Background: Automatic static analysis (ASA) tools examine source code to discover "issues", i.e. code patterns that are symptoms of bad programming practices and that can lead to defective behavior. Studies in the literature have shown that these tools find defects earlier than other verification activities, but they produce a substantial number of false positive warnings. For this reason, an alternative approach is to use the set of ASA issues to identify defect-prone files and components rather than focusing on the individual issues. Aim: We conducted an exploratory study to investigate whether ASA issues can be used as early indicators of faulty files and components and, for the first time, whether they point to a decay of specific software quality attributes, such as maintainability or functionality. Our aim is to understand the critical parameters and feasibility of such an approach to feed into future research on more specific quality and defect prediction models. Method: We analyzed an industrial C# web application using the ReSharper ASA tool and explored whether significant correlations exist in such a data set. Results: We found promising results when predicting defect-prone files. A set of specific ReSharper categories are better indicators of faulty files than common software metrics or the collection of issues across all issue categories, and these categories correlate with different software quality attributes. Conclusions: Our advice for future research is to perform analysis at the file rather than the component level and to evaluate the generalizability of categories. We also recommend using larger datasets, as we learned that data sparseness can lead to challenges in the proposed analysis process.
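
    To illustrate the kind of analysis involved, the Java sketch below computes a Spearman rank correlation between per-file issue counts and per-file defect counts. The data and class names are invented; the study's actual statistical pipeline is not reproduced here.

        import java.util.Arrays;

        // Minimal sketch: Spearman rank correlation between per-file ASA issue
        // counts and per-file defect counts, using hypothetical data.
        public class IssueDefectCorrelation {

            // Assign 1-based ranks; ties receive the mean of their rank positions.
            static double[] ranks(double[] v) {
                int n = v.length;
                Integer[] idx = new Integer[n];
                for (int i = 0; i < n; i++) idx[i] = i;
                Arrays.sort(idx, (a, b) -> Double.compare(v[a], v[b]));
                double[] r = new double[n];
                int i = 0;
                while (i < n) {
                    int j = i;
                    while (j + 1 < n && v[idx[j + 1]] == v[idx[i]]) j++;
                    double avg = (i + j) / 2.0 + 1.0;
                    for (int k = i; k <= j; k++) r[idx[k]] = avg;
                    i = j + 1;
                }
                return r;
            }

            // Spearman's rho = Pearson correlation of the two rank vectors.
            static double spearman(double[] x, double[] y) {
                double[] rx = ranks(x), ry = ranks(y);
                double mx = Arrays.stream(rx).average().orElse(0);
                double my = Arrays.stream(ry).average().orElse(0);
                double num = 0, dx = 0, dy = 0;
                for (int i = 0; i < x.length; i++) {
                    num += (rx[i] - mx) * (ry[i] - my);
                    dx += (rx[i] - mx) * (rx[i] - mx);
                    dy += (ry[i] - my) * (ry[i] - my);
                }
                return num / Math.sqrt(dx * dy);
            }

            public static void main(String[] args) {
                double[] issuesPerFile  = { 12, 3, 0, 7, 25, 1 };  // hypothetical
                double[] defectsPerFile = {  4, 1, 0, 2,  6, 0 };  // hypothetical
                System.out.printf("Spearman rho = %.3f%n", spearman(issuesPerFile, defectsPerFile));
            }
        }

    A rank correlation is a reasonable default for such data, since issue and defect counts are typically skewed rather than normally distributed.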

    Assessing the impact of affective feedback on end-user security awareness

    A lack of awareness regarding online security behaviour can leave users and their devices vulnerable to compromise. This paper highlights potential areas where users may fall victim to online attacks, and reviews existing tools developed to raise users’ awareness of security behaviour. An ongoing research project is described, which provides a combined monitoring solution and affective feedback system, designed to deliver affective feedback on automatic detection of risky security behaviour within a web browser. Results gained from the research indicate that an affective feedback mechanism in a browser-based environment can promote general awareness of online security.

    Quieting the Static: A Study of Static Analysis Alert Suppressions

    Static analysis tools are commonly used to detect defects before the code is released. Previous research has focused on their overall effectiveness and their ability to detect defects. However, little is known about the usage patterns of warning suppressions: the configurations developers set up in order to prevent the appearance of specific warnings. We address this gap by analyzing how often warning suppression features are used, which warning suppression features are used and for what purpose, and how the use of warning suppression annotations could be avoided. To answer these questions, we examine 1,425 open-source Java-based projects that utilize FindBugs or SpotBugs, looking at both their warning-suppressing configurations and their source code annotations. We find that although most warnings are suppressed, only a small portion of them get frequently suppressed. Contrary to expectations, false positives account for a minor proportion of suppressions. A significant number of suppressions introduce technical debt, suggesting potential disregard for code quality or a lack of appropriate guidance from the tool. Misleading suggestions and incorrect assumptions also lead to suppressions. These findings underscore the need for better communication and education related to the use of static analysis tools, improved bug pattern definitions, and better code annotation. Future research can extend these findings to other static analysis tools and apply them to improve the effectiveness of static analysis. Comment: 11 pages, 4 figures
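
    For context, a source-level suppression in a FindBugs/SpotBugs project typically looks like the Java sketch below; the class, the chosen bug pattern, and the justification text are illustrative, not drawn from the studied projects.

        import java.util.HashMap;
        import java.util.Map;

        import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;

        public class OrderCache {

            private final Map<String, Object> cache = new HashMap<>();

            // Silences one bug pattern on one method. The optional justification
            // field is where a reviewer would expect the rationale that, per the
            // study, is often missing or misleading in practice.
            @SuppressFBWarnings(
                    value = "EI_EXPOSE_REP",
                    justification = "Internal callers only; the view is intentionally shared")
            public Map<String, Object> internalView() {
                return cache;
            }
        }

    Project-wide suppressions, the warning-suppressing configurations the paper also mines, are instead expressed in an exclude filter XML file passed to the tool.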

    Myths and Facts About Static Application Security Testing Tools: An Action Research at Telenor Digital

    It is claimed that integrating agile and security in practice is challenging. There is the notion that security is a heavy process, requires expertise, and consumes developers’ time. These contrast with the agile vision. Regardless of these challenges, it is important for organizations to address security within their agile processes, since critical assets must be protected against attacks. One way is to integrate tools that can help identify security weaknesses during implementation and suggest methods to refactor them. We used quantitative and qualitative approaches to investigate the efficiency of the tools and what they mean to the actual users (i.e. developers) at Telenor Digital. Our findings, although not surprising, show that several barriers exist both in terms of the tools’ performance and developers’ perceptions. We suggest practical ways for improvement.

    What to Fix? Distinguishing between design and non-design rules in automated tools

    Technical debt, the design shortcuts taken to optimize for delivery speed, is a critical part of long-term software costs. Consequently, automatically detecting technical debt is a high priority for software practitioners. Software quality tool vendors have responded to this need by positioning their tools to detect and manage technical debt. While these tools bundle a number of rules, it is hard for users to understand which rules identify design issues, as opposed to syntactic quality. This is important, since previous studies have revealed that the most significant technical debt is related to design issues. Other research has focused on comparing these tools on open source projects, but these comparisons have not looked at whether the rules were relevant to design. We conducted an empirical study using a structured categorization approach and manually classified 466 software quality rules from three industry tools: CAST, SonarQube, and NDepend. We found that most of these rules were easily labeled as either not design (55%) or design (19%). The remainder (26%) resulted in disagreements among the labelers. Our results are a first step in formalizing a definition of a design rule, in order to support automatic detection. Comment: Long version of accepted short paper at the International Conference on Software Architecture 2017 (Gothenburg, SE).
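
    To make the design/non-design distinction concrete, the Java sketch below contrasts the two kinds of findings; it is our own example, and the rule descriptions paraphrase typical checks rather than quoting any of the three tools.

        import java.util.List;

        interface Database { List<String> queryAll(String table); }
        interface HtmlFormatter { String render(List<String> rows); }
        interface Mailer { void send(String to, String body); }

        public class ReportService {

            // Non-design rule territory: a magic number. The fix is local and
            // purely syntactic.
            public boolean isLarge(List<String> rows) {
                return rows.size() > 500;
            }

            // Design rule territory: one method couples persistence, formatting,
            // and mailing concerns. Fixing it means redistributing
            // responsibilities, not editing a single line.
            public void export(Database db, HtmlFormatter fmt, Mailer mailer) {
                List<String> rows = db.queryAll("reports");
                mailer.send("ops@example.com", fmt.render(rows));
            }
        }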

    Mining Fix Patterns for FindBugs Violations

    In this paper, we first collect and track a large number of fixed and unfixed violations across revisions of software. The empirical analyses reveal that there are discrepancies in the distributions of violations that are detected and those that are fixed, in terms of occurrences, spread, and categories, which can provide insights into prioritizing violations. To automatically identify patterns in violations and their fixes, we propose an approach that utilizes convolutional neural networks to learn features and clustering to regroup similar instances. We then evaluate the usefulness of the identified fix patterns by applying them to unfixed violations. The results show that developers will accept and merge a majority (69/116) of fixes generated from the inferred fix patterns. It is also noteworthy that the yielded patterns are applicable to four real bugs in Defects4J, a major benchmark for software testing and automated repair. Comment: Accepted for IEEE Transactions on Software Engineering
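
    As a concrete illustration of what a recurring fix pattern looks like, consider the well-known FindBugs violation ES_COMPARING_STRINGS_WITH_EQ and its usual repair; this is our own hypothetical example, not one of the patterns inferred by the paper.

        public class StringCompare {

            // Violation (ES_COMPARING_STRINGS_WITH_EQ): == compares references,
            // so two equal strings from different sources may report false.
            static boolean sameLabelBuggy(String a, String b) {
                return a == b;
            }

            // Recurring fix pattern: compare content with equals(), guarding null.
            static boolean sameLabelFixed(String a, String b) {
                return a == null ? b == null : a.equals(b);
            }
        }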

    Using Static Analysis and Static Measurement for Industrial Software Quality Evaluation

    Business organizations that outsource software development need to evaluate the quality of the code delivered by suppliers. In this paper, we illustrate an experience in setting up and using a toolset for evaluating code quality for a company that outsources software development. The selected tools perform static code analysis and static measurement, and provide evidence of possible quality issues. To verify whether the issues reported by the tools are associated with real problems, code inspections were carried out. The combination of automated analysis and inspections proved effective, in that several types of defects were identified. Based on our findings, the business company was able to learn which types of defects most frequently and dangerously affect the acquired code: this knowledge is now being used on a regular basis to perform focused verification activities.
