8 research outputs found

    Automated identification and qualitative characterization of safety concerns reported in UAV software platforms

    Unmanned Aerial Vehicles (UAVs) are nowadays used in a variety of applications. Given the cyber-physical nature of UAVs, software defects in these systems can cause issues with safety-critical implications. An important aspect of the lifecycle of UAV software is to minimize the possibility of harming humans or damaging property through a continuous process of hazard identification and safety risk management. Specifically, safety-related concerns typically emerge during the operation of UAV systems, reported by end-users and developers in the form of issue reports and pull requests. However, popular UAV systems receive tens or hundreds of reports of varying types and quality every day. To help developers identify and triage safety-critical UAV issues in a timely manner, we (i) experiment with automated approaches (previously used for issue classification) for detecting safety-related matters appearing in the titles and descriptions of issues and pull requests reported in UAV platforms, and (ii) propose a categorization of the main hazards and accidents discussed in such issues. Our results (i) show that shallow machine learning-based approaches can identify safety-related sentences with precision, recall, and F-measure values of about 80%; and (ii) provide a categorization and description of the relationships between safety issue hazards and accidents.
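    A shallow machine learning approach of the kind described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the training sentences, labels, and choice of TF-IDF with logistic regression are all invented for the example.

    ```python
    # Hypothetical sketch: flag safety-related sentences with a shallow
    # ML classifier (TF-IDF features + logistic regression). Training
    # data below is invented for illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "Drone crashed after motor failure during landing",
        "Altitude hold causes sudden drops near obstacles",
        "Update README with new build instructions",
        "Refactor logging module for clarity",
    ]
    train_labels = [1, 1, 0, 0]  # 1 = safety-related, 0 = not

    # Unigram + bigram TF-IDF feeds a linear classifier.
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(train_texts, train_labels)

    pred = clf.predict(["Propeller stopped mid-flight and the UAV fell"])
    print(pred)
    ```

    In practice such a classifier would be trained on many labeled issue-report sentences and evaluated with precision, recall, and F-measure, as the study does.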

    An NLP-based tool for software artifacts analysis

    Software developers rely on various repositories and communication channels to exchange relevant information about their ongoing tasks and the status of overall project progress. In this context, semi-structured and unstructured software artifacts have been leveraged by researchers to build recommender systems aimed at supporting developers in different tasks, such as transforming user feedback into maintenance and evolution tasks, suggesting experts, or generating software documentation. More specifically, Natural Language (NL) parsing techniques have been successfully leveraged to automatically identify (or extract) the relevant information embedded in unstructured software artifacts. However, such techniques require the manual identification of patterns to be used for classification purposes. To reduce this manual effort, we propose an NL parsing-based tool for software artifacts analysis named NEON that can automate the mining of such rules, minimizing the manual effort of developers and researchers. Through a small study involving human subjects with NL processing and parsing expertise, we assess the performance of NEON in identifying rules useful to classify app reviews for software maintenance purposes. Our results show that more than one-third of the rules inferred by NEON are relevant for the proposed task. Demo webpage: https://github.com/adisorbo/NEON
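    The idea of classifying app reviews with mined rules can be illustrated with a small, hedged sketch. This is not NEON itself: the rules below are plain lexical patterns, and the categories and patterns are invented for the example (NEON works on richer NL-parse-based rules).

    ```python
    # Hypothetical illustration of rule-based review classification:
    # each rule is a regex pattern whose presence in a sentence maps it
    # to a maintenance-related category. Rules/categories are invented.
    import re

    RULES = {
        "bug report": [r"\bcrash(es|ed)?\b", r"\bdoes not work\b", r"\bfreez(e|es|ing)\b"],
        "feature request": [r"\bplease add\b", r"\bwould be (nice|great)\b", r"\bi wish\b"],
    }

    def classify(sentence: str) -> list:
        """Return every category whose rule set matches the sentence."""
        s = sentence.lower()
        return [cat for cat, pats in RULES.items()
                if any(re.search(p, s) for p in pats)]

    print(classify("The app crashes when I open the camera"))    # ['bug report']
    print(classify("Please add a dark mode, it would be great")) # ['feature request']
    ```

    NEON's contribution is precisely to infer such rules automatically from parse trees instead of requiring researchers to hand-craft them.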

    Can Twitter be used to Acquire Reliable Alerts against Novel Cyber Attacks?

    Time-relevant and accurate threat information from public domains is essential for cyber security. In a constantly evolving threat landscape, such information assists security researchers in thwarting attack strategies. In this work, we collect and analyze threat-related information from Twitter to extract intelligence for proactive security. We first use a convolutional neural network to classify tweets according to whether or not they contain valuable threat indicators. In particular, to gather threat intelligence from social media, the proposed approach collects pertinent Indicators of Compromise (IoCs) from tweets, such as IP addresses, URLs, file hashes, domain addresses, and CVE IDs. Then, we analyze the IoCs to confirm whether they are reliable and valuable for threat intelligence using performance indicators, such as correctness, timeliness, and overlap. We also evaluate how fast Twitter shares IoCs compared to existing threat intelligence services. Furthermore, through machine learning models, we classify Twitter accounts as either automated or human-operated and delve into the role of bot accounts in disseminating cyber threat information on social media. Our results demonstrate that Twitter is growing into a powerful platform for gathering precise and pertinent malware IoCs and a reliable source for mining threat intelligence.
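    Collecting IoCs such as IP addresses, URLs, file hashes, and CVE IDs from tweet text is commonly done with pattern matching. The following is a minimal, hypothetical sketch of that step, not the paper's pipeline (which also applies a CNN-based relevance filter); the regexes are simplified approximations of the real formats.

    ```python
    # Hedged sketch: extract candidate IoCs from tweet text with
    # simplified regular expressions. Real IoC extractors handle
    # defanged notation (e.g. hxxp://, 1[.]2[.]3[.]4) and validation.
    import re

    IOC_PATTERNS = {
        "ipv4": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
        "cve": r"\bCVE-\d{4}-\d{4,}\b",
        "md5": r"\b[a-fA-F0-9]{32}\b",
        "url": r"https?://\S+",
    }

    def extract_iocs(text: str) -> dict:
        """Map each IoC type to the list of matches found in the text."""
        return {name: re.findall(pat, text) for name, pat in IOC_PATTERNS.items()}

    tweet = "New malware C2 at 192.168.10.5 exploiting CVE-2023-12345, sample http://evil.example/payload"
    print(extract_iocs(tweet))
    ```

    Extracted candidates would then be checked against threat intelligence feeds for correctness and timeliness, as the study describes.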

    SURF: Replication Package for the paper entitled "Summarizing App Reviews for Planning Future Change Tasks"

    Description of the content of folder "SURF_replication_package":
    1) "Experiment I" contains:
       a) the folder "summaries", which contains all the HTML summaries generated through SURF and browsed by the study participants involved in Experiment I;
       b) the folder "XMLreviews", which contains, for each of the apps involved in Experiment I, an XML file with all the reviews collected for that app. These XML files were used as input to the SURF tool to generate the summaries in the "summaries" folder;
       c) "Experiment_I_results.xlsx", which contains all the answers to our survey collected from the Experiment I participants.
    2) "Experiment II" contains:
       a) the folder "summaries", which contains the two HTML summaries generated through SURF and browsed by the study participants in Experiment II;
       b) the folder "XMLreviews", which contains, for each of the two apps involved in Experiment II, an XML file with all the reviews collected for that app. These XML files were used as input to the SURF tool to generate the summaries in the "summaries" folder;
       c) "Experiment_II_results.xlsx", which contains all the user feedback extracted/validated by the survey participants in the two sub-experiments;
       d) "Experiment_II_survey_answers.xlsx", which contains all the answers to our survey collected from the Experiment II participants.
    3) "Survey.pdf", the PDF version of the survey completed by the participants.
    4) "SURF_tool.zip", which contains:
       a) "SURF.jar", the class files of a prototypical implementation of SURF;
       b) "README.txt", the instructions to run the SURF tool;
       c) the "lib" folder, with all the Java libraries needed to run SURF.

    Replication Package of the paper entitled "Summarizing App Reviews for Planning Future Change Tasks"

    Description of the content of folder "SURF_replication_package":
    1) "Experiment I" contains:
       a) the folder "summaries", which contains all the HTML summaries generated through SURF and browsed by the study participants involved in Experiment I;
       b) the folder "XMLreviews", which contains, for each app involved in Experiment I, the corresponding reviews collected in an XML file. These XML files were used as input to the SURF tool to generate the summaries in the "summaries" folder;
       c) "Experiment_I_results.xlsx", which contains all the answers to our survey collected from the Experiment I participants.
    2) "Experiment II" contains:
       a) the folder "summaries", which contains the two HTML summaries generated through SURF and browsed by the study participants in Experiment II;
       b) the folder "XMLreviews", which contains, for each of the two apps involved in Experiment II, the reviews collected in the corresponding XML files. These XML files were used as input to the SURF tool to generate the summaries in the "summaries" folder;
       c) "Experiment_II_results.xlsx", which contains all the user feedback extracted/validated by the survey participants in the two sub-experiments;
       d) "Experiment_II_survey_answers.xlsx", which contains all the answers to our survey collected from the Experiment II participants.
    3) "Survey.pdf", the PDF version of the survey completed by the participants.