6,184 research outputs found

    Analysis and evaluation of SafeDroid v2.0, a framework for detecting malicious Android applications

    Android smartphones have become a vital part of the daily routine of millions of people, running a plethora of applications available in the official and alternative marketplaces. Although many security mechanisms scan and filter malicious applications, malware still reaches the devices of many end users. In this paper, we introduce SafeDroid v2.0, a flexible, robust, and versatile open-source framework for statically analysing Android applications based on machine learning techniques. Besides the automated production of prediction and classification models with maximum accuracy and minimum negative errors, the main goal of our work is to offer an out-of-the-box framework that Android security researchers can use to experiment efficiently in search of effective solutions: SafeDroid v2.0 makes it possible to test many different combinations of machine learning classifiers, with a high degree of freedom and flexibility in the aspects to consider, such as dataset balance and dataset selection. The framework also provides a server for generating experiment reports and an Android application for verifying the produced models in real-life scenarios. An extensive campaign of experiments shows how competitive solutions can be found efficiently: the results confirm that SafeDroid v2.0 achieves very good performance, even with highly unbalanced dataset inputs, and always with very limited overhead.
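    To make the experimentation workflow concrete, here is a minimal sketch (not SafeDroid's actual code) of the kind of classifier comparison such a framework automates, assuming a binary feature matrix has already been extracted from the APKs; the features, labels, and classifier choices below are synthetic placeholders.

```python
# Hypothetical sketch of the classifier-comparison loop a framework like
# SafeDroid v2.0 automates; the APK feature-extraction step is assumed to
# have already produced a binary feature matrix X (e.g., permissions,
# API calls) and labels y (1 = malicious, 0 = benign).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_apps, n_features = 500, 40
X = rng.integers(0, 2, size=(n_apps, n_features))   # placeholder static features
y = (rng.random(n_apps) < 0.15).astype(int)         # highly unbalanced labels

classifiers = {
    "random_forest": RandomForestClassifier(n_estimators=100, class_weight="balanced"),
    "linear_svm": LinearSVC(class_weight="balanced"),
    "logistic_regression": LogisticRegression(max_iter=1000, class_weight="balanced"),
}

# F1 is more informative than raw accuracy on unbalanced malware datasets.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```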

    Translating Video Recordings of Mobile App Usages into Replayable Scenarios

    Screen recordings of mobile applications are easy to obtain and capture a wealth of information pertinent to software developers (e.g., bugs or feature requests), making them a popular mechanism for crowdsourced app feedback. Thus, these videos are becoming a common artifact that developers must manage. In light of unique mobile development constraints, including swift release cycles and rapidly evolving platforms, automated techniques for analyzing all types of rich software artifacts provide benefit to mobile developers. Unfortunately, automatically analyzing screen recordings presents serious challenges, due to their graphical nature, compared to other types of (textual) artifacts. To address these challenges, this paper introduces V2S, a lightweight, automated approach for translating video recordings of Android app usages into replayable scenarios. V2S is based primarily on computer vision techniques and adapts recent solutions for object detection and image classification to detect and classify user actions captured in a video, and convert these into a replayable test scenario. We performed an extensive evaluation of V2S involving 175 videos depicting 3,534 GUI-based actions collected from users exercising features and reproducing bugs from over 80 popular Android apps. Our results illustrate that V2S can accurately replay scenarios from screen recordings, and is capable of reproducing ≈89% of our collected videos with minimal overhead. A case study with three industrial partners illustrates the potential usefulness of V2S from the viewpoint of developers. Comment: In proceedings of the 42nd International Conference on Software Engineering (ICSE'20), 13 pages.
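    As an illustration of the general shape of such a pipeline (not the authors' implementation), the sketch below extracts frames from a recording, applies a placeholder touch detector where V2S would run its object-detection and classification models, and replays the recovered taps through adb; detect_touches is a hypothetical stand-in.

```python
# Illustrative outline of a video-to-scenario pipeline: extract frames,
# detect touch indicators in each frame, and replay the recovered taps
# via adb. detect_touches() is a hypothetical placeholder for the trained
# object-detection/classification models a system like V2S actually uses.
import subprocess
import cv2  # pip install opencv-python


def detect_touches(frame):
    """Placeholder: return a list of (x, y) screen coordinates where a touch
    indicator is visible in this frame. A real system would run an object
    detector here."""
    return []


def video_to_taps(video_path):
    taps = []
    capture = cv2.VideoCapture(video_path)
    ok, frame = capture.read()
    while ok:
        taps.extend(detect_touches(frame))
        ok, frame = capture.read()
    capture.release()
    return taps


def replay(taps, device=None):
    # Replay each recovered tap on a connected device or emulator via adb.
    base = ["adb"] + (["-s", device] if device else [])
    for x, y in taps:
        subprocess.run(base + ["shell", "input", "tap", str(int(x)), str(int(y))],
                       check=True)


if __name__ == "__main__":
    replay(video_to_taps("recording.mp4"))
```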

    Automated Test Selection for Android Apps Based on APK and Activity Classification

    Several techniques exist for mobile test automation, from script-based techniques to automated test generation based on GUI models. Most techniques fall short of extensive adoption by practitioners because of the very costly definition (and maintenance) of test cases. We present a novel testing framework for Android apps that allows a developer to write effective test scripts without having to know the implementation details and the user interface of the app under test. The main goal of the framework is to generate adaptive tests that can be executed on a significant number of apps, or on different releases of the same app, without manual editing of the tests. The framework consists of: (1) a Test Scripting Language, which allows the tester to write generic test scripts tailored to activity and app categories; (2) a State Graph Modeler, which creates a model of the app's GUI, identifying activities (i.e., screens) and widgets; (3) an app classifier, which determines the type of application under test; (4) an activity classifier, which determines the purpose of each screen; and (5) a test adapter, which executes test scripts compatible with the specific app and activity, automatically tailoring them to the classes of the app and the activities under test. We empirically evaluated the components of our testing framework. The classifiers outperformed available approaches in the literature. The testing framework correctly adapted high-level test cases to 28 out of 32 applications and reduced the LOC of the test scripts by around 90%. We conclude that machine learning can be fruitfully applied to the creation of high-level, adaptive test cases for Android apps. Our framework is modular and allows expansion through the addition of new commands to be executed on the classified apps and activities.
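    The abstract does not spell out the Test Scripting Language, so the sketch below only conveys the flavor of an adaptive, category-based test step: a generic action refers to a widget role rather than a concrete ID, and a test adapter resolves it against the classified activity. All names and structures here are illustrative assumptions, not the framework's actual API.

```python
# Hypothetical flavor of an "adaptive" high-level test step: the script refers
# to widget roles on a classified activity (e.g., a "login" screen) rather than
# concrete widget IDs, and a test adapter resolves each step against the app
# under test. Every name below is illustrative.
from dataclasses import dataclass


@dataclass
class Widget:
    role: str         # e.g., "username_field", "password_field", "submit_button"
    resource_id: str  # concrete Android resource id in the app under test


@dataclass
class Activity:
    category: str     # assigned by the activity classifier, e.g., "login"
    widgets: list


def adapt_step(step, activity):
    """Map a generic step like ('fill', 'username_field', 'alice') onto the
    concrete widget of the current activity."""
    action, role, value = step
    for widget in activity.widgets:
        if widget.role == role:
            return {"action": action, "resource_id": widget.resource_id, "value": value}
    raise LookupError(f"activity '{activity.category}' has no widget with role '{role}'")


login = Activity("login", [Widget("username_field", "com.example:id/user"),
                           Widget("password_field", "com.example:id/pass"),
                           Widget("submit_button", "com.example:id/login_btn")])

script = [("fill", "username_field", "alice"),
          ("fill", "password_field", "s3cret"),
          ("tap", "submit_button", None)]

for step in script:
    print(adapt_step(step, login))
```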

    Integration of mobile devices in home automation with use of machine learning for object recognition

    The concept of smart homes is expanding rapidly, and the number of connected objects we have at home grows exponentially. The so-called Internet of Things encompasses more and more home devices, and the need to control them is growing as well. However, there are numerous platforms that integrate numerous protocols and devices in many ways, many of them unintuitive. Our mobile devices are something we always carry with us, and with the evolution of technology they have become increasingly powerful and equipped with many sensors. One of their bridges to the real world is the camera, with its many potential uses. The information it gathers can be used in a variety of ways, and Artificial Intelligence and Machine Learning algorithms are a topic that has also gained tremendous relevance. Thus, with the correct processing, data collected by the sensors can be used intuitively to interact with the devices present at home. This dissertation presents the prototype of a system that integrates mobile devices into home automation platforms by detecting objects in the information collected by their cameras, thereby allowing the user to interact with them in an intuitive way. The main contribution of this work is the previously unexplored integration, in the home automation context, of cutting-edge algorithms capable of easily outperforming humans at analyzing and processing data acquired by our mobile devices. Throughout the dissertation, these concepts are explored, along with the potential of this integration and the results obtained.
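    As a rough sketch of the idea (under assumptions not stated in the abstract), the code below maps a recognized object to a home-automation action; the recognizer is a placeholder for an on-device model, and the REST call follows Home Assistant's service API with a hypothetical host, token, and entity id.

```python
# Illustrative sketch (not the dissertation's implementation): recognize an
# object in a camera frame and trigger the matching home-automation service.
# recognize_object() is a placeholder for an on-device image classifier; the
# host, token, and entity ids below are hypothetical.
import requests


def recognize_object(frame_bytes):
    """Placeholder: return a label such as 'lamp' or 'tv' for the detected
    object. A real system would run an image-classification model here."""
    return "lamp"


# Map recognized objects to Home Assistant (domain, service, entity_id) triples.
OBJECT_TO_SERVICE = {
    "lamp": ("light", "toggle", "light.living_room_lamp"),
    "tv":   ("media_player", "toggle", "media_player.living_room_tv"),
}


def act_on_object(label, host="http://homeassistant.local:8123", token="YOUR_TOKEN"):
    if label not in OBJECT_TO_SERVICE:
        return
    domain, service, entity_id = OBJECT_TO_SERVICE[label]
    # Home Assistant REST API: POST /api/services/<domain>/<service>
    requests.post(
        f"{host}/api/services/{domain}/{service}",
        headers={"Authorization": f"Bearer {token}"},
        json={"entity_id": entity_id},
        timeout=5,
    )


if __name__ == "__main__":
    act_on_object(recognize_object(b""))
```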