41 research outputs found

    A privacy awareness system for software design

    Get PDF
    There have been concerted policy and legal initiatives to mitigate the privacy harm resulting from badly designed software technology. But one main challenge to realizing these initiatives is the difficulty of translating proposed principles and regulations into concrete and verifiable evidence in technology. This is partly due to the lack of systematic techniques and tools for addressing privacy in software design, making it difficult for the designer to measure disclosure risk in an intuitive way that takes into account the privacy objective that matters to each end user. To bridge this gap, we propose a framework for verifying the satisfaction of user privacy objectives in software design. Our approach is based on the (un)awareness that users acquire when information is disclosed, as it relates to the communication properties of objects in a design. This property is used to determine the expected privacy utility that users will derive from the design for a specified privacy objective. We demonstrate through case studies how this approach can help designers determine which design decisions undermine users’ privacy expectations and identify better design alternatives.

    Permission-based Risk Signals for App Behaviour Characterization in Android Apps

    Get PDF
    With the parallel growth of the Android operating system and mobile malware, one way to stay protected from mobile malware is to observe the permissions an app requests. However, without careful consideration of these permissions, users run the risk of an installed app being malware, without any warning that might characterize its nature. We propose a permission-based risk signal using a taxonomy of sensitive permissions. Firstly, we analyse the risk of an app based on the permissions it requests, using a permission sensitivity index computed from a risky permission set. Secondly, we evaluate permission mismatch by checking the permissions an app requires against those it requests. Thirdly, we define security rules over these metrics to evaluate the corresponding risks. We evaluate these factors using datasets of benign and malicious apps (43,580 apps in total), and our results demonstrate that the proposed framework can be used to improve risk signalling of Android apps with 95% accuracy.
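    The abstract does not give the formula for the permission sensitivity index; a minimal sketch, assuming the index is the weighted share of an app's requested permissions that fall in a risky permission set (the permission weights here are illustrative, not from the paper):

```python
# Hypothetical risky permission set with illustrative sensitivity weights.
RISKY_PERMISSIONS = {
    "READ_SMS": 3, "SEND_SMS": 3, "READ_CONTACTS": 2,
    "ACCESS_FINE_LOCATION": 2, "RECORD_AUDIO": 2, "INTERNET": 1,
}

def sensitivity_index(requested):
    """Score in [0, 1]: 0 = no risky permissions requested, 1 = maximally risky.

    Sums the weights of the requested permissions that appear in the risky
    set, normalised by the maximum possible weight for that many permissions.
    """
    if not requested:
        return 0.0
    max_weight = max(RISKY_PERMISSIONS.values())
    total = sum(RISKY_PERMISSIONS.get(p, 0) for p in requested)
    return total / (max_weight * len(requested))
```

    An app requesting only READ_SMS would score 1.0 under this sketch, while one requesting only unlisted permissions would score 0.0; a rule-based risk signal could then threshold this score.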

    Implement a model for describing and maximising security knowledge sharing

    Get PDF
    Employees play a crucial role in improving information security in their enterprise, and this requires everyone to have the requisite security knowledge. To maximise knowledge, organisations should facilitate and encourage Security Knowledge Sharing (SKS) between employees. This paper reports on the design and implementation of a mobile game to enhance the delivery of information security training and help employees protect themselves against security attacks. The collaborative Transactive Memory System (TMS) theory was used to model organisational knowledge sharing. We then satisfy the self-determination needs of employees, via an Educational Security Game, to maximise their intrinsic motivation to share knowledge at the individual level. An empirical study evaluated the intervention, an application that facilitates and encourages Information Security Knowledge Sharing; the results are still in progress.

    Towards using unstructured user input request for malware detection

    Get PDF
    Privacy analysis techniques for mobile apps are mostly based on system-centric data originating from well-defined system API calls. However, these apps may also collect sensitive information via unstructured input sources that elude privacy analysis. The consequence is that users are unable to determine the extent to which apps may impact their privacy when downloaded and installed on mobile devices. To this end, we present a privacy analysis framework for unstructured input. Our approach leverages app meta-data descriptions and a taxonomy of sensitive information to identify sensitive unstructured user input. The outcome is an understanding of the level of privacy risk an app poses based on its unstructured user input requests. Subsequently, we evaluate the usefulness of unstructured sensitive user input for malware detection, using 175K benign apps and 175K malware APKs. The results highlight that a malicious-app detector built on unstructured sensitive user input achieves an average balanced accuracy of 0.996, demonstrated with Trojan-Banker and Trojan-SMS when the malware family and target applications are known, and a balanced accuracy of 0.70 with generic malware.
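    The balanced accuracy metric reported above is standard: the mean of the recall on each class, which avoids inflated scores when benign and malware classes are imbalanced. A minimal sketch of the computation (the counts below are made-up examples, not results from the paper):

```python
def balanced_accuracy(tp, fn, tn, fp):
    """Mean of sensitivity (recall on the malware class) and
    specificity (recall on the benign class)."""
    sensitivity = tp / (tp + fn)   # malware correctly flagged
    specificity = tn / (tn + fp)   # benign correctly passed
    return (sensitivity + specificity) / 2

# Illustrative confusion-matrix counts for a hypothetical detector:
# 90 of 100 malware caught, 80 of 100 benign apps passed.
score = balanced_accuracy(tp=90, fn=10, tn=80, fp=20)  # 0.85
```

    Unlike plain accuracy, this score stays at 0.5 for a detector that labels everything malware, which is why it is the appropriate metric for the 175K/175K dataset described.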

    The case for privacy awareness requirements

    Get PDF
    Privacy awareness is a core determinant of the success or failure of privacy infrastructures: if systems and users are not aware of potential privacy concerns, they cannot effectively discover, use or judge the effectiveness of privacy management capabilities. Yet privacy awareness is only implicitly described or implemented during the privacy engineering of software systems. In this paper, the author advocates a systematic approach to considering privacy awareness. He characterizes privacy awareness and illustrates its benefits for preserving privacy in a smart mobile environment. The author proposes privacy awareness requirements to anchor the consideration of the privacy awareness needs of software systems, and based on these needs an initial process framework for identifying privacy awareness issues is proposed. He also argues that a systematic route to privacy awareness necessitates investigating an appropriate representation language and analysis mechanisms, and understanding the socio-technical factors that shape the manner in which we regulate our privacy.

    Privacy Engineering in Dynamic Settings

    No full text
    Modern distributed software platforms link smart objects such as smartphones, cars and health devices to the Internet. A frequent challenge in the design of such platforms is determining the appropriate information disclosure protocol to use when one object interacts with another. For example, how can a software architect verify that when the platform constrains the sender to obtain consent from the subject before disclosure, or to notify the subject after disclosure, the privacy needs of the subject are addressed? To this end, this research presents an analysis framework for privacy engineering. We demonstrate how the framework’s outputs can help software architects achieve privacy-by-design of software platforms for smart objects.
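    The consent-before-disclosure constraint mentioned in the example can be pictured as a property checked over a trace of platform events. The sketch below is a hypothetical illustration of that idea, not the paper's framework; the event encoding is an assumption:

```python
# An event is a (action, subject) pair; "consent-before" holds if every
# disclosure of a subject's data is preceded by that subject's consent.
def satisfies_consent_before(trace):
    consented = set()
    for action, subject in trace:
        if action == "consent":
            consented.add(subject)
        elif action == "disclose" and subject not in consented:
            return False  # disclosure without prior consent: property violated
    return True
```

    A model checker generalises this single-trace check to all traces a platform design can produce, which is what lets an architect verify the protocol rather than test it.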

    Analysing Privacy Conflicts in Web-Based Systems

    Get PDF
    Data Protection Impact Assessments (DPIAs) are used to assess how well a series of design choices safeguards the privacy concerns of data subjects, but they do not address how to analyse privacy conflicts. The challenge with current work on privacy conflict is the need to understand the perceived levels of sensitivity in order to facilitate negotiations, and it is unclear how this can be achieved within the DPIA procedure. In this work we introduce our model checking tool, along with our method for addressing privacy conflict. We present our evaluation plan before concluding with our research roadmap.

    Security-oriented view of app behaviour using textual descriptions and user-granted permission requests

    Get PDF
    One of the major Android security mechanisms for enforcing restrictions on the core facilities of a device that an app can access is permission control. However, granting permissions carries considerable risk, since 97% of malicious mobile malware targets Android. As malware becomes more sophisticated, recent research has proposed a promising approach that checks implemented app behaviour against advertised app behaviour for inconsistencies. In this paper, we investigate such inconsistencies by matching the permissions an app requests with the natural language descriptions of the app, which give an intuitive idea of the app behaviour users expect. We then propose exploiting an enhanced app description to improve malware detection based on app descriptions and permissions. To evaluate performance, we carried out experiments with 56K APKs. Our proposed enhancement reduces the false positives of the state-of-the-art approaches Whyper, AutoCog and CHABADA by at least 87%, and of TAPVerifier by at least 57%. We also propose a novel approach for evaluating the robustness of textual descriptions for permission-based malware detection. Our experimental results demonstrate a high detection recall of 98.72% on 71 up-to-date malware families and a precision of 90% on obfuscated samples of benign and malware APKs. Our results also show that analysing the sensitive permissions requested together with UI textual descriptions provides a promising avenue for sustainable Android malware detection.
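    The core idea of the description/permission matching family of tools (Whyper, AutoCog, CHABADA) can be sketched as a keyword lookup: a requested permission with no supporting vocabulary in the app description is a candidate inconsistency. The keyword table below is a simplified assumption; the actual tools use NLP models rather than literal substring matching:

```python
# Hypothetical mapping from sensitive permissions to description keywords
# that would justify requesting them.
KEYWORDS = {
    "RECORD_AUDIO": {"record", "voice", "audio", "microphone"},
    "ACCESS_FINE_LOCATION": {"location", "map", "nearby", "gps"},
    "READ_CONTACTS": {"contact", "friend", "address book"},
}

def mismatched_permissions(description, requested):
    """Return requested permissions with no supporting keyword in the
    description -- candidate advertised-vs-implemented inconsistencies."""
    words = description.lower()
    return [p for p in requested
            if p in KEYWORDS and not any(k in words for k in KEYWORDS[p])]
```

    Under this sketch, a flashlight app requesting RECORD_AUDIO would be flagged, while a voice-memo app requesting the same permission would not; the paper's enhanced descriptions aim to reduce exactly the false positives such literal matching produces.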