2 research outputs found

    Examining artifacts generated by setting Facebook Messenger as a default SMS application on Android: implication for personal data privacy

    The use of mobile devices and social media applications in organized crime is increasing. Facebook Messenger is among the most popular social media applications worldwide, and many users spend considerable time interacting with known and unknown individuals on Facebook, uploading personal information in the process and creating a range of privacy concerns. While research has been performed on extracting forensic artifacts from Facebook, no study has examined the effect of setting Facebook Messenger as the default messaging application on Android. Two Android mobile devices were used for data generation, and a Facebook Messenger account was created. Disk images and the data partition were examined to identify changes in the orca database of the application package using DB Browser. The data were generated using unique words, which were then used for keyword searches. The research discovered that mqtt_log_event0.txt in the Com.Facebook.orca/Cache directory stores chats when Messenger is set as the default messaging app. The findings show that chats are recorded under the messages tab together with SMS in data/data/com.facebook.orca/databases/smstakeover_db and data/data/com.facebook.orca/databases/threads_db, and that only smstakeover_db stores SMS messaging information when the Messenger application is in use. It was also observed that once a user deletes a sent SMS message, the phone number and the deletion timestamp remain recoverable from the address_table of the data/data/com.facebook.orca/databases/smstakeover_db database. The results suggest that anonymization of data is essential if Facebook chats are to be shared for further research into social media content.
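    The recovery step described above amounts to inspecting the SQLite databases left by the Messenger package on the data partition. Below is a minimal sketch of that kind of table dump and keyword search, assuming a copy of smstakeover_db has already been exported from a forensic image; the dump and search helpers are illustrative only, and any column layout beyond the table names given in the abstract is an assumption, not taken from the paper.

    import sqlite3

    # Path to a copy of the database exported from the forensic image
    # (data/data/com.facebook.orca/databases/smstakeover_db on the device).
    DB_PATH = "smstakeover_db"

    def list_tables(conn):
        """List all tables so address_table and related tables can be located."""
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"
        ).fetchall()
        return [r[0] for r in rows]

    def dump_address_table(conn):
        """Dump every row of address_table; phone numbers and deletion
        timestamps of removed SMS messages are reported to remain here."""
        return conn.execute("SELECT * FROM address_table").fetchall()

    def keyword_search(conn, table, keyword):
        """Naive keyword search across all columns of one table, mirroring
        the unique-word searches used in the study."""
        cols = [c[1] for c in conn.execute(f"PRAGMA table_info({table})")]
        where = " OR ".join(f"CAST({c} AS TEXT) LIKE ?" for c in cols)
        params = [f"%{keyword}%"] * len(cols)
        return conn.execute(f"SELECT * FROM {table} WHERE {where}", params).fetchall()

    if __name__ == "__main__":
        with sqlite3.connect(DB_PATH) as conn:
            print("tables:", list_tables(conn))
            for row in dump_address_table(conn):
                print(row)

    The same pattern applies to threads_db and to the cached mqtt_log_event0.txt file: extract the artifact from the image first, then search it offline so the evidence on the device is never modified.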

    Preserving data privacy in machine learning systems

    The wide adoption of Machine Learning to solve a large set of real-life problems came with the need to collect and process large volumes of data, some of which are considered personal and sensitive, raising serious concerns about data protection. Privacy-enhancing technologies (PETs) are often indicated as a solution to protect personal data and to achieve the general trustworthiness required by current EU regulations on data protection and AI. However, an off-the-shelf application of PETs is insufficient to ensure high-quality data protection, and their limitations need to be understood. This work systematically discusses the risks to data protection in modern Machine Learning systems, taking the original perspective of the data owners, who hold the various data sets, data models, or both, throughout the machine learning life cycle, and considering the different Machine Learning architectures. It argues that the origin of the threats, the risks to the data, and the level of protection offered by PETs depend on the data processing phase, the role of the parties involved, and the architecture in which the machine learning systems are deployed. By offering a framework in which to discuss privacy and confidentiality risks for data owners and by identifying and assessing privacy-preserving countermeasures for machine learning, this work could facilitate the discussion about compliance with EU regulations and directives. We discuss current challenges and research questions that remain unsolved in the field. In this respect, this paper provides researchers and developers working on machine learning with a comprehensive body of knowledge to help them advance the science of data protection in machine learning as well as in closely related fields such as Artificial Intelligence.
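    As a concrete illustration of the class of countermeasures surveyed, the sketch below shows one commonly cited PET, the Laplace mechanism of differential privacy, applied to a simple count query over a sensitive attribute. It is a minimal example of the technique, not the paper's own method, and the data and parameter values are hypothetical.

    import numpy as np

    def laplace_count(values, predicate, epsilon):
        """Differentially private count: the Laplace mechanism adds noise
        calibrated to the sensitivity of a counting query (sensitivity = 1)."""
        true_count = sum(1 for v in values if predicate(v))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    if __name__ == "__main__":
        # Hypothetical sensitive attribute: ages of data subjects.
        ages = [23, 35, 41, 29, 52, 61, 38]
        # Count how many are over 40, with privacy budget epsilon = 0.5.
        dp_count = laplace_count(ages, lambda a: a > 40, epsilon=0.5)
        print(f"noisy count of people over 40: {dp_count:.2f}")

    Smaller values of epsilon give stronger privacy at the cost of noisier answers, which is the kind of protection-versus-utility trade-off the paper argues must be assessed per processing phase and architecture.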