24 research outputs found

    ID Theft: A computer forensics' investigation framework

    The exposure of online identities is growing rapidly, and so is the threat of having even more identities impersonated. Internet users provide their private information to multiple web-based agents for a number of reasons: online shopping, memberships, social networking, and many others. However, the number of ID Theft victims grows as well, resulting in a growing number of incidents that require computer forensics investigation to resolve this type of crime. For this reason, it appears valuable to provide a systematic approach for computer forensics investigators aiming to resolve such computer-based ID Theft incidents. The issues that demand individual examination of this type of crime are discussed, and the plan of an ID Theft computer forensics investigation framework is presented.

    Analysis of digital evidence in identity theft investigations

    Identity Theft could currently be considered a significant problem in the modern, internet-driven era. This type of computer crime can be achieved in a number of different ways, and various statistical figures suggest it is on the increase. It threatens individual privacy and self-assurance, while efforts for increased security and protection measures appear inadequate to prevent it. A forensic analysis of the digital evidence should be able to provide precise findings after the investigation of Identity Theft incidents. At present, the investigation of Internet-based Identity Theft is performed on an ad hoc and unstructured basis in relation to the digital evidence. This research work aims to construct a formalised and structured approach to digital Identity Theft investigations that would improve current computer forensic investigative practice. The research hypothesis is to create an analytical framework to facilitate the investigation of Internet Identity Theft cases and the processing of the related digital evidence. This research work makes two key contributions to the subject: a) proposing an approach that examines different computer crimes using a process specifically based on their nature, and b) differentiating the examination procedure between the victim’s and the fraudster’s side, depending on the ownership of the digital media. The background research on the existing investigation methods supports the need to move towards an individual framework that supports Identity Theft investigations. The presented investigation framework is designed based on the structure of the existing computer forensic frameworks. It is a flexible, conceptual tool that will assist the investigator’s work and the analysis of incidents related to this type of crime. The research outcome has been presented in detail, with supporting relevant material for the investigator. The intention is to offer a coherent tool that could be used by computer forensics investigators. Therefore, the research outcome will not only be evaluated through a laboratory experiment, but also strengthened and improved based on evaluation feedback from experts in law enforcement. While personal identities are increasingly being stored and shared on digital media, the threat of personal and private information being used fraudulently cannot be eliminated. However, when such incidents are precisely examined, the nature of the problem can be more clearly understood.

    An Academic Approach to Digital Forensics

    This is the accepted manuscript version of the following article: O. Angelopoulou and S. Vidalis, “An academic approach to digital forensics”, Journal of Information Warfare, Vol. 13(4), 2015. The final published version is available at: https://www.jinfowar.com/journal/volume-13-issue-4/academic-approach-digital-forensics © Copyright 2017 Journal of Information Warfare. All Rights Reserved. Digital forensics as a field of study creates a number of challenges when it comes to the academic environment. The aim of this paper is to explore these challenges in relation to learning and teaching theories. We discuss our approach and methods of educating digital forensic investigators based on learning axioms and models, and we also present the learning environments we develop for our scholars. Peer reviewed.

    Assessing Identity Theft in the Internet of Things

    Published by Innovative Information Science & Technology Research Group (ISYOU). In the Internet of Things everything is interconnected. In the same way that “man-made fire” got the party started for human civilisation, “man-made TCP” enabled computing devices to participate in our lives. Today we live in a socially-driven, knowledge-centred computing era, and we are happy living our lives based on what an Internet alias has said or done. We are prepared to accept any reality as long as it is presented to us in a digitised manner. The Internet of Things is an emerging technology introduced in smart devices that will need to be integrated with the current Information Technology infrastructure in terms of its application and security considerations. In this paper we explore the identity cyberattacks that can be related to the Internet of Things, and we raise our concerns. We also present a vulnerability assessment model that attempts to predict how an environment can be influenced by this type of attack. Peer reviewed.

    Extracting Intelligence from Digital Forensic Artefacts

    Stilianos Vidalis, Olga Angelopoulou, Andrew Jones, ‘Extracting Intelligence from Digital Forensic Artefacts’, paper presented at the 15th European Conference on Cyber Warfare and Security, Munich, Germany, 7-8 July, 2016. Forensic science, and in particular digital forensics as a business process, has predominantly focused on generating evidence for court proceedings. It is argued that in today’s socially-driven, knowledge-centric, virtual-computing era this is not resource effective. In past cases it has been discovered retrospectively that the information necessary for a successful identification and extraction of evidence was already available in a database or within previously analysed files. Such evidence could have been used proactively to solve a particular case or a number of linked cases, or to better understand the criminal activity as a whole. This paper presents a conceptual architecture for a distributed system that will allow forensic analysts to forensically fuse and semantically analyse digital evidence for the extraction of intelligence that could lead to the accumulation of knowledge necessary for a successful prosecution. Peer reviewed.

    A Study of the Data Remaining on Second-Hand Mobile Devices in the UK

    This study was carried out to identify the level and type of information that remained on portable devices purchased from the second-hand market in the UK over the last few years. The sample consisted of 100 second-hand mobile phones and tablets. The aim of the study was to determine the proportion of devices that still contained data and the type of data they contained. Where data was identified, the study attempted to determine the level of personally identifiable information associated with the previous owner. The research showed that when sensitive and personal data was present on a mobile device, in most cases there had been no attempt to remove it. However, fifty-two percent of the mobile devices had been reset to factory settings or had had all of their data erased, which demonstrates the previous owner’s attempt to permanently remove personally identifiable information. Twenty-eight percent of the devices sold were not functional or not recognised by the software used in the research. Twenty percent of the devices contained data that gave away the identity of the previous owner.
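    The percentages reported above appear to partition the 100-device sample into three disjoint categories; assuming that reading, the arithmetic can be checked directly:

```python
# Breakdown of the 100-device sample, using the figures reported in the study.
# Assumption (not stated explicitly in the abstract): the three categories are
# disjoint and together cover the whole sample.
total_devices = 100
reset_or_erased = 52     # factory reset, or all data erased
non_functional = 28      # not functional, or not recognised by the tools

# The remainder is the set of devices that still held identifying data.
containing_data = total_devices - reset_or_erased - non_functional
print(containing_data)   # -> 20, matching the "twenty percent" figure
```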

    A Hybrid Spam Detection Method Based on Unstructured Datasets

    This document is the accepted manuscript version of the following article: Shao, Y., Trovati, M., Shi, Q. et al. Soft Comput (2017) 21: 233. The final publication is available at Springer via http://dx.doi.org/10.1007/s00500-015-1959-z. © Springer-Verlag Berlin Heidelberg 2015. The identification of non-genuine or malicious messages poses a variety of challenges due to the continuous changes in the techniques utilised by cyber-criminals. In this article, we propose a hybrid detection method based on a combination of image and text spam recognition techniques. In particular, the former is based on sparse representation-based classification, which focuses on global and local image features, and a dictionary learning technique to obtain a spam and a ham sub-dictionary. The textual analysis, on the other hand, is based on semantic properties of documents to assess the level of maliciousness. More specifically, we are able to distinguish between meta-spam and real spam. Experimental results show the accuracy and potential of our approach. Peer reviewed. Final Accepted Version.
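    The fusion step of a hybrid detector like the one described above can be illustrated with a minimal sketch. The function, weights, and threshold below are hypothetical and not taken from the paper; they only show how an image-based score and a text-based score might be combined into one verdict:

```python
def classify_hybrid(image_score: float, text_score: float,
                    w_image: float = 0.5, threshold: float = 0.5) -> str:
    """Fuse an image-based and a text-based spam score into one verdict.

    Both scores are assumed to lie in [0, 1]; w_image weights the image
    detector against the text detector. All values here are illustrative.
    """
    combined = w_image * image_score + (1 - w_image) * text_score
    return "spam" if combined >= threshold else "ham"

print(classify_hybrid(0.9, 0.8))  # both detectors flag the message -> spam
print(classify_hybrid(0.1, 0.2))  # both detectors pass the message -> ham
```

    In practice each score would come from its own model (e.g. a sparse-representation image classifier and a semantic text analyser), with the weight tuned on validation data.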

    Digital Continuity: Record Classification and Retention on Shared Drives and Email Vaults

    Get PDF
    In 2007 the UK government identified several objectives for improving the storage of public sector information. In particular, and of direct relevance to this project, it wanted to: improve the responsiveness to demands for public sector information; ensure the most appropriate supply of information for reuse; improve the supply of information for reuse; and promote the innovative use of public sector information. The aim of this project was to mine, categorise and classify information from a heterogeneous large-scale computer infrastructure and then store the search results in a forensically sound manner. Duplicate information was to be identified for destruction, and the process was designed so that it could be implemented without disrupting staff operations. The test data was a 217 GB (810,000 files) sample taken from the Welsh Government (WG) shared drives and email vault. The records largely related to the work of the Department of Education and Skills, though 25% of the sample was taken from the wider organisation in order to ensure that the classification system used was useful over a broad range of subjects. The test data was stored in an isolated test environment with virtualised structures, and all development work within the project occurred within that environment. De-duplication of the test data was achieved: some 35.88% of the files were identified as duplicates, and removing them resulted in a saving of 29.49% of physical space. After one pass of the data, it was possible to generate usable metadata for 75.7% of the de-duplicated data set; this became the rich data set. The retention policies of the WG were used to design queries and rules for analysing the rich data set. It was possible to extract 65% of the files in the rich data set for long-term retention, together with their metadata, in a format that would allow transfer to the WG Electronic Document and Record Management System (EDRMS, known as iShare within the WG). This translates to 55% of the de-duplicated data set. Further analysis of the rich data set would have produced a better extraction rate, and this would have been further facilitated by the use of knowledge extraction applications such as Pingar.
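    The de-duplication step described above can be sketched with a standard content-hashing approach. The project's actual tooling is not specified in the abstract, so the function below is an illustrative assumption: files are grouped by the SHA-256 digest of their contents, and any group with more than one path is a set of exact duplicates of which all but one copy could be flagged for destruction.

```python
import hashlib
import os

def find_duplicates(root: str) -> dict[str, list[str]]:
    """Group files under `root` by the SHA-256 hash of their content.

    Returns only the groups with more than one path, i.e. the sets of
    byte-for-byte identical files.
    """
    groups: dict[str, list[str]] = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # Read in 1 MiB chunks so large files do not exhaust memory.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            groups.setdefault(h.hexdigest(), []).append(path)
    return {digest: paths for digest, paths in groups.items() if len(paths) > 1}
```

    Hashing by content rather than by filename is what allows a figure like "35.88% of files were duplicates" to translate into a smaller physical-space saving (29.49%): duplicated files need not be the largest ones.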