An Overview on Image Forensics
The aim of this survey is to provide a comprehensive overview of the state of the art in image forensics. These techniques are designed to identify the source of a digital image or to determine whether its content is authentic or modified, without any prior information about the image under analysis (and are thus defined as passive). All these tools work by detecting the presence, absence, or inconsistency of traces intrinsically tied to the digital image by the acquisition device and by any operation performed after its creation. The paper is organized by classifying the tools according to the point in the history of the digital image at which the corresponding footprint is left: acquisition-based methods, coding-based methods, and editing-based schemes.
Secure Detection of Image Manipulation by means of Random Feature Selection
We address the problem of data-driven image manipulation detection in the presence of an attacker with limited knowledge about the detector. Specifically, we assume that the attacker knows the architecture of the detector, the training data, and the class of features V the detector can rely on. To gain an advantage in the arms race with the attacker, the analyst designs the detector by relying on a subset of features chosen at random from V. Given its ignorance of the exact feature set, the adversary attacks a version of the detector based on the entire feature set. The effectiveness of the attack thereby diminishes, since there is no guarantee that attacking a detector working in the full feature space will result in a successful attack against the reduced-feature detector. We theoretically prove that, thanks to random feature selection, the security of the detector increases significantly at the expense of a negligible loss of performance in the absence of attacks. We also provide an experimental validation of the proposed procedure by focusing on the detection of two specific kinds of image manipulation, namely adaptive histogram equalization and median filtering. The experiments confirm the gain in security at the expense of a negligible loss of performance in the absence of attacks.
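The defence described in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the feature dimensions, the synthetic data, and the use of logistic regression are all illustrative assumptions standing in for a real forensic feature set and classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-in for a feature space V of d image features
# (in practice these would be forensic features extracted from images).
d, n = 50, 400
X = rng.normal(size=(n, d))
y = (X.sum(axis=1) > 0).astype(int)  # synthetic labels for illustration

# The analyst secretly selects a random subset of k features from V;
# the detector is trained only on those features.
k = 30
secret_idx = rng.choice(d, size=k, replace=False)
reduced_detector = LogisticRegression(max_iter=1000).fit(X[:, secret_idx], y)

# The attacker, ignorant of secret_idx, can only target a detector
# built on the full feature set; an adversarial perturbation optimized
# against it need not transfer to the reduced-feature detector.
full_detector = LogisticRegression(max_iter=1000).fit(X, y)

acc = reduced_detector.score(X[:, secret_idx], y)
```

The point of the sketch is that the random index set acts as a secret key: the reduced detector retains most of its accuracy, while the attacker's optimization target no longer matches the deployed model.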
Proceedings of the 15th Australian Digital Forensics Conference, 5-6 December 2017, Edith Cowan University, Perth, Australia
Conference Foreword This is the sixth year that the Australian Digital Forensics Conference has been held under the banner of the Security Research Institute, which is in part due to the success of the security conference program at ECU. As in previous years, the conference continues to attract quality papers from both local and international authors. Eight papers were submitted and, following a double-blind peer review process, five were accepted for final presentation and publication. Conferences such as these are simply not possible without willing volunteers who follow through with the commitment they have initially made, and I would like to take this opportunity to thank the conference committee for their tireless efforts in this regard. These efforts have included, but have not been limited to, the reviewing and editing of the conference papers, and helping with the planning, organisation, and execution of the conference. Particular thanks go to those international reviewers who took the time to review papers for the conference, despite being unable to attend this year. To our sponsors and supporters, a vote of thanks for both the financial and moral support provided to the conference. Finally, to the student volunteers and staff of the ECU Security Research Institute, your efforts as always are appreciated and invaluable.
Yours sincerely,
Conference Chair: Professor Craig Valli, Director, Security Research Institute
Congress Organising Committee
Congress Chair: Professor Craig Valli
Committee Members:
Professor Gary Kessler – Embry-Riddle University, Florida, USA
Professor Glenn Dardick – Embry-Riddle University, Florida, USA
Professor Ali Babar – University of Adelaide, Australia
Dr Jason Smith – CERT Australia, Australia
Associate Professor Mike Johnstone – Edith Cowan University, Australia
Professor Joseph A. Cannataci – University of Malta, Malta
Professor Nathan Clarke – University of Plymouth, Plymouth, UK
Professor Steven Furnell – University of Plymouth, Plymouth, UK
Professor Bill Hutchinson – Edith Cowan University, Perth, Australia
Professor Andrew Jones – Khalifa University, Abu Dhabi, UAE
Professor Iain Sutherland – Glamorgan University, Wales, UK
Professor Matthew Warren – Deakin University, Melbourne, Australia
Congress Coordinator: Ms Emma Burk
Image and Video Forensics
Nowadays, images and videos have become the main modalities of information exchanged in everyday life, and their pervasiveness has led the image forensics community to question their reliability, integrity, confidentiality, and security. Multimedia content is generated in many different ways through the use of consumer electronics and high-quality digital imaging devices, such as smartphones, digital cameras, tablets, and wearable and IoT devices. The ever-increasing convenience of image acquisition has facilitated instant distribution and sharing of digital images on social platforms, generating a great amount of exchanged data. Moreover, the pervasiveness of powerful image editing tools has allowed the manipulation of digital images for malicious or criminal ends, up to the creation of synthesized images and videos with the use of deep learning techniques. In response to these threats, the multimedia forensics community has produced major research efforts on source identification and manipulation detection. In all cases (e.g., forensic investigations, fake news debunking, information warfare, and cyberattacks) where images and videos serve as critical evidence, forensic technologies that help to determine the origin, authenticity, and integrity of multimedia content can become essential tools. This book aims to collect a diverse and complementary set of articles that demonstrate new developments and applications in image and video forensics to tackle new and serious challenges and to ensure media authenticity.
A forensics software toolkit for DNA steganalysis.
Recent advances in genetic engineering have allowed the insertion of artificial DNA strands into the living cells of organisms. Several methods have been developed to insert information into a DNA sequence for the purpose of data storage, watermarking, or communication of secret messages. The ability to detect, extract, and decode messages from DNA is important for forensic data collection and for data security. We have developed a software toolkit that is able to detect the presence of a hidden message within a DNA sequence, extract that message, and then decode it, covering messages encoded with a variety of different coding schemes. The goal of this project is to enable our software toolkit to determine with which coding scheme a message has been encoded in DNA and then to decode it. The software package is able to decode messages that have been encoded with every variation of most of the coding schemes described in this document. The toolkit offers two user-selectable decoding options. The first is a frequency analysis approach that is very commonly used in cryptanalysis. This approach is very fast, but is unable to decode messages shorter than 200 words accurately. The second option uses a Genetic Algorithm (GA) in combination with a Wisdom of Artificial Crowds (WoAC) technique. This approach is very time-consuming, but can decode shorter messages with much higher accuracy.
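The kind of encoding the toolkit targets can be sketched as follows. The codon table below is a hypothetical illustrative scheme, not one of the specific encodings the toolkit supports: each character maps to a unique three-base codon, and decoding reverses the mapping.

```python
from itertools import product

# Hypothetical substitution scheme: map each character to a unique
# 3-base DNA codon (4^3 = 64 codons cover a small alphabet easily).
alphabet = "abcdefghijklmnopqrstuvwxyz "
codons = ["".join(bases) for bases in product("ACGT", repeat=3)]
encode_table = dict(zip(alphabet, codons))
decode_table = {codon: ch for ch, codon in encode_table.items()}

def encode(message: str) -> str:
    """Embed a lowercase message as a synthetic DNA strand."""
    return "".join(encode_table[ch] for ch in message)

def decode(strand: str) -> str:
    """Recover the message by splitting the strand into codons."""
    return "".join(decode_table[strand[i:i + 3]]
                   for i in range(0, len(strand), 3))

dna = encode("hidden message")
recovered = decode(dna)
```

When the substitution table is unknown, a frequency-analysis decoder like the toolkit's first option would instead count codon frequencies in the strand and align them with the expected letter frequencies of the target language, which is why it needs longer messages to work reliably.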
Learning from Reality: Lessons from Centrifuge Models
International Symposium on Backwards Problem in Geotechnical Engineering and Monitoring of Geo-Construction, Green Hall, Kensetsu-Koryu-kan, 14-15 July 2011.
Time does not go round twice. Although we may expend great efforts in forensic engineering to determine the possible causes of disasters after they have occurred, we will never know the precise antecedents. But we should also ask ourselves how we can learn from failures so that engineers in future may avoid similar problems. Centrifuge models can recreate geotechnical reality at small scale, with the density and stress history of every piece of soil established. Models can be fully monitored as they fail, and model events are practically repeatable except for controlled variations, so we can learn scientifically. Examples are given of slope and excavation failures. Lessons are drawn which are relevant to the prevention of flowslides such as the one at Sau Mau Ping in Hong Kong, and of deep excavation failures such as the one at Nicoll Highway in Singapore.
The effect of weathering on the forensic comparison of disposable gloves
Disposable gloves are often used by the perpetrators of a crime to prevent the deposition of fingerprints and epithelial cells at a crime scene. When removed and discarded at the scene, these items of evidence are often analyzed by a Trace Evidence Unit. By evaluating basic physical and chemical characteristics, a comparison to a known glove can be made. However, it is unclear whether temperature and weather conditions at a crime scene can alter the characteristics of the glove, and have a detrimental effect on this evidence comparison.
In this study, a variety of disposable gloves made of nitrile rubber, natural rubber latex, and polyvinyl chloride were studied to assess the relationship between environmental conditions and polymer characteristics. Samples were placed in evidence envelopes or immersed in distilled water at three different temperatures, and were analyzed after 0, 3, and 6 weeks. Analysis included thickness measurements, stereomicroscopy, and Fourier-Transform Infrared Spectroscopy (FTIR).
Results demonstrate that disposable gloves are susceptible to physical changes when exposed to various conditions. A majority of gloves exhibited an increase in thickness measurements at a variety of temperature and moisture conditions. Several gloves — spanning all types and different brands — displayed subtle changes in surface texture and spectral data.
Analysis was complicated by the fact that no glove is 100% polymer; each instead contains a variety of additives, including stabilizers, plasticizers, and dyes. Additional characterization with a quantifiable separatory method, such as Pyrolysis-Gas Chromatography/Mass Spectrometry, is therefore recommended to further elucidate the changes that can occur.