
    Data Fine-tuning

    In real-world applications, commercial off-the-shelf systems are used to perform automated facial analysis, including face recognition, emotion recognition, and attribute prediction. However, most of these commercial systems act as black boxes: their model parameters are inaccessible, which makes it challenging to fine-tune the models for specific applications. Motivated by advances in adversarial perturbations, this research proposes the concept of Data Fine-tuning, which improves the classification accuracy of a given model without changing the parameters of the model. This is accomplished by modeling the task as a data (image) perturbation problem: a small amount of "noise" is added to the input with the objective of minimizing the classification loss without affecting the (visual) appearance. Experiments performed on three publicly available datasets, LFW, CelebA, and MUCT, demonstrate the effectiveness of the proposed concept.
    Comment: Accepted in AAAI 201
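    The core idea of the abstract (learn a small additive perturbation on the inputs while the model itself stays frozen) can be sketched as follows. The frozen linear classifier, data dimensions, learning rate, and noise budget below are illustrative assumptions, not the paper's setup, and the analytic gradient stands in for whatever gradient estimate a true black-box setting would require.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Frozen "black-box" model: a fixed linear classifier whose weights are
    # never updated (illustrative stand-in for a commercial off-the-shelf model).
    W = rng.normal(size=(64, 2)) * 0.1

    def model(x):
        # Softmax over linear logits.
        z = x @ W
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def loss(x, y):
        # Mean cross-entropy of the frozen model on inputs x with labels y.
        p = model(x)
        return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

    # Inputs (stand-ins for images) and target labels.
    X = rng.normal(size=(100, 64))
    y = rng.integers(0, 2, size=100)

    # Data fine-tuning: optimize a small additive perturbation delta on the
    # inputs, keeping the model fixed, and clip it so it stays "imperceptible".
    delta = np.zeros(64)
    lr, eps = 0.1, 0.2
    loss_before = loss(X + delta, y)
    for _ in range(300):
        p = model(X + delta)
        grad = ((p - np.eye(2)[y]) @ W.T).mean(axis=0)  # dL/d(delta)
        delta = np.clip(delta - lr * grad, -eps, eps)
    loss_after = loss(X + delta, y)
    ```

    The clip enforces the "without affecting the (visual) appearance" constraint in the crudest possible way, as a box bound on the noise magnitude.
    
    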

    DF 2.0: An Automated, Privacy Preserving, and Efficient Digital Forensic Framework That Leverages Machine Learning for Evidence Prediction and Privacy Evaluation

    The current state of digital forensic investigation is continuously challenged by rapid technological change, the increase in the use of digital devices (in both heterogeneity and count), and the sheer volume of data that these devices can contain. Although data privacy protection is not a performance measure, preventing privacy violations during a digital forensic investigation is also a major challenge. Under the perception that investigative completeness and data privacy preservation are incompatible, researchers have proposed solutions to the above challenges that focus either on the effectiveness of the investigation process or on data privacy preservation. A comprehensive approach that preserves data privacy without limiting the capabilities of the investigator or the overall efficiency of the investigation process remains an open problem. In the current work, the authors propose a digital forensic framework that uses case information, case profile data, and expert knowledge to automate the digital forensic analysis process; applies machine learning to find the most relevant pieces of evidence; and maintains the data privacy of non-evidential private files. These operations are coordinated so that the overall efficiency of the investigation increases while the integrity and admissibility of the evidence remain intact. The framework improves validation, which increases transparency in the investigation process, and achieves a higher level of accountability by securely logging the investigation steps. As the proposed solution introduces notable enhancements to current investigative practice, much like a next version of digital forensics, the authors have named the framework 'Digital Forensics 2.0', or 'DF 2.0' for short.
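    Of the components listed, the "securely logging the investigation steps" idea is the most directly sketchable. The hash-chained log below is a minimal illustration of tamper-evident logging; the class name, fields, and example steps are assumptions for illustration, not the authors' actual API.

    ```python
    import hashlib
    import json

    class InvestigationLog:
        """Tamper-evident, hash-chained record of investigation steps
        (minimal sketch; names and fields are illustrative assumptions)."""

        def __init__(self):
            self.entries = []
            self._prev_hash = "0" * 64  # genesis value

        def record(self, step, detail):
            # Each entry commits to the previous entry's hash, so any
            # later modification of an entry breaks the chain.
            entry = {"step": step, "detail": detail, "prev": self._prev_hash}
            digest = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            entry["hash"] = digest
            self.entries.append(entry)
            self._prev_hash = digest
            return digest

        def verify(self):
            # Recompute every hash and check each chain link.
            prev = "0" * 64
            for e in self.entries:
                body = {"step": e["step"], "detail": e["detail"], "prev": e["prev"]}
                recomputed = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()
                ).hexdigest()
                if e["prev"] != prev or recomputed != e["hash"]:
                    return False
                prev = e["hash"]
            return True

    log = InvestigationLog()
    log.record("acquire", "disk image created and hashed")
    log.record("filter", "non-evidential private files excluded from review")
    ```

    Chaining each entry to its predecessor means an investigator cannot silently edit or delete an earlier step without invalidating every subsequent hash, which is one plausible way to ground the framework's accountability claim.
    
    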