
    A Study on Tools And Techniques Used For Network Forensic In A Cloud Environment: An Investigation Perspective

    The modern computer environment has moved past the local data center with a single entry and exit point to a global network of many data centers and hundreds of entry and exit points, commonly referred to as cloud computing. It is used by every kind of device, with transactions, online processing, and requests and responses travelling across the network through numerous entry and exit points. This makes already complex networks even more complex, and traversing, monitoring, and detecting threats in such an environment is a major challenge for network forensics and cybercrime investigation. It demands in-depth analysis using network tools and techniques to determine how best to extract information pertinent to an investigation. Data mining techniques, which can handle huge amounts of data, provide great aid in finding relevant clusters for predicting unusual activities, pattern matching, and fraud detection in such an environment. Network forensics in cloud computing requires a new mindset: some data will not be available, some data will be suspect, and some data will be court ready and will fit into the traditional network forensics model. From a network security viewpoint, all data traversing the cloud network backplane is visible to and accessible by the cloud service provider. It can no longer be assumed that a single physical device runs only one operating system that can be taken down for investigation. Without a network forensic investigator who understands the architecture of the cloud environment, systems and possible compromises will be overlooked or missed. In this paper, we focus on the role of network forensics in a cloud environment, map a few of the available tools, describe the contribution of data mining to the analysis, and bring out the challenges in this field.
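
    To make the data mining role concrete, the following is a minimal sketch (not from the paper) of clustering network flow records so that the rare cluster can be triaged as unusual activity during an investigation. The feature set, the sample values, and the choice of scikit-learn's KMeans are assumptions for illustration only.

```python
# Illustrative sketch, not the paper's method: cluster hypothetical network flow
# records and flag members of the smaller cluster for forensic review.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical flow features: bytes sent, bytes received, duration (s), distinct ports
flows = np.array([
    [1200,   800,  0.5,  1],
    [950,    700,  0.4,  1],
    [1100,   750,  0.6,  1],
    [250000, 120, 30.0, 45],   # a scan/exfiltration-like outlier
], dtype=float)

X = StandardScaler().fit_transform(flows)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Flows assigned to the less-populated cluster are candidates for closer inspection.
labels, counts = np.unique(km.labels_, return_counts=True)
rare_cluster = labels[np.argmin(counts)]
suspects = np.where(km.labels_ == rare_cluster)[0]
print("flow indices flagged for review:", suspects)
```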

    Hybrid LSTM and Encoder-Decoder Architecture for Detection of Image Forgeries

    With advanced image journaling tools, one can easily alter the semantic meaning of an image by exploiting manipulation techniques such as copy-clone, object splicing, and removal, which can mislead viewers. Identifying these manipulations, however, is a very challenging task because the manipulated regions are not visually apparent. This paper proposes a high-confidence manipulation localization architecture that uses resampling features, Long Short-Term Memory (LSTM) cells, and an encoder-decoder network to segment manipulated regions from non-manipulated ones. Resampling features capture artifacts such as JPEG quality loss, upsampling, downsampling, rotation, and shearing. The proposed network exploits larger receptive fields (spatial maps) and frequency-domain correlation to analyze the discriminative characteristics of manipulated and non-manipulated regions by combining the encoder and the LSTM network. Finally, the decoder network learns the mapping from low-resolution feature maps to pixel-wise predictions for image tamper localization. Using the predicted mask produced by the final (softmax) layer of the proposed architecture, end-to-end training learns the network parameters through back-propagation against ground-truth masks. Furthermore, a large image splicing dataset is introduced to guide the training process. The proposed method localizes image manipulations at the pixel level with high precision, as demonstrated through rigorous experimentation on three diverse datasets.
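
    As a rough illustration of the described pipeline, the following is a minimal PyTorch sketch (not the authors' implementation): a convolutional encoder, an LSTM over the encoded patch sequence, and a transposed-convolution decoder producing two-class per-pixel predictions trained by back-propagation against a ground-truth mask. All layer sizes, the patch ordering, and the loss are assumptions, and the resampling-feature extraction is omitted.

```python
# Illustrative sketch of an encoder + LSTM + decoder tamper localizer.
# Layer sizes, patch ordering, and the loss are assumptions for demonstration.
import torch
import torch.nn as nn

class TamperLocalizer(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Encoder: downsample the image to a 1/8-resolution feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, hidden, 3, stride=2, padding=1), nn.ReLU(),
        )
        # LSTM over the row-major sequence of spatial positions (patch features).
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        # Decoder: upsample back to input resolution with 2-class logits per pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),
        )

    def forward(self, x):
        f = self.encoder(x)                   # (B, C, h, w) with h = H/8, w = W/8
        b, c, h, w = f.shape
        seq = f.flatten(2).transpose(1, 2)    # (B, h*w, C): one feature vector per patch
        seq, _ = self.lstm(seq)
        f = seq.transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(f)                # (B, 2, H, W) per-pixel logits

# End-to-end training step against a (hypothetical) binary ground-truth mask;
# softmax over the class dimension yields the predicted manipulation mask.
model = TamperLocalizer()
img = torch.randn(1, 3, 256, 256)
gt_mask = torch.randint(0, 2, (1, 256, 256))          # 1 = manipulated pixel
logits = model(img)
loss = nn.functional.cross_entropy(logits, gt_mask)
loss.backward()                                        # back-propagation step
pred_mask = logits.softmax(dim=1).argmax(dim=1)        # (1, 256, 256) predicted mask
```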