6 research outputs found

    Comparative Study and Design Light Weight Data Security System for Secure Data Transmission in Internet of Things

    The Internet of Things (IoT) is today a key and overarching subject of technical and social importance. Consumer products, goods and vehicles, industrial and infrastructure components, sensors, and other everyday items are being merged with internet connectivity and strong data capabilities that promise to change the way we work and live. The proposed work demonstrates the implementation of a symmetric-key lightweight algorithm for the secure transmission of images and text, using an image encryption system as well as a reversible data hiding system. In this paper, symmetric-key cryptography has been implemented for various image formats, and a real-time image acquisition system has been designed in the form of a graphical user interface. A reversible data hiding system has also been designed for secure data transmission.
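The abstract does not reproduce the cipher itself; as an illustration of the symmetric-key principle it relies on (the same key both encrypts and decrypts the image bytes), here is a minimal keystream-XOR sketch in Python. All names are hypothetical and this is not the paper's algorithm:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Expand the key into a keystream by hashing a running counter.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(out[:length])

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR the data with the keystream; applying the same routine twice
    # restores the original, which is what makes the scheme symmetric.
    ks = keystream(key, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))
```

Because XOR is its own inverse, `xor_cipher(xor_cipher(pixels, key), key)` returns the original pixel bytes, so one routine serves for both encryption and decryption.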

    Improving the Watermarking Technique to Generate Blind Watermark by Using PCA & GLCM Algorithm

    To ensure that multimedia information is not accessed or modified by unauthorized users, several digital techniques have been proposed alongside the growth of internet applications; the most commonly used of these is watermarking. The many watermarking techniques proposed over time fall into two broad categories: spatial-domain methods and frequency-domain methods. In the spatial-domain approach, the lower-order bits of the cover image are modified to embed the watermark. Its major benefits are low complexity and minimal computational cost; however, its robustness against particular security attacks is low. Techniques that use invertible transforms such as the Discrete Cosine Transform (DCT) are known as frequency-domain techniques; the host image is transformed using methods such as the Discrete Fourier Transform (DFT) and the Discrete Wavelet Transform (DWT). The coefficients of these transforms are modified according to the watermark, so that the watermark is embedded within the image easily, and the inverse transform is then applied to obtain the watermarked image. The complexity and computational power required by these techniques are high, but they offer greater resistance to security attacks. The GLCM (Gray Level Co-occurrence Matrix) technique is a better approach compared with the alternatives. In this work, the GLCM and PCA (Principal Component Analysis) algorithms are used to improve the capability of neural-network-based watermarking: PCA selects the extracted images, and GLCM is used to choose the features extracted from the original image. The output of the PCA algorithm is defined by a scaling factor that is then used in the implementation.
In this work, the proposed algorithm performs well in terms of PSNR (Peak Signal-to-Noise Ratio), MSE (Mean Squared Error), and Correlation Coefficient values, which are better than those reported in previous work.
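The PSNR and MSE metrics cited above are standard and easy to state precisely. A minimal sketch in Python, treating images as flat lists of 8-bit pixel values (the function names are illustrative):

```python
import math

def mse(img_a, img_b):
    # Mean squared error between two equal-sized images (flat pixel lists).
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

def psnr(img_a, img_b, max_val=255.0):
    # Peak signal-to-noise ratio in dB; higher means the watermarked
    # image is closer to the original. Identical images give infinity.
    err = mse(img_a, img_b)
    if err == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / err)
```

For example, a watermarked image that differs from its 8-bit original by one gray level at every pixel has an MSE of 1.0 and a PSNR of about 48.13 dB.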

    Numerical Simulation and Design of Copy Move Image Forgery Detection Using ORB and K Means Algorithm

    Copy-move is a common technique for tampering with images in the digital realm, so image authentication is of critical importance. Copy-move forgery detection (CMFD) is therefore applied to identify the forged portion of a photograph. A combination of the scaled ORB descriptor and the k-means++ algorithm is used to identify the forgery. The first step is to build the scale-space pyramid, which is critical for the next step, since a region's defining features determine whether it can be detected; this is why the ORB descriptor plays an important role. FAST key points and ORB features are extracted from each scale space, and the coordinates of the FAST key points are mapped back to the original image. The ORB descriptors are then clustered with the k-means++ algorithm, and the Hamming distance is used to match the clustered features between every two key points, after which the forged key points are identified. This information is used to mark the forged and original regions with two circles. Image moments must be calculated so that the forged region can be detected in a rotation-invariant way; geometric transformations (scaling and rotation) are handled by this method. For images that have been rotated and smoothed, this work demonstrates a method for detecting the forged region, and its running time is less than that of the previous method.
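ORB descriptors are binary strings, so the matching step above reduces to Hamming distance between descriptor pairs. A minimal brute-force sketch in Python (the threshold value and function names are illustrative, not the paper's):

```python
def hamming(d1: bytes, d2: bytes) -> int:
    # Number of differing bits between two binary descriptors.
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))

def match_keypoints(descriptors, threshold=10):
    # Brute-force pairing: report index pairs whose descriptors lie
    # within the threshold, i.e. candidates for duplicated regions.
    matches = []
    for i in range(len(descriptors)):
        for j in range(i + 1, len(descriptors)):
            if hamming(descriptors[i], descriptors[j]) <= threshold:
                matches.append((i, j))
    return matches
```

In practice the clustering step (k-means++) limits which pairs are compared, avoiding the quadratic scan sketched here.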

    Numerical Simulation and Design of Machine Learning Based Real Time Fatigue Detection System

    The proposed research is a step toward implementing real-time image segmentation and drowsiness detection with the help of machine learning methodologies. Image segmentation has been implemented in real time, in which the mouth and eye regions are segmented using image processing. Input can be provided by a real-time image acquisition system such as a webcam or an Internet of Things based camera. From the video input, image frames are extracted and processed to obtain real-time features, and segmentation is achieved in real time using clustering algorithms. In the proposed work, a Support Vector Machine (SVM) based machine learning method has been implemented for emotion detection using facial expressions. The algorithm has been tested under variable luminance conditions and performed well, with accuracy comparable to or better than contemporary research.
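Once a linear SVM has been trained, classifying a frame reduces to the sign of a weighted sum of its features. A minimal decision-function sketch in Python; the weights, the feature meanings (e.g. eye or mouth openness ratios), and the class labelling are all hypothetical:

```python
def svm_predict(weights, bias, features):
    # Linear SVM decision rule: the sign of w·x + b decides the class
    # (+1 = drowsy, -1 = alert, under this illustrative labelling).
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score >= 0 else -1
```

A real system would obtain `weights` and `bias` from training (e.g. via an SVM solver) rather than fixing them by hand; only the inference step is shown here.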

    Design and Analysis of Reversible Data Hiding Using Hybrid Cryptographic and Steganographic approaches for Multiple Images

    Data hiding is the process of embedding some helpful information in images. The majority of sensitive applications, such as sending authentication data, benefit from data hiding. Reversible data hiding (RDH), also known as invertible or lossless data hiding in the field of signal processing, has been the subject of much study. In the RDH process, a piece of data that can later be recovered is inserted into an image to generate a watermarked image, and its removal discloses the original image. Lossless data hiding is being investigated as a strong and popular way to protect copyright in many sensitive applications, such as law enforcement, medical diagnostics, and remote sensing. Watermarking algorithms are of two types: visible and invisible. A visible watermark must be bold and clearly apparent, while a watermark used for invisible watermarking must be robust and visually transparent. RDH creates a marked signal by encoding a piece of data into the host signal; once the embedded data has been recovered, the original signal can be retrieved exactly. For photos shot in poor illumination, visual quality is more important than a high PSNR value. The DH method increases the contrast of the host picture while maintaining a high PSNR value. Histogram equalization can also be performed concurrently by repeating the embedding process, relocating the top two bins of the input image's histogram for data embedding. It is critical to assess the images after data hiding to see how much the contrast has increased. Common picture quality measures include peak signal-to-noise ratio (PSNR), relative structural similarity (RSS), relative mean brightness error (RMBE), relative entropy error (REE), relative contrast error (RCE), and global contrast factor (GCF). The main objective of this paper is to investigate these quantitative metrics for evaluating contrast enhancement.
The results show that visual quality can be preserved while embedding a sufficient number of message bits into the input images.
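The histogram-manipulation idea described above can be made concrete with the classic histogram-shifting RDH scheme: pixels between a peak bin and an empty zero bin are shifted by one to free the bin next to the peak, and peak-valued pixels then carry the message bits. A minimal sketch in Python, assuming `peak < zero`, an empty zero bin, and enough peak pixels for the payload (this is the generic technique, not necessarily the paper's exact variant):

```python
def hs_embed(pixels, peak, zero, bits):
    # Embed bits at peak-valued pixels after shifting (peak, zero) right.
    out, it = [], iter(bits)
    for p in pixels:
        if peak < p < zero:
            out.append(p + 1)              # shift to free the bin peak+1
        elif p == peak:
            b = next(it, None)             # peak pixel carries one bit
            out.append(p + b if b is not None else p)
        else:
            out.append(p)
    return out

def hs_extract(pixels, peak, zero):
    # Recover the bits and restore every original pixel value exactly.
    bits, restored = [], []
    for p in pixels:
        if p == peak:
            bits.append(0); restored.append(peak)
        elif p == peak + 1:
            bits.append(1); restored.append(peak)
        elif peak + 1 < p <= zero:
            restored.append(p - 1)         # undo the shift
        else:
            restored.append(p)
    return bits, restored
```

Because every operation is a reversible shift, extraction returns both the payload and a pixel-perfect copy of the original image, which is the defining property of RDH.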

    Evaluation of Machine Learning Algorithms for Lake Ice Classification from Optical Remote Sensing Data

    The topic of lake ice cover mapping from satellite remote sensing data has gained interest in recent years since it allows the extent of lake ice and the dynamics of ice phenology over large areas to be monitored. Mapping lake ice extent can record the loss of the perennial ice cover for lakes located in the High Arctic. Moreover, ice phenology dates, retrieved from lake ice maps, are useful for assessing long-term trends and variability in climate, particularly due to their sensitivity to changes in near-surface air temperature. However, existing knowledge-driven (threshold-based) retrieval algorithms for lake ice-water classification that use top-of-the-atmosphere (TOA) reflectance products do not perform well under the condition of large solar zenith angles, resulting in low TOA reflectance. Machine learning (ML) techniques have received considerable attention in the remote sensing field for the past several decades, but they have not yet been applied in lake ice classification from optical remote sensing imagery. Therefore, this research has evaluated the capability of ML classifiers to enhance lake ice mapping using multispectral optical remote sensing data (MODIS L1B (TOA) product). Chapter 3, the main manuscript of this thesis, presents an investigation of four ML classifiers (i.e. multinomial logistic regression, MLR; support vector machine, SVM; random forest, RF; gradient boosting trees, GBT) in lake ice classification. Results are reported using 17 lakes located in the Northern Hemisphere, which represent different characteristics regarding area, altitude, freezing frequency, and ice cover duration. According to the overall accuracy assessment using a random k-fold cross-validation (k = 100), all ML classifiers were able to produce classification accuracies above 94%, and RF and GBT provided above 98% classification accuracies. 
    Moreover, the RF and GBT algorithms provided a more visually accurate depiction of lake ice cover under challenging conditions (i.e., high solar zenith angles, black ice, and thin cloud cover). The two tree-based classifiers were found to provide the most robust spatial transferability over the 17 lakes and performed consistently well across three ice seasons, better than the other classifiers. Moreover, RF was insensitive to the choice of hyperparameters compared to the other three classifiers. The results demonstrate that RF and GBT offer great potential for accurately mapping lake ice cover globally over a long time series. Additionally, a case study applying a convolutional neural network (CNN) model for ice classification in Great Slave Lake, Canada is presented in Appendix A. Eighteen images acquired during the ice season of 2009-2010 were used in this study. The proposed CNN produced 98.03% accuracy on the testing dataset; however, the accuracy dropped to 90.13% on an independent (out-of-sample) validation dataset. The testing accuracy shows the strong learning performance of the proposed CNN, while the accuracy reduction on the validation dataset indicates overfitting; a follow-up investigation would be needed to improve its performance. This thesis investigated the capability of ML algorithms (both pixel-based and spatial-based) in lake ice classification from the MODIS L1B product. Overall, ML techniques showed promising performance for lake ice cover mapping from optical remote sensing data. The tree-based classifiers (pixel-based) exhibited the potential to produce accurate lake ice classification at large scale over long time series. In addition, more work would be of benefit for improving the application of CNNs in lake ice cover mapping from optical remote sensing imagery.
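The accuracies above are reported via random k-fold cross-validation. A minimal sketch of that evaluation protocol in Python; the classifier interface (`train_and_predict` returning a predictor function) is hypothetical, and any real study would plug in its actual model:

```python
import random

def kfold_accuracy(samples, labels, train_and_predict, k=5, seed=0):
    # Random k-fold cross-validation: shuffle indices, split into k folds,
    # train on k-1 folds, score on the held-out fold, and average.
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        model = train_and_predict([samples[i] for i in train],
                                  [labels[i] for i in train])
        correct = sum(model(samples[i]) == labels[i] for i in fold)
        scores.append(correct / len(fold))
    return sum(scores) / len(scores)
```

With k = 100, as in the thesis, each fold holds out only a small fraction of the pixels, giving a low-variance estimate of classification accuracy at the cost of many training runs.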