
    Skin Detection using a Markov Random Field and a New Color Space

    In this paper, human skin detection is performed using a new color space and a Markov random field (MRF) based approach. The proposed color space uses a variant of principal component analysis to reduce the number of color components. The MRF model takes into account the spatial relations within the image, which are included in the labeling process through statistical dependence among neighboring pixels. Since only two classes are considered, the Ising model is used to perform the skin/non-skin classification process.
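
    The MRF labeling step described above is commonly optimized with iterated conditional modes (ICM); a minimal sketch of ICM with a two-class Ising smoothness prior follows. The unary costs, `beta`, and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def icm_ising(unary, beta=1.0, iters=5):
    """Iterated conditional modes for a 2-class Ising MRF.

    unary: (H, W, 2) array of per-pixel costs (e.g. negative
           log-likelihoods) for the non-skin (0) and skin (1) classes.
    beta:  strength of the Ising smoothness prior (assumed value).
    """
    labels = unary.argmin(axis=2)  # start from the per-pixel optimum
    H, W = labels.shape
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                # gather labels of the 4-connected neighbours
                neigh = []
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        neigh.append(labels[ny, nx])
                neigh = np.array(neigh)
                # data term plus Ising penalty for disagreeing neighbours
                cost = [unary[y, x, c] + beta * np.sum(neigh != c)
                        for c in (0, 1)]
                labels[y, x] = int(np.argmin(cost))
    return labels
```

    With `beta = 0` the prior is disabled and ICM reduces to per-pixel classification; a positive `beta` flips isolated pixels to agree with their neighbourhood, which is exactly the spatial smoothing the abstract attributes to the MRF.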

    End-to-end image steganography using deep convolutional autoencoders

    Image steganography is used to hide a secret image inside a cover image in plain sight. Traditionally, the secret data is converted into binary bits and the cover image is manipulated statistically to embed them. Overloading the cover image may cause distortions and make the secret information visible, so the hiding capacity of traditional methods is limited. In this paper, a lightweight yet simple deep convolutional autoencoder architecture is proposed to embed a secret image inside a cover image and to extract the embedded secret image from the stego image. The proposed method is evaluated using three datasets: COCO, CelebA, and ImageNet. Peak Signal-to-Noise Ratio (PSNR), hiding capacity, and imperceptibility on the test set are used to measure performance. The method has also been evaluated on standard test images, including Lena, airplane, baboon, and peppers, and compared against traditional image steganography methods. The experimental results demonstrate that the proposed method achieves higher hiding capacity, security, robustness, and imperceptibility than other deep learning image steganography methods.
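
    The traditional bit-level embedding that the abstract contrasts with the autoencoder approach can be sketched as least-significant-bit (LSB) substitution. This is a baseline illustration only, not the paper's method; the function names are hypothetical.

```python
import numpy as np

def lsb_embed(cover, secret_bits):
    """Embed a flat array of 0/1 bits into the least-significant
    bits of the first len(secret_bits) cover pixels."""
    flat = cover.flatten().copy()
    n = len(secret_bits)
    # clear each LSB, then OR in the secret bit
    flat[:n] = (flat[:n] & 0xFE) | secret_bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Recover the first n_bits secret bits from the stego image."""
    return stego.flatten()[:n_bits] & 1
```

    Because each pixel changes by at most one intensity level, the distortion is small, but the payload is capped at one bit per pixel, which motivates the learned, higher-capacity embedding the paper proposes.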

    A novel potential field model for perimeter and agent density control in multiagent swarms

    parameters for the computation of control vectors. This restriction often limits the structures that can evolve, since agents are unable to modify their behaviour based on their structural role. This paper proposes an enhanced model that uses the perimeter status of agents when selecting control parameters, allowing a wider variety of emergent behaviours, many of which result in much improved swarm structures. The model is based upon equivalence classes of agent pairs, defined by their perimeter status. Array-valued parameters are introduced so that each equivalence class can be given its own parameter values. The model also introduces a new control vector to ‘flatten’ reflex angles between neighbouring agents on the swarm perimeter, often leading to significantly improved swarm structure. Extensive experiments demonstrate how the new model causes a variety of useful behaviours to emerge from random swarm deployments. The results show that several important behaviours, such as shape control, void removal, perimeter packing and expansion, and perimeter rotation, can be produced without the need for explicit inter-agent communication. The approach is applicable to a variety of applications, including reconnaissance, area coverage, and containment.
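
    The idea of selecting potential-field parameters per equivalence class of agent pairs can be sketched as follows. The attraction/repulsion law, the parameter table keyed by perimeter status, and all names are illustrative assumptions, not the paper's actual equations.

```python
import numpy as np

def control_vector(i, pos, perimeter, params):
    """Potential-field control vector for agent i.

    pos:       (N, 2) array of agent positions.
    perimeter: length-N sequence of booleans (perimeter agent or not).
    params:    dict mapping the pair's (perimeter[i], perimeter[j])
               equivalence class to (eq_dist, gain) -- the
               'array-valued parameters' idea, sketched as a dict.
    """
    v = np.zeros(2)
    for j in range(len(pos)):
        if j == i:
            continue
        eq_dist, gain = params[(perimeter[i], perimeter[j])]
        d = pos[j] - pos[i]
        dist = np.linalg.norm(d)
        # attractive beyond eq_dist, repulsive inside it,
        # so agents settle near their class-specific spacing
        v += gain * (dist - eq_dist) * d / dist
    return v
```

    Giving perimeter-perimeter pairs a different `(eq_dist, gain)` than interior pairs is what lets the perimeter pack, expand, or rotate independently of the swarm interior.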

    Multi-descriptor random sampling for patch-based face recognition

    While there has been a massive increase in research into face recognition, it remains a challenging problem due to the conditions present in real life. This paper focuses on the issue of partial occlusion, inherently present in real face recognition applications, and proposes an approach to tackle it. First, face images are divided into multiple patches, and the local descriptors Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG) are applied to each patch. Next, the resulting histograms are concatenated and their dimensionality is reduced using Kernel Principal Component Analysis. Patches are then randomly selected, using the concept of random sampling, to construct several sub-Support Vector Machine classifiers, whose results are combined to generate the final recognition outcome. Experimental results on the AR face database and the Extended Yale B database show the effectiveness of the proposed technique.
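
    The per-patch LBP descriptor mentioned above can be sketched as follows; this is a generic 8-neighbour LBP, assumed here for illustration rather than taken from the paper.

```python
import numpy as np

def lbp_histogram(patch):
    """Normalized 8-neighbour local binary pattern histogram
    of a grayscale patch (borders are skipped)."""
    c = patch[1:-1, 1:-1]                      # interior centers
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    H, W = patch.shape
    for k, (dy, dx) in enumerate(shifts):
        # neighbour plane aligned with the center plane
        n = patch[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        # set bit k where the neighbour is >= the center
        code |= (n >= c).astype(np.uint8) << k
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()
```

    In the scheme the abstract describes, one such histogram per patch (alongside HOG) would be concatenated, reduced with kernel PCA, and randomly sampled patch subsets fed to the sub-SVM classifiers.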

    Computer Vision Based Kidney’s (HK-2) Damaged Cells Classification with Reconfigurable Hardware Accelerator (FPGA)

    In medical and health sciences, detection of cell injury plays an important role in diagnosis, personalised treatment and disease prevention. Despite recent advancements in tools and methods for image classification, it is challenging to classify cell images with high precision and accuracy. Cell classification based on computer vision offers significant benefits in biomedicine and healthcare. Studies have been reported in which cell classification techniques are complemented by Artificial Intelligence-based classifiers such as Convolutional Neural Networks. These classifiers suffer from the drawback of the scale of computational resources required for training and hence do not offer real-time classification capabilities on an embedded system platform. Field Programmable Gate Arrays (FPGAs) offer the flexibility of hardware reconfiguration and have emerged as a viable platform for algorithm acceleration. Given that the logic resources and on-chip memory available on a single device are still limited, hardware/software co-design is proposed: image pre-processing and network training are performed in software, and the trained architectures are mapped onto an FPGA device (Nexys4DDR) for real-time cell classification. This paper demonstrates that the embedded hardware-based cell classifier performs with almost 100% accuracy in detecting different types of damaged kidney cells.
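
    A common step when mapping a software-trained network onto an FPGA, as in the co-design flow above, is converting float weights to fixed-point. The bit widths below are illustrative assumptions; the paper does not state its quantization scheme.

```python
import numpy as np

def quantize(weights, frac_bits=8, total_bits=16):
    """Quantize float weights to signed fixed-point values
    (total_bits wide, frac_bits fractional) for hardware mapping."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    # scale, round to nearest, and saturate to the signed range
    return np.clip(np.round(weights * scale), lo, hi).astype(np.int32)

def dequantize(q, frac_bits=8):
    """Recover the approximate float value of a fixed-point weight."""
    return q / (1 << frac_bits)
```

    With 8 fractional bits the round-trip error is bounded by half a quantization step (2^-9), which is typically small enough that on-chip inference matches the software-trained accuracy.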

    Distant Pedestrian Detection in the Wild using Single Shot Detector with Deep Convolutional Generative Adversarial Networks

    In this work, we examine the feasibility of applying Deep Convolutional Generative Adversarial Networks (DCGANs) with a Single Shot Detector (SSD) as a data-processing technique to address the challenge of pedestrian detection in the wild. Specifically, we use in-fill completion to generate random transformations of images with missing pixels, expanding existing labelled datasets. The GAN was trained intensively on low-resolution images to mitigate the challenges of pedestrian detection in the wild, considering humans and a few other classes relevant to smart cities. Training the GAN model together with the SSD yielded a substantial improvement in detection results, and the approach gives an interesting overview of the current state of the art in GANs for object detection. We used the Canadian Institute for Advanced Research (CIFAR), Caltech, and KITTI datasets for training and testing the network under different resolutions, and the experimental results compare the DCGAN cascaded with SSD against SSD alone.
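
    The "images with missing pixels" used for in-fill completion above can be produced by random masking. This sketch only shows the masking side of the augmentation; the mask shape and size are illustrative assumptions, and the GAN that completes the hole is not shown.

```python
import numpy as np

def random_mask(img, hole=8, rng=None):
    """Zero out a random square region of img, producing the
    'missing pixels' an in-fill GAN would be trained to complete."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = img.copy()
    h, w = img.shape[:2]
    # top-left corner of the hole, kept fully inside the image
    y = rng.integers(0, h - hole + 1)
    x = rng.integers(0, w - hole + 1)
    out[y:y + hole, x:x + hole] = 0
    return out
```

    Applying this to each labelled image with different random holes, then letting the GAN complete them, yields the expanded training set the abstract describes.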