
    An Innovative Skin Detection Approach Using Color Based Image Retrieval Technique

    Since the late 1990s, skin detection has been one of the major problems in image processing. If skin detection can be performed with high accuracy, it can be used in many applications such as face recognition and human tracking. Many methods have been presented to solve this problem. Most of them use a colour space to extract a feature vector for classifying pixels, but most do not achieve good accuracy across different skin types. The approach proposed in this paper is based on the colour-based image retrieval (CBIR) technique. In this method, a feature vector is first defined by means of the CBIR method and image tiling, taking into account the relation between a pixel and its neighbours; after a training step, skin is then detected in the test stage. The results show that the presented approach, in addition to its high accuracy in detecting different skin types, is insensitive to illumination intensity and face orientation. Comment: 9 pages, 4 figures
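
    The tiling-and-neighbourhood idea can be made concrete with a small sketch: build one feature vector per pixel from its colour neighbourhood, train a classifier on labelled skin and non-skin pixels, and classify every pixel of a test image. The 3x3 HSV neighbourhood and the k-NN classifier below are illustrative assumptions, not the paper's exact CBIR feature layout.

        # Sketch: neighbourhood colour features + a trained classifier for pixel-wise skin detection.
        import cv2
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def neighbourhood_features(bgr_image):
            """One feature vector per pixel from its 3x3 HSV neighbourhood (assumed layout)."""
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(np.float32)
            padded = cv2.copyMakeBorder(hsv, 1, 1, 1, 1, cv2.BORDER_REFLECT)
            h, w = hsv.shape[:2]
            feats = []
            for dy in range(3):
                for dx in range(3):
                    feats.append(padded[dy:dy + h, dx:dx + w].reshape(-1, 3))
            return np.hstack(feats)          # shape: (h*w, 27)

        # Training step: pixel labels (255 = skin) are assumed to be available as a mask image.
        clf = KNeighborsClassifier(n_neighbors=5)
        train_img = cv2.imread("train.jpg")
        train_mask = cv2.imread("train_mask.png", cv2.IMREAD_GRAYSCALE)
        clf.fit(neighbourhood_features(train_img), train_mask.reshape(-1) > 0)

        # Test stage: classify every pixel of a new image.
        test_img = cv2.imread("test.jpg")
        skin = clf.predict(neighbourhood_features(test_img)).reshape(test_img.shape[:2])
        cv2.imwrite("skin_mask.png", skin.astype(np.uint8) * 255)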

    Fatigue detection using computer vision

    Long-duration driving is a significant cause of fatigue-related accidents involving cars, airplanes, trains and other means of transport. This paper presents the design of a detection system which can be used to detect fatigue in drivers. The system is based on computer vision, with the main focus on eye blink rate. We propose an algorithm for eye detection that extracts the face image from the video frame, evaluates the eye region and finally detects the iris of the eye using a binary image. The advantage of this system is that the algorithm works without any constraint on the background, as the face is detected using a skin segmentation technique. The detection performance of the system was tested using video images recorded under laboratory conditions. The applicability of the system is discussed in light of fatigue detection for drivers.
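
    A minimal sketch of this pipeline, assuming a YCrCb skin range for face segmentation, an eye region taken from the upper part of the face box, and a dark-pixel test on a binary image to decide whether the iris is visible; the thresholds below are illustrative, not the tuned values from the paper.

        # Sketch: skin-segmented face, binary eye region, and a simple blink counter.
        import cv2

        def face_from_skin(frame):
            """The largest skin-coloured region is assumed to be the face."""
            ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
            mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # rough skin range
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            return frame[y:y + h, x:x + w]

        def eyes_closed(face, dark_ratio_threshold=0.02):
            """Eyes are assumed to sit in the upper part of the face box; an open eye shows a dark iris."""
            eye_region = cv2.cvtColor(face[face.shape[0] // 5: face.shape[0] // 2], cv2.COLOR_BGR2GRAY)
            binary = cv2.threshold(eye_region, 60, 255, cv2.THRESH_BINARY_INV)[1]   # dark pixels -> 255
            return cv2.countNonZero(binary) / binary.size < dark_ratio_threshold

        cap = cv2.VideoCapture("driver.avi")
        closed_frames, blinks = 0, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            face = face_from_skin(frame)
            if face is None:
                continue
            if eyes_closed(face):
                closed_frames += 1
            elif closed_frames:
                blinks += 1        # a closure followed by reopening counts as one blink
                closed_frames = 0
        print("blinks detected:", blinks)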

    Adaptive skin color classification technique for color-based face detection systems using integral image

    Among the various features of the human face, skin colour is one of the most powerful means of discerning facial appearance. Numerous skin colour models, which model human skin colours in different ways, have been proposed by researchers, and a number of colour spaces have been adopted for skin colour modelling. In particular, colour-based segmentation is a significant step in any colour-based face detection approach, which uses skin-colour models to classify an image into skin and non-skin regions. Varying illumination is one of the most frequent challenges in face detection systems: a change in the light source distribution or in the illumination level (indoor, outdoor, highlights, shadows, non-white lights) affects the appearance of an object (such as a human face) in a scene and produces changes in the object's colour and shape. An adaptive skin colour classification technique, which considerably alleviates the above-mentioned problems of illumination conditions and shadow, is proposed in this paper. The proposed method first identifies pixels that suffer from illumination problems using an integral image, and then adjusts those pixels with an adaptive gamma intensity correction method to rectify the negative effects of the illumination problems. The experiments showed that the proposed method significantly improves a colour-based face detection system in terms of both detection rate and accuracy.
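
    The two stages mentioned above, an integral image to locate badly illuminated pixels and an adaptive gamma correction to adjust them, can be sketched as follows. The window size and the gamma rule (local mean relative to mid-grey) are illustrative assumptions; the paper's exact adaptive gamma intensity correction is not given in the abstract.

        # Sketch: local brightness from an integral image, then brightness-adaptive gamma correction.
        import cv2
        import numpy as np

        def local_mean(gray, win=15):
            """Mean brightness in a win x win window around every pixel, via the summed-area table."""
            integral = cv2.integral(gray)                 # (h+1, w+1) summed-area table
            h, w = gray.shape
            r = win // 2
            ys = np.clip(np.arange(h), r, h - r - 1)      # keep windows inside the image
            xs = np.clip(np.arange(w), r, w - r - 1)
            y0, y1 = ys - r, ys + r + 1
            x0, x1 = xs - r, xs + r + 1
            s = (integral[np.ix_(y1, x1)] - integral[np.ix_(y0, x1)]
                 - integral[np.ix_(y1, x0)] + integral[np.ix_(y0, x0)])
            return s / float(win * win)

        img = cv2.imread("face.jpg")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        mean = local_mean(gray)

        # Assumed adaptive rule: dark (shadowed) regions get gamma < 1 and are brightened,
        # over-lit regions get gamma > 1 and are darkened.
        gamma = np.clip(mean / 128.0, 0.5, 2.0)
        corrected = (255.0 * (gray / 255.0) ** gamma).astype(np.uint8)
        cv2.imwrite("gamma_corrected.png", corrected)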

    Driver Distraction Identification with an Ensemble of Convolutional Neural Networks

    The World Health Organization (WHO) reported 1.25 million deaths yearly due to road traffic accidents worldwide, and the number has been continuously increasing over the last few years. Nearly a fifth of these accidents are caused by distracted drivers. Existing work on distracted driver detection is concerned with a small set of distractions (mostly cell phone usage), and unreliable ad-hoc methods are often used. In this paper, we present the first publicly available dataset for driver distraction identification with more distraction postures than existing alternatives. In addition, we propose a reliable deep learning-based solution that achieves 90% accuracy. The system consists of a genetically-weighted ensemble of convolutional neural networks; we show that a weighted ensemble of classifiers using a genetic algorithm yields better classification confidence. We also study the effect of different visual elements in distraction detection by means of face and hand localization and skin segmentation. Finally, we present a thinned version of our ensemble that achieves 84.64% classification accuracy and operates in a real-time environment. Comment: arXiv admin note: substantial text overlap with arXiv:1706.0949
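
    A minimal sketch of the genetically-weighted ensemble idea: each member network contributes class probabilities, and a simple genetic algorithm searches for the mixing weights that maximise validation accuracy. The member predictions below are random placeholders; the paper's CNN architectures and dataset are not reproduced.

        # Sketch: genetic-algorithm search over ensemble weights for combining CNN softmax outputs.
        import numpy as np

        rng = np.random.default_rng(0)

        # Placeholder validation predictions: 4 member networks, 1000 samples, 10 posture classes.
        member_probs = rng.random((4, 1000, 10))
        member_probs /= member_probs.sum(axis=2, keepdims=True)
        labels = rng.integers(0, 10, size=1000)

        def accuracy(weights):
            w = np.abs(weights) / np.abs(weights).sum()
            fused = np.tensordot(w, member_probs, axes=1)   # weighted sum of class probabilities
            return (fused.argmax(axis=1) == labels).mean()

        # Genetic algorithm: truncation selection, blend crossover, Gaussian mutation.
        pop = rng.random((30, member_probs.shape[0]))
        for generation in range(50):
            fitness = np.array([accuracy(ind) for ind in pop])
            parents = pop[np.argsort(fitness)[-10:]]        # keep the 10 fittest weight vectors
            children = []
            while len(children) < len(pop) - len(parents):
                a, b = parents[rng.integers(10, size=2)]
                child = 0.5 * (a + b) + rng.normal(0, 0.05, size=a.shape)   # crossover + mutation
                children.append(np.abs(child))
            pop = np.vstack([parents, children])

        best = pop[np.argmax([accuracy(ind) for ind in pop])]
        print("best ensemble weights:", np.abs(best) / np.abs(best).sum())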

    Efficient Human Facial Pose Estimation

    Pose estimation has become an increasingly important area in computer vision, and more specifically in human facial recognition and activity recognition for surveillance applications. Pose estimation is the process by which the rotation, pitch, or yaw of a human head is determined. Numerous methods already exist which can determine the angular change of a face; however, these methods vary in accuracy, and their computational requirements tend to be too high for real-time applications. The objective of this thesis is to develop a method for pose estimation which is computationally efficient while still maintaining a reasonable degree of accuracy. In this thesis, a feature-based method is presented to determine the yaw angle of a human facial pose using a combination of artificial neural networks and template matching. The artificial neural networks are used for the feature detection portion of the algorithm, along with skin detection and other image enhancement algorithms. The first head model, referred to as the Frontal Position Model, determines the pose of the face using the two eyes and the mouth. The second model, referred to as the Side Position Model, is used when only one eye can be viewed and determines pose based on a single eye, the nose tip, and the mouth. The two models are presented to demonstrate the position change of facial features due to pose and to provide the means to determine the pose as these features change from the frontal position. The effectiveness of this pose estimation method is examined using both manual and automatic feature detection. Analysis is further performed on how errors in feature detection affect the resulting pose determination. With correct feature detection, the method detects facial pose from +30 to -30 degrees with an average error of 4.28 degrees for the Frontal Position Model and 5.79 degrees for the Side Position Model. The Intel(R) Streaming SIMD Extensions (SSE) technology was employed to enhance the performance of floating-point operations: the neural networks used in the feature detection process require a large number of floating-point calculations, due to the computation of the image data with weights and biases. With SSE optimization the algorithm becomes suitable for processing images in a real-time environment. The method is capable of determining features and estimating the pose at a rate of seven frames per second on a 1.8 GHz Pentium 4 computer.
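
    As an illustration of how feature positions can map to a yaw angle in a Frontal-Position-Model-style setup, the sketch below derives an approximate yaw from two eye points and a mouth point. The geometric rule (mouth offset from the eye midline, scaled by the inter-eye distance) is an assumption for illustration; the thesis's neural-network and template-matching feature detectors are not reproduced.

        # Sketch: approximate yaw angle from two eyes and the mouth (assumed geometric rule).
        import math

        def estimate_yaw_degrees(left_eye, right_eye, mouth):
            """Feature points are (x, y) pixel coordinates; returns an approximate yaw angle."""
            eye_mid_x = 0.5 * (left_eye[0] + right_eye[0])
            inter_eye = math.dist(left_eye, right_eye)
            # When the head turns, the mouth's projected x position drifts away from the eye midline.
            offset = (mouth[0] - eye_mid_x) / inter_eye
            return math.degrees(math.asin(max(-1.0, min(1.0, offset))))

        # Example: a face turned slightly to the viewer's right.
        print(estimate_yaw_degrees(left_eye=(100, 120), right_eye=(160, 118), mouth=(140, 190)))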

    Classification of Humans into Ayurvedic Prakruti Types using Computer Vision

    Ayurveda, a 5000-year-old Indian medical science, holds that the universe, and hence humans, are made up of five elements, namely ether, fire, water, earth, and air. The three Doshas (Tridosha) Vata, Pitta, and Kapha originate from combinations of these elements. Every person has a unique combination of Tridosha elements contributing to their 'Prakruti'. Prakruti governs the physiological and psychological tendencies in all living beings as well as the way they interact with the environment. This balance influences physiological features such as the texture and colour of the skin, hair and eyes, the length of the fingers, the shape of the palm, body frame, strength of digestion and many more, as well as psychological features such as a person's nature (introverted, extroverted, calm, excitable, intense, laid-back) and their reaction to stress and disease. All these features are coded in a person's constitution at the time of their creation and do not change throughout their lifetime. Ayurvedic doctors analyse the Prakruti of a person either by assessing their physical features manually and/or by examining the nature of their heartbeat (pulse). Based on this analysis, they diagnose, prevent and cure disease in patients by prescribing precision medicine. This project focuses on identifying the Prakruti of a person by analysing facial features such as the hair, eyes, nose, lips and skin colour using facial recognition techniques in computer vision. This is the first research of its kind in this problem area that attempts to bring image processing into the domain of Ayurveda.
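
    A minimal sketch of how such a classification could be set up, assuming simple hand-crafted features (mean skin colour and face-box aspect ratio) and a RandomForest classifier trained on practitioner-provided labels; the project's actual facial-recognition pipeline is not described in detail in the abstract.

        # Sketch: hand-crafted face features feeding a 3-class (Vata / Pitta / Kapha) classifier.
        import cv2
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def face_features(bgr_image):
            """Mean colour of the detected face box plus its aspect ratio (assumed feature set)."""
            gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, 1.1, 5)
            if len(faces) == 0:
                return None
            x, y, w, h = faces[0]
            face = bgr_image[y:y + h, x:x + w]
            mean_bgr = face.reshape(-1, 3).mean(axis=0)        # rough skin-colour summary
            return np.concatenate([mean_bgr, [w / float(h)]])

        # Training data assumed available: images labelled by an Ayurvedic practitioner.
        X = [face_features(cv2.imread(p)) for p in ["p1.jpg", "p2.jpg", "p3.jpg"]]
        y = ["Vata", "Pitta", "Kapha"]
        clf = RandomForestClassifier(n_estimators=100).fit(X, y)
        print(clf.predict([face_features(cv2.imread("new_person.jpg"))]))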