
    Human Detection and Tracking for Video Surveillance: A Cognitive Science Approach

    With crimes on the rise all around the world, video surveillance is becoming more important day by day. Due to the lack of human resources to monitor the increasing number of cameras manually, new computer vision algorithms to perform lower- and higher-level tasks are being developed. We have developed a new method that incorporates the widely acclaimed Histograms of Oriented Gradients (HOG), the theory of Visual Saliency, and the saliency prediction model Deep Multi-Level Network to detect human beings in video sequences. Furthermore, we implemented the k-means algorithm to cluster the HOG feature vectors of the positively detected windows and determined the path followed by a person in the video. We achieved a detection precision of 83.11% and a recall of 41.27%. We obtained these results 76.866 times faster than classification on normal images.
    Comment: ICCV 2017, Venice, Italy, 5 pages, figures
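The clustering step can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: for brevity it clusters the centre coordinates of detected windows rather than full HOG feature vectors, using a tiny hand-rolled k-means.

```python
# Illustrative sketch only: group detection-window centres with k-means
# so that windows belonging to the same person fall into one cluster.

def kmeans(points, k, iters=20):
    """Minimal k-means on 2-D points; returns (centroids, labels)."""
    centroids = list(points[:k])  # naive initialisation from the data
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared distance).
        labels = [min(range(k),
                      key=lambda c: (p[0] - centroids[c][0]) ** 2
                                  + (p[1] - centroids[c][1]) ** 2)
                  for p in points]
        # Move each centroid to the mean of its assigned points.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = (sum(m[0] for m in members) / len(members),
                                sum(m[1] for m in members) / len(members))
    return centroids, labels

# Hypothetical window centres from two spatially separated detections:
windows = [(10, 12), (11, 13), (50, 52), (49, 51)]
centroids, labels = kmeans(windows, k=2)
```

Ordering the per-frame cluster assignments then yields an approximate path for each detected person.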

    Fabrication of the Kinect Remote-controlled Cars and Planning of the Motion Interaction Courses

    This paper describes the fabrication of Kinect remote-controlled cars, using a PC, a Kinect sensor, an interface control circuit, an embedded controller, and a brake device, as well as the planning of motion interaction courses. The Kinect sensor first detects the body movement of the user and converts it into control commands. The PC then sends the commands to an Arduino control board via XBee wireless communication modules. The interface circuit controls the movement and direction of the motors: forward and backward, left and right. To develop the content of the Kinect motion interaction courses, this study conducted a literature review to understand curriculum contents, and invited experts for interviews to collect data on learning background, teaching contents, and unit contents. Based on these data, teaching units and outlines were developed for reference in curriculum design.
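The pose-to-command translation described above can be sketched as a simple lookup. The pose names and command bytes here are assumptions for illustration, not the authors' protocol; a real system would write the byte to the XBee's serial port (e.g. via a serial library) for the Arduino to act on.

```python
# Hypothetical mapping from a recognised body pose to a one-byte
# motor command; all names and byte values are invented for this sketch.

POSE_COMMANDS = {
    "right_arm_up": b"F",  # drive forward
    "left_arm_up":  b"B",  # drive backward
    "lean_left":    b"L",  # turn left
    "lean_right":   b"R",  # turn right
    "arms_down":    b"S",  # stop
}

def pose_to_command(pose):
    """Translate a recognised pose into a motor command byte."""
    return POSE_COMMANDS.get(pose, b"S")  # default to stop for safety

command = pose_to_command("right_arm_up")
```

Defaulting unknown poses to a stop command is a deliberate safety choice for a physically moving car.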

    Robust human detection with occlusion handling by fusion of thermal and depth images from mobile robot

    In this paper, a robust surveillance system that enables robots to detect humans in indoor environments is proposed. The method is based on fusing information from thermal and depth images, which allows the detection of humans even under occlusion. It consists of three stages: pre-processing, ROI generation, and object classification. A new dataset was developed to evaluate the performance of the proposed method. The experimental results show that the method is able to detect multiple humans under occlusions and illumination variations.
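The last two stages of such a pipeline can be sketched as below. This is an assumption-laden illustration of the general fusion idea, not the paper's method: the depth range, temperature band, and majority threshold are all invented for the example.

```python
# Sketch: generate ROIs from the depth frame, then classify an ROI as
# human if most of its pixels are body-warm in the thermal frame.
# Frames are plain 2-D lists; all thresholds are illustrative guesses.

def generate_rois(depth, near=0.5, far=4.0):
    """Stage 2 sketch: keep pixels whose depth (metres) could be a person."""
    return [[near <= d <= far for d in row] for row in depth]

def classify_roi(thermal, roi, body_temp=(30.0, 38.0)):
    """Stage 3 sketch: 'human' if a majority of ROI pixels are body-warm."""
    warm = total = 0
    for trow, rrow in zip(thermal, roi):
        for t, keep in zip(trow, rrow):
            if keep:
                total += 1
                warm += body_temp[0] <= t <= body_temp[1]
    return total > 0 and warm / total > 0.5

depth_frame   = [[1.0, 9.0], [2.0, 9.0]]   # left column: plausible person
thermal_frame = [[34.0, 20.0], [33.0, 20.0]]
is_human = classify_roi(thermal_frame, generate_rois(depth_frame))
```

The complementary cues are what give occlusion robustness: depth proposes candidate regions, while thermal confirms them independently of visible-light conditions.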

    Autonomous computational intelligence-based behaviour recognition in security and surveillance

    This paper presents a novel approach to sensing both suspicious and task-specific behaviours through the use of advanced computational intelligence techniques. Locating suspicious activity in surveillance camera networks is an intensive task due to the volume of information and the large number of camera sources to monitor. This results in countless hours of video data being streamed to disk without being screened by a human operator. To address this need, emerging video analytics solutions have introduced new metrics such as people counting and route monitoring, alongside more traditional alerts such as motion detection. There are, however, few solutions robust enough to reduce the need for human operators in these environments, and new approaches are needed to address the uncertainty in identifying and classifying human behaviours, autonomously, from a video stream. In this work we present an approach to the autonomous identification of human behaviours derived from human pose analysis. Behaviour recognition is a significant challenge due to the complex subtleties that often make up an action; the large overlap in cues results in high levels of classification uncertainty. False alarms significantly impair autonomous detection and alerting systems, and over-reporting can lead to systems being muted, disabled, or decommissioned. We present results on a Computational Intelligence-based Behaviour Recognition (CIBR) system that utilises artificial intelligence to learn, optimise, and classify human activity. We achieve this through skeleton extraction of human forms within an image. A type-2 fuzzy logic classifier then converts the human skeletal forms into a set of base atomic poses (standing, walking, etc.), after which a Markov-chain model is used to order a pose sequence. Through this method we are able to identify, with good accuracy, several classes of human behaviour that correlate with known suspicious or anomalous behaviours.
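The Markov-chain step over atomic poses can be sketched as follows. The pose names come from the abstract, but the transition probabilities are invented for illustration; in the actual system they would be learned from labelled sequences.

```python
# Sketch: score an observed atomic-pose sequence under a first-order
# Markov chain. Transition probabilities below are illustrative only.

TRANSITIONS = {
    ("standing", "standing"): 0.7,
    ("standing", "walking"):  0.3,
    ("walking",  "walking"):  0.8,
    ("walking",  "standing"): 0.2,
}

def sequence_probability(poses, unseen=0.01):
    """Multiply transition probabilities along the pose sequence.

    Transitions never observed in training get a small floor probability,
    so rare (potentially anomalous) sequences score low but not zero.
    """
    prob = 1.0
    for prev, cur in zip(poses, poses[1:]):
        prob *= TRANSITIONS.get((prev, cur), unseen)
    return prob

score = sequence_probability(["standing", "walking", "walking"])
```

A low sequence probability relative to known-normal behaviour models is one plausible trigger for a suspicious-activity alert.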

    A hybrid method using kinect depth and color data stream for hand blobs segmentation

    Recently developed depth sensors such as the Kinect have provided new potential for human-computer interaction (HCI), and hand gestures are one of the main topics of recent research. A hand segmentation procedure is performed to acquire the hand gesture from a captured image. In this paper, a method is proposed to segment hand blobs using both depth and color data frames. The method applies body segmentation and image thresholding techniques to the depth data frame using skeleton data, and concurrently uses the SLIC super-pixel segmentation method, also guided by skeleton data, to extract hand blobs from the color data frame. The proposed method has low computation time and shows significant results when its basic assumptions are fulfilled.
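The fusion idea in the depth branch can be sketched as below. This is an illustrative simplification, not the paper's pipeline: it thresholds depth around the tracked hand joint and intersects the result with a color-based mask (which in the paper would come from SLIC super-pixels). The tolerance value is an invented example parameter.

```python
# Sketch: a hand-blob mask as the intersection of a depth test (pixels
# near the skeleton's hand-joint depth) and a colour test. Frames are
# 2-D lists of metres / booleans; the 0.15 m tolerance is illustrative.

def depth_mask(depth, hand_depth, tol=0.15):
    """Keep pixels within `tol` metres of the tracked hand joint's depth."""
    return [[abs(d - hand_depth) <= tol for d in row] for row in depth]

def fuse_masks(dmask, cmask):
    """Hand blob = pixels that pass both the depth and the colour test."""
    return [[a and b for a, b in zip(drow, crow)]
            for drow, crow in zip(dmask, cmask)]

depth_frame = [[0.80, 2.00]]       # second pixel is background
color_mask  = [[True, True]]       # e.g. from a super-pixel skin test
hand_blob = fuse_masks(depth_mask(depth_frame, hand_depth=0.80), color_mask)
```

Using skeleton data to seed both branches is what keeps the search local and the computation time low.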