2 research outputs found

    Detection, Recognition and Tracking of Moving Objects from Real-time Video via SP Theory of Intelligence and Species Inspired PSO

    In this paper, we address the basic problem of recognizing moving objects in video images using the SP Theory of Intelligence. The SP Theory of Intelligence, a framework of artificial intelligence in which S stands for Simplicity and P stands for Power, was first introduced by J. Gerard Wolff. Using the concept of multiple alignment, we detect and recognize the object of interest in video frames with multilevel hierarchical parts and subparts, based on polythetic categories. We track the recognized objects using species-based Particle Swarm Optimization (PSO). First, we extract the multiple alignments of our object of interest from training images. To recognize accurately and handle occlusion, we apply polythetic concepts to the raw data to omit redundant noise, searching the extracted alignments for the best alignment representing the features. We recognize the domain of interest in the video scenes in the form of a wide variety of multiple alignments in order to handle scene variability. Unsupervised learning is performed in the SP model following the DONSVIC principle, and natural structures are discovered via information compression and pattern analysis. After successful recognition of objects, we apply the species-based PSO algorithm, as the alignments of our object of interest are analogous to the observation likelihood and the fitness of species. Subsequently, we analyze the competition and repulsion among species with an annealed Gaussian-based PSO. We have tested our algorithms on the David, Walking2, FaceOcc1, Jogging and Dudek sequences, obtaining very satisfactory and competitive results.
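    To make the tracking step concrete, the following is a minimal sketch of a species-based PSO with an annealed Gaussian perturbation, in the spirit of the abstract above. It is not the authors' implementation: the alignment_score stub (standing in for the SP multiple-alignment match used as particle fitness), the parameter values and the species loop are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): species-based PSO for locating a
# target in a frame, where a hypothetical alignment_score() plays the role
# of the SP multiple-alignment match used as the particles' fitness.
import numpy as np

rng = np.random.default_rng(0)

def alignment_score(pos):
    # Hypothetical stand-in for the SP multiple-alignment match of the target
    # at image position `pos`; here a dummy peak at (120, 80) for the demo.
    return -float(np.sum((pos - np.array([120.0, 80.0])) ** 2))

def species_pso(n_species=3, n_particles=10, n_iters=50, frame_size=(320.0, 240.0)):
    w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration weights
    bounds = np.array(frame_size)
    best_pos, best_fit = None, -np.inf
    for _ in range(n_species):
        # Each species is an independent swarm exploring the frame.
        pos = rng.uniform(0.0, bounds, size=(n_particles, 2))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_fit = np.array([alignment_score(p) for p in pos])
        gbest = pbest[np.argmax(pbest_fit)]
        for t in range(n_iters):
            sigma = 5.0 * (1.0 - t / n_iters)  # annealed Gaussian perturbation
            r1, r2 = rng.random((2, n_particles, 1))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel + rng.normal(0.0, sigma, pos.shape), 0.0, bounds)
            fit = np.array([alignment_score(p) for p in pos])
            better = fit > pbest_fit
            pbest[better], pbest_fit[better] = pos[better], fit[better]
            gbest = pbest[np.argmax(pbest_fit)]
        if pbest_fit.max() > best_fit:
            best_pos, best_fit = gbest, pbest_fit.max()
    return best_pos

print(species_pso())  # estimated (x, y) of the tracked object in the frame
```

    The annealing schedule shrinks the Gaussian perturbation as iterations proceed, so particles explore broadly at first and settle onto the best-matching alignment late in the search.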

    A Proposed Artificial intelligence Model for Real-Time Human Action Localization and Tracking

    In recent years, artificial intelligence (AI) based on deep learning (DL) has sparked tremendous global interest. DL is widely used today and has expanded into various interesting areas, becoming increasingly popular in cross-subject research such as smart city systems, which combine computer science with engineering applications. Human action detection is one of these areas, and it is an interesting challenge due to its stringent requirements in terms of computing speed and accuracy. High-accuracy real-time object tracking is also considered a significant challenge. This paper integrates the YOLO detection network, a state-of-the-art tool for real-time object detection, with motion vectors and the Coyote Optimization Algorithm (COA) to construct a real-time human action localization and tracking system. The proposed system starts by extracting motion information from a compressed video stream and appearance information from RGB frames using an object detector. A fusion step between the two streams is then performed, and the results are fed into the proposed action tracking model. The COA is used for object tracking due to its accuracy and fast convergence. The foundation of the proposed model is the use of motion vectors, which already exist in the compressed video bit stream and provide sufficient information to improve localization of the target action without the high computational cost of other popular methods of extracting motion information, such as optical flow. This advantage allows the proposed approach to be implemented in challenging environments where computational resources are limited, such as Internet of Things (IoT) systems.
    Comment: Submitted to IEEE Transactions on Neural Networks and Learning Systems
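    As a rough illustration of the fusion step described above, the sketch below re-scores appearance-based detections using the average motion-vector magnitude inside each box. It is a minimal sketch under stated assumptions, not the paper's method: the macroblock size, the tanh squashing and the weighting between the two streams are hypothetical choices, and the COA refinement stage is omitted.

```python
# Minimal sketch (assumptions, not the paper's code): fusing per-block motion
# vectors from a compressed stream with YOLO-style detection boxes.
import numpy as np

BLOCK = 16  # assumed macroblock size of the motion-vector grid, in pixels

def motion_activity(mv_grid, box):
    """Mean motion-vector magnitude inside a detection box.

    mv_grid: (H/BLOCK, W/BLOCK, 2) array of (dx, dy) vectors from the bit stream
    box:     (x1, y1, x2, y2) in pixel coordinates
    """
    x1, y1, x2, y2 = (int(v // BLOCK) for v in box)
    region = mv_grid[y1:y2 + 1, x1:x2 + 1]
    if region.size == 0:
        return 0.0
    return float(np.linalg.norm(region, axis=-1).mean())

def fuse(detections, mv_grid, w_app=0.7, w_motion=0.3):
    """Re-score detections by blending appearance confidence with motion.

    detections: list of ((x1, y1, x2, y2), confidence) from the detector
    Returns the detections sorted by fused score, highest first.
    """
    scored = []
    for box, conf in detections:
        m = motion_activity(mv_grid, box)
        scored.append((box, w_app * conf + w_motion * float(np.tanh(m))))
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Toy usage on a 320x240 frame: one box over a moving region, one static box.
mv = np.zeros((15, 20, 2))
mv[3:6, 4:8] = (4.0, 1.0)                       # simulated moving macroblocks
dets = [((64, 48, 128, 96), 0.80),              # overlaps the moving blocks
        ((200, 150, 288, 208), 0.85)]           # static background
print(fuse(dets, mv))
```

    Because the motion vectors are read directly from the compressed bit stream, this scoring adds almost no decoding cost, which is the advantage the abstract highlights over optical-flow-based motion extraction.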