
    A single-chip FPGA implementation of real-time adaptive background model

    This paper demonstrates the use of a single-chip FPGA for the extraction of highly accurate background models in real time. The models are based on 24-bit RGB values and 8-bit grayscale intensity values. Three background models are presented, all using a camcorder, a single FPGA chip, four blocks of RAM and a display unit. The architectures have been implemented and tested using a Panasonic NVDS60B digital video camera connected to a Celoxica RC300 Prototyping Platform with a Xilinx Virtex II XC2v6000 FPGA and four banks of onboard RAM. The novel FPGA architecture presented minimizes latency and the movement of large datasets by conducting time-critical processes in BlockRAM. The systems operate at clock rates ranging from 57 MHz to 65 MHz and are capable of performing pre-processing functions such as temporal low-pass filtering on a standard frame size of 640x480 pixels at up to 210 frames per second.
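
    A minimal software sketch of this kind of temporal low-pass (running-average) background model is given below. It only illustrates the general technique on grayscale frames; the learning rate, threshold, and function names are illustrative assumptions, not values or interfaces from the paper.

    import numpy as np

    def update_background(background, frame, alpha=0.05):
        # Low-pass filter: move the background slowly towards the current frame.
        return (1.0 - alpha) * background + alpha * frame.astype(np.float32)

    def foreground_mask(background, frame, threshold=30.0):
        # Mark pixels that deviate strongly from the background model.
        return np.abs(frame.astype(np.float32) - background) > threshold

    # Toy usage on a synthetic grayscale sequence at the paper's 640x480 frame size.
    background = np.zeros((480, 640), dtype=np.float32)
    for _ in range(10):
        frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
        mask = foreground_mask(background, frame)
        background = update_background(background, frame)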

    Auto-Encoding Scene Graphs for Image Captioning

    We propose the Scene Graph Auto-Encoder (SGAE), which incorporates the language inductive bias into the encoder-decoder image captioning framework for more human-like captions. Intuitively, humans use this inductive bias to compose collocations and draw contextual inferences in discourse. For example, when we see the relation `person on bike', it is natural to replace `on' with `ride' and infer `person riding bike on a road', even though the `road' is not evident. Exploiting such bias as a language prior is therefore expected to make conventional encoder-decoder models less likely to overfit to the dataset bias and to focus on reasoning. Specifically, we use the scene graph, a directed graph $\mathcal{G}$ in which an object node is connected by adjective nodes and relationship nodes, to represent the complex structural layout of both the image ($\mathcal{I}$) and the sentence ($\mathcal{S}$). In the textual domain, we use SGAE to learn a dictionary ($\mathcal{D}$) that helps to reconstruct sentences in the $\mathcal{S}\rightarrow\mathcal{G}\rightarrow\mathcal{D}\rightarrow\mathcal{S}$ pipeline, where $\mathcal{D}$ encodes the desired language prior; in the vision-language domain, we use the shared $\mathcal{D}$ to guide the encoder-decoder in the $\mathcal{I}\rightarrow\mathcal{G}\rightarrow\mathcal{D}\rightarrow\mathcal{S}$ pipeline. Thanks to the scene graph representation and the shared dictionary, the inductive bias is transferred across domains in principle. We validate the effectiveness of SGAE on the challenging MS-COCO image captioning benchmark: our SGAE-based single model achieves a new state-of-the-art 127.8 CIDEr-D on the Karpathy split, and a competitive 125.5 CIDEr-D (c40) on the official server even compared to ensemble models.
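
    The central idea of re-encoding features through a shared dictionary can be sketched roughly as below; the soft-attention form, feature sizes, and function names are assumptions made for illustration, not the authors' exact formulation.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def reencode_with_dictionary(node_feats, dictionary):
        # Replace each scene-graph node feature with an attention-weighted
        # mixture of dictionary entries, injecting the learned language prior.
        scores = node_feats @ dictionary.T        # (num_nodes, dict_size)
        weights = softmax(scores, axis=-1)
        return weights @ dictionary               # (num_nodes, feat_dim)

    # Toy usage: 5 graph node features, a dictionary of 100 learned entries.
    rng = np.random.default_rng(0)
    nodes = rng.standard_normal((5, 128))
    D = rng.standard_normal((100, 128))
    reencoded = reencode_with_dictionary(nodes, D)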

    Video foreground detection based on symmetric alpha-stable mixture models.

    Background subtraction (BS) is an efficient technique for detecting moving objects in video sequences. A simple BS process involves building a model of the background and extracting regions of the foreground (moving objects), under the assumptions that the camera remains stationary and there is no movement in the background. These assumptions restrict the applicability of BS methods to real-time object detection in video. In this paper, we propose an extended cluster BS technique with a mixture of symmetric alpha-stable (SαS) distributions. An online self-adaptive mechanism is presented that allows automated estimation of the model parameters using the log-moment method. Results over real video sequences from indoor and outdoor environments, with data from both static and moving video cameras, are presented. The SαS mixture model is shown to improve the detection performance compared with a cluster BS method using a Gaussian mixture model and the method of Li et al. [11].
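
    A simplified per-pixel mixture background-subtraction loop of the kind extended here is sketched below. A Gaussian mixture stands in for the symmetric alpha-stable mixture (whose density has no closed form), and the learning rate, matching threshold, and initial variance are illustrative assumptions rather than the paper's values.

    import numpy as np

    def classify_and_update(pixel, means, variances, weights,
                            lr=0.05, match_sigma=2.5):
        # Returns True if the pixel is foreground; updates the matched mode in place.
        d2 = (pixel - means) ** 2
        matches = d2 < (match_sigma ** 2) * variances
        if not matches.any():
            # No mode explains the pixel: replace the weakest mode, flag foreground.
            k = np.argmin(weights)
            means[k], variances[k], weights[k] = pixel, 900.0, 0.05
            weights /= weights.sum()
            return True
        k = int(np.argmax(matches))       # first matching mode
        means[k] += lr * (pixel - means[k])
        variances[k] += lr * (d2[k] - variances[k])
        weights *= (1.0 - lr)
        weights[k] += lr
        weights /= weights.sum()
        # Foreground if the matched mode carries little of the total weight.
        return weights[k] < 0.2

    # Toy usage: a 3-mode model for one pixel, fed a stream of intensities.
    means = np.array([20.0, 128.0, 250.0])
    variances = np.array([100.0, 100.0, 100.0])
    weights = np.array([0.6, 0.3, 0.1])
    for value in [22, 25, 200, 23]:
        print(value, classify_and_update(float(value), means, variances, weights))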

    Accelerated hardware video object segmentation: From foreground detection to connected components labelling

    This paper demonstrates the use of a single-chip FPGA for the segmentation of moving objects in a video sequence. The system maintains highly accurate background models and integrates the detection of foreground pixels with the labelling of objects using a connected components algorithm. The background models are based on 24-bit RGB values and 8-bit grayscale intensity values. A multimodal background differencing algorithm is presented, using a single FPGA chip and four blocks of RAM. The real-time connected component labelling algorithm, also designed for FPGA implementation, run-length encodes the output of the background subtraction and performs connected component analysis on this representation. The run-length encoding, together with other parts of the algorithm, is performed in parallel; sequential operations are minimized as the number of run-lengths is typically smaller than the number of pixels. The two algorithms are pipelined together for maximum efficiency.
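
    A software analogue of this pipeline is sketched below: each row of a binary foreground mask is run-length encoded, and overlapping runs in adjacent rows are merged into components with union-find. The function names and the 8-connectivity choice are illustrative assumptions rather than details taken from the paper.

    def run_length_encode(row):
        # Return (start, end) pairs of foreground runs in one row (end exclusive).
        runs, start = [], None
        for x, v in enumerate(row):
            if v and start is None:
                start = x
            elif not v and start is not None:
                runs.append((start, x))
                start = None
        if start is not None:
            runs.append((start, len(row)))
        return runs

    def label_components(mask):
        # Connected component labelling that operates on runs instead of pixels.
        parent = {}
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        def union(a, b):
            parent[find(a)] = find(b)
        prev, all_runs = [], []
        for y, row in enumerate(mask):
            cur = [(s, e, (y, i)) for i, (s, e) in enumerate(run_length_encode(row))]
            for s, e, rid in cur:
                parent[rid] = rid
                for ps, pe, pid in prev:
                    if s <= pe and ps <= e:   # runs touch or overlap (8-connectivity)
                        union(rid, pid)
            prev = cur
            all_runs.extend(cur)
        return {rid: find(rid) for _, _, rid in all_runs}

    # Toy usage: two separate blobs produce two distinct root labels.
    mask = [[1, 1, 0, 0, 1],
            [0, 1, 0, 0, 1],
            [0, 0, 0, 0, 0]]
    print(label_components(mask))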