47 research outputs found

    Implementation of wavelet codec by using Texas Instruments DSP TMS320C6701 EVM board

    This paper describes the implementation of a wavelet codec (encoder and decoder) using the Texas Instruments DSP (digital signal processor) TMS320C6701 on the EVM (evaluation module) board. The wavelet codec compresses and decompresses grayscale images for real-time data compression. The wavelet codec algorithm has been translated into C and assembly code in Code Composer Studio in order to program the 'C6xx DSP. The ability of the 'C6xx to change code easily and to correct or update applications reduces development time, cost and power consumption. The development tools provided for the 'C6xx DSP platform create an easy-to-use environment that optimizes the device's performance and minimizes technical barriers to software and hardware design
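    The core building block of such a codec can be sketched as a one-level 1D Haar wavelet transform (applied along rows and columns for images). This is an illustrative minimal example only; the paper's actual filter bank and quantization scheme on the TMS320C6701 are not specified here.

```python
def haar_forward(signal):
    """One decomposition level: averages (low-pass) and differences (high-pass)."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diff = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, diff

def haar_inverse(avg, diff):
    """Reconstruct the original samples from averages and differences."""
    out = []
    for a, d in zip(avg, diff):
        out.extend([a + d, a - d])
    return out

# Compression comes from quantizing or discarding small `diff` coefficients
# before storage; reconstruction then inverts the transform.
x = [9, 7, 3, 5]
avg, diff = haar_forward(x)
assert haar_inverse(avg, diff) == x
```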

    Smart surveillance system based on stereo matching algorithms with IP and PTZ cameras

    In this paper, we describe a system for smart surveillance using stereo images, with applications to advanced video surveillance systems. The system utilizes two smart IP cameras to obtain the position and location of objects; in this case, the target object is a human face. The position and location of the object are automatically extracted from the two IP cameras and subsequently transmitted to an ACTi pan-tilt-zoom (PTZ) camera, which then points and zooms to the exact position in space. This work involves video analytics for estimating the location of the object in a 3D environment and transmitting its positional coordinates to the PTZ camera. The research covers algorithm development for the surveillance system, including face detection, block matching and location estimation, and implementation with the ACTi SDK tool. The final system allows the PTZ camera to track objects and acquire high-resolution images
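    The 3D location estimate behind such a system is typically standard pinhole-stereo triangulation: a pixel disparity between the two rectified IP camera views yields depth, from which a 3D point for the PTZ camera to aim at follows. The focal length, baseline and pixel coordinates below are illustrative values, not the paper's calibration.

```python
def triangulate(x_left, x_right, y, focal_px, baseline_m, cx, cy):
    """Return (X, Y, Z) in metres for a matched point in a rectified stereo pair."""
    disparity = x_left - x_right           # pixels; larger disparity = closer object
    Z = focal_px * baseline_m / disparity  # depth along the optical axis
    X = (x_left - cx) * Z / focal_px       # lateral offset from the left camera centre
    Y = (y - cy) * Z / focal_px            # vertical offset
    return X, Y, Z

# A face matched 40 px apart between views, f = 800 px, baseline = 0.12 m:
X, Y, Z = triangulate(x_left=360, x_right=320, y=240,
                      focal_px=800, baseline_m=0.12, cx=320, cy=240)
# Z = 800 * 0.12 / 40 = 2.4 m
```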

    Low cost multi-view video system for wireless channel

    With advances in display technology, 3DTV will provide a new viewing experience without the need to wear special glasses to watch 3D scenes. One of the key elements of 3DTV is multi-view video coding, in which a set of synchronized cameras captures the same scene from different viewpoints. The video streams are synchronized and subsequently used to exploit the redundancy among the video sources. A multi-view video system consists of components for data acquisition, compression, transmission and display. This paper outlines the design and implementation of a multi-view video system for transmission over a wireless channel. Synchronized video sequences are acquired from four separate cameras and coded with H.264/AVC. The video data is then transmitted over a simulated Rayleigh channel through a digital video broadcasting-terrestrial (DVB-T) system with orthogonal frequency division multiplexing (OFDM)
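    A minimal sketch of a flat Rayleigh fading channel of the kind used in such simulations is shown below. The paper's actual channel model (Doppler spectrum, multipath profile, OFDM parameters) is not given here, so this assumes independent flat fading per symbol with additive white Gaussian noise.

```python
import math
import random

def rayleigh_channel(symbols, snr_db, rng=random.Random(0)):
    """Apply per-symbol Rayleigh fading plus complex AWGN to unit-power symbols."""
    noise_var = 10 ** (-snr_db / 10)
    out = []
    for s in symbols:
        # Fading coefficient: complex Gaussian, so its magnitude is Rayleigh-distributed
        h = complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
        n = complex(rng.gauss(0, math.sqrt(noise_var / 2)),
                    rng.gauss(0, math.sqrt(noise_var / 2)))
        out.append(h * s + n)
    return out

# Pass two BPSK symbols through the channel at 20 dB SNR:
received = rayleigh_channel([1 + 0j, -1 + 0j], snr_db=20)
```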

    Co-operative surveillance cameras for high quality face acquisition in a real-time door monitoring system

    The increasing number of CCTV cameras in use poses a problem of information overload for end users. Smart technologies are used in video surveillance to automatically analyze and detect events of interest in real time, through 2D and 3D video processing techniques called video analytics. This paper presents a smart surveillance stereo vision system for real-time intelligent door access monitoring. The system uses two IP cameras in a stereo configuration and a pan-tilt-zoom (PTZ) camera to obtain real-time, localised, high-quality images of any triggering events

    Disparity Refinement based on Depth Image Layers Separation for Stereo Matching Algorithms

    This paper presents a method to improve raw disparity maps in the disparity refinement stage of a stereo matching algorithm. The proposed algorithm uses the disparity depth map produced by a basic sum of absolute differences (SAD) similarity metric as the initial disparity output. The similarity metric finds corresponding pixel points between the left and right images using a fixed window (FW) search. With this approach, the raw disparity depth map obtained is not smooth, contains errors particularly at depth discontinuities, and fails on uniform areas and repetitive patterns. The initial disparity is used to identify the layers of the disparity depth map by adapting the Depth Image Layers Separation (DILS) algorithm, which separates the layers of depth based on disparity range. The disparity values are distributed along the disparity range and can be divided into several layers. Each layer is then mapped to a segmented reference image to refine the disparity depth map. This method, called Depth Layer Refinement (DLR), uses the disparity depth layers to refine the disparity map
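    The fixed-window SAD matching that produces the raw disparity input can be sketched as follows. Images are plain lists of grayscale rows; the window size and disparity range are illustrative, not the paper's settings.

```python
def sad_disparity(left, right, window=1, max_disp=4):
    """Raw disparity map via sum of absolute differences over a fixed window.

    For each left-image pixel, the candidate shift `d` in the right image with
    the lowest SAD cost over the window wins (winner-takes-all).
    """
    h, w = len(left), len(left[0])
    disp = [[0] * w for _ in range(h)]
    for y in range(window, h - window):
        for x in range(window + max_disp, w - window):
            best_d, best_cost = 0, float("inf")
            for d in range(max_disp + 1):        # candidate disparities
                cost = sum(abs(left[y + dy][x + dx] - right[y + dy][x + dx - d])
                           for dy in range(-window, window + 1)
                           for dx in range(-window, window + 1))
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y][x] = best_d
    return disp
```

    Because each pixel is decided independently from a small local window, the output is noisy at depth discontinuities and ambiguous in textureless or repetitive regions, which is exactly the weakness the DILS/DLR refinement targets.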

    Disparity Depth Map Layers Representation for Image View Synthesis

    This paper presents a method that jointly performs stereo matching and inter-view interpolation to obtain the depth map and a virtual view image. A novel view synthesis method based on a depth map layers representation of the stereo image pairs is proposed. The main idea of this approach is to separate the depth map into several layers of depth based on the disparity distance of the corresponding points. The novel view can be interpolated independently for each layer of depth by masking the particular depth layer. The final novel view is obtained by flattening all the layers into a single layer. Since the image view synthesis is performed in separate layers, an extracted new virtual object can be superimposed onto another 3D scene. The method is useful for free-viewpoint video applications with a small number of cameras. Experimental results show that the algorithm improves the efficiency of finding the depth map and synthesising new virtual view images
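    The layered idea can be sketched as: split the disparity map into layers by disparity range, then shift each layer's pixels by a fraction of its disparity to render an intermediate view, painting far layers first so near layers occlude them. The layer boundaries and the 0.5 interpolation factor below are illustrative, not the paper's parameters.

```python
def split_layers(disp_map, boundaries):
    """Mask the disparity map into layers; boundaries like [(0, 3), (4, 7)]."""
    layers = []
    for lo, hi in boundaries:
        layers.append([[d if lo <= d <= hi else None for d in row]
                       for row in disp_map])
    return layers

def synthesise_view(image, disp_map, boundaries, alpha=0.5):
    """Render a virtual view at fraction `alpha` of the baseline."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    # Far layers (small disparity) first, so near layers overwrite (occlude) them.
    for layer in split_layers(disp_map, boundaries):
        for y in range(h):
            for x in range(w):
                d = layer[y][x]
                if d is None:
                    continue
                nx = x - int(round(alpha * d))   # shift pixel toward the virtual view
                if 0 <= nx < w:
                    out[y][nx] = image[y][x]
    return out
```

    Because each layer is rendered independently, a single layer's pixels (e.g. an extracted foreground object) can equally be composited onto a different scene.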

    Iris localisation using Fuzzy Centre Detection (FCD) scheme and active contour snake

    Iris localisation is a crucial operation in iris recognition algorithms and in other applications where irises are the main target object. This paper presents a new method to localise the iris by using a Fuzzy Centre Detection (FCD) scheme and an active contour Snake. The FCD scheme, which consists of four fuzzy membership functions, is designed to find the centre of the iris. Using the centre of the iris as the reference point, an active contour Snake algorithm is employed to localise the inner and outer iris boundaries. The proposed method is tested and validated with two categories of image database: iris databases and face databases. For the iris databases, UBIRIS.v1, UBIRIS.v2, CASIA.v1, CASIA.v2, MMU1 and MMU2 are used, whilst for the face databases, MUCT, AT&T, Georgia Tech and ZJUblink are chosen to evaluate the capability of the proposed method to deal with small iris sizes in the images. Based on the experimental results, the proposed method shows promising performance on both types of database, including in comparison with some existing iris localisation algorithms
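    The paper's FCD scheme fuses four fuzzy membership functions whose exact definitions are not given here, so the sketch below shows only the generic mechanism: triangular memberships over candidate-centre cues, fused by a minimum (fuzzy AND). Both cue functions (`darkness`, `circularity`) are hypothetical stand-ins, not the paper's actual design.

```python
def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def centre_score(darkness, circularity):
    """Fuse two hypothetical cues; the best centre candidate maximises this."""
    mu_dark = tri(darkness, 0.4, 1.0, 1.01)      # pupil regions are dark
    mu_circ = tri(circularity, 0.5, 1.0, 1.01)   # iris boundary is near-circular
    return min(mu_dark, mu_circ)                 # fuzzy AND of the memberships

# A dark, highly circular candidate outscores a brighter one:
assert centre_score(0.95, 0.9) > centre_score(0.5, 0.9)
```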

    Interactive Objects for Augmented Reality by Using Oculus Rift and Motion Sensor

    Augmented Reality (AR) is a technology to blend digital content such as audio, text, animation and 3D models seamlessly into the real-world environment. This technology is going to change the way people imagine, see and learn in the future. This paper discusses the implementation and development of interactive objects for AR by integrating the Oculus Rift, the Leap Motion Controller (LMC) and a camera. In this project, a video-displayed AR device is created to work together with a specially designed AR book. An LMC is essential as a peripheral input device for the user to interact with the system. The LMC further enhances interactivity with well-designed hand gestures such as thumb up, thumb down and pinching. The proposed design will enhance the user experience for interacting, engaging and responding to the information