
    Image enhancement using fuzzy intensity measure and adaptive clipping histogram equalization

    Get PDF
    Image enhancement aims at processing an input image so that the visual content of the output image is more pleasing or more useful for certain applications. Although histogram equalization is widely used in image enhancement due to its simplicity and effectiveness, it changes the mean brightness of the enhanced image and introduces a high level of noise and distortion. To address these problems, this paper proposes image enhancement using a fuzzy intensity measure and adaptive clipping histogram equalization (FIMHE). FIMHE first uses the fuzzy intensity measure to segment the histogram of the original image, and then clips the histogram adaptively in order to prevent excessive enhancement. Experiments on the Berkeley database and the CVF-UGR-Image database show that FIMHE outperforms state-of-the-art histogram-equalization-based methods.
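
    The core of the clipping idea can be illustrated in a few lines of code. The sketch below shows generic clipped histogram equalization for an 8-bit grayscale image; the clip level here is a fixed multiple of the mean bin count (`clip_fraction` is an illustrative parameter), whereas FIMHE derives both the histogram segmentation and the clip level from its fuzzy intensity measure, which is not reproduced here.

```python
import numpy as np

def clipped_histogram_equalization(image, clip_fraction=1.5):
    """Minimal sketch of clipped histogram equalization on an 8-bit grayscale image.

    The histogram is clipped at clip_fraction times the mean bin count before
    the cumulative distribution is built, which limits over-enhancement.
    (This illustrates the general clipping idea only, not the fuzzy intensity
    measure or the adaptive clip level of FIMHE itself.)
    """
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))

    # Clip bins that exceed the threshold and redistribute the excess uniformly.
    threshold = clip_fraction * hist.mean()
    excess = np.sum(np.maximum(hist - threshold, 0))
    hist = np.minimum(hist, threshold) + excess / 256.0

    # Build the intensity mapping from the cumulative distribution of the clipped histogram.
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    mapping = np.round(255 * cdf).astype(np.uint8)

    return mapping[image]
```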

    3D Camouflaging Object using RGB-D Sensors

    Full text link
    This paper proposes a new optical camouflage system that uses RGB-D cameras to acquire a point cloud of the background scene and to track the observer's eyes. The system enables a user to conceal an object located behind a display that is surrounded by 3D objects. If the tracked position of the observer's eyes is treated as a light source, the system estimates the shadow shape that the display device casts on the objects in the background. It uses the 3D position of the observer's eyes and the locations of the display corners to predict their shadow points, which are found as nearest neighbors in the constructed point cloud of the background scene. Comment: 6 pages, 12 figures, 2017 IEEE International Conference on SM
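
    As a rough illustration of the geometric idea, the sketch below casts a ray from the tracked eye position through each display corner and picks the background point nearest to that ray as the corner's shadow point. The function name and the brute-force nearest-neighbor search are assumptions for illustration; the paper's actual projection and search are not reproduced here.

```python
import numpy as np

def shadow_points(eye, display_corners, cloud):
    """Hedged sketch: treat the tracked eye position as a light source and, for
    each display corner, return the background point closest to the ray that
    starts at the eye and passes through that corner (brute-force search;
    not the paper's actual shadow estimation).

    eye:             (3,) eye position from the RGB-D tracker
    display_corners: (4, 3) 3D positions of the display corners
    cloud:           (N, 3) point cloud of the background scene
    """
    shadows = []
    for corner in display_corners:
        direction = (corner - eye).astype(float)
        direction /= np.linalg.norm(direction)

        # Perpendicular distance of every cloud point to the eye->corner ray,
        # considering only points that lie beyond the display corner.
        to_points = cloud - eye
        t = to_points @ direction                 # distance along the ray
        proj = eye + np.outer(t, direction)       # closest point on the ray
        dist = np.linalg.norm(cloud - proj, axis=1)
        dist[t <= np.linalg.norm(corner - eye)] = np.inf

        shadows.append(cloud[np.argmin(dist)])
    return np.array(shadows)
```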

    Global Motion Estimation and Its Applications

    Get PDF
    In this chapter, global motion estimation and its applications are presented. Firstly, we give the definitions of global motion and global motion estimation. Secondly, the parametric representations of global motion models are provided. Thirdly, global motion estimation approaches are described, including pixel-domain global motion estimation and hierarchical global motion estimation.
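
    As a minimal example of a parametric representation, the sketch below applies a six-parameter affine global motion model to pixel coordinates. The chapter's own model set (e.g., translational or perspective models) may differ, and the parameter names are illustrative.

```python
import numpy as np

def apply_global_motion(coords, params):
    """Minimal sketch of a six-parameter affine global motion model.

    coords: (N, 2) array of (x, y) pixel coordinates in the current frame
    params: (a0, a1, a2, a3, a4, a5) with
            x' = a0*x + a1*y + a2
            y' = a3*x + a4*y + a5
    """
    a0, a1, a2, a3, a4, a5 = params
    x, y = coords[:, 0], coords[:, 1]
    x_new = a0 * x + a1 * y + a2
    y_new = a3 * x + a4 * y + a5
    return np.stack([x_new, y_new], axis=1)

# Example: a pure translation of (5, -3) pixels.
pts = np.array([[10.0, 20.0], [100.0, 50.0]])
print(apply_global_motion(pts, (1, 0, 5, 0, 1, -3)))
```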

    High Quality of Service on Video Streaming in P2P Networks using FST-MDC

    Full text link
    Video streaming applications have recently attracted a large number of participants in distribution networks. Traditional client-server video streaming solutions consume expensive bandwidth on the server. Recently, several P2P streaming systems have been deployed to provide on-demand and live video streaming services over wireless networks at reduced server cost. Peer-to-peer (P2P) computing is a new paradigm for constructing distributed network applications. Typical error control techniques are not well suited to video transmission, while the use of error-prone channels, e.g., wireless networks and IP, has grown greatly. These two facts together provide the essential motivation for developing a new set of error concealment techniques capable of dealing with transmission errors in video systems. In this paper, we propose a flexible multiple description coding method, named Flexible Spatial-Temporal (FST), which improves error resilience against frame loss over independent paths. It combines spatial and temporal concealment techniques at the receiver to conceal lost frames more effectively. Experimental results show that the proposed approach attains reasonable video quality over a P2P wireless network. Comment: 11 pages, 8 figures, journal
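
    A heavily simplified sketch of the temporal side of this idea is shown below, assuming two descriptions formed from even- and odd-indexed frames and a receiver that conceals a lost description by averaging surviving neighbors. FST's spatial split and its spatial concealment at the receiver are not modeled, and the function names are illustrative.

```python
import numpy as np

def split_descriptions(frames):
    """Hedged sketch of the temporal half of multiple description coding:
    even-indexed and odd-indexed frames form two descriptions that are sent
    over independent paths."""
    return frames[0::2], frames[1::2]

def conceal_lost_description(received_even):
    """If the odd description is lost, estimate each missing frame by averaging
    its two surviving temporal neighbors -- a simple stand-in for the combined
    spatial-temporal concealment performed at the FST receiver."""
    recovered = []
    for i in range(len(received_even)):
        nxt = received_even[min(i + 1, len(received_even) - 1)]
        recovered.append((received_even[i].astype(np.float32) + nxt) / 2.0)
    return recovered

# Usage: frames is a list of equally sized numpy arrays (decoded pictures).
frames = [np.full((4, 4), k, dtype=np.uint8) for k in range(6)]
even, odd = split_descriptions(frames)
odd_estimate = conceal_lost_description(even)
```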

    Cross-layer Optimized Wireless Video Surveillance

    Get PDF
    A wireless video surveillance system contains three major components: the video capture and preprocessing, the video compression and transmission over wireless sensor networks (WSNs), and the video analysis at the receiving end. The coordination of the different components is important for improving the end-to-end video quality, especially under communication resource constraints. Cross-layer control proves to be an efficient measure for optimal system configuration. In this dissertation, we address the problem of implementing cross-layer optimization in the wireless video surveillance system. The thesis work is based on three research projects. In the first project, a single PTU (pan-tilt-unit) camera is used for video object tracking. The problem studied is how to improve the quality of the received video by jointly considering the coding and transmission processes. The cross-layer controller determines the optimal coding and transmission parameters according to the dynamic channel condition and the transmission delay. Multiple error concealment strategies are developed that utilize the special properties of the PTU camera motion. In the second project, a binocular PTU camera is adopted for video object tracking. The presented work studies a fast disparity estimation algorithm and 3D video transcoding over the WSN for real-time applications. The disparity/depth information is estimated in a coarse-to-fine manner using both local and global methods. The transcoding is coordinated by the cross-layer controller based on the channel condition and the data rate constraint, in order to achieve the best view synthesis quality. The third project addresses multi-camera motion capture in remote healthcare monitoring. The challenge is the resource allocation for multiple video sequences. The presented cross-layer design incorporates delay-sensitive, content-aware video coding and transmission, and adaptive video coding and transmission, to ensure optimal and balanced quality for the multi-view videos. In these projects, interdisciplinary studies are conducted to synergize the surveillance system under the cross-layer optimization framework. Experimental results demonstrate the efficiency of the proposed schemes. The challenges of cross-layer design in existing wireless video surveillance systems are also analyzed to inform future work. Adviser: Song C
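
    To make the controller's role concrete, the sketch below shows the kind of decision a cross-layer controller might make: choosing a source-coding rate that fits the current channel estimate and delay budget. The rates, margins, and function name are illustrative assumptions, not the dissertation's actual controller.

```python
def select_coding_rate(channel_capacity_kbps, frame_delay_ms, max_delay_ms,
                       candidate_rates_kbps=(2000, 1000, 500, 250)):
    """Hedged sketch of a cross-layer rate decision: pick the highest
    source-coding rate that fits the currently estimated channel capacity
    and still meets the end-to-end delay budget. All parameters are
    illustrative, not the dissertation's actual controller settings."""
    # Leave headroom so retransmissions do not blow the delay budget.
    usable_capacity = 0.8 * channel_capacity_kbps
    delay_margin = max_delay_ms - frame_delay_ms

    for rate in candidate_rates_kbps:
        if rate <= usable_capacity and delay_margin > 0:
            return rate
    return min(candidate_rates_kbps)   # fall back to the most robust setting

# Example: a degraded channel forces the controller to a lower coding rate.
print(select_coding_rate(channel_capacity_kbps=900, frame_delay_ms=40, max_delay_ms=150))
```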

    Segmentation Based Image Scanning

    Get PDF
    The submitted paper deals with the separate scanning of individual image segments. A new image processing approach based on image segmentation and segment scanning is presented. The 1-dimensional representation of the individual segments provides higher neighboring-pixel similarity than the 1-dimensional representation of the original image. This increased adjacent-pixel similarity is achieved even without applying recursive 2-dimensional scanning methods [4], such as the Peano-Hilbert scanning method [1]. The resulting 1-dimensional image representation provides a good basis for applying lossless compression methods, such as entropy coding. The paper also analyzes the traditionally scanned segment pixels and their adjacent-pixel differences from the entropy point of view. As these results indicate, lossy compression methods could also be applicable with this approach and might improve the final results, as confirmed by the results of a simple prediction algorithm presented in this paper. The application of more complex and sophisticated lossy compression algorithms will be part of the future work.
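
    A small sketch of why segment-wise scanning helps is given below: it compares the entropy of adjacent-pixel differences for a whole-image row-major scan against a scan that reads pixels segment by segment. The segment traversal shown (row-major within each label) is an assumption for illustration, not the paper's scanning order.

```python
import numpy as np

def difference_entropy(sequence):
    """Entropy (bits/symbol) of the adjacent-pixel differences of a 1-D scan."""
    diffs = np.diff(sequence.astype(np.int16))
    _, counts = np.unique(diffs, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def segmentwise_scan(image, labels):
    """Hedged sketch of segment-based scanning: pixels are read out segment by
    segment (here simply in row-major order within each labelled segment)
    instead of scanning the whole image at once, which tends to group similar
    pixels and lower the entropy of the difference signal."""
    return np.concatenate([image[labels == k].ravel() for k in np.unique(labels)])

# Usage: compare the whole-image scan against the segment-wise scan.
# image is a 2-D uint8 array, labels a same-shaped array of segment ids.
# print(difference_entropy(image.ravel()), difference_entropy(segmentwise_scan(image, labels)))
```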

    Loss tolerant speech decoder for telecommunications

    Get PDF
    A method and device for extrapolating past signal-history data for insertion into missing data segments in order to conceal digital speech frame errors. The extrapolation method uses past signal history that is stored in a buffer. The method is implemented with a device that utilizes a finite-impulse-response (FIR) multi-layer feed-forward artificial neural network trained by back-propagation for one-step extrapolation of speech compression algorithm (SCA) parameters. Once a speech connection has been established, the speech compression algorithm device begins sending encoded speech frames. As the speech frames are received, they are decoded and converted back into speech signal voltages. During the normal decoding process, the required SCA parameters are pre-processed and the results are stored in the past-history buffer. If a speech frame is detected to be lost or in error, the extrapolation modules are executed and replacement SCA parameters are generated and sent as the parameters required by the SCA. In this way, the information transfer to the SCA is transparent, and the SCA processing continues as usual. The listener will not normally notice that a speech frame has been lost because of the smooth transition between the last-received, lost, and next-received speech frames.
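
    The sketch below illustrates the general shape of such a one-step extrapolator, assuming a small feed-forward network that maps a buffer of past SCA parameter vectors to a prediction for the next frame. The layer sizes, activation, and class name are illustrative assumptions; the patented device's network structure and trained weights are not reproduced.

```python
import numpy as np

class OneStepExtrapolator:
    """Hedged sketch of the decoder-side idea: a small feed-forward network maps
    a buffer of past SCA parameter vectors to a predicted parameter vector for
    the next frame, which replaces the parameters of a lost or errored frame.
    The architecture shown is illustrative; the patented device trains its
    weights offline by back-propagation."""

    def __init__(self, history_len, n_params, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(history_len * n_params, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(hidden, n_params))
        self.b2 = np.zeros(n_params)

    def predict(self, history):
        """history: (history_len, n_params) buffer of past decoded parameters."""
        x = history.ravel()
        h = np.tanh(x @ self.W1 + self.b1)
        return h @ self.W2 + self.b2

# Usage: if a frame is lost, feed the past-history buffer and substitute the output.
net = OneStepExtrapolator(history_len=4, n_params=10)
replacement = net.predict(np.zeros((4, 10)))
```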