
    Combined wavelet domain and motion compensated filtering compliant with video codecs

    In this paper, we introduce the idea of using motion estimation resources from a video codec for video denoising. This is not straightforward because motion estimators designed for video compression and coding tolerate errors in the estimated motion field and hence are not directly applicable to video denoising. To solve this problem, we propose a novel motion field filtering step that refines the accuracy of the motion estimates to the degree required for denoising. We illustrate the use of the proposed motion estimation method within a wavelet-based video denoising scheme. The resulting video denoising method has low complexity and achieves results comparable to state-of-the-art video denoising methods
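    The abstract does not spell out the motion-field filtering step. Below is a minimal sketch of one plausible refinement, assuming the codec provides a block-wise integer motion field and that each block's vector is replaced by the best-matching vector from its 3x3 neighbourhood; the names `refine_motion_field`, `mv`, `ref`, and `cur` are illustrative, not taken from the paper.

```python
import numpy as np

def refine_motion_field(mv, ref, cur, block=8):
    """Illustrative refinement of a codec motion field for denoising.

    For each block, candidate vectors are taken from the 3x3 neighbourhood
    of the motion field, and the candidate with the lowest matching error
    (SAD against the reference frame) replaces the original vector. This
    suppresses outlier vectors that a compression-oriented estimator
    would tolerate.
    """
    H, W = mv.shape[:2]                      # motion field: H x W blocks x 2 (dy, dx)
    refined = mv.copy()
    for by in range(H):
        for bx in range(W):
            y0, x0 = by * block, bx * block
            patch = cur[y0:y0 + block, x0:x0 + block].astype(np.int32)
            best_err, best_v = np.inf, mv[by, bx]
            # candidates: the block's own vector plus its 8 neighbours
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = by + dy, bx + dx
                    if not (0 <= ny < H and 0 <= nx < W):
                        continue
                    vy, vx = mv[ny, nx]
                    ry, rx = y0 + vy, x0 + vx
                    if ry < 0 or rx < 0 or ry + block > ref.shape[0] or rx + block > ref.shape[1]:
                        continue
                    err = np.abs(patch - ref[ry:ry + block, rx:rx + block].astype(np.int32)).sum()
                    if err < best_err:
                        best_err, best_v = err, mv[ny, nx]
            refined[by, bx] = best_v
    return refined
```

    The refined field could then drive motion-compensated averaging of wavelet coefficients across frames, which is the role it plays in the denoising scheme described above.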

    Scalable video/image transmission using rate compatible PUM turbo codes

    The robust delivery of video over emerging wireless networks poses many challenges due to the heterogeneity of access networks, the variations in streaming devices, and the expected variations in network conditions caused by interference and coexistence. The proposed approach exploits the joint optimization of a wavelet-based scalable video/image coding framework and a forward error correction method based on PUM turbo codes. The scheme minimizes the reconstructed image/video distortion at the decoder subject to a constraint on the overall transmission bitrate budget. The minimization is achieved by exploiting the rate optimization technique and the statistics of the transmission channel
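    The abstract describes the rate-constrained minimization only at a high level. The following hedged sketch shows one common realisation, a greedy marginal-efficiency allocation over scalable layers; the layer structure and the (rate, distortion-reduction) increments are assumptions for illustration, not the paper's actual optimizer.

```python
import heapq

def allocate_rate(layers, budget):
    """Greedy rate allocation across scalable layers under a bitrate budget.

    `layers` maps a layer id to an ordered list of (rate_increment,
    distortion_reduction) refinement steps. At each step the increment with
    the best distortion reduction per transmitted bit is sent, until the
    overall bitrate budget is exhausted.
    """
    heap = []
    for lid, incs in layers.items():
        if incs:
            r, d = incs[0]
            heapq.heappush(heap, (-d / r, 0, lid, r, d))
    spent, plan = 0, []
    while heap:
        neg_eff, idx, lid, r, d = heapq.heappop(heap)
        if spent + r > budget:
            continue                          # increment does not fit; drop this chain
        spent += r
        plan.append((lid, idx))
        nxt = idx + 1
        if nxt < len(layers[lid]):            # queue the layer's next refinement
            nr, nd = layers[lid][nxt]
            heapq.heappush(heap, (-nd / nr, nxt, lid, nr, nd))
    return plan, spent
```

    The sketch assumes each layer's increments have decreasing efficiency, so the greedy choice approximates the constrained distortion minimization; the paper's scheme additionally folds in the channel statistics when deciding how much rate to spend on PUM turbo-code protection.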

    Video modeling via implicit motion representations

    Video modeling refers to the development of analytical representations for explaining the intensity distribution in video signals. Based on such a representation, we can develop algorithms for accomplishing particular video-related tasks; video modeling therefore provides a foundation that bridges video data and the related tasks. Although many video models have been proposed in the past decades, the rise of new applications calls for more efficient and accurate video modeling approaches.
    Most existing video modeling approaches are based on explicit motion representations, where motion information is explicitly expressed by correspondence-based representations (i.e., motion velocity or displacement). Although conceptually simple, the limitations of those representations and the suboptimality of motion estimation techniques can degrade such video modeling approaches, especially when handling complex motion or non-ideally observed video data. In this thesis, we propose to investigate video modeling without explicit motion representation. Motion information is implicitly embedded in the spatio-temporal dependency among pixels or patches instead of being explicitly described by motion vectors.
    Firstly, we propose a parametric model based on spatio-temporal adaptive localized learning (STALL). We formulate video modeling as a linear regression problem in which motion information is embedded within the regression coefficients. The coefficients are adaptively learned within a local space-time window based on the LMMSE criterion. Incorporating a spatio-temporal resampling and a Bayesian fusion scheme, we can enhance the modeling capability of STALL on more general videos. Under the framework of STALL, we can develop video processing algorithms for a variety of applications by adjusting the model parameters (i.e., the size and topology of the model support and training window). We apply STALL to three video processing problems. The simulation results show that motion information can be efficiently exploited by our implicit motion representation and that the resampling and fusion help to enhance the modeling capability of STALL.
    Secondly, we propose a nonparametric video modeling approach that does not depend on explicit motion estimation. Assuming the video sequence is composed of many overlapping space-time patches, we propose to embed motion-related information into the relationships among video patches and develop a generic sparsity-based prior for typical video sequences. First, we extend block matching to more general kNN-based patch clustering, which provides an implicit and distributed representation of motion information. We propose to enforce the sparsity constraint on a higher-dimensional data array signal generated by packing the patches of the similar-patch set. Then we solve the inference problem by updating the kNN array and the desired signal iteratively. Finally, we present a Bayesian fusion approach to fuse multiple-hypothesis inferences. Simulation results in video error concealment, denoising, and deartifacting are reported to demonstrate its modeling capability.
    Finally, we summarize the two proposed video modeling approaches. We also point out the perspectives of implicit motion representations in applications ranging from low-level to high-level problems
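    The locally learned LMMSE regression at the heart of STALL can be sketched as follows. This is an illustrative reading of the abstract, not the thesis code: the 3x3 previous-frame support, the window size, and the plain least-squares solver are all assumptions.

```python
import numpy as np

def stall_predict(video, t, y, x, win=5):
    """Minimal sketch of spatio-temporal adaptive localized learning (STALL).

    The pixel (t, y, x) is modelled as a linear combination of a 3x3 support
    in the previous frame. The combination weights are learned by least
    squares (an LMMSE-style estimate) from the pixels inside a local
    (2*win+1)^2 training window, so motion is captured implicitly by the
    learned coefficients rather than by an explicit motion vector.
    Assumes t >= 1 and an interior pixel of a greyscale array video[T, H, W].
    """
    T, H, W = video.shape
    support = [(-1, dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

    def features(tt, yy, xx):
        coords = [(tt + dt, yy + dy, xx + dx) for dt, dy, dx in support]
        if any(not (0 <= a < T and 0 <= b < H and 0 <= c < W) for a, b, c in coords):
            return None
        return [float(video[a, b, c]) for a, b, c in coords]

    rows, targets = [], []
    for ty in range(max(y - win, 0), min(y + win + 1, H)):
        for tx in range(max(x - win, 0), min(x + win + 1, W)):
            if (ty, tx) == (y, x):
                continue                      # exclude the pixel being predicted
            f = features(t, ty, tx)
            if f is not None:
                rows.append(f)
                targets.append(float(video[t, ty, tx]))

    coeffs, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return float(np.dot(features(t, y, x), coeffs))
```

    Changing the support (e.g., adding same-frame neighbours) or the training-window size corresponds to the model-parameter adjustments the thesis uses to adapt STALL to different applications.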

    On-line adaptive video sequence transmission based on generation and transmission of descriptions

    Full text link
    Proceedings of the 26th Picture Coding Symposium, PCS 2007, Lisbon, Portugal, November 2007.
    This paper presents a system to transmit the information from a static surveillance camera in an adaptive way, from low to higher bit-rates, based on the on-line generation of descriptions. The proposed system is based on a server/client model: the server is placed in the surveillance area and the client on the user side. The server analyzes the video sequence to detect the regions of activity (motion analysis), and the corresponding descriptions (mainly MPEG-7 moving regions) are generated together with the textures of the moving regions and the associated background image. Depending on the available bandwidth, different levels of transmission are specified, ranging from sending only the generated descriptions to transmitting all the associated images corresponding to the moving objects and the background.
    This work is partially supported by Cátedra Infoglobal-UAM para Nuevas Tecnologías de video aplicadas a la seguridad. This work is also supported by the Ministerio de Ciencia y Tecnología of the Spanish Government under project TIN2004-07860 (MEDUSA) and by the Comunidad de Madrid under project P-TIC-0223-0505 (PROMULTIDIS)
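    The mapping from available bandwidth to a transmission level is described only qualitatively. A toy sketch is given below; the threshold values and level names are invented for illustration and are not from the paper.

```python
def select_transmission_level(bandwidth_kbps):
    """Illustrative mapping from available bandwidth to transmitted content.

    The paper specifies only that the levels range from sending the
    descriptions alone up to sending all moving-object textures plus the
    background image; the thresholds here are hypothetical.
    """
    if bandwidth_kbps < 64:
        return ["mpeg7_descriptions"]                                      # metadata only
    if bandwidth_kbps < 256:
        return ["mpeg7_descriptions", "moving_region_textures"]
    return ["mpeg7_descriptions", "moving_region_textures", "background_image"]
```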

    Semantic Compression for Edge-Assisted Systems

    A novel semantic approach to data selection and compression is presented for the dynamic adaptation of IoT data processing and transmission within "wireless islands", where a set of sensing devices (sensors) is interconnected through one-hop wireless links to a computational resource via a local access point. The core of the proposed technique is a cooperative framework in which local classifiers at the mobile nodes are dynamically crafted and updated based on the current state of the observed system, the global processing objective, and the characteristics of the sensors and data streams. The edge processor plays a key role by establishing a link between content and operations within the distributed system. The local classifiers are designed to filter the data streams and provide only the needed information to the global classifier at the edge processor, thus minimizing bandwidth usage. However, the better the accuracy of these local classifiers, the larger the energy necessary to run them at the individual sensors. A formulation of the optimization problem for the dynamic construction of the classifiers under bandwidth and energy constraints is proposed and demonstrated on a synthetic example.
    Comment: Presented at the Information Theory and Applications Workshop (ITA), February 17, 201
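    As a hedged illustration of the trade-off described above, the sketch below selects the most accurate local-classifier configuration that respects per-sensor bandwidth and energy budgets. The field names and the brute-force search are assumptions; the paper poses this as a dynamic optimization rather than a static enumeration.

```python
def pick_local_classifier(candidates, bandwidth_limit, energy_limit):
    """Sketch of constrained local-classifier selection at a sensor.

    `candidates` is a list of dicts with hypothetical fields
    {"accuracy", "tx_rate", "energy"}: how well a configuration filters the
    stream, how many bits per second it still forwards to the edge, and how
    much energy it costs on the sensor. The feasible configuration with the
    highest accuracy is returned, mirroring the accuracy/energy trade-off
    described in the abstract.
    """
    feasible = [c for c in candidates
                if c["tx_rate"] <= bandwidth_limit and c["energy"] <= energy_limit]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c["accuracy"])
```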

    Push recovery with stepping strategy based on time-projection control

    In this paper, we present a simple control framework for on-line push recovery with dynamic stepping properties. Due to the relatively heavy legs of our robot, we need to take swing dynamics into account and thus use a linear model called 3LP, which is composed of three pendulums to simulate swing and torso dynamics. Based on the 3LP equations, we formulate discrete LQR controllers and use a particular time-projection method to continuously adjust the next footstep location on-line during the motion. This adjustment, which is found from both pelvis and swing-foot tracking errors, naturally takes the swing dynamics into account. The suggested adjustments are added to the Cartesian 3LP gaits and converted to joint-space trajectories through inverse kinematics. Fixed and adaptive foot-lift strategies also ensure enough ground clearance in perturbed walking conditions. The proposed structure is robust, yet uses very simple state estimation and basic position tracking. We rely on the physical series elastic actuators to absorb impacts while introducing simple laws to compensate for their tracking bias. Extensive experiments demonstrate the functionality of the different control blocks and prove the effectiveness of time-projection in extreme push recovery scenarios. We also show self-produced and emergent walking gaits when the robot is subject to continuous dragging forces. These gaits feature dynamic walking robustness due to the relatively soft springs in the ankles and the absence of any Zero Moment Point (ZMP) control in our proposed architecture.
    Comment: 20 pages, journal paper
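    The discrete LQR gain behind the footstep adjustment can be sketched as below. The 3LP system matrices are not reproduced here, and the fixed-point Riccati iteration is a generic, assumed way of obtaining the stationary gain rather than the paper's exact derivation.

```python
import numpy as np

def discrete_lqr_gain(A, B, Q, R, iters=200):
    """Sketch of a discrete-time LQR gain as used for footstep adjustment.

    A and B would come from the discretised 3LP dynamics (assumed, not
    reproduced here); Q and R weight tracking errors and adjustment effort.
    The stationary gain K is obtained by iterating the Riccati recursion;
    each control cycle the adjustment is u = -K @ x, where x stacks the
    pelvis and swing-foot tracking errors.
    """
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain
        P = Q + A.T @ P @ (A - B @ K)                        # Riccati update
    return K
```

    The time-projection idea described above then maps this gain, defined at discrete footstep events, to adjustments that can be applied continuously during the swing phase.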

    Localisation of mobile nodes in wireless networks with correlated in time measurement noise.

    Wireless sensor networks are an inherent part of decision making, object tracking and location awareness systems. This work focuses on the simultaneous localisation of mobile nodes based on received signal strength indicators (RSSIs) with correlated-in-time measurement noise. Two approaches to dealing with the correlated measurement noise are proposed within the framework of auxiliary particle filtering: the first augments the state vector with the noise, and the second performs noise decorrelation. The performance of the two proposed multi-model auxiliary particle filters (MM AUX-PFs) is validated on simulated and real RSSI data, and high localisation accuracy is demonstrated
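    A minimal sketch of the decorrelation idea, assuming the RSSI measurement noise follows a first-order autoregressive model with known coefficient rho; the function and variable names are illustrative, not from the paper.

```python
def decorrelated_innovation(z_k, z_km1, h_k, h_km1, rho):
    """Measurement differencing for AR(1)-correlated noise.

    Assuming z_k = h(x_k) + v_k with v_k = rho * v_{k-1} + w_k and white w_k,
    the differenced measurement z_k - rho * z_{k-1} again has white noise, so
    a standard (auxiliary) particle-filter likelihood can be evaluated on it.
    h_k and h_km1 are a particle's predicted RSSIs at times k and k-1; rho
    would be identified from data.
    """
    z_star = z_k - rho * z_km1        # decorrelated observation
    h_star = h_k - rho * h_km1        # matching predicted observation
    return z_star - h_star            # innovation used in the particle weight
```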