
    JNMR: Joint Non-linear Motion Regression for Video Frame Interpolation

    Video frame interpolation (VFI) aims to generate intermediate frames by warping learned motions from bidirectional historical references. Most existing works use a spatio-temporal semantic information extractor to realize motion estimation and interpolation modeling, but they insufficiently consider the physical plausibility of the generated intermediate motions. In this paper, we reformulate VFI as a Joint Non-linear Motion Regression (JNMR) strategy to model complicated inter-frame motions. Specifically, the motion trajectory between the target frame and multiple reference frames is regressed by a temporal concatenation of multi-stage quadratic models, and a ConvLSTM is adopted to construct the joint distribution of the complete motions along the temporal dimension. Moreover, a feature learning network is designed and optimized for the joint regression modeling. A coarse-to-fine synthesis enhancement module further learns visual dynamics at different resolutions through repetitive regression and interpolation. Experimental results demonstrate the effectiveness and significant improvement of joint motion regression over state-of-the-art methods. The code is available at https://github.com/ruhig6/JNMR. Comment: Accepted by IEEE Transactions on Image Processing (TIP).
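
    The quadratic-regression idea can be sketched in a few lines. Below is a minimal, hypothetical numpy illustration of fitting a quadratic trajectory through per-pixel displacements observed at several reference times and evaluating it at the target time; it is a toy stand-in for the paper's multi-stage, ConvLSTM-based regression, not the authors' implementation, and all numbers are made up.

        import numpy as np

        # Hypothetical reference times around a target frame at t = 1
        # (e.g. four reference frames at t = -1, 0, 2, 3).
        ref_times = np.array([-1.0, 0.0, 2.0, 3.0])
        # Toy displacements of one pixel observed at those times.
        disp_x = np.array([-2.1, 0.0, 3.8, 5.5])
        disp_y = np.array([1.0, 0.0, -1.9, -2.7])

        # Regress a quadratic trajectory d(t) = a*t^2 + b*t + c per axis,
        # rather than assuming linear motion between the two nearest frames.
        coeff_x = np.polyfit(ref_times, disp_x, deg=2)
        coeff_y = np.polyfit(ref_times, disp_y, deg=2)

        # Evaluate the regressed motion at the intermediate target time.
        print(np.polyval(coeff_x, 1.0), np.polyval(coeff_y, 1.0))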

    Event-Based Algorithms For Geometric Computer Vision

    Event cameras are novel bio-inspired sensors which mimic the function of the human retina. Rather than directly capturing intensities to form synchronous images as in traditional cameras, event cameras asynchronously detect changes in log image intensity. When such a change is detected at a given pixel, it is immediately sent to the host computer, where each event consists of the x, y pixel position of the change, a timestamp accurate to tens of microseconds, and a polarity indicating whether the pixel got brighter or darker. These cameras provide a number of useful benefits over traditional cameras, including the ability to track extremely fast motions, high dynamic range, and low power consumption. However, with a new sensing modality comes the need to develop novel algorithms. As these cameras do not capture photometric intensities, novel loss functions must be developed to replace the photoconsistency assumption which serves as the backbone of many classical computer vision algorithms. In addition, the relative novelty of these sensors means that there does not yet exist the wealth of data available for traditional images with which we can train learning-based methods such as deep neural networks. In this work, we address both of these issues with two foundational principles. First, we show that the motion blur induced when the events are projected into the 2D image plane can be used as a suitable substitute for the classical photometric loss function. Second, we develop self-supervised learning methods which allow us to train convolutional neural networks to estimate motion without any labeled training data. We apply these principles to solve classical perception problems such as feature tracking, visual inertial odometry, optical flow, and stereo depth estimation, as well as recognition tasks such as object detection and human pose estimation. We show that these solutions are able to utilize the benefits of event cameras, allowing us to operate in fast-moving scenes with challenging lighting which would be incredibly difficult for traditional cameras.
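
    The first principle, using the motion blur of projected events as a loss, is commonly realized as contrast maximization: events are warped along a candidate motion, accumulated into an image, and the sharpness of that image is scored. The sketch below is a minimal numpy version under simplifying assumptions (a single constant flow over the time window, a toy (x, y, t) event-array layout); it illustrates the principle, not the thesis code.

        import numpy as np

        def contrast_loss(events, flow, resolution=(180, 240)):
            """Negative variance of the motion-compensated event image.
            events: (N, 3) array with columns (x, y, t), a toy layout.
            flow: candidate (vx, vy) in pixels/second, assumed constant."""
            x, y, t = events[:, 0], events[:, 1], events[:, 2]
            dt = t - t.min()
            # Warp each event back to the reference time along the flow.
            xw = np.round(x - flow[0] * dt).astype(int)
            yw = np.round(y - flow[1] * dt).astype(int)
            h, w = resolution
            ok = (xw >= 0) & (xw < w) & (yw >= 0) & (yw < h)
            img = np.zeros(resolution)
            np.add.at(img, (yw[ok], xw[ok]), 1.0)  # accumulate events
            # The correct flow de-blurs the image, maximizing variance.
            return -np.var(img)

    Minimizing this loss over candidate flows (by grid search, or by gradient descent on a smoothed variant) recovers the motion without ever using photometric intensities.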

    Self-Supervised Scene Dynamic Recovery from Rolling Shutter Images and Events

    Scene Dynamic Recovery (SDR), i.e. inverting distorted Rolling Shutter (RS) images to an undistorted high frame-rate Global Shutter (GS) video, is a severely ill-posed problem, particularly when prior knowledge about camera/object motions is unavailable. Commonly used artificial assumptions on motion linearity, and data-specific characteristics regarding the temporal dynamics embedded in the RS scanlines, are prone to producing sub-optimal solutions in real-world scenarios. To address this challenge, we propose an event-based RS2GS framework within a self-supervised learning paradigm that leverages the extremely high temporal resolution of event cameras to provide accurate inter/intra-frame information, so that real-world events and RS images can be exploited to alleviate the performance degradation caused by the domain gap between synthesized and real data. Specifically, an Event-based Inter/intra-frame Compensator (E-IC) is proposed to predict the per-pixel dynamics between arbitrary time intervals, including the temporal transition and spatial translation. Exploring connections in terms of RS-RS, RS-GS, and GS-RS, we explicitly formulate mutual constraints with the proposed E-IC, resulting in supervision without ground-truth GS images. Extensive evaluations on synthetic and real datasets demonstrate that the proposed method achieves state-of-the-art results for event-based RS2GS inversion in real-world scenarios. The dataset and code are available at https://w3un.github.io/selfunroll/.
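
    The geometric fact the compensator exploits is that each rolling-shutter row is captured at its own time, so high-rate events can be associated with the scanline being read out when they fired. Below is a toy numpy sketch of that timing model, with made-up numbers and a simplified constant-rate readout; it is background intuition, not part of the proposed E-IC.

        import numpy as np

        def scanline_times(num_rows, t_start, readout_time):
            # Capture time of each RS row, assuming a constant-rate
            # top-to-bottom readout sweep (a common simplification).
            return t_start + np.arange(num_rows) * (readout_time / num_rows)

        rows = scanline_times(num_rows=260, t_start=0.0, readout_time=0.01)
        events_t = np.array([0.0021, 0.0060, 0.0098])  # toy event times
        # Which scanline was being read out when each event fired?
        idx = np.clip(np.searchsorted(rows, events_t) - 1, 0, 259)
        print(idx)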

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, or high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
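
    The working principle described above (a pixel fires when its log intensity drifts by more than a contrast threshold from the level at its last event) can be mimicked on ordinary video frames. The following is a deliberately naive Python simulator of that principle, ignoring noise, refractory periods, and per-pixel threshold variation; names and defaults are illustrative, not from the survey.

        import numpy as np

        def events_from_frames(frames, timestamps, threshold=0.2, eps=1e-3):
            ref = np.log(frames[0] + eps)        # per-pixel reference level
            events = []                          # (t, x, y, polarity)
            for frame, t in zip(frames[1:], timestamps[1:]):
                logi = np.log(frame + eps)
                diff = logi - ref
                ys, xs = np.nonzero(np.abs(diff) >= threshold)
                for y, x in zip(ys, xs):
                    events.append((t, x, y, int(np.sign(diff[y, x]))))
                    ref[y, x] = logi[y, x]       # reset level at fired pixel
            return events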

    Learning Representations for Controllable Image Restoration

    Deep Convolutional Neural Networks have sparked a renaissance in all the sub-fields of computer vision. Tremendous progress has been made in the area of image restoration: the research community has pushed the boundaries of image deblurring, super-resolution, and denoising. However, given a distorted image, most existing methods typically produce a single restored output. The tasks mentioned above are inherently ill-posed, leading to an infinite number of plausible solutions. This thesis focuses on designing image restoration techniques capable of producing multiple restored results and granting users more control over the restoration process. Towards this goal, we demonstrate how one can leverage the power of unsupervised representation learning. Image restoration is vital when applied to distorted images of human faces due to their social significance. Generative Adversarial Networks (GANs) enable an unprecedented level of generated facial detail combined with a smooth latent space. We leverage the power of GANs towards the goal of learning controllable neural face representations: we demonstrate how to learn an inverse mapping from image space to these latent representations, how to tune these representations towards a specific task, and finally how to manipulate latent codes in these spaces. For example, we show how GANs and their inverse mappings enable the restoration and editing of faces in the context of extreme face super-resolution and the generation of novel-view sharp videos from a single motion-blurred image of a face. This thesis also addresses the more general blind super-resolution, denoising, and scratch removal problems, where blur kernels and noise levels are unknown. We resort to contrastive representation learning and first learn the latent space of degradations. We demonstrate that the learned representation allows inference of ground-truth degradation parameters and can guide the restoration process. Moreover, it enables control over the amount of deblurring and denoising in the restoration via manipulation of latent degradation features.
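
    In its simplest form, the inverse mapping from image space to a GAN's latent space can be approximated by direct latent optimization. The PyTorch sketch below is a generic stand-in for the learned inverse mappings discussed in the thesis: `generator` is assumed to be any differentiable z-to-image module, and in practice the pixel loss would be augmented with perceptual terms.

        import torch

        def invert(generator, target, latent_dim=512, steps=500, lr=0.05):
            # Optimize a latent code z so that generator(z) matches target.
            z = torch.randn(1, latent_dim, requires_grad=True)
            opt = torch.optim.Adam([z], lr=lr)
            for _ in range(steps):
                opt.zero_grad()
                loss = torch.nn.functional.mse_loss(generator(z), target)
                loss.backward()
                opt.step()
            # Restoration and editing then amount to manipulating z and
            # decoding it back through the generator.
            return z.detach()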

    Real-Time Magnetic Resonance Imaging

    Real-time magnetic resonance imaging (RT-MRI) allows for imaging dynamic processes as they occur, without relying on any repetition or synchronization. This is made possible by modern MRI technology such as fast-switching gradients and parallel imaging. It is compatible with many (but not all) MRI sequences, including spoiled gradient echo, balanced steady-state free precession, and single-shot rapid acquisition with relaxation enhancement. RT-MRI has earned an important role in both diagnostic imaging and image guidance of invasive procedures. Its unique diagnostic value is prominent in areas of the body that undergo substantial and often irregular motion, such as the heart, gastrointestinal system, upper airway (vocal tract), and joints. Its value in interventional procedure guidance is prominent for procedures that require multiple forms of soft-tissue contrast, as well as flow information. In this review, we discuss the history of RT-MRI, fundamental tradeoffs, enabling technology, established applications, and current trends.

    Cardiac magnetic resonance assessment of central and peripheral vascular function in patients undergoing renal sympathetic denervation as predictor for blood pressure response

    Background: Most trials of catheter-based renal sympathetic denervation (RDN) describe a proportion of patients without blood pressure response. Recently, we were able to show that arterial stiffness, measured by invasive pulse wave velocity (IPWV), seems to be an excellent predictor of blood pressure response. However, given its invasiveness, IPWV is less suitable as a selection criterion for patients undergoing RDN. Consequently, we aimed to investigate the value of cardiac magnetic resonance (CMR) based measures of arterial stiffness in predicting the outcome of RDN, using IPWV as the reference. Methods: Patients underwent CMR prior to RDN to assess ascending aortic distensibility (AAD), total arterial compliance (TAC), and systemic vascular resistance (SVR). In a second step, central aortic blood pressure was estimated from ascending aortic area change and flow sequences and used to re-calculate total arterial compliance (cTAC). Additionally, IPWV was acquired. Results: Thirty-two patients (24 responders and 8 non-responders) were available for analysis. AAD, TAC, and cTAC were higher in responders; IPWV was higher in non-responders. SVR did not differ between the groups. Patients with AAD, cTAC, or TAC above the median and IPWV below the median had a significantly better blood pressure response. Receiver operating characteristic (ROC) curves predicting blood pressure response for IPWV, AAD, cTAC, and TAC revealed areas under the curve of 0.849, 0.828, 0.776, and 0.753, respectively (p = 0.004, 0.006, 0.021, and 0.035). Conclusions: Beyond IPWV, AAD, cTAC, and TAC appear to be useful outcome predictors for RDN in patients with hypertension. CMR-derived markers of arterial stiffness might serve as non-invasive selection criteria for RDN.
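
    For readers unfamiliar with the derived measures: distensibility relates the relative change in lumen area to the driving pulse pressure, and each marker's predictive value was quantified by the area under the ROC curve. The Python snippet below shows both computations on made-up numbers; it assumes the standard distensibility definition rather than the study's exact processing pipeline, and the reported AUCs come from the actual 32-patient cohort, not from this toy data.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def distensibility(a_max, a_min, pulse_pressure):
            # (Amax - Amin) / (Amin * pulse pressure): the standard
            # cross-sectional distensibility definition, e.g. in mmHg^-1.
            return (a_max - a_min) / (a_min * pulse_pressure)

        # Toy ROC analysis: 1 = blood pressure responder, 0 = non-responder.
        response = np.array([1, 1, 1, 0, 1, 0, 1, 0])
        aad = np.array([3.1, 2.8, 2.4, 1.2, 2.9, 1.5, 2.2, 1.1])
        print("toy AUC:", roc_auc_score(response, aad))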

    Super Resolution of Wavelet-Encoded Images and Videos

    In this dissertation, we address the multiframe super resolution reconstruction problem for wavelet-encoded images and videos. The goal of multiframe super resolution is to obtain one or more high resolution images by fusing a sequence of degraded or aliased low resolution images of the same scene. Since the low resolution images may be unaligned, a registration step is required before super resolution reconstruction. Therefore, we first explore in-band (i.e. in the wavelet domain) image registration and then investigate super resolution. Our motivation for analyzing the image registration and super resolution problems in the wavelet domain is the growing trend toward wavelet-encoded imaging and wavelet encoding for image/video compression. Due to drawbacks of the widely used discrete cosine transform in image and video compression, a considerable amount of literature is devoted to wavelet-based methods. However, since wavelets are shift-variant, existing methods cannot utilize wavelet subbands efficiently. In order to overcome this drawback, we establish and explore the direct relationship between the subbands under a translational shift, for image registration and super resolution. We then employ our devised in-band methodology in a motion-compensated video compression framework to demonstrate the effective usage of wavelet subbands. Super resolution can also be used as a post-processing step in video compression in order to decrease the size of the video files to be compressed, with downsampling added as a pre-processing step. Therefore, we present a video compression scheme that utilizes super resolution to reconstruct the high frequency information lost during downsampling. In addition, super resolution is a crucial post-processing step for satellite imagery, due to the fact that it is hard to update imaging devices after a satellite is launched. Thus, we also demonstrate the usage of our devised methods in enhancing the resolution of pansharpened multispectral images.
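
    The shift-variance that motivates the in-band approach is easy to demonstrate. The snippet below uses the PyWavelets package on a random, purely illustrative image: the subbands of an image translated by one pixel are not an integer shift of the original subbands, which is precisely the obstacle the dissertation's direct subband relations address.

        import numpy as np
        import pywt

        img = np.random.default_rng(0).random((64, 64))
        shifted = np.roll(img, 1, axis=1)       # translate by one pixel

        ca1, (ch1, cv1, cd1) = pywt.dwt2(img, "haar")
        ca2, (ch2, cv2, cd2) = pywt.dwt2(shifted, "haar")

        # A one-pixel image shift lands "between" decimated subband
        # samples, so no integer roll of cd1 reproduces cd2.
        print(np.abs(cd1 - cd2).max())
        print(np.abs(np.roll(cd1, 1, axis=1) - cd2).max())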