8 research outputs found

    Temporal Image Interpolation

    This thesis deals with the interpolation of an image sequence between two key frames. The main objective is the design and implementation of an application that interpolates images using optical flow estimated with the Farnebäck method. The application computes intermediate frames with two methods, both based on bidirectional interpolation. The first method selects a pixel together with its neighborhood; the second selects only the pixel and blurs (splats) it into the new frame. Testing was carried out on data capturing different types of movement. When the optical flow was estimated correctly, the interpolation succeeded; otherwise the interpolated frames were inaccurate, especially for key frames with small gradients or ambiguous motion.
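    The bidirectional scheme described above can be sketched in a few lines of numpy, assuming the forward and backward flow fields have already been estimated (e.g. with Farnebäck's method). The function name and the nearest-neighbour splatting are illustrative of the second ("blur into the new frame") variant, not the thesis code itself:

    ```python
    import numpy as np

    def interpolate_midframe(frame_a, frame_b, flow_ab, flow_ba, t=0.5):
        """Hypothetical sketch of bidirectional interpolation by splatting.

        frame_a, frame_b: (H, W) grayscale key frames
        flow_ab: (H, W, 2) optical flow from A to B, (dx, dy) per pixel
        flow_ba: (H, W, 2) optical flow from B to A
        t: temporal position of the interpolated frame in [0, 1]
        """
        h, w = frame_a.shape
        out = np.zeros((h, w), dtype=np.float64)
        weight = np.zeros((h, w), dtype=np.float64)
        ys, xs = np.mgrid[0:h, 0:w]

        # Splat each pixel of A forward along t * flow_ab.
        xa = np.clip(np.round(xs + t * flow_ab[..., 0]).astype(int), 0, w - 1)
        ya = np.clip(np.round(ys + t * flow_ab[..., 1]).astype(int), 0, h - 1)
        np.add.at(out, (ya, xa), (1 - t) * frame_a)
        np.add.at(weight, (ya, xa), 1 - t)

        # Splat each pixel of B backward along (1 - t) * flow_ba.
        xb = np.clip(np.round(xs + (1 - t) * flow_ba[..., 0]).astype(int), 0, w - 1)
        yb = np.clip(np.round(ys + (1 - t) * flow_ba[..., 1]).astype(int), 0, h - 1)
        np.add.at(out, (yb, xb), t * frame_b)
        np.add.at(weight, (yb, xb), t)

        # Normalise by the accumulated weights (holes keep value 0).
        return out / np.maximum(weight, 1e-8)
    ```

    With a correct flow field a feature moving from A to B lands at its intermediate position; with a wrong flow (small gradient, ambiguous motion) the splats land in the wrong place, which is exactly the failure mode the thesis reports.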

    Temporal Interpolation via Motion Field Prediction

    Navigated 2D multi-slice dynamic Magnetic Resonance (MR) imaging enables high-contrast 4D MR imaging during free breathing and provides in-vivo observations for treatment planning and guidance. Navigator slices are vital for retrospective stacking of 2D data slices in this method. However, they also prolong the acquisition sessions. Temporal interpolation of navigator slices can be used to reduce the number of navigator acquisitions without degrading specificity in stacking. In this work, we propose a convolutional neural network (CNN) based method for temporal interpolation via motion field prediction. The proposed formulation incorporates the prior knowledge that a motion field underlies changes in the image intensities over time. Previous approaches that interpolate directly in the intensity space are prone to produce blurry images or even remove structures in the images. Our method avoids such problems and faithfully preserves the information in the image. Further, an important advantage of our formulation is that it provides an unsupervised estimation of bi-directional motion fields. We show that these motion fields can be used to halve the number of registrations required during 4D reconstruction, thus substantially reducing the reconstruction time. Comment: Submitted to 1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands.
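    The abstract's central claim — interpolating in motion space preserves structure, while interpolating in intensity space blurs it — can be illustrated with a toy 1D example (no CNN involved; the motion here is simply known rather than predicted):

    ```python
    import numpy as np

    # A step edge that moves 4 pixels to the right between two frames.
    w = 16
    frame_a = (np.arange(w) >= 4).astype(float)
    frame_b = (np.arange(w) >= 8).astype(float)

    # Intensity-space interpolation: averaging produces a "ghosted"
    # half-intensity band where the two edges disagree.
    intensity_mid = 0.5 * (frame_a + frame_b)

    # Motion-space interpolation: warp frame_a by half the motion (2 px),
    # which keeps the edge sharp and places it at the midpoint (x = 6).
    idx = np.clip(np.arange(w) - 2, 0, w - 1)
    motion_mid = frame_a[idx]
    ```

    The averaged frame contains values of 0.5 between x = 4 and x = 7 (a structure that exists in neither key frame), whereas the motion-compensated frame remains a clean binary edge — the blur-versus-fidelity contrast the paper exploits.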

    Depth information in natural environments derived from optic flow by insect motion detection system: a model analysis

    Knowing the depth structure of the environment is crucial for moving animals in many behavioral contexts, such as collision avoidance, targeting objects, or spatial navigation. An important source of depth information is motion parallax. This powerful cue is generated on the eyes during translatory self-motion, with the retinal images of nearby objects moving faster than those of distant ones. To investigate how the visual motion pathway represents motion-based depth information, we analyzed its responses to image sequences recorded in natural cluttered environments with a wide range of depth structures. The analysis was done on the basis of an experimentally validated model of the visual motion pathway of insects, with its core elements being correlation-type elementary motion detectors (EMDs). It is the key result of our analysis that the absolute EMD responses, i.e. the motion energy profile, represent the contrast-weighted nearness of environmental structures during translatory self-motion at a roughly constant velocity. In other words, the output of the EMD array highlights contours of nearby objects. This conclusion is largely independent of the scale over which EMDs are spatially pooled and was corroborated by scrutinizing the motion energy profile after eliminating the depth structure from the natural image sequences. Hence, the well-established dependence of correlation-type EMDs on both velocity and textural properties of motion stimuli appears to be advantageous for representing behaviorally relevant information about the environment in a computationally parsimonious way.
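    A correlation-type EMD of the kind this model is built from (a Reichardt detector) is simple enough to sketch directly: each half-detector low-pass-filters (delays) one photoreceptor signal and multiplies it with the undelayed neighbour, and the two mirror-symmetric halves are subtracted. The filter constant and stimulus below are illustrative choices, not the paper's parameters:

    ```python
    import numpy as np

    def lowpass(x, alpha=0.2):
        """First-order low-pass filter serving as the EMD delay line."""
        y = np.zeros_like(x)
        for t in range(1, len(x)):
            y[t] = (1 - alpha) * y[t - 1] + alpha * x[t]
        return y

    def emd_response(s1, s2):
        """Correlation-type EMD: delay-and-correlate, mirror-subtracted."""
        return lowpass(s1) * s2 - s1 * lowpass(s2)

    # Drifting sinusoid sampled by two neighbouring photoreceptors;
    # the pattern reaches receptor 2 slightly after receptor 1.
    t = np.arange(2000) * 0.01
    omega, k = 2.0, 1.0
    s1 = np.sin(omega * t)        # receptor at x = 0
    s2 = np.sin(omega * t - k)    # receptor at x = 1

    preferred = emd_response(s1, s2).mean()  # motion in preferred direction
    null = emd_response(s2, s1).mean()       # same stimulus, null direction
    ```

    The time-averaged output is positive for motion in the preferred direction and negative in the null direction, and its magnitude depends on both velocity and pattern contrast — the very entanglement that, per the abstract, turns out to encode contrast-weighted nearness.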

    The Virtual Video Camera: a system for viewpoint synthesis in arbitrary dynamic scenes (Die Virtuelle Videokamera: ein System zur Blickpunktsynthese in beliebigen, dynamischen Szenen)

    The Virtual Video Camera project strives to create free-viewpoint video from casually captured multi-view data. Multiple video streams of a dynamic scene are captured with off-the-shelf camcorders, and the user can re-render the scene from novel perspectives not covered by the original cameras. This thesis presents the algorithmic core of the Virtual Video Camera: the algorithm for image correspondence estimation as well as the image-based renderer. Furthermore, its application in the context of an actual video production is showcased, and the rendering and image-processing pipeline is extended to incorporate depth information.

    H.264-based multiple description coding using motion compensated temporal interpolation

    Multiple description coding is a framework adapted to noisy transmission environments. In this work, we use H.264 to create two descriptions of a video sequence, each of them assuring a minimum quality level. If both are received, a suitable algorithm is used to produce a sequence of improved quality. The key technique is temporal image interpolation using motion compensation, inspired by the distributed video coding context. The interpolated image blocks are weighted with the received blocks obtained from the other description. The optimal weights are computed at the encoder and efficiently sent to the decoder as side information. The proposed technique shows a remarkable gain for central decoding with respect to comparable state-of-the-art methods. ©2010 IEEE
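    The weighting step described above admits a simple least-squares sketch: for each block, the encoder knows the original and both candidate reconstructions, so it can solve for the blending weight in closed form and ship it as side information. The function and the synthetic blocks below are illustrative assumptions, not the paper's actual rate-constrained scheme:

    ```python
    import numpy as np

    def optimal_weight(x, b_interp, b_recv):
        """Least-squares weight for blending two block estimates of x.

        Minimises ||x - (w*b_interp + (1-w)*b_recv)||^2 in closed form;
        an encoder would quantise w before sending it as side information.
        """
        d = b_interp - b_recv
        denom = np.dot(d, d)
        if denom == 0.0:
            return 0.5  # identical candidates: any weight is optimal
        return np.dot(x - b_recv, d) / denom

    rng = np.random.default_rng(0)
    x = rng.normal(size=64)                    # original block (flattened 8x8)
    b_interp = x + 0.3 * rng.normal(size=64)   # motion-compensated interpolation
    b_recv = x + 0.3 * rng.normal(size=64)     # block from the other description

    w = optimal_weight(x, b_interp, b_recv)
    blended = w * b_interp + (1 - w) * b_recv
    ```

    Because w = 0 and w = 1 are both feasible choices, the optimally blended block can never be worse (in MSE) than either candidate alone — which is why central decoding gains over using one description.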