48 research outputs found

    Estimation of Dense Correspondence Fields Using Multiple Images

    Get PDF
    Most optical flow algorithms assume pairs of images that are acquired with an ideal, short exposure time. We present two approaches that use additional images of a scene to estimate highly accurate, dense correspondence fields. In our first approach we consider video sequences that are acquired with alternating exposure times, so that a short-exposure image is followed by a long-exposure image that exhibits motion blur. With the help of the two enclosing short-exposure images, we can not only decipher the motion information encoded in the long-exposure image, but also estimate occlusion timings, which are a basis for artifact-free frame interpolation. In our second approach we consider multi-view video sequences, as they commonly occur, e.g., in stereoscopic video. As several images capture nearly the same information about the scene, this redundancy can be used to establish more robust and consistent correspondence fields than two images alone permit.
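
    A worked equation may help make "the motion information encoded in the long-exposure image" concrete. Under a generic linear (locally constant-velocity) blur model, not necessarily the exact formulation used in this thesis, a long exposure can be written as the temporal average of the latent sharp image warped along the per-pixel displacement; B, I, u and T below are illustrative symbols:

        B(\mathbf{x}) = \frac{1}{T} \int_{0}^{T} I\!\left(\mathbf{x} + \tfrac{t}{T}\,\mathbf{u}(\mathbf{x})\right) \mathrm{d}t

    Here B is the long exposure of duration T, I the sharp image at the start of the exposure, and u(x) the displacement accumulated over the exposure. Inverting such a relation with the help of the enclosing short exposures constrains u(x) more strongly than a two-frame matching term alone, which is the intuition behind the first approach.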

    Cluster Dynamical Mean Field Theories

    Full text link
    Cluster Dynamical Mean Field Theories are analyzed in terms of their semiclassical limit and their causality properties, and a translation-invariant formulation of the cellular dynamical mean field theory, PCDMFT, is presented. The semiclassical limit of the cluster methods is analyzed by applying them to the Falicov-Kimball model in the limit of infinite Hubbard interaction U, where they map to different classical cluster schemes for the Ising model. Furthermore, the Cutkosky-'t Hooft-Veltman cutting equations are generalized and derived for non-translation-invariant systems using the Schwinger-Keldysh formalism. This provides a general setting to discuss causality properties of cluster methods. To illustrate the method, we prove that PCDMFT is causal while the nested cluster schemes (NCS) in general and the pair scheme in particular are not. Constraints on further extension of these schemes are discussed. Comment: 26 pages.
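
    For orientation, the Falicov-Kimball model used as the semiclassical testbed couples itinerant c electrons to localized f electrons through an on-site interaction U. The form below is the standard spinless Hamiltonian, written here for context rather than in the paper's exact conventions:

        H = -t \sum_{\langle i,j \rangle} c_i^{\dagger} c_j + E_f \sum_i f_i^{\dagger} f_i + U \sum_i c_i^{\dagger} c_i \, f_i^{\dagger} f_i

    In the limit U → ∞ the itinerant electrons are completely excluded from f-occupied sites, so the remaining statistical problem for the f occupancies is classical, which is the regime in which the cluster methods reduce to classical cluster schemes as stated in the abstract.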

    Removing chambers in Bruhat-Tits buildings

    Full text link
    We introduce and study a family of countable groups constructed from Euclidean buildings by "removing" suitably chosen subsets of chambers.

    Response of the PUI Distribution To Variable Solar Wind Conditions

    Get PDF
    We present the first systematic analysis of pickup ion (PUI) cutoff speed variations during general compression regions identified by their structure, during shock fronts, and at times of highly variable solar wind (SW) speed or magnetic field strength. This study is motivated by the attempt to remove or correct for these effects on the determination of the longitude of the interstellar neutral gas flow from the flow-pattern-related variation of the PUI cutoff with ecliptic longitude. At the same time, this study sheds light on the physical mechanisms that lead to energy transfer between the SW and the embedded PUI population. Using 2007-2014 STEREO A PLASTIC observations, we identify compression regions and shocks in the solar wind and analyze the PUI velocity distribution function (VDF). We develop a routine that identifies stream interaction regions and corotating interaction regions (CIRs) by locating the stream interface and the subsequent increase in solar wind speed and density. Characterizing these individual compression events and combining them in a superposed epoch analysis allows us to analyze the PUI population under similar conditions and to determine the local cutoff shift with adequate statistics. This method yields substantial cutoff shifts in compression regions with large solar wind speed gradients. Additionally, by sorting the entire set of PUI VDFs at high time resolution, we obtain a noticeable correlation of the cutoff shift with gradients in the SW speed and interplanetary magnetic field strength. We discuss implications for the understanding of the PUI VDF evolution and for the PUI cutoff analysis of the interstellar neutral gas flow.
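
    Superposed epoch analysis is the statistical workhorse described above: individual compression events are aligned on a common key time (e.g., the stream interface) and the measurements are averaged in bins of epoch time. The sketch below is a generic, minimal illustration of that procedure with hypothetical arrays and function names; it does not reproduce the STEREO/PLASTIC pipeline or its event selection.

        import numpy as np

        def superposed_epoch(times, values, event_times, window=2.0, bin_width=0.1):
            """Align a time series on a list of event (key) times and average it
            in bins of epoch time (time relative to each event).

            times, values : 1-D arrays, the measurement series (e.g. PUI cutoff shift)
            event_times   : key times of the identified compression events
            window        : half-width of the epoch window, same units as `times`
            bin_width     : width of the epoch-time bins
            """
            edges = np.arange(-window, window + bin_width, bin_width)
            centers = 0.5 * (edges[:-1] + edges[1:])
            sums = np.zeros(len(centers))
            counts = np.zeros(len(centers))

            for t0 in event_times:
                epoch = times - t0                       # time relative to this event
                mask = (epoch >= -window) & (epoch <= window)
                idx = np.digitize(epoch[mask], edges) - 1
                valid = (idx >= 0) & (idx < len(centers))
                np.add.at(sums, idx[valid], values[mask][valid])
                np.add.at(counts, idx[valid], 1)

            with np.errstate(divide="ignore", invalid="ignore"):
                mean = np.where(counts > 0, sums / counts, np.nan)
            return centers, mean

        # Hypothetical usage: average a cutoff-shift series around event key times.
        # centers, mean_shift = superposed_epoch(t, cutoff_shift, interface_times)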

    The Ten Martini Problem

    Full text link
    We prove the conjecture (known as the "Ten Martini Problem" after Kac and Simon) that the spectrum of the almost Mathieu operator is a Cantor set for all non-zero values of the coupling and all irrational frequencies. Comment: 31 pages, no figures.
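
    For context, the almost Mathieu operator acts on ℓ²(ℤ) and is conventionally written as below, where λ is the coupling, α the frequency, and θ the phase:

        (H_{\lambda,\alpha,\theta} u)_n = u_{n+1} + u_{n-1} + 2\lambda \cos\bigl(2\pi(\theta + n\alpha)\bigr) u_n , \qquad u \in \ell^2(\mathbb{Z})

    The theorem stated in the abstract says that the spectrum of H_{\lambda,\alpha,\theta} is a Cantor set whenever λ ≠ 0 and α is irrational.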

    Deformable Kernel Networks for Joint Image Filtering

    Get PDF
    Joint image filters are used to transfer structural details from a guidance image, used as a prior, to a target image, in tasks such as enhancing spatial resolution and suppressing noise. Previous methods based on convolutional neural networks (CNNs) combine nonlinear activations of spatially-invariant kernels to estimate structural details and regress the filtering result. In this paper, we instead learn explicitly sparse and spatially-variant kernels. We propose a CNN architecture and its efficient implementation, called the deformable kernel network (DKN), that outputs sets of neighbors and the corresponding weights adaptively for each pixel. The filtering result is then computed as a weighted average. We also propose a fast version of DKN that runs about seventeen times faster for an image of size 640 x 480. We demonstrate the effectiveness and flexibility of our models on the tasks of depth map upsampling, saliency map upsampling, cross-modality image restoration, texture removal, and semantic segmentation. In particular, we show that the weighted averaging process with sparsely sampled 3 x 3 kernels outperforms the state of the art by a significant margin in all cases. Comment: arXiv admin note: substantial text overlap with arXiv:1903.11286 (IJCV accepted).
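
    The aggregation step described above, a per-pixel weighted average over a small set of sparsely sampled neighbors, can be sketched as follows. The kernel-prediction network itself is omitted; the offsets and weights are random placeholders standing in for what a DKN-style network would regress, so this is an illustration of the weighted-averaging step only, not the authors' implementation.

        import numpy as np

        def sparse_weighted_average(target, offsets, weights):
            """Filter an image as a per-pixel weighted average of K sparsely
            sampled neighbors.

            target  : (H, W) image to be filtered (e.g. a low-quality depth map)
            offsets : (H, W, K, 2) integer sampling offsets per pixel (dy, dx)
            weights : (H, W, K) per-pixel weights, assumed to sum to 1 over K
            """
            H, W, K, _ = offsets.shape
            ys, xs = np.mgrid[0:H, 0:W]                      # pixel coordinates
            ny = np.clip(ys[..., None] + offsets[..., 0], 0, H - 1)
            nx = np.clip(xs[..., None] + offsets[..., 1], 0, W - 1)
            samples = target[ny, nx]                         # (H, W, K) neighbor values
            return np.sum(weights * samples, axis=-1)        # weighted average per pixel

        # Hypothetical usage with random placeholders for the predicted kernels:
        H, W, K = 480, 640, 9                                # e.g. 3 x 3 = 9 sparse samples
        target = np.random.rand(H, W).astype(np.float32)
        offsets = np.random.randint(-8, 9, size=(H, W, K, 2))
        weights = np.random.rand(H, W, K).astype(np.float32)
        weights /= weights.sum(axis=-1, keepdims=True)       # normalize weights per pixel
        filtered = sparse_weighted_average(target, offsets, weights)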