
    Are all things created equal? The incidental in archaeology

    Archaeologists evince a strong tendency to impute significance to the material traces they study, a propensity that has been especially marked since the post-processual emphasis on meaning and that has taken on renewed vigour with the turn to materiality. But are there not situations in which things are rather incidental or insignificant? This set of essays emerged from a workshop held in Berlin in April 2018, in which a group of scholars was invited to discuss the place of the incidental in social life in general and in archaeology in particular. Rather than lengthy formal papers, we offer an introduction that presents a general set of reflections on the issue of the incidentalness of things, followed by essays that pursue particular directions raised by that introduction as well as our discussions in Berlin. It is our hope that these brief forays into a complex topic will stimulate further work on this subject.

    Direction-dependent turning leads to anisotropic diffusion and persistence

    Cells and organisms follow aligned structures in their environment, a process that can generate persistent migration paths. Kinetic transport equations are a popular modelling tool for describing biological movements at the mesoscopic level, yet their formulations usually assume a constant turning rate. Here we relax this simplification, extending the framework to include a turning rate that varies according to the anisotropy of a heterogeneous environment. We extend known methods of parabolic and hyperbolic scaling and apply the results to cell movement on micropatterned domains. We show that inclusion of orientation dependence in the turning rate can lead to persistence of motion in an otherwise fully symmetric environment and can generate enhanced diffusion in structured domains.
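    A minimal illustration of the idea, assuming a simple two-dimensional velocity-jump process (this is not the paper's model): the turning rate is taken to depend on the walker's current heading relative to a hypothetical fibre axis, so runs along that axis are interrupted less often and the resulting spread becomes anisotropic. The form of the turning rate and all parameter values below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def turning_rate(theta, lam0=1.0, kappa=0.8):
        # Assumed direction-dependent turning rate: low when the heading is along
        # the x-axis (theta near 0 or pi), high when moving across it.
        return lam0 * (1.0 + kappa * np.sin(theta) ** 2)

    def simulate(n_walkers=2000, t_end=50.0, speed=1.0):
        pos = np.zeros((n_walkers, 2))
        theta = rng.uniform(0.0, 2.0 * np.pi, n_walkers)   # initial headings
        t = np.zeros(n_walkers)
        active = np.ones(n_walkers, dtype=bool)
        while active.any():
            idx = np.flatnonzero(active)
            lam = turning_rate(theta[idx])
            remaining = t_end - t[idx]
            # Waiting time to the next turn, clipped so no walker runs past t_end.
            dt = np.minimum(rng.exponential(1.0 / lam), remaining)
            pos[idx, 0] += speed * np.cos(theta[idx]) * dt
            pos[idx, 1] += speed * np.sin(theta[idx]) * dt
            t[idx] += dt
            still = dt < remaining                          # walkers that turned before t_end
            # Symmetric turning kernel: new heading drawn uniformly at each turn.
            theta[idx[still]] = rng.uniform(0.0, 2.0 * np.pi, still.sum())
            active[idx[~still]] = False
        return pos

    pos = simulate()
    # Larger spread along x than along y indicates anisotropic diffusion and
    # persistence induced purely by the direction-dependent turning rate.
    print("variance along x:", pos[:, 0].var(), "variance along y:", pos[:, 1].var())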

    Towards Robust Blind Face Restoration with Codebook Lookup Transformer

    Blind face restoration is a highly ill-posed problem that often requires auxiliary guidance to 1) improve the mapping from degraded inputs to desired outputs, or 2) complement high-quality details lost in the inputs. In this paper, we demonstrate that a learned discrete codebook prior in a small proxy space largely reduces the uncertainty and ambiguity of restoration mapping by casting blind face restoration as a code prediction task, while providing rich visual atoms for generating high-quality faces. Under this paradigm, we propose a Transformer-based prediction network, named CodeFormer, to model the global composition and context of the low-quality faces for code prediction, enabling the discovery of natural faces that closely approximate the target faces even when the inputs are severely degraded. To enhance adaptiveness to different degradations, we also propose a controllable feature transformation module that allows a flexible trade-off between fidelity and quality. Thanks to the expressive codebook prior and global modeling, CodeFormer outperforms the state of the art in both quality and fidelity, showing superior robustness to degradation. Extensive experimental results on synthetic and real-world datasets verify the effectiveness of our method. Comment: Accepted by NeurIPS 2022. Code: https://github.com/sczhou/CodeFormer
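    A minimal sketch of the code-prediction idea, not the released CodeFormer implementation (module sizes, layer choices, and names below are illustrative assumptions): an encoder turns the degraded face into spatial tokens, a Transformer predicts a discrete codebook index per token, and a decoder reconstructs the face from the looked-up code vectors. In the paper the codebook is learned beforehand as a high-quality prior and then treated as fixed visual atoms.

    import torch
    import torch.nn as nn

    class CodePredictionRestorer(nn.Module):
        def __init__(self, num_codes=1024, code_dim=256):
            super().__init__()
            self.encoder = nn.Sequential(                     # degraded image -> token features
                nn.Conv2d(3, code_dim, 4, stride=4), nn.ReLU(),
                nn.Conv2d(code_dim, code_dim, 4, stride=4),
            )
            layer = nn.TransformerEncoderLayer(d_model=code_dim, nhead=8, batch_first=True)
            self.transformer = nn.TransformerEncoder(layer, num_layers=4)
            self.to_logits = nn.Linear(code_dim, num_codes)   # per-token code prediction
            self.codebook = nn.Embedding(num_codes, code_dim) # frozen high-quality prior in practice
            self.decoder = nn.Sequential(                     # quantized features -> image
                nn.ConvTranspose2d(code_dim, code_dim, 4, stride=4), nn.ReLU(),
                nn.ConvTranspose2d(code_dim, 3, 4, stride=4),
            )

        def forward(self, lq_image):
            feat = self.encoder(lq_image)                     # (B, C, H/16, W/16)
            b, c, h, w = feat.shape
            tokens = feat.flatten(2).transpose(1, 2)          # (B, H*W, C)
            logits = self.to_logits(self.transformer(tokens))
            indices = logits.argmax(dim=-1)                   # discrete code prediction
            quant = self.codebook(indices)                    # look up visual atoms
            quant = quant.transpose(1, 2).reshape(b, c, h, w)
            return self.decoder(quant), logits                # restored image + logits

    x = torch.randn(1, 3, 256, 256)
    restored, logits = CodePredictionRestorer()(x)
    print(restored.shape, logits.shape)

    In this reading, the logits would be trained with a cross-entropy loss against the code indices obtained by encoding the corresponding high-quality face; the controllable feature transformation module mentioned in the abstract, which blends encoder features into the decoder for a fidelity-quality trade-off, is omitted here for brevity.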

    Understanding Deformable Alignment in Video Super-Resolution

    Deformable convolution, originally proposed for adaptation to geometric variations of objects, has recently shown compelling performance in aligning multiple frames and is increasingly adopted for video super-resolution. Despite its remarkable performance, its underlying mechanism for alignment remains unclear. In this study, we carefully investigate the relation between deformable alignment and the classic flow-based alignment. We show that deformable convolution can be decomposed into a combination of spatial warping and convolution. This decomposition reveals the commonality of deformable alignment and flow-based alignment in formulation, but with a key difference in their offset diversity. We further demonstrate through experiments that the increased diversity in deformable alignment yields better-aligned features, and hence significantly improves the quality of video super-resolution output. Based on our observations, we propose an offset-fidelity loss that guides the offset learning with optical flow. Experiments show that our loss successfully avoids the overflow of offsets and alleviates the instability problem of deformable alignment. Aside from the contributions to deformable alignment, our formulation inspires a more flexible approach to introduce offset diversity to flow-based alignment, improving its performance. Comment: Tech report, 15 pages, 19 figures
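    A rough sketch of the "spatial warping + convolution" view described above, assuming generic feature maps (this is not the paper's code; module names, the number of offset fields, and the loss form are illustrative assumptions): several learned offset fields each warp the neighbouring frame's features, a regular convolution fuses the warped copies, and a simple L1 penalty stands in for the offset-fidelity idea of keeping offsets close to a precomputed optical flow.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def warp(feat, offset):
        # Bilinearly sample `feat` at positions shifted by `offset` (B, 2, H, W), in pixels.
        b, _, h, w = feat.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        grid = torch.stack((xs, ys), dim=0).float().to(feat)       # (2, H, W), (x, y) order
        coords = grid.unsqueeze(0) + offset                        # shifted sampling points
        coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0              # normalize to [-1, 1]
        coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
        norm_grid = torch.stack((coords_x, coords_y), dim=-1)      # (B, H, W, 2)
        return F.grid_sample(feat, norm_grid, align_corners=True)

    class WarpThenConvAlign(nn.Module):
        def __init__(self, channels=64, num_offsets=8):
            super().__init__()
            self.num_offsets = num_offsets
            # Predict several diverse offset fields from the concatenated frame features.
            self.offset_head = nn.Conv2d(2 * channels, 2 * num_offsets, 3, padding=1)
            self.fuse = nn.Conv2d(channels * num_offsets, channels, 3, padding=1)

        def forward(self, ref_feat, nbr_feat, flow=None):
            offsets = self.offset_head(torch.cat([ref_feat, nbr_feat], dim=1))
            offsets = offsets.view(ref_feat.size(0), self.num_offsets, 2, *ref_feat.shape[-2:])
            warped = [warp(nbr_feat, offsets[:, k]) for k in range(self.num_offsets)]
            aligned = self.fuse(torch.cat(warped, dim=1))          # warp, then convolve
            # Hedged stand-in for an offset-fidelity penalty: keep the mean offset near the flow.
            loss = (offsets.mean(dim=1) - flow).abs().mean() if flow is not None else None
            return aligned, loss

    ref = torch.randn(1, 64, 32, 32)
    nbr = torch.randn(1, 64, 32, 32)
    flow = torch.zeros(1, 2, 32, 32)
    aligned, loss = WarpThenConvAlign()(ref, nbr, flow)
    print(aligned.shape, None if loss is None else loss.item())

    With a single offset field and an identity fusion this reduces to plain flow-based warping; the extra offset fields are what the abstract refers to as offset diversity.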