688 research outputs found

    New Data on the Ancient Vod Culture

    The Chud of the Vodskaja Pyatina

    Elements Distribution in Soil and Plants of an Old Copper Slag Dump in the Middle Urals, Russia

    The concentration of elements in soil and their accumulation in plants growing spontaneously on an old copper slag dump were studied. The research object was a slag dump of the Polevskoy copper smelter (Middle Urals, Russia), which is about 200 years old. We investigated composite samples consisting of soil blocks (20 × 20 cm) together with the plants growing on them. Samples were collected at equal intervals along a 4–5 m transect. Each composite sample was divided into slag fractions (stone, gravel, and fine soil with particles smaller than 1 mm) and plant fractions (moss and roots, stems and leaves). The trace element analysis of the samples was carried out at the analytical center of the Institute of Geology and Geochemistry, Ural Branch of RAS, by inductively coupled plasma mass spectrometry on an Elan-9000 ICP mass spectrometer. Over the past two hundred years, a technogenic soil 10–15 cm thick has formed on the dump of cast copper slag. Fine soil constitutes more than one third of the technogenic soil mass and acts as a sorption geochemical barrier, accumulating elements mobilized from the slag. The concentration of most elements in fine soil is 1–2 orders of magnitude higher than in slag stone. Pb, Cd, and Bi are retained in fine soil particularly effectively: their content is 700–1000 times higher than in slag stone. Under conditions of an unlimited supply of elements released from the slag, plants reach their upper accumulation threshold. Compared with the litter fraction (roots and moss), the aboveground plant parts have lower concentrations of all elements but show a stronger ability to accumulate selenium.
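    The enrichment reported above (fine soil vs. slag stone) comes down to a simple ratio over the ICP-MS concentrations. Below is a minimal Python sketch of that bookkeeping; all element concentrations in it are hypothetical placeholders for illustration, not measurements from this study.

```python
# Hypothetical illustration of the enrichment factors discussed above:
# the ratio of an element's concentration in fine soil (< 1 mm particles)
# to its concentration in slag stone. The numbers are placeholders,
# not data from the Polevskoy dump.

slag_stone_ppm = {"Pb": 2.0, "Cd": 0.05, "Bi": 0.1, "Cu": 500.0}      # assumed values
fine_soil_ppm = {"Pb": 1600.0, "Cd": 45.0, "Bi": 80.0, "Cu": 9000.0}  # assumed values

def enrichment_factor(fine: float, stone: float) -> float:
    """Concentration in fine soil divided by concentration in slag stone."""
    return fine / stone

for element in slag_stone_ppm:
    ef = enrichment_factor(fine_soil_ppm[element], slag_stone_ppm[element])
    print(f"{element}: {ef:.0f}x enrichment in fine soil")
```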

    Hypernymy Understanding Evaluation of Text-to-Image Models via WordNet Hierarchy

    Text-to-image synthesis has recently attracted widespread attention due to rapidly improving quality and numerous practical applications. However, the language understanding capabilities of text-to-image models are still poorly understood, which makes it difficult to reason about prompt formulations that a given model would understand well. In this work, we measure the capability of popular text-to-image models to understand hypernymy, or the "is-a" relation between words. We design two automatic metrics based on the WordNet semantic hierarchy and existing image classifiers pretrained on ImageNet. These metrics both enable broad quantitative comparison of linguistic capabilities for text-to-image models and offer a way of finding fine-grained qualitative differences, such as words that are unknown to models and thus are difficult for them to draw. We comprehensively evaluate popular text-to-image models, including GLIDE, Latent Diffusion, and Stable Diffusion, showing how our metrics can provide a better understanding of the individual strengths and weaknesses of these models.
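    To make the metric construction concrete, here is a minimal sketch of an "in-subtree" score, assuming the generated images for a hypernym prompt have already been run through an ImageNet classifier and each predicted class mapped to its WordNet synset. The function names and the exact aggregation are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of a WordNet-based hypernymy score for a text-to-image model.
# Assumption: each image generated for the prompt word has been classified by
# an ImageNet model, and every predicted class is given as a WordNet synset
# name (ImageNet classes are themselves defined by WordNet synsets).
# Requires: pip install nltk; then nltk.download("wordnet") once.

from nltk.corpus import wordnet as wn

def hyponym_closure(synset_name: str) -> set:
    """Names of the prompted synset and all of its transitive hyponyms."""
    root = wn.synset(synset_name)
    return {root.name()} | {s.name() for s in root.closure(lambda s: s.hyponyms())}

def in_subtree_rate(prompt_synset: str, predicted_synsets: list[str]) -> float:
    """Fraction of generated images whose predicted class falls under the
    prompted hypernym in the WordNet hierarchy."""
    subtree = hyponym_closure(prompt_synset)
    hits = sum(name in subtree for name in predicted_synsets)
    return hits / len(predicted_synsets) if predicted_synsets else 0.0

# Hypothetical usage: classifier predictions for images generated from "a photo of a dog".
score = in_subtree_rate("dog.n.01", ["beagle.n.01", "tabby.n.01", "pug.n.01"])
print(f"in-subtree rate: {score:.2f}")  # 2 of 3 predictions are kinds of dog
```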

    Is This Loss Informative? Faster Text-to-Image Customization by Tracking Objective Dynamics

    Text-to-image generation models represent the next step of evolution in image synthesis, offering a natural way to achieve flexible yet fine-grained control over the result. One emerging area of research is the fast adaptation of large text-to-image models to smaller datasets or new visual concepts. However, many efficient methods of adaptation have a long training time, which limits their practical applications, slows down research experiments, and consumes excessive GPU resources. In this work, we study the training dynamics of popular text-to-image personalization methods (such as Textual Inversion or DreamBooth), aiming to speed them up. We observe that most concepts are learned at early stages and do not improve in quality later, but standard model convergence metrics fail to indicate that. Instead, we propose a simple drop-in early stopping criterion that only requires computing the regular training objective on a fixed set of inputs for all training iterations. Our experiments on Stable Diffusion for a range of concepts and for three personalization methods demonstrate the competitive performance of our approach, making adaptation up to 8 times faster with no significant drops in quality.
    Comment: Code: https://github.com/yandex-research/DVAR. 19 pages, 14 figures.
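    As a concrete illustration of the idea described above, here is a minimal sketch of an early-stopping rule driven by the training objective evaluated on a fixed set of inputs (same images, noise, and timesteps every iteration, so the signal is deterministic). The rolling-variance rule and its thresholds are assumptions for illustration, not the exact criterion from the paper or its DVAR codebase.

```python
# Minimal sketch of early stopping for text-to-image personalization based on
# the regular training loss computed on a fixed evaluation batch each iteration.
# Window size and thresholds below are illustrative assumptions.

from collections import deque
import statistics

class FixedInputEarlyStopper:
    def __init__(self, window: int = 200, rel_var_threshold: float = 1e-3,
                 min_steps: int = 400):
        self.losses = deque(maxlen=window)  # rolling window of fixed-input losses
        self.rel_var_threshold = rel_var_threshold
        self.min_steps = min_steps
        self.steps = 0

    def should_stop(self, fixed_input_loss: float) -> bool:
        """Call once per training iteration with the loss on the fixed inputs;
        returns True when that signal has plateaued."""
        self.losses.append(fixed_input_loss)
        self.steps += 1
        if self.steps < self.min_steps or len(self.losses) < self.losses.maxlen:
            return False
        mean = statistics.fmean(self.losses)
        var = statistics.pvariance(self.losses)
        # Stop when the fixed-input loss barely fluctuates around its mean.
        return var / (mean ** 2 + 1e-12) < self.rel_var_threshold

# Hypothetical usage inside a Textual Inversion / DreamBooth training loop:
# stopper = FixedInputEarlyStopper()
# for step in range(max_steps):
#     ...one optimization step on the real training batch...
#     loss_fixed = compute_loss(model, fixed_batch)  # assumed helper
#     if stopper.should_stop(float(loss_fixed)):
#         break
```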