
    End-to-end Sampling Patterns

    Sample patterns have many uses in Computer Graphics, ranging from procedural object placement and Monte Carlo image synthesis to non-photorealistic depiction. Their properties, such as discrepancy, spectra, anisotropy, or progressiveness, have been analyzed extensively. However, designing methods that produce sampling patterns with certain properties can require substantial hand-crafting effort in coding, mathematical derivation, and compute time. In particular, there is no systematic way to derive the best sampling algorithm for a specific end-task. To tackle this issue, we suggest another level of abstraction: a toolkit that optimizes end-to-end over all sampling methods to find the one producing user-prescribed properties, such as a discrepancy or a spectrum, that best fit the end-task. A user simply implements the forward losses, and the sampling method is found automatically -- without coding or mathematical derivation -- by making use of the back-propagation abilities of modern deep learning frameworks. While this optimization is slow, at deployment time the resulting sampling method executes quickly, as iterated unstructured non-linear filtering using radial basis functions (RBFs) to represent high-dimensional kernels. Several important previous methods are special cases of this approach; we compare it to previous work and demonstrate its usefulness in several typical Computer Graphics applications. Finally, we propose sampling patterns with properties not shown before, such as high-dimensional blue noise with projective properties.
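The core idea, treating sample positions themselves as parameters and descending a differentiable loss, can be illustrated without a deep learning framework. The sketch below is our own toy, not the paper's toolkit: it runs plain NumPy gradient descent on a Gaussian pair-repulsion loss as a crude stand-in for the spectrum or discrepancy losses the abstract describes. The loss, `sigma`, step size, and iteration count are all invented for illustration.

```python
import numpy as np

def repulsion_grad(points, sigma=0.1):
    """Gradient of sum_{i!=j} exp(-||p_i - p_j||^2 / (2 sigma^2)),
    a loss that pushes samples apart (a crude blue-noise surrogate)."""
    diff = points[:, None, :] - points[None, :, :]   # (n, n, d) pairwise offsets
    d2 = (diff ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(w, 0.0)                         # no self-interaction
    return -2.0 * (w[:, :, None] * diff).sum(axis=1) / sigma ** 2

rng = np.random.default_rng(0)
pts = rng.random((64, 2))                 # start from white noise in [0,1)^2
for _ in range(200):                      # plain gradient descent on positions
    pts -= 0.002 * repulsion_grad(pts)
    pts %= 1.0                            # keep samples in the unit square
```

After optimization, the mean nearest-neighbour spacing grows relative to the white-noise start, a rough indicator of a blue-noise-like pattern; the paper's framework replaces this hand-derived gradient with automatic differentiation of arbitrary user losses.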

    Assessing the importance of zooplankton sampling patterns with an ecosystem model

    The copepod Calanus finmarchicus is the dominant mesozooplankton species in the Norwegian Sea and an important food source for several commercially exploited pelagic fish stocks. In addition to the patchy distribution of the species, the vast size of the Norwegian Sea makes synoptic zooplankton monitoring challenging. Monitoring includes relatively few sampling stations, and both their number and their geographical locations vary among years. In the present study, we explored the sampling patterns in two existing datasets: (1) for the period 1994-2004, size-fractionated zooplankton biomass, which allows estimation of C. finmarchicus fractions, at irregularly spaced locations; and (2) for the period 1995-2017, non-size-fractionated zooplankton biomass data, gridded by objective analysis. We first assessed the C. finmarchicus dataset by virtual sampling of C. finmarchicus spatial fields from the end-to-end ecosystem model NORWECOM.E2E. We found that inconsistent sampling patterns during the month of May caused the biomass estimate to depend strongly on the chosen sampling strategy: sampling patterns from the first part of the period generally produced the highest biomass estimates. We then assessed the gridded zooplankton dataset by applying the 1995-2004 sampling patterns as well as a recent (2020) sampling pattern, which included regular and more numerous sampling locations, and found systematic differences. We conclude that the present May sampling pattern is much more robust and thereby more likely to provide a good estimate of the interannual variability of the total biomass in the area. This study is an example of how models can be used to mechanistically interpret experimental datasets and, more specifically, how models can be used to assess sampling patterns and reveal their limitations.
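The virtual-sampling idea, evaluating a survey design by sampling a model's "true" field only at the survey's station positions, can be sketched in a few lines. Everything below is invented for illustration (the synthetic patchy field, both station sets, and the grid spacing); it is not NORWECOM.E2E output, but it shows why a sparse, clustered design can bias the area-mean estimate while a regular grid does not.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical patchy "model truth" on a 100x100 grid
x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
field = 1.0 + 0.8 * np.sin(4 * np.pi * x) * np.cos(3 * np.pi * y)
true_mean = field.mean()

def survey_mean(field, rows, cols):
    """Area-mean estimate from virtual stations at (rows, cols)."""
    return field[rows, cols].mean()

# Strategy A: few stations clustered in one corner (irregular historical survey)
ra = rng.integers(0, 30, 15)
ca = rng.integers(0, 30, 15)
# Strategy B: a regular 10x10 station grid (like the more numerous 2020 design)
rb, cb = np.meshgrid(np.arange(5, 100, 10), np.arange(5, 100, 10))

err_a = abs(survey_mean(field, ra, ca) - true_mean)
err_b = abs(survey_mean(field, rb.ravel(), cb.ravel()) - true_mean)
# The clustered design inherits the local patch's bias; the regular grid
# averages over the patches and tracks the true mean far more closely.
```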

    Ghost translation

    Artificial intelligence has recently been widely used in computational imaging. Deep neural networks (DNNs) improve the signal-to-noise ratio of retrieved images whose quality would otherwise be degraded by low sampling ratios or noisy environments. This work proposes a new computational imaging scheme based on the sequence transduction mechanism of the transformer network. A simulated database helps the network achieve signal-translation ability: the experimental signal from a single-pixel detector is `translated' into a 2D image in an end-to-end manner. High-quality images with no background noise can be retrieved at a sampling ratio as low as 2%. The illumination patterns can be either well-designed speckle patterns for sub-Nyquist imaging or random speckle patterns. Moreover, the method is robust to noise interference. This translation mechanism opens a new direction for DNN-assisted ghost imaging and can be used in various computational imaging scenarios. Comment: 10 pages, 8 figures.
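The single-pixel forward model behind ghost imaging is simple to simulate. The sketch below (our own toy scene and sizes, not the paper's setup) generates random speckle patterns, records the scalar "bucket" signal for each, and applies classical correlation-based ghost imaging; at a 2% sampling ratio that classical reconstruction is hopelessly noisy, which is exactly the gap a learned signal-to-image "translator" network is meant to close.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32                                        # hypothetical 32x32 scene
obj = np.zeros((n, n))
obj[8:24, 12:20] = 1.0                        # a bright rectangle as the object

m = int(0.02 * n * n)                         # 2% sampling ratio -> 20 patterns
patterns = rng.random((m, n, n))              # random speckle illumination
bucket = (patterns * obj).sum(axis=(1, 2))    # single-pixel (bucket) signals

# Classical correlation ghost imaging: correlate pattern fluctuations with
# bucket fluctuations. With only 20 measurements the estimate is very noisy.
recon = ((bucket - bucket.mean())[:, None, None] * patterns).mean(axis=0)
```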

    Multiscale sampling model for motion integration

    Biologically plausible strategies for visual scene integration across spatial and temporal domains remain a challenging topic. The fundamental question we address is whether classical problems in motion integration, such as the aperture problem, can be solved in a model that samples the visual scene at multiple spatial and temporal scales in parallel. We hypothesize that fast interareal connections allowing feedback of information between cortical layers are the key processes that disambiguate motion direction. We developed a neural model showing how the aperture problem can be solved using different spatial sampling scales between LGN, V1 layer 4, V1 layer 6, and area MT. Our results suggest that multiscale sampling, rather than feedback per se, is the key process that gives rise to end-stopped cells in V1 and enables area MT to solve the aperture problem without calculating intersecting constraints or crafting intricate patterns of spatiotemporal receptive fields. Furthermore, the model explains why end-stopped cells no longer emerge in the absence of V1 layer 6 activity (Bolz & Gilbert, 1986), why V1 layer 4 cells are significantly more end-stopped than V1 layer 6 cells (Pack, Livingstone, Duffy, & Born, 2003), and how a solution to the aperture problem can exist in area MT with no solution in V1 in the presence of driving feedback. In summary, while much research in the field focuses on how a laminar architecture can give rise to complicated spatiotemporal receptive fields that solve problems in the motion domain, we show that motion integration can be reframed as an emergent property of multiscale sampling achieved concurrently within laminae and across multiple visual areas. This work was supported in part by CELEST, a National Science Foundation Science of Learning Center; NSF SBE-0354378 and OMA-0835976; ONR N00014-11-1-0535; and AFOSR FA9550-12-1-0436.

    FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis

    Denoising diffusion probabilistic models (DDPMs) have recently achieved leading performance in many generative tasks. However, the cost of their inherited iterative sampling process has hindered their application to speech synthesis. This paper proposes FastDiff, a fast conditional diffusion model for high-quality speech synthesis. FastDiff employs a stack of time-aware location-variable convolutions with diverse receptive-field patterns to efficiently model long-term time dependencies under adaptive conditions. A noise-schedule predictor is also adopted to reduce the number of sampling steps without sacrificing generation quality. Based on FastDiff, we design an end-to-end text-to-speech synthesizer, FastDiff-TTS, which generates high-fidelity speech waveforms without any intermediate features (e.g., mel-spectrograms). Our evaluation of FastDiff demonstrates state-of-the-art results, with higher-quality (MOS 4.28) speech samples. FastDiff also enables sampling 58x faster than real time on a V100 GPU, making diffusion models practically applicable to speech synthesis deployment for the first time. We further show that FastDiff generalizes well to mel-spectrogram inversion for unseen speakers, and that FastDiff-TTS outperforms other competing methods in end-to-end text-to-speech synthesis. Audio samples are available at \url{https://FastDiff.github.io/}. Comment: Accepted by IJCAI 2022.
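Why a shorter noise schedule matters can be seen in a minimal deterministic (DDIM-style) sampler that walks any subsequence of the training schedule. The sketch below is not FastDiff: the "denoiser" is an oracle that knows the clean target, standing in for the learned network, and the linear beta schedule and step counts are made up. With an exact denoiser even four steps land on the target, which is why the real bottleneck is how well a network plus a (possibly predicted) schedule tolerates few steps.

```python
import numpy as np

# Made-up 1000-step training schedule
betas = np.linspace(1e-4, 0.05, 1000)
alphas_bar = np.cumprod(1.0 - betas)

target = np.sin(np.linspace(0, 2 * np.pi, 64))   # toy "clean waveform"

def denoiser(x_t, t):
    """Oracle epsilon-prediction: recovers the exact noise via the known target."""
    a = np.sqrt(alphas_bar[t])
    return (x_t - a * target) / np.sqrt(1.0 - alphas_bar[t])

def ddim_sample(steps, seed=0):
    """Deterministic reverse diffusion over a `steps`-point schedule subsequence."""
    ts = np.linspace(999, 0, steps).astype(int)
    x = np.random.default_rng(seed).normal(size=64)   # start from pure noise
    for t, t_prev in zip(ts[:-1], ts[1:]):
        eps = denoiser(x, t)
        # Predict the clean signal, then jump directly to the earlier timestep
        x0 = (x - np.sqrt(1 - alphas_bar[t]) * eps) / np.sqrt(alphas_bar[t])
        x = np.sqrt(alphas_bar[t_prev]) * x0 + np.sqrt(1 - alphas_bar[t_prev]) * eps
    return x
```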

    Sur la génération de schémas d'échantillonnage compressé en IRM (On the generation of compressed sampling schemes in MRI)

    This article contains two contributions. First, we describe the state-of-the-art theories of compressed sensing for Magnetic Resonance Imaging (MRI), which allows us to bring out important principles that should guide the generation of sampling patterns. Second, we describe an original methodology for designing sampling schemes that perform well in terms of acquisition time and reconstruction quality. It consists of projecting a sampling density onto the space of feasible measures for MRI. We conclude by comparing against current sampling strategies on simulated data, illustrating the well-foundedness of our approach.
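The density-based viewpoint can be illustrated with the naive baseline the projection methodology improves upon: drawing k-space locations independently from a variable density, without any feasibility (trajectory) constraint. The radial density, bandwidth, and 20% budget below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
fx, fy = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
r = np.sqrt(fx ** 2 + fy ** 2)               # distance to the k-space centre

# Hypothetical radial density, peaked at low frequencies where MRI energy lies
density = 1.0 / (1.0 + (r / 0.05) ** 2)
density /= density.sum()

budget = int(0.2 * n * n)                    # keep 20% of k-space samples
chosen = rng.choice(n * n, size=budget, replace=False, p=density.ravel())
mask = np.zeros(n * n, dtype=bool)
mask[chosen] = True
mask = mask.reshape(n, n)                    # True where k-space is measured
```

The article's contribution is to replace this i.i.d. drawing with a projection of the target density onto the set of physically feasible MRI sampling measures (e.g., continuous gradient-constrained trajectories), which plain masks like this one ignore.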

    Dinosaur biogeographic structure and Mesozoic continental fragmentation: a network-based approach

    Aim: To reconstruct dinosaur macro-biogeographical patterns through the Mesozoic Era using a network-based approach, and to test how continental fragmentation affected dinosaur macro-biogeographical structure and evolutionary rates. Location: A global occurrence database of dinosaur families from the Late Triassic to the end-Cretaceous was used for this study. Methods: Biogeographical and geographical network models were constructed. Continental landmasses were linked by direct continental contact and sea-level (SL)-conditioned connections in the geographical networks, and by shared dinosaur families in the biogeographical networks. Biogeographical networks were run with raw, novel, and first-step connections for all dinosaurs and for ornithischian, theropod, and sauropodomorph taxa. Results: Geographical connectedness declines through time, from peak aggregation in the Triassic-Jurassic to complete separation in the latest Cretaceous. Biogeographical connectedness shows no common trend in the raw and novel connection network models, but decreases through time, showing some correlation with continental fragmentation, in most of the first-step network models. Despite continental isolation and high SLs, intercontinental faunal exchange continued right up to the end of the Cretaceous. Continental fragmentation and dinosaurian macro-biogeographical structure do not share a common pattern with dinosaurian evolutionary rates, although there is evidence that increased continental isolation raised origination rates in some dinosaurian lineages. Spatiotemporal sampling biases and the early Mesozoic establishment of family-level distribution patterns are important drivers of apparent dinosaur macro-biogeographical structure. Main conclusions: There is some evidence that dinosaur macro-biogeographical structure was influenced by continental fragmentation, although intercontinental exchange of dinosaur faunas appears to have continued up to the end of the Cretaceous. Macro-biogeographical patterns are obscured by uneven geographical sampling through time and by a residual early Mesozoic distribution that was sustained up to the end of the Cretaceous.
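The biogeographical-network construction itself is straightforward: landmasses become nodes, and an edge appears when two landmasses share at least one dinosaur family. The sketch below uses invented occurrence lists, not the study's database, and computes a simple connectedness measure (realised links over possible links) of the kind compared against continental fragmentation above.

```python
from itertools import combinations

# Hypothetical family occurrences per landmass (illustrative only)
occurrences = {
    "Laurasia_W": {"Tyrannosauridae", "Hadrosauridae", "Ceratopsidae"},
    "Laurasia_E": {"Tyrannosauridae", "Hadrosauridae"},
    "Gondwana_S": {"Abelisauridae", "Titanosauria"},
    "Gondwana_N": {"Abelisauridae", "Titanosauria", "Hadrosauridae"},
}

# Link two landmasses whenever they share at least one family
edges = {frozenset((a, b))
         for a, b in combinations(occurrences, 2)
         if occurrences[a] & occurrences[b]}

# Biogeographical connectedness: realised links / possible links
n = len(occurrences)
connectedness = len(edges) / (n * (n - 1) / 2)   # 4 of 6 possible links here
```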