
    Small Transformers Compute Universal Metric Embeddings

    We study representations of data from an arbitrary metric space X in the space of univariate Gaussian mixtures equipped with a transport metric (Delon and Desolneux 2020). We derive embedding guarantees for feature maps implemented by small neural networks called probabilistic transformers. Our guarantees are of memorization type: we prove that a probabilistic transformer of depth about n log(n) and width about n^2 can bi-Hölder embed any n-point dataset from X with low metric distortion, thus avoiding the curse of dimensionality. We further derive probabilistic bi-Lipschitz guarantees, which trade off the amount of distortion against the probability that a randomly chosen pair of points embeds with that distortion. If X's geometry is sufficiently regular, we obtain stronger bi-Lipschitz guarantees for all points in the dataset. As applications, we derive neural embedding guarantees for datasets from Riemannian manifolds, metric trees, and certain types of combinatorial graphs. When instead embedding into multivariate Gaussian mixtures, we show that probabilistic transformers can compute bi-Hölder embeddings with arbitrarily small distortion.
    Comment: 42 pages, 10 figures, 3 tables
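    The two-sided distortion bound behind the guarantee above can be written out explicitly. The convention, constants, and exponents below are generic placeholders for illustration, not the paper's exact values:

```latex
% A feature map f from the n-point dataset into the Gaussian-mixture
% space is bi-Hoelder with constants c, C > 0 and exponents
% 0 < beta <= 1 <= alpha (generic convention; the paper's precise
% constants and exponents may differ) if, for all dataset points x, y:
c \, d_{\mathcal{X}}(x,y)^{\alpha}
  \;\le\;
  \mathcal{W}\bigl(f(x),\, f(y)\bigr)
  \;\le\;
  C \, d_{\mathcal{X}}(x,y)^{\beta}
```

    Here d_X is the metric on X and W is the transport metric on Gaussian mixtures; the bi-Lipschitz case is alpha = beta = 1.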

    FunkNN: Neural Interpolation for Functional Generation

    Can we build continuous generative models which generalize across scales, can be evaluated at any coordinate, admit calculation of exact derivatives, and are conceptually simple? Existing MLP-based architectures generate worse samples than grid-based generators with favorable convolutional inductive biases. Models that focus on generating images at different scales do better, but employ complex architectures not designed for continuous evaluation of images and derivatives. We take a signal-processing perspective and treat continuous image generation as interpolation from samples. Indeed, correctly sampled discrete images contain all information about the low spatial frequencies. The question is then how to extrapolate the spectrum in a data-driven way while meeting the above design criteria. Our answer is FunkNN -- a new convolutional network which learns how to reconstruct continuous images at arbitrary coordinates and can be applied to any image dataset. Combined with a discrete generative model it becomes a functional generator which can act as a prior in continuous ill-posed inverse problems. We show that FunkNN generates high-quality continuous images and exhibits strong out-of-distribution performance thanks to its patch-based design. We further showcase its performance in several stylized inverse problems with exact spatial derivatives.
    Comment: 17 pages, 13 figures
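    The classical, non-learned baseline for the "continuous generation as interpolation from samples" view is fixed-kernel interpolation of a discrete image at fractional coordinates. The sketch below shows that baseline only; FunkNN replaces the fixed kernel with a learned convolutional predictor operating on local patches:

```python
import numpy as np

def bilinear_interp(img, ys, xs):
    """Evaluate a discrete image at arbitrary fractional coordinates.

    Illustrative sketch of the signal-processing baseline the abstract
    alludes to, not the paper's architecture: a fixed bilinear kernel
    blends the four grid samples surrounding each query point.
    """
    h, w = img.shape
    # Indices of the top-left corner of the enclosing grid cell.
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    dy, dx = ys - y0, xs - x0
    # Blend horizontally on the two rows, then vertically.
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x0 + 1]
    bot = (1 - dx) * img[y0 + 1, x0] + dx * img[y0 + 1, x0 + 1]
    return (1 - dy) * top + dy * bot
```

    Because the output is an explicit polynomial in (ys, xs), spatial derivatives of the interpolant are available in closed form, which is the property the abstract's design criteria ask for.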

    Joint Cryo-ET Alignment and Reconstruction with Neural Deformation Fields

    We propose a framework to jointly determine the deformation parameters and reconstruct the unknown volume in electron cryotomography (CryoET). CryoET aims to reconstruct three-dimensional biological samples from two-dimensional projections. A major challenge is that we can only acquire projections for a limited range of tilts, and that each projection undergoes an unknown deformation during acquisition. Not accounting for these deformations results in poor reconstruction. Existing CryoET software packages attempt to align the projections, often in a workflow that relies on manual feedback. Our proposed method sidesteps this inconvenience by automatically computing a set of undeformed projections while simultaneously reconstructing the unknown volume. We achieve this by learning a continuous representation of the undeformed measurements and deformation parameters. We show that our approach enables the recovery of high-frequency details that are destroyed without accounting for deformations.
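    The idea of recovering alignment parameters and the underlying signal together can be conveyed with a drastically simplified 1-D analogue. The sketch below assumes each observation is a circularly shifted copy of one unknown signal, estimates each shift by cross-correlation, and averages; the paper solves a far harder problem (unknown smooth deformations, limited tilt range, neural representations):

```python
import numpy as np

def align_and_average(observations, ref_index=0):
    """Toy joint alignment and reconstruction for 1-D circular shifts.

    Each row of `observations` is assumed to be np.roll(signal, s) for
    some unknown integer s.  The shift relative to a reference row is
    found as the peak of the circular cross-correlation (computed via
    FFT), undone with np.roll, and the aligned copies are averaged.
    """
    ref = observations[ref_index]
    aligned = []
    for obs in observations:
        # Circular cross-correlation of obs with ref; its argmax is
        # the relative shift between the two rows.
        corr = np.fft.ifft(np.fft.fft(obs) * np.conj(np.fft.fft(ref))).real
        shift = int(np.argmax(corr))
        aligned.append(np.roll(obs, -shift))
    return np.mean(aligned, axis=0)
```

    The reconstruction is recovered up to a global shift (the reference row's unknown shift), mirroring how tomographic alignment is only defined up to a global pose.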

    Differentiable Uncalibrated Imaging

    We propose a differentiable imaging framework to address uncertainty in measurement coordinates such as sensor locations and projection angles. We formulate the problem as measurement interpolation at unknown nodes supervised through the forward operator. To solve it, we apply implicit neural networks, also known as neural fields, which are naturally differentiable with respect to the input coordinates. We also develop differentiable spline interpolators which perform as well as neural networks, require less time to optimize, and have well-understood properties. Differentiability is key as it allows us to jointly fit a measurement representation, optimize over the uncertain measurement coordinates, and perform image reconstruction, which in turn ensures consistent calibration. We apply our approach to 2D and 3D computed tomography and show that it produces improved reconstructions compared to baselines that do not account for the lack of calibration. The flexibility of the proposed framework makes it easy to apply to almost arbitrary imaging problems.
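    The key property (differentiability of the interpolant with respect to its input coordinate) can be illustrated without any neural network. The minimal sketch below uses piecewise-linear interpolation of 1-D measurements and returns the analytic coordinate derivative, which is what makes gradient-based optimization over uncertain coordinates possible; the paper itself uses neural fields or differentiable splines:

```python
import numpy as np

def interp_and_grad(samples, t):
    """Linear interpolation of 1-D measurements at coordinate t, plus
    the analytic derivative of the interpolant with respect to t.

    Illustrative sketch only: with this derivative in hand, an
    uncertain sensor coordinate t can be updated by gradient descent
    while the measurement representation is fitted jointly.
    """
    # Index of the left sample of the interval containing t.
    i = int(np.clip(np.floor(t), 0, len(samples) - 2))
    frac = t - i
    value = (1 - frac) * samples[i] + frac * samples[i + 1]
    # The interpolant is affine in t on each interval, so its
    # coordinate derivative is the local slope.
    dvalue_dt = samples[i + 1] - samples[i]
    return value, dvalue_dt
```

    A neural field gives the same interface (value and exact input-coordinate gradient via automatic differentiation) but with a smooth, learned interpolant instead of a piecewise-linear one.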

    Manifold Rewiring for Unlabeled Imaging

    Geometric data analysis relies on graphs that are either given as input or inferred from data. These graphs are often treated as "correct" when solving downstream tasks such as graph signal denoising. But real-world graphs are known to contain missing and spurious links. Similarly, graphs inferred from noisy data will be perturbed. We thus define and study the problem of graph denoising, as opposed to graph signal denoising, and propose an approach based on link-prediction graph neural networks. We focus in particular on neighborhood graphs over point clouds sampled from low-dimensional manifolds, such as those arising in imaging inverse problems and exploratory data analysis. We illustrate our graph denoising framework on regular synthetic graphs and then apply it to single-particle cryo-EM, where the measurements are corrupted by very high levels of noise. Due to this degradation, the initial graph is contaminated by noise, leading to missing or spurious edges. We show that our proposed graph denoising algorithm improves the state-of-the-art performance of multi-frequency vector diffusion maps.
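    The neighborhood graphs the abstract proposes to denoise are typically built as follows. The sketch constructs a symmetric k-nearest-neighbor adjacency matrix from a point cloud; it is only the noisy input to a denoiser, whereas the paper's denoiser itself is a link-prediction graph neural network:

```python
import numpy as np

def knn_graph(points, k):
    """Symmetric k-nearest-neighbor adjacency matrix for a point cloud.

    Noise in the points perturbs pairwise distances, so edges of this
    graph go missing or appear spuriously -- exactly the corruption
    that graph denoising (as opposed to graph signal denoising) targets.
    """
    # Pairwise Euclidean distances, with self-distances excluded.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    # Indices of each point's k closest neighbors.
    nn = np.argsort(d, axis=1)[:, :k]
    a = np.zeros(d.shape, dtype=bool)
    rows = np.repeat(np.arange(len(points)), k)
    a[rows, nn.ravel()] = True
    # Symmetrize: keep an edge if either endpoint selected the other.
    return a | a.T
```

    A link-prediction model would then score each candidate pair and rewire this adjacency, removing low-scoring edges and adding high-scoring missing ones.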

    Experts bodies, experts minds: How physical and mental training shape the brain

    Skill learning is the improvement in perceptual, cognitive, or motor performance following practice. Expert performance levels can be achieved with well-organized knowledge, using sophisticated and specific mental representations and cognitive processing, applying automatic sequences quickly and efficiently, being able to deal with large amounts of information, and coping with many other challenging task demands and situations that otherwise paralyze the performance of novices. The neural reorganizations that occur with expertise reflect the optimization of neurocognitive resources to deal with the complex computational load needed to achieve peak performance. As such, capitalizing on neuronal plasticity, brain modifications take place over time with practice and during the consolidation process. One major challenge is to investigate the neural substrates and cognitive mechanisms engaged in expertise, and to define "expertise" from its neural and cognitive underpinnings. Recent insights showed that many brain structures are recruited during task performance, but only activity in regions related to domain-specific knowledge distinguishes experts from novices. The present review focuses on three expertise domains placed across a motor-to-mental gradient of skill learning: sequential motor skill, mental simulation of the movement (motor imagery), and meditation as a paradigmatic example of "pure" mental training. We first describe results on each specific domain from initial skill acquisition to expert performance, including recent results on the corresponding underlying neural mechanisms. We then discuss differences and similarities between these domains with the aim of identifying the highlights of the neurocognitive processes underpinning expertise, and conclude with suggestions for future research.

    Functional Structure of Spontaneous Sleep Slow Oscillation Activity in Humans

    Background: During non-rapid eye movement (NREM) sleep, synchronous neural oscillations between neural silence (down state) and neural activity (up state) occur. Sleep Slow Oscillation (SSO) events are their EEG correlates. Each event has an origin site and propagates, sweeping the scalp. While recent findings suggest a key role for SSOs in memory consolidation processes, the structure and propagation of individual SSO events, as well as their modulation by sleep stage and cortical area, have not been well characterized so far.
    Methodology/Principal Findings: We detected SSO events in EEG recordings, and we defined and measured a set of features corresponding to both wave shapes and event propagations. We found that a typical SSO shape has a transition to the down state that is steeper than the following transition from the down to the up state. We show that during SWS, SSOs are larger and more locally synchronized, but less likely to propagate across the cortex, compared to NREM stage 2. Also, the number of detected SSOs, as well as their amplitudes and slopes, is greatest in the frontal regions. Although derived from a small sample, this characterization provides a preliminary reference for SSO activity in healthy subjects in 32-channel sleep recordings.
    Conclusions/Significance: This work gives a quantitative picture of spontaneous SSO activity during NREM sleep: we unveil how SSO features are modulated by sleep stage, site of origin, and detection location of the waves. Our measures of SSO shape indicate that, as in animal models, onsets of silent states are more synchronized than those of neural firing. The differences between sleep stages could be related to the reduction of arousal system activity and to the breakdown of functional connectivity. The frontal SSO prevalence could be related to a greater homeostatic need of the heteromodal association cortices.

    Benefits of Motor Imagery for Human Space Flight: A Brief Review of Current Knowledge and Future Applications

    Motor imagery (MI) is arguably one of the most remarkable capacities of the human mind. There is now strong experimental evidence that MI contributes to substantial improvements in motor learning and performance. The therapeutic benefits of MI in promoting motor recovery among patients with motor impairments have also been reported. Despite promising theoretical and experimental findings, the utility of MI in adapting to unusual conditions, such as weightlessness during space flight, has received far less attention. In this review, we consider how, why, where, and when MI might be used by astronauts, and further evaluate the optimum MI content. Practically, we suggest that MI might be performed before, during, and after exposure to microgravity, to prepare for the rapid changes in gravitational forces after launch and to reduce the adverse effects of exposure to weightlessness. Moreover, MI has a potential role in facilitating re-adaptation when returning to Earth after long exposure to microgravity. Suggestions for further research include a focus on the multi-sensory aspects of MI, the use of temporal characteristics as a measurement tool, and accounting for the knowledge base and metacognitive processes underlying optimal MI implementation.

    In Vitro Reconstitution of SARS-Coronavirus mRNA Cap Methylation

    SARS-coronavirus (SARS-CoV) genome expression depends on the synthesis of a set of mRNAs, which presumably are capped at their 5′ end and direct the synthesis of all viral proteins in the infected cell. Sixteen viral non-structural proteins (nsp1 to nsp16) constitute an unusually large replicase complex, which includes two methyltransferases putatively involved in viral mRNA cap formation. The S-adenosyl-L-methionine (AdoMet)-dependent (guanine-N7)-methyltransferase (N7-MTase) activity was recently attributed to nsp14, whereas nsp16 has been predicted to be the AdoMet-dependent (nucleoside-2′O)-methyltransferase. Here, we have reconstituted complete SARS-CoV mRNA cap methylation in vitro. We show that mRNA cap methylation requires a third viral protein, nsp10, which acts as an essential trigger to complete RNA cap-1 formation. The obligate sequence of methylation events is initiated by nsp14, which first methylates capped RNA transcripts to generate cap-0 7MeGpppA-RNAs. The latter are then selectively 2′O-methylated by the 2′O-MTase nsp16 in complex with its activator nsp10 to give rise to cap-1 7MeGpppA2′OMe-RNAs. Furthermore, sensitive in vitro inhibition assays of both activities show that aurintricarboxylic acid, which is active in SARS-CoV-infected cells, targets both MTases with IC50 values in the micromolar range, providing a validated basis for anti-coronavirus drug design.

    Thermal oxidative stability of polyanilines

    This paper deals with the thermal ageing of polyaniline obtained via plasma or chemical routes. FTIR analyses suggest the formation of stable carbonyl compounds in both cases. The plasma polyaniline degrades clearly faster than its chemical analog, which is discussed in terms of structural considerations. The consequences of thermal ageing on surface properties, monitored by Water Contact Angles (WCA), are also considered and explained as the overlap of an "oxidation" component that decreases the WCA and a "crosslinking" component (only observed in plasma polyaniline) responsible for the WCA increase.