15 research outputs found

    Frontally mediated inhibitory processing and white matter microstructure: age and alcoholism effects

    Get PDF
    Rationale: The NOGO P3 event-related potential is a sensitive marker of alcoholism, relates to EEG oscillations in the δ and θ frequency ranges, and reflects activation of an inhibitory processing network. Degradation of white matter tracts related to age or alcoholism should negatively affect the oscillatory activity within the network. Objective: This study aimed to evaluate the effects of alcoholism and age on δ and θ oscillations and the relationship between these oscillations and measures of white matter microstructural integrity. Methods: Data from ten long-term alcoholics and 25 nonalcoholic controls were used to derive P3 at Fz, Cz, and Pz using a visual GO/NOGO protocol. Total power and across-trial phase synchrony measures were calculated for the δ and θ frequencies. DTI data (1.5 T) formed the basis of quantitative fiber tracking in the left and right cingulate bundles and the genu and splenium of the corpus callosum. Fractional anisotropy and diffusivity measures (λL and λT) were calculated for each tract. Results: NOGO P3 amplitude and δ power at Cz were smaller in alcoholics than in controls. Lower δ total power was related to higher λT in the left and right cingulate bundles. GO P3 amplitude was lower and GO P3 latency longer with advancing age, but none of the time-frequency measures displayed significant age or diagnosis effects. Conclusions: The relation of δ total power at Cz to λT in the cingulate bundles provides correlational evidence for a functional role of fronto-parietal white matter tracts in inhibitory processing.
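    The tract measures referenced in this abstract (fractional anisotropy, λL, λT) derive directly from the three eigenvalues of the diffusion tensor. A minimal sketch of that computation, with an illustrative function name not taken from the paper:

```python
import numpy as np

def dti_measures(eigvals):
    """Compute fractional anisotropy (FA), longitudinal diffusivity (lambda_L)
    and transverse diffusivity (lambda_T) from the three eigenvalues of a
    diffusion tensor."""
    l1, l2, l3 = sorted(eigvals, reverse=True)
    mean_d = (l1 + l2 + l3) / 3.0
    # FA is the normalized standard deviation of the eigenvalues, in [0, 1]
    num = np.sqrt((l1 - mean_d) ** 2 + (l2 - mean_d) ** 2 + (l3 - mean_d) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    fa = np.sqrt(1.5) * num / den
    lambda_l = l1                # diffusivity along the principal fiber axis
    lambda_t = (l2 + l3) / 2.0   # mean diffusivity perpendicular to that axis
    return fa, lambda_l, lambda_t
```

    Isotropic diffusion (equal eigenvalues) yields FA = 0; strongly directional diffusion yields FA near 1.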

    ICAR: endoscopic skull‐base surgery

    Get PDF
    n/a

    Data driven image models through continuous joint alignment

    No full text
    This paper presents a family of techniques that we call congealing for modeling image classes from data. The idea is to start with a set of images and make them appear as similar as possible by removing variability along the known axes of variation. This technique can be used to eliminate "nuisance" variables such as affine deformations from handwritten digits or unwanted bias fields from magnetic resonance images. In addition to separating and modeling the latent images (i.e., the images without the nuisance variables), we can model the nuisance variables themselves, leading to factorized generative image models. When nuisance variable distributions are shared between classes, one can share the knowledge learned in one task with another task, leading to efficient learning. We demonstrate this process by building a handwritten digit classifier from just a single example of each class. In addition to applications in handwritten character recognition, we describe in detail the application of bias removal from magnetic resonance images. Unlike previous methods, we use a separate, nonparametric model for the intensity values at each pixel. This allows us to leverage the data from MR images of different patients to remove the bias fields from each image. Only very weak assumptions are made about the distributions of intensity values in the images. In addition to the digit and MR applications, we discuss a number of other uses of congealing and describe experiments on the robustness and consistency of the method.
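    Congealing's core step, reducing the summed pixelwise entropy of the image stack by adjusting each image's transformation, can be sketched in one dimension. A toy version assuming only integer shifts of binary sequences (function names are illustrative, not from the paper):

```python
import numpy as np

def stack_entropy(seqs):
    """Sum of per-position binary entropies across a stack of sequences."""
    p = np.clip(np.mean(seqs, axis=0), 1e-9, 1 - 1e-9)
    return float(np.sum(-p * np.log(p) - (1 - p) * np.log(1 - p)))

def congeal(seqs, max_shift=3, n_iters=5):
    """Jointly align binary sequences by greedily choosing, for each sequence
    in turn, the circular shift that minimizes the stack entropy."""
    seqs = [np.asarray(s, float) for s in seqs]
    shifts = [0] * len(seqs)
    for _ in range(n_iters):
        for i in range(len(seqs)):
            best = None
            for s in range(-max_shift, max_shift + 1):
                trial = [np.roll(seqs[j], shifts[j] if j != i else s)
                         for j in range(len(seqs))]
                e = stack_entropy(np.stack(trial))
                if best is None or e < best[0]:
                    best = (e, s)
            shifts[i] = best[1]
    return shifts
```

    When the sequences are shifted copies of one pulse, the recovered shifts bring them into register (entropy near zero at every position).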

    Unsupervised temporal ensemble alignment for rapid annotation

    Get PDF
    This paper presents a novel framework for the unsupervised alignment of an ensemble of temporal sequences. The approach draws inspiration from the axiom that an ensemble of temporal signals stemming from the same source/class should have lower rank when "aligned" rather than "misaligned". Our approach shares similarities with recent state-of-the-art methods for unsupervised image ensemble alignment (e.g. RASL), which break the problem into a set of image alignment problems with well-known solutions (i.e. the Lucas-Kanade algorithm). Similarly, we propose a strategy for decomposing the problem of temporal ensemble alignment into a set of independent sequence alignment problems, which we claim can be solved reliably through Dynamic Time Warping (DTW). We demonstrate the utility of our method on the Cohn-Kanade+ dataset, aligning expression onset across multiple sequences, which allows us to automate the rapid discovery of event annotations.
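    The per-sequence subproblem referred to here is standard Dynamic Time Warping, solvable with the classic dynamic program. A minimal sketch for 1-D sequences:

```python
import numpy as np

def dtw(x, y):
    """Dynamic Time Warping distance between two 1-D sequences, computed
    with the standard O(len(x) * len(y)) dynamic program."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)   # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Best of match, insertion, and deletion moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

    Sequences that differ only by local stretching of the time axis get distance zero, which is exactly the invariance the ensemble alignment exploits.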

    NONPARAMETRIC CURVE ALIGNMENT

    No full text
    Congealing is a flexible nonparametric data-driven framework for the joint alignment of data. It has been successfully applied to the joint alignment of binary images of digits, binary images of object silhouettes, grayscale MRI images, color images of cars and faces, and 3D brain volumes. This research extends congealing so that it can be applied practically and effectively to curve data. We develop a parameterized set of nonlinear transformations that allow us to apply congealing to this type of data. We present positive results on aligning synthetic and real curve data sets and conclude with a discussion of extending this work to simultaneous alignment and clustering.

    Deforming Autoencoders: Unsupervised Disentangling of Shape and Appearance

    Get PDF
    In this work we introduce Deforming Autoencoders, a generative model for images that disentangles shape from appearance in an unsupervised manner. As in the deformable template paradigm, shape is represented as a deformation between a canonical coordinate system ('template') and an observed image, while appearance is modeled in 'canonical', template, coordinates, thus discarding variability due to deformations. We introduce novel techniques that allow this approach to be deployed in the setting of autoencoders and show that this method can be used for unsupervised group-wise image alignment. We show experiments with expression morphing in humans, hands, and digits, face manipulation, such as shape and appearance interpolation, as well as unsupervised landmark localization. A more powerful form of unsupervised disentangling becomes possible in template coordinates, allowing us to successfully decompose face images into shading and albedo, and further manipulate face images.
    Fig. 1. Deforming Autoencoders follow the deformable template paradigm and model image generation through a cascade of appearance (or 'texture') synthesis in a canonical coordinate system and a spatial deformation that warps the texture to the observed image coordinates. By keeping the latent vector for texture short, the network is forced to model shape variability through the deformation branch, so as to minimize a reconstruction loss. This allows us to train a deep generative image model that disentangles shape and appearance in an entirely unsupervised manner.
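    The cascade described here (texture synthesized in canonical coordinates, then spatially warped to observed image coordinates) can be illustrated by the warping step alone. A 1-D linear-interpolation sketch; the actual model uses differentiable 2-D bilinear sampling inside a network, and the function name is illustrative:

```python
import numpy as np

def warp_1d(texture, displacement):
    """Warp a 1-D 'texture' by sampling it at coordinates x + displacement(x)
    with linear interpolation (the spatial-warping step of the cascade)."""
    n = len(texture)
    coords = np.clip(np.arange(n) + displacement, 0, n - 1)
    lo = np.floor(coords).astype(int)        # left neighbor of each sample
    hi = np.minimum(lo + 1, n - 1)           # right neighbor, clamped at edge
    frac = coords - lo
    return (1 - frac) * texture[lo] + frac * texture[hi]
```

    A zero displacement field reproduces the texture exactly; a learned displacement field moves texture content to where it appears in the observed image.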

    Subspace Procrustes Analysis

    No full text
    Procrustes Analysis (PA) is a popular technique for aligning and building 2-D statistical models of shapes. Given a set of 2-D shapes, PA is applied to remove rigid transformations; a non-rigid 2-D model is then computed by modeling (e.g., with PCA) the residual. Although PA has been widely used, it has several limitations for modeling 2-D shapes: occluded landmarks and missing data can result in local-minima solutions, and there is no guarantee that the 2-D shapes provide a uniform sampling of the 3-D space of rotations for the object. To address these issues, this paper proposes Subspace PA (SPA). Given several instances of a 3-D object, SPA computes the mean and a 2-D subspace that can simultaneously model all rigid and non-rigid deformations of the 3-D object. We propose a discrete (DSPA) and a continuous (CSPA) formulation for SPA, assuming that 3-D samples of an object are provided. DSPA extends traditional PA and produces unbiased 2-D models by uniformly sampling different views of the 3-D object. CSPA provides a continuous approach to uniformly sampling the space of 3-D rotations, and is more efficient in space and time. Experiments using SPA to learn 2-D models of bodies from motion capture data illustrate the benefits of our approach.
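    The rigid-alignment step that SPA builds on is classical Procrustes: removing translation, rotation, and scale between two landmark sets in the least-squares sense, with the optimal rotation obtained from an SVD. A sketch of that baseline (not SPA itself; reflections are not handled):

```python
import numpy as np

def procrustes_align(X, Y):
    """Align shape Y (k x 2 landmarks) to shape X by removing translation,
    rotation, and scale (least-squares similarity Procrustes)."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    X0, Y0 = X - mx, Y - my                  # remove translation
    # The optimal rotation Q maximizes trace(Q.T @ Y0.T @ X0)
    U, S, Vt = np.linalg.svd(Y0.T @ X0)
    Q = U @ Vt
    s = S.sum() / (Y0 ** 2).sum()            # least-squares scale
    return s * (Y0 @ Q) + mx
```

    Applying a known similarity transform to a shape and then aligning it back recovers the original landmarks, which makes the step easy to sanity-check.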

    Deep Complementary Joint Model for Complex Scene Registration and Few-shot Segmentation on Medical Images

    No full text
    Deep learning-based medical image registration and segmentation joint models exploit complementarity (augmentation data or weakly supervised data from registration, region constraints from segmentation) to bring mutual improvement in complex-scene and few-shot situations. However, further adoption of joint models is hindered because: 1) the diversity of augmentation data is reduced, limiting further enhancement of segmentation; 2) misaligned regions in weakly supervised data disturb the training process; 3) the lack of label-based region constraints in few-shot situations limits registration performance. We propose a novel Deep Complementary Joint Model (DeepRS) for complex scene registration and few-shot segmentation. We embed a perturbation factor in the registration to increase the activity of deformation, thus maintaining the diversity of the augmentation data. We use a pixel-wise discriminator to extract alignment confidence maps that highlight aligned regions in weakly supervised data, so that the disturbance from misaligned regions is suppressed via weighting. The outputs of the segmentation model are used to implement deep-based region constraints, thus relieving the label requirements and yielding fine registration. Extensive experiments on the CT dataset of the MM-WHS 2017 Challenge [42] show the advantages of our DeepRS, which outperforms existing state-of-the-art models.
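    The confidence-map weighting idea, suppressing misaligned regions by down-weighting their contribution to the loss, can be sketched outside any network. A simplified numpy analogue (names are illustrative; DeepRS obtains the confidence map from a pixel-wise discriminator):

```python
import numpy as np

def weighted_region_loss(pred, target, confidence):
    """Mean squared error where each pixel's contribution is scaled by an
    alignment-confidence weight in [0, 1]; misaligned (low-confidence)
    regions are suppressed."""
    err = (pred - target) ** 2
    # Normalize by total confidence so fully trusted pixels average normally
    return float((confidence * err).sum() / (confidence.sum() + 1e-8))
```

    A pixel with confidence 0 contributes nothing, so a grossly misaligned label region cannot corrupt the gradient signal.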