
    The Fourteenth Data Release of the Sloan Digital Sky Survey: First Spectroscopic Data from the Extended Baryon Oscillation Spectroscopic Survey and from the Second Phase of the Apache Point Observatory Galactic Evolution Experiment

    The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since 2014 July. This paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14). This release makes the data taken by SDSS-IV in its first two years of operation (2014–2016 July) public. Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey; the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data-driven machine-learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from the SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS web site (www.sdss.org) has been updated for this release and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020 and will be followed by SDSS-V.
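    As an illustration of programmatic data access (not from the paper), public SDSS spectra can be queried from Python; the sketch below uses the third-party astroquery package, and the example coordinates are arbitrary.

```python
# Illustrative sketch only: querying SDSS spectra near a sky position with
# astroquery. DR14 data can equally be downloaded via the links at www.sdss.org.
from astropy import coordinates as coords
from astropy import units as u
from astroquery.sdss import SDSS

# Arbitrary example position.
pos = coords.SkyCoord("0h8m05.63s +14d50m23.3s", frame="icrs")

# Find spectroscopic matches within 5 arcsec, then fetch the spectra as FITS HDU lists.
matches = SDSS.query_region(pos, radius=5 * u.arcsec, spectro=True)
spectra = SDSS.get_spectra(matches=matches)
print(matches)
```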

    The genetic architecture of the human cerebral cortex

    The cerebral cortex underlies our complex cognitive capabilities, yet little is known about the specific genetic loci that influence human cortical structure. To identify genetic variants that affect cortical structure, we conducted a genome-wide association meta-analysis of brain magnetic resonance imaging data from 51,665 individuals. We analyzed the surface area and average thickness of the whole cortex and 34 regions with known functional specializations. We identified 199 significant loci and found significant enrichment for loci influencing total surface area within regulatory elements that are active during prenatal cortical development, supporting the radial unit hypothesis. Loci that affect regional surface area cluster near genes in Wnt signaling pathways, which influence progenitor expansion and areal identity. Variation in cortical structure is genetically correlated with cognitive function, Parkinson's disease, insomnia, depression, neuroticism, and attention deficit hyperactivity disorder.

    Automatically tracking neurons in a moving and deforming brain

    Advances in optical neuroimaging techniques now allow neural activity to be recorded with cellular resolution in awake and behaving animals. Brain motion in these recordings poses a unique challenge. The location of individual neurons must be tracked in 3D over time to accurately extract single neuron activity traces. Recordings from small invertebrates like C. elegans are especially challenging because they undergo very large brain motion and deformation during animal movement. Here we present an automated computer vision pipeline to reliably track populations of neurons with single neuron resolution in the brain of a freely moving C. elegans undergoing large motion and deformation. 3D volumetric fluorescent images of the animal’s brain are straightened, aligned and registered, and the locations of neurons in the images are found via segmentation. Each neuron is then assigned an identity using a new time-independent machine-learning approach we call Neuron Registration Vector Encoding. In this approach, non-rigid point-set registration is used to match each segmented neuron in each volume with a set of reference volumes taken from throughout the recording. The way each neuron matches with the references defines a feature vector which is clustered to assign an identity to each neuron in each volume. Finally, thin-plate spline interpolation is used to correct errors in segmentation and check consistency of assigned identities. The Neuron Registration Vector Encoding approach proposed here is uniquely well suited for tracking neurons in brains undergoing large deformations. When applied to whole-brain calcium imaging recordings in freely moving C. elegans, this analysis pipeline located 156 neurons for the duration of an 8-minute recording and consistently found more neurons more quickly than manual or semi-automated approaches.
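    To make the encoding idea concrete, below is a minimal sketch of building feature vectors from matches against a set of reference volumes and clustering them into identities. It replaces the paper's non-rigid point-set registration with a plain nearest-neighbour assignment and uses synthetic centroids, so it illustrates the data flow rather than the actual algorithm.

```python
# Minimal sketch of the Neuron Registration Vector Encoding data flow.
# The non-rigid point-set registration used in the paper is replaced here by a
# nearest-neighbour assignment, and all data are synthetic.
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def match_to_reference(sample_xyz, reference_xyz):
    """For each sample neuron, the index of its matched neuron in one reference volume."""
    _, idx = cKDTree(reference_xyz).query(sample_xyz)  # stand-in for non-rigid registration
    return idx

def jittered(base, rng, scale=1.0):
    """A deformed copy of a base set of neuron centroids."""
    return base + rng.normal(scale=scale, size=base.shape)

rng = np.random.default_rng(0)
base = rng.uniform(0, 100, size=(60, 3))               # 60 synthetic neuron positions
references = [jittered(base, rng) for _ in range(20)]  # reference volumes
samples = [jittered(base, rng) for _ in range(40)]     # time-volumes to annotate

# Feature vector for every neuron-time instance: its match index in each reference.
vectors = np.vstack([
    np.stack([match_to_reference(s, r) for r in references], axis=1) for s in samples
])  # shape (40 volumes * 60 neurons, 20 references)

# Cluster on the fraction of references where two instances disagree; each cluster
# label then plays the role of a neuron identity shared across volumes.
labels = fcluster(linkage(pdist(vectors, metric="hamming"), method="average"),
                  t=60, criterion="maxclust")
```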

    Breakdown of computation time and scalings for Neuron Registration Vector Encoding pipeline.

    n_frames is the total number of low magnification images used to detect centerlines, n_vol is the total number of volumes in the recording, n_ref is the number of reference volumes used for creating feature vectors, n_neurons is the total number of neurons detected, and n_subset is the number of neurons in the subset of volumes used for initial clustering.

    Straightening and segmentation.

    (A) Centerlines are detected from the low magnification dark field images. The centerline is shown in green and the tip of the worm’s head is indicated by a blue dot. (B) The centerline found from the low magnification image is overlaid on the high magnification RFP images. The lines normal to the centerline, shown in blue, are used to straighten the image. All scale bars are 100 μm. (C) A maximum intensity projection of the straightened volume is shown. Individual neuronal nuclei are shown (D) before and (E) after segmentation.
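    For intuition, the following is a small sketch of the straightening step for a single image plane, assuming the centerline is already available as ordered (row, col) points; the real pipeline operates on full 3-D volumes and derives centerlines from the low magnification dark field images.

```python
# Sketch of straightening a 2-D image along a given centerline by resampling the
# image along lines normal to the centerline (simplified from the 3-D pipeline).
import numpy as np
from scipy.ndimage import map_coordinates

def straighten(image, centerline, half_width=50):
    """Resample `image` along normals to `centerline`, an (M, 2) array of (row, col) points."""
    # Unit tangents along the centerline, then normals by a 90-degree rotation.
    tangents = np.gradient(centerline, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)

    # For each centerline point, sample the image at offsets along its normal.
    offsets = np.arange(-half_width, half_width + 1)
    rows = centerline[:, 0, None] + offsets[None, :] * normals[:, 0, None]
    cols = centerline[:, 1, None] + offsets[None, :] * normals[:, 1, None]
    return map_coordinates(image, [rows, cols], order=1, mode="nearest")

# Example with synthetic data: a diagonal centerline across a random image.
img = np.random.rand(512, 512)
cl = np.stack([np.linspace(50, 450, 400), np.linspace(60, 460, 400)], axis=1)
straightened = straighten(img, cl)   # shape (400, 101): centerline length x normal width
```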

    Comparison of the automated Neuron Registration Vector Encoding algorithm with manual human annotation.

    A previously published 4-minute recording of calcium activity (strain AML14) was annotated by hand [10]. (A) Spheres show the positions of neurons that were detected by the automated algorithm. Grey indicates a neuron detected by both the algorithm and the human. All neurons detected by the human were also detected by the algorithm (70 neurons). Red indicates neurons that were missed by the human and detected only by the algorithm (49 neurons). (B) Histogram showing the number of neurons that were mismatched for a given fraction of time-volumes when comparing the automated and manual approaches. Only those neurons that were consistently found by both the algorithm and the human were considered. An automatically identified neuron was deemed correctly matched for a given time-volume if it was paired with the correct corresponding manual neuron.
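    The per-neuron mismatch fraction summarized in the histogram can be computed as sketched below, assuming the automated and manual identities have been aligned into two label arrays of shape (n_volumes, n_neurons); the arrays and error rate here are synthetic placeholders.

```python
# Sketch of the comparison metric: for each neuron, the fraction of time-volumes
# in which the automated identity disagrees with the manual annotation.
import numpy as np

def mismatch_fraction(auto_labels, manual_labels):
    """Per-neuron fraction of time-volumes where the two label arrays disagree."""
    return np.mean(auto_labels != manual_labels, axis=0)

# Synthetic example: 240 time-volumes, 70 neurons found by both approaches.
rng = np.random.default_rng(1)
manual = np.tile(np.arange(70), (240, 1))
auto = manual.copy()
errors = rng.random(manual.shape) < 0.02              # inject ~2% random mismatches
auto[errors] = rng.integers(0, 70, size=errors.sum())

frac = mismatch_fraction(auto, manual)                # one value per neuron, as in the histogram
```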

    Input to the pipeline.

    (A) Example images from all four video feeds from our imaging system. Both scale bars are 100 μm. Fluorescent images are shown with false coloring. (B) A schematic illustrating the timing of all the devices that run in open loop in our imaging setup. The camera that collects high magnification images captures at 200 Hz, the two low magnification cameras capture at 60 Hz, and the focal plane moves up and down in a 3 Hz triangle wave. The cameras are synchronized post hoc using light flashes, and each image is assigned a timestamp on a common timeline.
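    One way to implement the post-hoc synchronization described here is to detect the light flashes in each camera's mean frame brightness and use them to place all frames on a common timeline; the sketch below assumes brightness traces and nominal frame rates are available, and all variable names are hypothetical.

```python
# Sketch of post-hoc camera synchronization using shared light flashes.
import numpy as np

def flash_times(mean_brightness, frame_rate, threshold_sigma=5.0):
    """Frame indices whose brightness jumps well above baseline, converted to seconds."""
    b = np.asarray(mean_brightness, dtype=float)
    flashes = np.flatnonzero(b > b.mean() + threshold_sigma * b.std())
    return flashes / frame_rate

def offset_between(cam_a_flashes, cam_b_flashes):
    """Constant time offset that aligns the first detected flash on both cameras."""
    return cam_a_flashes[0] - cam_b_flashes[0]

# Hypothetical usage: place high magnification frames (200 Hz) on the
# low magnification camera's (60 Hz) timeline.
# offset = offset_between(flash_times(lowmag_brightness, 60.0),
#                         flash_times(highmag_brightness, 200.0))
# highmag_timestamps = np.arange(n_highmag_frames) / 200.0 + offset
```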

    Schematic of analysis pipeline to segment and track neurons through time and extract their neural activity in a deforming brain.

    Neurons are labeled with a calcium-insensitive red fluorescent protein (RFP) and a calcium-sensitive green fluorescent protein (GCaMP). Videos of the animal’s behavior and volumetric fluorescent images of the animal’s brain serve as input to the pipeline. The algorithm detects all neurons in the head and produces tracks of the neural activity across time as the animal moves.
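    A common use of such dual labeling downstream of tracking is to normalize the calcium-sensitive GCaMP signal by the calcium-insensitive RFP signal so that motion and focus artifacts cancel; whether the pipeline's output is exactly this ratio is an assumption in the sketch below, and the trace arrays are hypothetical.

```python
# Sketch of ratiometric activity extraction from dual-labeled neurons.
# Assumes per-neuron GCaMP and RFP fluorescence traces have already been extracted.
import numpy as np

def ratiometric_activity(gcamp, rfp, eps=1e-6):
    """Per-neuron activity as the GCaMP/RFP ratio, expressed as a change over baseline.
    `gcamp` and `rfp` are (n_volumes, n_neurons) arrays of fluorescence values."""
    ratio = gcamp / (rfp + eps)
    baseline = np.percentile(ratio, 20, axis=0)   # per-neuron baseline ratio R0
    return (ratio - baseline) / baseline

# Hypothetical usage with extracted fluorescence traces:
# activity = ratiometric_activity(gcamp_traces, rfp_traces)   # shape (n_volumes, n_neurons)
```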

    Schematic of Neuron Registration Vector Encoding.

    (A) The registration between a sample volume and a single reference volume is done in several steps. I. The image is segmented into regions corresponding to each of the neurons. II. The image is represented as a Gaussian mixture, with a single Gaussian for each segmented region. The amplitude and the standard deviation of the Gaussians are derived from the brightness and the size of the segmented regions. III. Non-rigid point-set registration is then used to deform the sample points to best overlap the reference point-set. IV. Neurons from the sample and the reference point-sets are paired by minimizing distances between neurons. (B) Neuron registration vectors are constructed by assigning a feature vector v_{i,t} to each neuron x_{i,t} in a sample volume x_t by performing the registration between the sample volume and a set of 300 reference volumes, each denoted by r^k. Each registration of the neuron results in a neuron match, and the set of matches becomes the feature vector v_{i,t}. (C) Vectors from all neuron-times are clustered into similar groups in a two-step process: hierarchical clustering (illustrated in the figure) is performed on a subset of neurons to define clusters, each of which is given a label S_n. Then each feature vector v_{i,t} is assigned to a cluster based on a distance metric (not illustrated). (D) The clustering of the feature vectors shown in (C) assigns an identity to each of the neurons in every volume. This allows us to track the neurons across different volumes of the recording.
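    Step A.IV, pairing neurons between the deformed sample and the reference by minimizing distances, can be sketched as a minimum-cost assignment; the example below skips the non-rigid deformation of step A.III, uses synthetic points, and the paper's exact matching rule may differ.

```python
# Sketch of the pairing step: match deformed sample neurons to reference neurons
# by minimizing the summed pairwise distance (Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def pair_neurons(deformed_sample_xyz, reference_xyz):
    """Return (sample_index, reference_index) pairs minimizing the total distance."""
    cost = cdist(deformed_sample_xyz, reference_xyz)   # pairwise Euclidean distances
    sample_idx, ref_idx = linear_sum_assignment(cost)
    return sample_idx, ref_idx

# Synthetic example: a sample volume that is a noisy, shuffled copy of the reference.
rng = np.random.default_rng(2)
reference = rng.uniform(0, 100, size=(120, 3))
perm = rng.permutation(120)
sample = reference[perm] + rng.normal(scale=0.5, size=reference.shape)

s_idx, r_idx = pair_neurons(sample, reference)
recovered = np.mean(perm[s_idx] == r_idx)   # fraction of correct matches, ideally near 1
```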