
    Structure of the northern margin of the Grande river, Bardas Blancas, Mendoza province

    The Bardas Blancas structure constitutes a topographic and structural high formed mainly by brachy-anticlinal structures, with a major basement anticline at its core. The Choiyoi Group crops out in the core of the structure, and Jurassic sedimentary deposits crop out on both limbs. The structure marks the orogenic front at these latitudes, a non-emergent front characterized by blind thrusts, folding, and the development of a triangular zone. The Bardas Blancas structure was studied using surface and subsurface data, and a balanced cross section was constructed using the trishear model. The main structures are explained by low-angle, east-verging thrust faults detached within the basement. The eastward movement of large basement blocks transfers shortening to the sedimentary cover, developing the Cerro Doña Juana triangular zone at the front of the structure. Inside the triangular zone a passive-roof duplex system develops, controlled by a lower detachment level and an upper detachment or backthrust, which finally transmits the shortening to the surface. The backthrust and the Rayoso and Neuquén Groups are overturned by drag along an east-verging thrust reactivated from the main basement structure. To the east of the backthrust, the Upper Cretaceous and Tertiary deposits are structurally decoupled from the Mendoza Group.

    Fil: Dicarlo, Diego J. Not specified. Fil: Cristallini, Ernesto Osvaldo. Consejo Nacional de Investigaciones Científicas y Técnicas. Oficina de Coordinación Administrativa Ciudad Universitaria. Instituto de Estudios Andinos "Don Pablo Groeber". Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Instituto de Estudios Andinos "Don Pablo Groeber"; Argentina.
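
    The abstract refers to a balanced cross section built with the trishear model but does not spell the model out. The sketch below implements, under stated assumptions, the linear trishear velocity field commonly attributed to Zehnder & Allmendinger (2000) for a symmetric triangular zone of half-apical angle phi ahead of the fault tip; it illustrates the general kinematic technique only and is not the authors' balanced-section workflow (function names and parameter values are illustrative).

```python
import numpy as np

def trishear_velocity(x, y, v0=1.0, phi=np.radians(30.0)):
    """Linear trishear velocity field in fault-tip coordinates (assumed form).

    x runs along the fault ahead of the tip (x > 0), y is perpendicular to it.
    The hanging wall (y >= x*tan(phi)) translates fault-parallel at v0, the
    footwall (y <= -x*tan(phi)) is fixed, and inside the triangular zone vx
    varies linearly across the zone while vy follows from incompressibility.
    """
    t = np.tan(phi)
    xt = x * t                      # half-width of the triangular zone at x
    vx = np.where(y >= xt, v0,
                  np.where(y <= -xt, 0.0, 0.5 * v0 * (y / xt + 1.0)))
    vy = np.where(np.abs(y) >= xt, 0.0,
                  0.25 * v0 * t * ((y / xt) ** 2 - 1.0))
    return vx, vy

# Fold a horizontal marker bed by integrating the field in small time steps.
xs = np.linspace(0.1, 10.0, 200)    # stay ahead of the fault tip at x = 0
ys = np.full_like(xs, 1.0)
for _ in range(100):
    vx, vy = trishear_velocity(xs, ys)
    xs, ys = xs + 0.01 * vx, ys + 0.01 * vy
```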

    Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition

    The primate visual system achieves remarkable visual object recognition performance even in brief presentations and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. A major difficulty in producing such a comparison accurately has been the lack of a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and for computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of "kernel analysis" that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.

    Comment: 35 pages, 12 figures, extends and expands upon arXiv:1301.353
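
    The kernel-analysis extension mentioned in the abstract is not specified here. As a rough illustration of the general idea, assuming integer class labels and an RBF kernel, the sketch below measures held-out accuracy as a function of a simple complexity parameter: the number of leading kernel-PCA components a linear readout may use. The exact kernel, complexity measure, and cross-validation scheme used in the paper may differ, and test-kernel centering is omitted for brevity.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split

def kernel_analysis_curve(features, labels, complexities=(1, 2, 4, 8, 16, 32, 64)):
    """Held-out accuracy as a function of representational complexity (sketch).

    features: (n_images, n_features); labels: integer class index per image.
    Complexity is taken as the number of leading kernel-PCA components the
    linear readout is allowed to use.
    """
    Xtr, Xte, ytr, yte = train_test_split(features, labels, test_size=0.3, random_state=0)
    gamma = 1.0 / features.shape[1]
    Ktr = rbf_kernel(Xtr, Xtr, gamma=gamma)          # train/train kernel
    Kte = rbf_kernel(Xte, Xtr, gamma=gamma)          # test/train kernel
    n = Ktr.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n              # centering matrix
    evals, evecs = np.linalg.eigh(H @ Ktr @ H)       # kernel PCA on training set
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    Y = np.eye(labels.max() + 1)[ytr]                # one-hot targets
    curve = []
    for d in complexities:
        V = evecs[:, :d] / np.sqrt(np.maximum(evals[:d], 1e-12))
        Ztr, Zte = Ktr @ V, Kte @ V                  # (approximate) projections
        W, *_ = np.linalg.lstsq(Ztr, Y, rcond=None)  # linear readout
        acc = np.mean(np.argmax(Zte @ W, axis=1) == yte)
        curve.append((d, acc))
    return curve
```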

    Object-level representational similarity analysis comparing model and neural representations to the IT multi-unit representation.

    A) Following the analysis proposed in [32], the object-level dissimilarity matrix for the IT multi-unit representation is compared to the matrices computed from the model representations and from the V4 multi-unit representation. Each bar indicates the similarity between the corresponding representation and the IT multi-unit representation, measured as the Spearman correlation between dissimilarity matrices. Error bars indicate standard deviation over 10 splits. The IT Cortex Split-Half bar indicates the deviation measured by comparing half of the multi-unit sites to the other half, over 50 repetitions. The V1-like, V2-like, and HMAX representations are highly dissimilar to IT cortex. The HMO representation deviates from IT by about as much as the V4 multi-unit representation does, while the Krizhevsky et al. 2012 and Zeiler & Fergus 2013 representations fall between the V4 representation and the IT cortex split-half measurement. The representations labeled "+ IT-fit" follow the methodology in [27], which first predicts IT multi-unit responses from the model representation and then uses these predictions to form a new representation (see text). B) Depictions of the object-level RDMs for select representations. Each matrix is ordered by object category (animals, cars, chairs, etc.) and scaled independently (see color bar). For the "+ IT-fit" representations, the feature for each image was averaged across testing-set predictions before computing the RDM (see Methods).
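
    As a concrete illustration of the object-level RSA summarized in panel A, the sketch below builds an object-level RDM from a feature matrix and compares two RDMs with a Spearman correlation over their upper triangles. The choice of 1 minus Pearson correlation as the dissimilarity and the simple averaging over each object's images are assumptions; the paper's exact distance and averaging may differ.

```python
import numpy as np
from scipy.stats import spearmanr

def object_level_rdm(features, object_ids):
    """Object-level representational dissimilarity matrix.

    features: (n_images, n_units); object_ids: integer object label per image.
    Each object's response is the mean feature vector over its images, and
    dissimilarity is 1 - Pearson correlation between object responses.
    """
    objects = np.unique(object_ids)
    means = np.stack([features[object_ids == o].mean(axis=0) for o in objects])
    return 1.0 - np.corrcoef(means)

def rdm_similarity(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu]).correlation
```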

    Example images used to measure object category recognition performance.

    Two of the 1960 tested images are shown from the categories Cars, Fruits, and Animals (we also tested the categories Planes, Chairs, Tables, and Faces). Variability within each category consisted of changes to object exemplar (e.g. 7 different types of Animals), geometric transformations due to position, scale, and rotation/pose, and changes to background (each background image is unique).

    Kernel analysis curves of sample and noise matched neural and model representations.

    Plotting conventions are the same as in Fig. 2. Multi-unit analysis is presented in panel A and single-unit analysis in panel B. Note that the model representations have been modified so that they are subsampled and noisy versions of those analyzed in Fig. 2; this modification is indicated in the legend by one symbol for noise matched to the multi-unit IT cortex sample and another for noise matched to the single-unit IT cortex sample. To correct for sampling bias, the multi-unit analysis uses 80 samples, either 80 neural multi-units from V4 or IT cortex or 80 features from the model representations, and the single-unit analysis uses 40 samples. To correct for experimental and intrinsic neural noise, we added noise to the subsampled model representations (no additional noise is added to the neural representations) commensurate with the noise observed in the IT measurements. We observed similar noise between the V4 and IT cortex samples and do not attempt to correct the V4 cortex sample for the noise observed in the IT cortex sample. Noise levels were substantially higher in IT single-unit recordings than in multi-unit recordings, owing both to higher trial-to-trial variability and to fewer trials for the single-unit recordings. All model representations suffer decreases in accuracy after correcting for sampling and adding noise (compare absolute precision values to Fig. 2). All three deep neural networks perform significantly better than the V4 cortex sample. For the multi-unit analysis (A), the IT cortex sample achieves high precision and is matched in performance only by the Zeiler & Fergus 2013 representation. For the single-unit analysis (B), both the Krizhevsky et al. 2012 and the Zeiler & Fergus 2013 representations surpass the IT representational performance.
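
    The subsample-and-add-noise correction described in this caption can be sketched as follows, assuming a Gaussian noise model whose scale is set from the trial-to-trial variability of repeated neural recordings; the paper's actual noise-matching procedure is not fully specified here, so the scaling rule and array layout below are illustrative.

```python
import numpy as np

def sample_and_noise_match(model_features, n_samples, neural_trials, rng=None):
    """Subsample model features and add noise matched to neural variability.

    model_features: (n_images, n_features) noiseless model representation.
    n_samples: number of features to keep (e.g. 80 for the multi-unit
        comparison, 40 for the single-unit comparison).
    neural_trials: (n_trials, n_images, n_sites) repeated neural recordings,
        used only to estimate a trial-to-trial noise scale.
    The Gaussian noise model and the per-feature scaling are assumptions.
    """
    rng = np.random.default_rng(rng)
    keep = rng.choice(model_features.shape[1], size=n_samples, replace=False)
    sub = model_features[:, keep]
    # Noise scale: trial-to-trial standard deviation, averaged over sites and
    # images, relative to the spread of mean responses across images.
    noise_sd = neural_trials.std(axis=0).mean()
    signal_sd = neural_trials.mean(axis=0).std(axis=0).mean()
    scale = (noise_sd / signal_sd) * sub.std(axis=0, keepdims=True)
    return sub + rng.normal(0.0, 1.0, size=sub.shape) * scale
```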

    Neural and model representation predictions of IT multi-unit responses.

    A) The median predictions of IT multi-unit responses, averaged over 10 train/test splits, are plotted for the model representations and the V4 multi-units. Error bars indicate standard deviation over the 10 train/test splits. Predictions are normalized to correct for trial-to-trial variability of the IT multi-unit recordings and are reported as the percentage of explainable variance that is explained. The HMO, Krizhevsky et al. 2012, and Zeiler & Fergus 2013 representations achieve IT multi-unit predictions comparable to those produced by the V4 multi-unit representation. B) The mean predictions over the 10 train/test splits for the V4 cortex multi-unit sample and the Zeiler & Fergus 2013 DNN are plotted against each other for each IT multi-unit site.
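
    A minimal sketch of the prediction analysis in panel A, assuming cross-validated ridge regression from model features to the trial-averaged response of a single IT multi-unit site, with a split-half, Spearman-Brown-corrected reliability as the ceiling for "explainable variance"; the regression model, normalization, and split scheme used in the paper may differ.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def explained_explainable_variance(model_features, site_trials, rng=0):
    """Cross-validated prediction of one IT site, as a fraction of its
    explainable (trial-to-trial reliable) variance.

    model_features: (n_images, n_features); site_trials: (n_trials, n_images)
    repeated responses of a single multi-unit site.
    """
    rng = np.random.default_rng(rng)
    mean_resp = site_trials.mean(axis=0)
    # Cross-validated prediction of the trial-averaged response.
    pred = np.zeros_like(mean_resp)
    for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(model_features):
        model = Ridge(alpha=1.0).fit(model_features[train], mean_resp[train])
        pred[test] = model.predict(model_features[test])
    r_pred = np.corrcoef(pred, mean_resp)[0, 1]
    # Split-half reliability of the site, Spearman-Brown corrected.
    order = rng.permutation(site_trials.shape[0])
    half_a = site_trials[order[::2]].mean(axis=0)
    half_b = site_trials[order[1::2]].mean(axis=0)
    r_half = np.corrcoef(half_a, half_b)[0, 1]
    r_ceiling = 2 * r_half / (1 + r_half)
    return (r_pred ** 2) / (r_ceiling ** 2)
```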

    Linear-SVM generalization performance of neural and model representations.

    Testing-set classification accuracy averaged over 10 randomly sampled test sets is plotted, and error bars indicate standard deviation over the 10 random samples. Chance performance is ∼14.3%. V4 and IT Cortex Multi-Unit Sample are the values measured directly from the neural samples. Following the analysis in Fig. 3A, the model representations have been subsampled and had noise added that is matched to the observed IT multi-unit noise; this modification is marked with the same symbol as in Fig. 3A. Both model and neural representations are subsampled to 80 multi-unit samples or 80 features. Mirroring the results obtained with kernel analysis, the IT cortex multi-unit sample achieves high generalization accuracy and is matched in performance only by the Zeiler & Fergus 2013 representation.
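
    The linear-SVM test summarized here can be sketched as below, assuming a subsampled, noise-matched feature matrix and seven category labels (chance 1/7, roughly 14.3%); the train/test split sizes and the SVM regularization constant are illustrative rather than the paper's settings.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

def linear_svm_generalization(features, category_labels, n_splits=10, test_size=0.3):
    """Mean and standard deviation of test accuracy over random splits.

    features: (n_images, 80) subsampled, noise-matched representation;
    category_labels: one of 7 object categories per image.
    """
    accs = []
    for split in range(n_splits):
        Xtr, Xte, ytr, yte = train_test_split(
            features, category_labels, test_size=test_size,
            random_state=split, stratify=category_labels)
        clf = LinearSVC(C=1.0, max_iter=10000).fit(Xtr, ytr)
        accs.append(clf.score(Xte, yte))
    return np.mean(accs), np.std(accs)
```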