
    The Neural Representation Benchmark and its Evaluation on Brain and Machine

    A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible to test representational learning algorithms directly against the representations contained in neural systems. Here, we propose a new benchmark for visual representations on which we have directly tested the neural representation in multiple visual cortical areas in macaque (utilizing data from [Majaj et al., 2012]), and on which any computer vision algorithm that produces a feature space can be tested. The benchmark measures the effectiveness of the neural or machine representation by computing the classification loss on the ordered eigendecomposition of a kernel matrix [Montavon et al., 2011]. In our analysis we find that the neural representation in visual area IT is superior to that in visual area V4. In our analysis of representational learning algorithms, we find that three-layer models approach the representational performance of V4 and that the algorithm in [Le et al., 2012] surpasses the performance of V4. Impressively, we find that a recent supervised algorithm [Krizhevsky et al., 2012] achieves performance comparable to that of IT for an intermediate level of image variation difficulty, and surpasses IT at a higher difficulty level. We believe this result represents a major milestone: it is the first learning algorithm we have found that exceeds our current estimate of IT representation performance. We hope that this benchmark will assist the community in matching the representational performance of visual cortex and will serve as an initial rallying point for further correspondence between representations derived in brains and machines. Comment: The v1 version contained incorrectly computed kernel analysis curves and KA-AUC values for V4, IT, and the HT-L3 models. They have been corrected in this version.
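
    To make the kernel-analysis benchmark concrete, here is a minimal Python sketch of the idea: eigendecompose a centered kernel matrix over the feature space and measure classification error when the labels are reconstructed from the leading eigencomponents, as a function of their number. The RBF kernel, the width grid, and all names below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def kernel_analysis_curve(X, y_onehot, ks=(1, 2, 4, 8, 16, 32, 64)):
    """Toy kernel-analysis curve: classification error when the one-hot labels
    are reconstructed from the leading eigencomponents of a centered RBF kernel
    matrix, as a function of the number of components k (a proxy for
    representational complexity)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)  # pairwise squared distances
    med = np.median(np.sqrt(d2))                                     # scale kernel width to the data
    errors = []
    for k in ks:
        best = np.inf
        for scale in (0.25, 0.5, 1.0, 2.0, 4.0):                     # pick the best width per k
            K = np.exp(-d2 / (2.0 * (scale * med) ** 2))
            K = K - K.mean(0) - K.mean(1)[:, None] + K.mean()        # center the kernel
            _, evecs = np.linalg.eigh(K)
            U = evecs[:, ::-1][:, :k]                                # top-k eigenvectors
            y_hat = U @ (U.T @ y_onehot)                             # project labels onto their span
            err = np.mean(np.argmax(y_hat, 1) != np.argmax(y_onehot, 1))
            best = min(best, err)
        errors.append(best)
    return np.array(ks), np.array(errors)
```

    The resulting error-versus-complexity curve, and the area under the corresponding accuracy curve (the KA-AUC mentioned in the comment), is what allows neural representations such as V4 and IT to be compared against arbitrary machine feature spaces.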

    The Effect of Nitrogen and Potassium Doses on the Yield and Quality of Temanggung Tobacco under Cabbage-Tobacco Relay Intercropping at Pujon, Malang

    The experiment was carried out to determine the effect of N and K on the growth, yield, and quality of Temanggung tobacco grown intercropped with cabbage at Pujon, Malang. A factorial randomized block design with five replications was used. Cabbage was planted from February to May 1992, whereas tobacco was intercropped from April to September 1993. The treatments were three levels of nitrogen from ammonium sulphate, namely 30 kg N/ha (N1), 60 kg N/ha (N2), and 90 kg N/ha (N3), combined with potassium from potassium sulphate at 0, 50, and 100 kg K2O/ha. Cabbage was fertilized at the rate of 390 kg N/ha and 780 kg P2O5/ha without K fertilization. The first tobacco fertilization was done a day after the cabbage harvest, when the tobacco was 16 days old; half of the N dose and the full K dose were applied then, and the other half of the N was applied when the tobacco was 30 days old. Nitrogen increased top leaf length, leaf width, fresh yield, and N content of the Genjah Kemloko variety of Temanggung tobacco. Potassium significantly increased the quality index and crop index. The N and K interaction likewise significantly influenced the quality and crop indexes. The maximum leaf length and leaf width were obtained from the 90 kg N/ha treatment, whereas the highest quality and crop indexes were obtained from the 30 kg N/ha (N1) and 100 kg K2O/ha (K1) treatments. The highest N content, 3.707%, was obtained from the 90 kg N/ha treatment.

    Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition

    The primate visual system achieves remarkable visual object recognition performance even in brief presentations and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. A major difficulty in making such a comparison accurately has been the lack of a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and for computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of "kernel analysis" that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds. Comment: 35 pages, 12 figures, extends and expands upon arXiv:1301.353
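
    As a rough illustration of the "representational similarity to IT" measure mentioned above, the sketch below correlates the representational dissimilarity matrices (RDMs) of a model feature space and a set of IT responses. This is a hedged sketch under standard RSA assumptions; the paper's exact procedure may differ, and the array shapes used below are placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson correlation between
    the response patterns evoked by each pair of images (condensed form)."""
    return pdist(features, metric="correlation")

def representational_similarity(model_features, it_responses):
    """Spearman correlation between model and IT RDMs (higher = more similar)."""
    rho, _ = spearmanr(rdm(model_features), rdm(it_responses))
    return rho

# Placeholder example: 100 images, 4096 model features, 168 IT recording sites.
model_features = np.random.randn(100, 4096)
it_responses = np.random.randn(100, 168)
print(representational_similarity(model_features, it_responses))
```

    A complementary check, also named in the abstract, regresses model features onto individual IT multi-unit responses with a cross-validated linear mapping and reports predictive accuracy.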

    Study of Bryophytic Flora in the Ramsar Wetland of Merja Zerga (North-west of Morocco)

    Previous research undertaken in the IBA and Ramsar wetland of Merja Zerga at Moulay Bousselham has been concerned only with higher plants. Our study therefore contributes to enriching knowledge of the bryophytic flora of this area by filling the gaps in this field. We carried out systematic sampling, with a stop and a harvest at each bryophyte population encountered. This survey enabled us to find 26 species of bryophytes, including 22 species of mosses belonging to 15 genera and 9 families, and 4 species of liverworts belonging to 3 genera and 3 families. The relatively low specificity of this wetland can be explained by the influence of sea spray and strong anthropic disturbance. A comparison with previous studies allowed us to conclude that 11 species were observed for the first time in the area.

    Clustered Hierarchical Anomaly and Outlier Detection Algorithms

    Anomaly and outlier detection is a long-standing problem in machine learning. In some cases, anomaly detection is easy, such as when data are drawn from well-characterized distributions such as the Gaussian. However, when data occupy high-dimensional spaces, anomaly detection becomes more difficult. We present CLAM (Clustered Learning of Approximate Manifolds), a manifold-mapping technique that works in any metric space. CLAM begins with a fast hierarchical clustering technique and then induces a graph from the cluster tree, based on overlapping clusters selected using several geometric and topological features. Using these graphs, we implement CHAODA (Clustered Hierarchical Anomaly and Outlier Detection Algorithms), exploring various properties of the graphs and their constituent clusters to find outliers. CHAODA employs a form of transfer learning based on a training set of datasets, and applies this knowledge to a separate test set of datasets of different cardinalities, dimensionalities, and domains. On 24 publicly available datasets, we compare CHAODA (by measure of ROC AUC) to a variety of state-of-the-art unsupervised anomaly-detection algorithms. Six of the datasets are used for training. CHAODA outperforms other approaches on 16 of the remaining 18 datasets. CLAM and CHAODA scale to large, high-dimensional "big data" anomaly-detection problems, and generalize across datasets and distance functions. Source code for CLAM and CHAODA is freely available on GitHub.
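
    The following is a highly simplified Python sketch of two of the ingredients named above: a divisive cluster tree built in a metric space, and a cardinality-based outlier score read off its leaves (small, deep clusters are treated as more anomalous). It is an illustrative approximation, not the authors' CLAM/CHAODA implementation, which additionally induces graphs from overlapping clusters and combines several learned scoring methods.

```python
import numpy as np

def build_tree(points, indices, depth=0, max_depth=8, min_size=4):
    """Toy divisive clustering: split each cluster around two far-apart
    'poles' (a simplified stand-in for CLAM-style partitioning)."""
    cluster = {"indices": indices, "depth": depth, "children": []}
    if len(indices) <= min_size or depth >= max_depth:
        return cluster
    sub = points[indices]
    seed = sub[np.random.randint(len(indices))]
    left_pole = sub[np.argmax(np.linalg.norm(sub - seed, axis=1))]       # farthest from a random seed
    d_left = np.linalg.norm(sub - left_pole, axis=1)
    right_pole = sub[np.argmax(d_left)]                                   # farthest from the left pole
    d_right = np.linalg.norm(sub - right_pole, axis=1)
    mask = d_left <= d_right
    for child_idx in (indices[mask], indices[~mask]):
        if 0 < len(child_idx) < len(indices):                             # skip degenerate splits
            cluster["children"].append(
                build_tree(points, child_idx, depth + 1, max_depth, min_size))
    return cluster

def cardinality_scores(tree, n_points):
    """Simple outlier score: points landing in small leaf clusters get high
    scores (one ingredient of CHAODA-style scoring)."""
    scores = np.zeros(n_points)
    stack = [tree]
    while stack:
        c = stack.pop()
        if not c["children"]:
            scores[c["indices"]] = 1.0 / len(c["indices"])
        stack.extend(c["children"])
    return scores

# Placeholder data: 200 inliers plus 5 planted outliers in 8 dimensions.
X = np.vstack([np.random.randn(200, 8), np.random.randn(5, 8) + 6.0])
scores = cardinality_scores(build_tree(X, np.arange(len(X))), len(X))
print(np.argsort(scores)[-5:])   # highest-scoring points should include the planted outliers
```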

    Spatial-frequency channels, shape bias, and adversarial robustness

    What spatial frequency information do humans and neural networks use to recognize objects? In neuroscience, critical band masking is an established tool that can reveal the frequency-selective filters used for object recognition. Critical band masking measures the sensitivity of recognition performance to noise added at each spatial frequency. Existing critical band masking studies show that humans recognize periodic patterns (gratings) and letters by means of a spatial-frequency filter (or "channel") that has a frequency bandwidth of one octave (doubling of frequency). Here, we introduce critical band masking as a task for network-human comparison and test 14 humans and 76 neural networks on 16-way ImageNet categorization in the presence of narrowband noise. We find that humans recognize objects in natural images using the same one-octave-wide channel that they use for letters and gratings, making it a canonical feature of human object recognition. On the other hand, the neural network channel, across various architectures and training strategies, is 2-4 times as wide as the human channel. In other words, networks are vulnerable to high and low frequency noise that does not affect human performance. Adversarial and augmented-image training are commonly used to increase network robustness and shape bias. Does this training align network and human object recognition channels? Three network channel properties (bandwidth, center frequency, peak noise sensitivity) correlate strongly with shape bias (53% variance explained) and with robustness of adversarially-trained networks (74% variance explained). Adversarial training increases robustness but expands the channel bandwidth even further away from the human bandwidth. Thus, critical band masking reveals that the network channel is more than twice as wide as the human channel, and that adversarial training only increases this difference. Comment: Accepted to Neural Information Processing Systems (NeurIPS) 2023 (Oral Presentation)
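
    A minimal sketch of the critical-band-masking procedure described above: generate narrowband (octave-wide) noise centered at a chosen spatial frequency, add it to an image, and sweep the center frequency while recording recognition accuracy. The filter construction, noise level, and the hypothetical evaluation step below are illustrative assumptions, not the paper's stimuli or code.

```python
import numpy as np

def bandpass_noise(size, center_cpi, bandwidth_octaves=1.0, rms=0.2, rng=None):
    """Gaussian noise filtered to an octave-wide annulus of spatial frequencies
    around `center_cpi` (cycles per image), as used in critical band masking
    (a simplified sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal((size, size))
    f = np.fft.fftfreq(size) * size                      # frequencies in cycles per image
    fy, fx = np.meshgrid(f, f, indexing="ij")
    radius = np.hypot(fx, fy)
    lo = center_cpi / 2 ** (bandwidth_octaves / 2)
    hi = center_cpi * 2 ** (bandwidth_octaves / 2)
    annulus = (radius >= lo) & (radius <= hi)            # one-octave frequency band
    filtered = np.real(np.fft.ifft2(np.fft.fft2(noise) * annulus))
    return rms * filtered / (filtered.std() + 1e-12)

# Sweep the noise center frequency and (hypothetically) measure accuracy at each band;
# the band where accuracy drops most defines the recognition "channel".
image = np.random.rand(224, 224)                         # placeholder for a normalized image
for center in (1, 2, 4, 8, 16, 32, 64):
    noisy = np.clip(image + bandpass_noise(224, center), 0.0, 1.0)
    # accuracy[center] = evaluate_model_or_human(noisy)  # hypothetical evaluation step
```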