Name agreement in picture naming: An ERP study
Name agreement is the extent to which different people agree on a name for a particular picture. Previous studies have found that it takes longer to name low name agreement pictures than high name agreement pictures. To examine the effect of name agreement in the online process of picture naming, we compared event-related potentials (ERPs) recorded whilst 19 healthy, native English speakers silently named pictures which had either high or low name agreement. A series of ERP components was examined: P1 approximately 120 ms from picture onset, N1 around 170 ms, P2 around 220 ms, N2 around 290 ms, and P3 around 400 ms. Additionally, a late time window from 800 to 900 ms was considered. Name agreement had an early effect, starting at P1 and possibly resulting from uncertainty about picture identity, and continuing into N2, possibly resulting from alternative names for pictures. These results support the idea that name agreement affects two consecutive processes: first, object recognition, and second, lexical selection and/or phonological encoding.
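As a rough illustration of the time-window analysis this abstract describes, the sketch below compares mean amplitudes between conditions in an N2 window and runs a paired t-test across subjects. The window bounds, sampling rate, electrode selection, and the placeholder arrays are illustrative assumptions, not details taken from the study.

```python
import numpy as np
from scipy.stats import ttest_rel

def window_mean(evoked, times, t_start, t_end):
    """Mean amplitude of each subject's average waveform in a latency window.

    evoked: (n_subjects, n_times) per-condition average ERPs at one electrode.
    times:  (n_times,) sample latencies in seconds relative to picture onset.
    """
    mask = (times >= t_start) & (times <= t_end)
    return evoked[:, mask].mean(axis=1)

# Hypothetical data: 19 subjects, 500 Hz sampling, epochs from -0.1 to 0.9 s.
rng = np.random.default_rng(0)
times = np.arange(-0.1, 0.9, 1 / 500.0)
high = rng.normal(size=(19, times.size))  # placeholder high-agreement ERPs
low = rng.normal(size=(19, times.size))   # placeholder low-agreement ERPs

# N2 window (assumed here as 250-330 ms) around the ~290 ms component.
t_stat, p_val = ttest_rel(window_mean(high, times, 0.25, 0.33),
                          window_mean(low, times, 0.25, 0.33))
print(f"N2, high vs. low name agreement: t = {t_stat:.2f}, p = {p_val:.3f}")
```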
Efficient Elastic Net Regularization for Sparse Linear Models
This paper presents an algorithm for efficient training of sparse linear
models with elastic net regularization. Extending previous work on delayed
updates, the new algorithm applies stochastic gradient updates to non-zero
features only, bringing weights current as needed with closed-form updates.
Closed-form delayed updates for the ℓ1, ℓ∞, and rarely used ℓ2 regularizers have been described previously. This paper provides closed-form updates for the popular squared norm ℓ2² and elastic net regularizers.
We provide dynamic programming algorithms that perform each delayed update in constant time. The new ℓ2² and elastic net methods handle both fixed and varying learning rates, and both standard stochastic gradient descent (SGD) and forward backward splitting (FoBoS). Experimental results show that on a bag-of-words dataset with 260,941 features, but only 88 nonzero features on average per training example, the dynamic programming method trains a logistic regression classifier with elastic net regularization over 2000 times faster than otherwise.
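The delayed-update idea is easy to sketch for the simplest case the abstract mentions: plain ℓ2² regularization with a fixed learning rate, where k missed shrinkage steps collapse into a single closed-form multiplication by (1 - ηλ)^k. The sketch below is an assumption-laden illustration of that idea, not the paper's algorithm: it omits the dynamic programming needed for varying learning rates and for the elastic net's ℓ1 term, and the function name and data format are made up for the example.

```python
import numpy as np

def lazy_sgd_l2(examples, n_features, lr=0.1, lam=1e-4, epochs=1):
    """Logistic-regression SGD with an l2^2 penalty, applied lazily:
    weights of absent features are shrunk in closed form only when the
    feature next appears, instead of on every step.

    Sketch under simplifying assumptions (fixed learning rate, plain
    l2^2 penalty only).
    """
    w = np.zeros(n_features)
    last = np.zeros(n_features, dtype=int)  # step through which w[j] is current
    decay = 1.0 - lr * lam                  # one step of l2^2 shrinkage
    t = 0
    for _ in range(epochs):
        for x, y in examples:  # x: dict {feature index: value}, y in {0, 1}
            # Catch up: k missed shrinkage steps collapse to decay ** k.
            for j in x:
                w[j] *= decay ** (t - last[j])
            # Combined loss-gradient and shrinkage step, nonzero features only.
            z = sum(w[j] * v for j, v in x.items())
            g = 1.0 / (1.0 + np.exp(-z)) - y  # dloss/dz for logistic loss
            for j, v in x.items():
                w[j] = decay * w[j] - lr * g * v
                last[j] = t + 1
            t += 1
    w *= decay ** (t - last)  # final catch-up so every weight is current
    return w
```

Toy usage: lazy_sgd_l2([({0: 1.0, 5: 2.0}, 1), ({3: 1.0}, 0)], n_features=10). With a varying learning rate the per-step shrinkage factors differ, which is where the paper's constant-time dynamic programming comes in.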
Book Review: The Environment from Surplus to Scarcity, by Allan Schnaiberg. Oxford University Press, Walton Street, Oxford OX2 6DP, England, UK: xiii + 464 pp., tables, 22.0 × 15.0 × 2.5 cm, stiff paper cover, £6.75, 1980.
Modeling Word Burstiness Using the Dirichlet Distribution
Multinomial distributions are often used to model text documents. However, they do not capture well the phenomenon that words in a document tend to appear in bursts: if a word appears once, it is more likely to appear again. In this paper, we propose the Dirichlet compound multinomial model (DCM) as an alternative to the multinomial. The DCM model has one additional degree of freedom, which allows it to capture burstiness. We show experimentally that the DCM is substantially better than the multinomial at modeling text data, measured by perplexity. We also show using three standard document collections that the DCM leads to better classification than the multinomial model. DCM performance is comparable to that obtained with multiple heuristic changes to the multinomial model.
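The DCM (the Dirichlet-multinomial, or Polya, distribution) has a closed-form likelihood, which is enough to see how it rewards repeated occurrences of the same word. The sketch below evaluates that likelihood for a word-count vector; it is a minimal illustration using the standard Dirichlet-multinomial formula, and the fitting of the alpha parameters (not shown) is the paper's concern, not this snippet's.

```python
import numpy as np
from scipy.special import gammaln

def dcm_log_likelihood(counts, alpha):
    """Log-likelihood of a word-count vector under the Dirichlet compound
    multinomial (Polya) distribution with parameter vector alpha."""
    counts = np.asarray(counts, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    n = counts.sum()
    # Multinomial coefficient: n! / prod(x_j!)
    log_coef = gammaln(n + 1) - gammaln(counts + 1).sum()
    # Ratio of Dirichlet normalizers before and after observing the counts.
    log_norm = gammaln(alpha.sum()) - gammaln(alpha.sum() + n)
    log_terms = (gammaln(alpha + counts) - gammaln(alpha)).sum()
    return log_coef + log_norm + log_terms

# With small alpha, a bursty document (one word repeated) scores higher
# than a spread-out one, unlike a multinomial with uniform proportions.
alpha = np.full(3, 0.1)
print(dcm_log_likelihood([4, 0, 0], alpha))  # bursty: higher log-likelihood
print(dcm_log_likelihood([2, 1, 1], alpha))  # spread out: lower
```

Intuitively, small alpha values make the posterior-predictive probability of a word jump once it has occurred, so a second occurrence of the same word becomes far more likely than under a multinomial, which is exactly the burstiness the abstract describes.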
