Effects of local-market radio ownership concentration on radio localism, the public interest, and listener opinions and use of local radio
The Telecommunications Act of 1996 and the ensuing radio ownership consolidation are blamed for harming radio localism and the public interest. Prior studies examined the effects attributed to consolidation on format diversity and other measures; however, none explored its influence on listener perceptions. The present research sought to determine the effects of local-market ownership concentration on listener opinions and use of radio, which are potentially indicative of stations' localism and public service, by surveying listeners in markets categorized by ownership concentration level. Findings suggest that concentration does not strongly influence perceptions; however, overall results indicate potentially negative consequences of local and national consolidation for the amounts of local music, news, and public-service programming; live-local programming; and station responsiveness. The findings suggest policy changes that could enhance radio localism.
Inducing a Grammar Without an Explicit Teacher: Incremental Distributed Prediction Feedback
A primary problem for a child learning her first language is that her ungrammatical utterances are rarely explicitly corrected. It has been argued that this dearth of negative evidence regarding the child's grammatical hypotheses makes it impossible for the child to induce the grammar of the language without substantial innate knowledge of some universal principles common to all natural grammars. However, recent connectionist models of language acquisition have employed a learning technique that circumvents the negative evidence problem. Moreover, this learning strategy is not limited to strictly connectionist architectures. What we call Incremental Distributed Prediction Feedback refers to when the learner simply listens to utterances in its environment and makes internal predictions on-line as to what elements of the grammar are more or less likely to immediately follow the current input. Once that subsequent input is received, those prediction contingencies (essentially, transitional probabilities) are slightly adjusted accordingly. Simulations with artificial grammars demonstrate that this learning strategy is faster and more realistic than depending on infrequent negative feedback to ungrammatical output. Incremental Distributed Prediction Feedback allows the learner to produce its own negative evidence from positive examples of the language by comparing incrementally predicted input with actual input.
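The prediction-feedback rule described above can be sketched as an incremental update of transitional probabilities. This is a minimal illustrative sketch, not the paper's actual simulation code; the function name and the learning-rate parameter are invented for illustration.

```python
from collections import defaultdict

def train_prediction_feedback(utterances, lr=0.1):
    """Incrementally adjust transitional probabilities from positive input only.

    After each token, the prediction for the observed next token is nudged up
    and every competing prediction is nudged down, so low-probability
    continuations serve as self-generated negative evidence -- no external
    correction is required.
    """
    # probs[w] maps each candidate next token to its predicted probability
    probs = defaultdict(lambda: defaultdict(float))
    vocab = {tok for utt in utterances for tok in utt}
    for utt in utterances:
        for prev, nxt in zip(utt, utt[1:]):
            for cand in vocab:
                target = 1.0 if cand == nxt else 0.0
                # move the prediction a small step toward the observed outcome
                probs[prev][cand] += lr * (target - probs[prev][cand])
    return probs

probs = train_prediction_feedback([["the", "dog", "runs"],
                                   ["the", "cat", "runs"]] * 50)
# After training, "runs" is strongly expected after "dog", while
# ungrammatical continuations such as "the" after "dog" stay near zero.
```

Comparing the learner's prediction against the actual next token is what lets positive examples alone drive both strengthening and weakening of grammatical hypotheses.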
Idiomatic syntactic constructions and language learning.
This article explores the influence of idiomatic syntactic constructions (i.e., constructions whose phrase structure rules violate the rules that underlie the construction of other kinds of sentences in the language) on the acquisition of phrase structure. In Experiment 1, participants were trained on an artificial language generated from hierarchical phrase structure rules. Some participants were given exposure to an idiomatic construction (IC) during training, whereas others were not. Under some circumstances, the presence of an idiomatic construction in the input aided learners in acquiring the phrase structure of the language. Experiment 2 provides a replication of the first experiment and extends the findings by showing that idiomatic constructions that strongly violate the predictive dependencies that define the phrase structure of the language do not aid learners in acquiring the structure of the language. Together, our data suggest that (a) idiomatic constructions aid learners in acquiring the phrase structure of a language by highlighting relevant structural elements in the language, and (b) such constructions are useful cues to learning to the extent that learners can keep their knowledge of the idiomatic construction separate from their knowledge of the rest of the language.
Learning and Long-Term Retention of Large-Scale Artificial Languages
Recovering discrete words from continuous speech is one of the first challenges facing language learners. Infants and adults can make use of the statistical structure of utterances to learn the forms of words from unsegmented input, suggesting that this ability may be useful for bootstrapping language-specific cues to segmentation. It is unknown, however, whether performance shown in small-scale laboratory demonstrations of “statistical learning” can scale up to allow learning of the lexicons of natural languages, which are orders of magnitude larger. Artificial language experiments with adults can be used to test whether the mechanisms of statistical learning are in principle scalable to larger lexicons. We report data from a large-scale learning experiment that demonstrates that adults can learn words from unsegmented input in much larger languages than previously documented and that they retain the words they learn for years. These results suggest that statistical word segmentation could be scalable to the challenges of lexical acquisition in natural language learning. (National Science Foundation (U.S.), NSF DDRIG #0746251.)
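The statistical cue at issue here is the transitional probability between adjacent syllables: within words it is high, across word boundaries it is low. A minimal sketch of segmentation at transitional-probability dips, assuming an idealized syllable stream (the function, threshold, and toy words are invented for illustration, not the experiment's materials):

```python
from collections import Counter

def segment_by_tp(syllables, threshold=0.75):
    """Place word boundaries where the syllable-to-syllable transitional
    probability P(next | current) dips below a threshold -- the classic
    statistical segmentation cue (illustrative sketch only)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        tp = pair_counts[(a, b)] / first_counts[a]  # estimate of P(b | a)
        if tp < threshold:            # low TP => likely word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Unsegmented stream built from three toy "words": pabiku, golatu, daropi.
stream = ["pa", "bi", "ku", "go", "la", "tu", "da", "ro", "pi",
          "pa", "bi", "ku", "da", "ro", "pi", "go", "la", "tu",
          "pa", "bi", "ku"]
print(segment_by_tp(stream))  # recovers pabiku, golatu, daropi in order
```

Scaling questions of the kind the abstract raises amount to asking how well such TP estimates can be maintained as the number of word types, and hence the number of syllable pairs to track, grows by orders of magnitude.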
Acquiring and processing verb argument structure: distributional learning in a miniature language
Adult knowledge of a language involves correctly balancing lexically-based and more language-general patterns. For example, verb argument structures may sometimes readily generalize to new verbs, yet with particular verbs may resist generalization. From the perspective of acquisition, this creates significant learnability problems, with some researchers claiming a crucial role for verb semantics in determining when generalization may and may not occur. Similarly, there has been debate regarding how verb-specific and more generalized constraints interact in sentence processing and on the role of semantics in this process. The current work explores these issues using artificial language learning. In three experiments using languages without semantic cues to verb distribution, we demonstrate that learners can acquire both verb-specific and verb-general patterns, based on distributional information in the linguistic input regarding each of the verbs as well as across the language as a whole. As with natural languages, these factors are shown to affect production, judgments, and real-time processing. We demonstrate that learners apply a rational procedure in determining their usage of these different input statistics and conclude by suggesting that a Bayesian perspective on statistical learning may be an appropriate framework for capturing our findings.
Modeling human performance in statistical word segmentation
The ability to discover groupings in continuous stimuli on the basis of distributional information is present across species and across perceptual modalities. We investigate the nature of the computations underlying this ability using statistical word segmentation experiments in which we vary the length of sentences, the amount of exposure, and the number of words in the languages being learned. Although the results are intuitive from the perspective of a language learner (longer sentences, less training, and a larger language all make learning more difficult), standard computational proposals fail to capture several of these results. We describe how probabilistic models of segmentation can be modified to take into account some notion of memory or resource limitations in order to provide a closer match to human performance. (National Science Foundation (U.S.), Grant BCS-0631518.)
Second Language Processing Shows Increased Native-Like Neural Responses after Months of No Exposure
Although learning a second language (L2) as an adult is notoriously difficult, research has shown that adults can indeed attain native language-like brain processing and high proficiency levels. However, it is important to then retain what has been attained, even in the absence of continued exposure to the L2—particularly since periods of minimal or no L2 exposure are common. This event-related potential (ERP) study of an artificial language tested performance and neural processing following a substantial period of no exposure. Adults learned to speak and comprehend the artificial language to high proficiency with either explicit, classroom-like, or implicit, immersion-like training, and then underwent several months of no exposure to the language. Surprisingly, proficiency did not decrease during this delay. Instead, it remained unchanged, and there was an increase in native-like neural processing of syntax, as evidenced by several ERP changes—including earlier, more reliable, and more left-lateralized anterior negativities, and more robust P600s, in response to word-order violations. Moreover, both the explicitly and implicitly trained groups showed increased native-like ERP patterns over the delay, indicating that such changes can hold independently of L2 training type. The results demonstrate that substantial periods with no L2 exposure are not necessarily detrimental. Rather, benefits may ensue from such periods of time even when there is no L2 exposure. Interestingly, both before and after the delay the implicitly trained group showed more native-like processing than the explicitly trained group, indicating that type of training also affects the attainment of native-like processing in the brain. Overall, the findings may be largely explained by a combination of forgetting and consolidation in declarative and procedural memory, on which L2 grammar learning appears to depend. 
The study has a range of implications and suggests a research program with potentially important consequences for second language acquisition and related fields.
Herpes simplex encephalitis is linked with selective mitochondrial damage: a post-mortem and in vitro study
Herpes simplex virus type-1 (HSV-1) encephalitis (HSE) is the most commonly diagnosed cause of viral encephalitis in western countries. Despite antiviral treatment, HSE remains a devastating disease with high morbidity and mortality. Improved understanding of pathogenesis may lead to more effective therapies. Mitochondrial damage has been reported during HSV infection in vitro. However, whether it occurs in the human brain and whether this contributes to the pathogenesis has not been fully explored. Minocycline, an antibiotic, has been reported to protect mitochondria and limit brain damage. Minocycline has not been studied in HSV infection. In the first genome-wide transcriptomic study of post-mortem human HSE brain tissue, we demonstrated a highly preferential reduction in mitochondrial genome (MtDNA) encoded transcripts in HSE cases (n = 3) compared to controls (n = 5). Brain tissue exhibited a significant inverse correlation for immunostaining between cytochrome c oxidase subunit 1 (CO1), a MtDNA encoded enzyme subunit, and HSV-1; with lower abundance for mitochondrial protein in regions where HSV-1 was abundant. Preferential loss of mitochondrial function, among MtDNA encoded components, was confirmed using an in vitro primary human astrocyte HSV-1 infection model. Dysfunction of cytochrome c oxidase (CO), a mitochondrial enzyme composed predominantly of MtDNA encoded subunits, preceded that of succinate dehydrogenase (composed entirely of nuclear encoded subunits). Minocycline treated astrocytes exhibited higher CO1 transcript abundance, sustained CO activity and cell viability compared to non-treated astrocytes. Based on observations from HSE patient tissue, this study highlights mitochondrial damage as a critical and early event during HSV-1 infection. We demonstrate minocycline preserves mitochondrial function and cell viability during HSV-1 infection. 
Minocycline, through its protection of mitochondria, offers a novel adjunctive therapeutic approach for limiting brain cell damage and potentially improving outcomes among HSE patients.
A tale of two theories: response to Fisher
1. Introduction
There are currently two theories about how children acquire a language. The first is generative grammar, according to which all human children innately possess a universal grammar, abstract enough to structure any language of the world. Acquisition then consists of two processes: (1) acquiring all the words, idioms, and quirky constructions of the particular language being learned (by 'normal' processes of learning); and (2) linking the particular language being learned to the abstract universal grammar. Because it is innate, universal grammar does not develop ontogenetically but is the same throughout the lifespan – this is the so-called continuity assumption (Pinker, 1984). This assumption allows generativists to use adult-like formal grammars to describe children's language and so to assume that the first time a child utters, for example, "I wanna play", she has an adult-like understanding of infinitival complement sentences and so can generate 'similar' infinitival complement sentences ad infinitum.