
    The past tense inflection project (PTIP): speeded past tense inflections, imageability ratings, and past tense consistency measures for 2,200 verbs

    Abstract: The processes involved in past tense verb generation have been central to models of inflectional morphology. However, the empirical support for such models has often been based on studies of accuracy in past tense verb formation on a relatively small set of items. We present the first large-scale study of past tense inflection (the Past Tense Inflection Project, or PTIP) that affords response time, accuracy, and error analyses in the generation of the past tense form from the present tense form for over 2,000 verbs. In addition to standard lexical variables (such as word frequency, length, and orthographic and phonological neighborhood), we have also developed new measures of past tense neighborhood consistency and verb imageability for these stimuli, and via regression analyses we demonstrate the utility of these new measures in predicting past tense verb generation. The PTIP can be used to further evaluate existing models, to provide well-controlled stimuli for new studies, and to uncover novel theoretical principles in past tense morphology.
    Keywords: Verb processing . Megastudy . Past tense inflection . Item-level variance . Verb consistency . Verb imageability
    A long-standing question in language acquisition and inflectional morphology is how individuals produce the past tense form of a verb. Past tense inflection (PTI), like spelling-to-sound conversion in English, is quasiregular, meaning that a set of generally applicable descriptive rules is useful for most verbs (e.g., add -ed to the stem form), but there are also some irregular forms (e.g., do-did) and subregularities (as in the eep-ept past tense family: sleep-slept, weep-wept, keep-kept, etc.). Indeed, past tense inflection has been a central focus of the debate between parallel distributed processing models and rule-based accounts. Although there has been extensive theoretical work in the area of past tense verb generation, experimental work examining response times (RTs) has been relatively limited.
    For example, in the stem inflection task, participants are asked to produce the past tense (real or hypothetical) of a target verb or novel nonword. Only a few previous studies of past tense verb inflection have used RT as a dependent variable (e.g., Joanisse and Seidenberg). One way to address the discrepancies among previous studies, as well as the limitations associated with factorial designs employing relatively few stimuli, is to sample a much larger set of items from the language. Megastudies include stimuli on the order of thousands, rather than 50 to 100, and allow for the effects of variables to be modeled continuously rather than categorically. In addition to providing a large database of response latencies and accuracies for past tense verb inflection, we also developed two new measures that are important to consider in past tense inflection: consistency and imageability. The consistency measure is similar in spirit to the spelling-to-sound consistency measure that has been well studied in visual word recognition research, reflecting the degree to which a verb's past tense form patterns with those of similar-sounding verbs. The second variable that we measured was imageability, which reflects the extent to which one is able to form a mental image of a word; many imageability norms are already available. The present study is based on 89 participants' accuracy and RTs for a past tense inflection task with 2,200 verbs. Each participant produced responses to 888 items. For each verb in the PTIP database, we included measures of RT, accuracy, and regularization errors (e.g., saying GRINDED for GRIND), along with the new imageability and consistency measures described above. The PTIP database is useful in examining the specific effects of predictor variables on RT and accuracy and allows for detailed item-level predictions.
    Behav Res (2013) 45:151-159
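One common way to operationalize a neighborhood consistency measure (a toy illustration, not necessarily the exact PTIP formula, which is computed over phonological codes for the full 2,200-verb set) is the proportion of a verb's same-rime neighbors that form their past tense the same way:

```python
# Hypothetical mini-lexicon for illustration only.
LEXICON = {
    "sleep": "slept", "keep": "kept", "weep": "wept",  # -ept subregularity
    "beep": "beeped", "peep": "peeped",                # regular -ed
}

def past_pattern(stem, past):
    """Crudely classify the inflection pattern of a stem/past pair."""
    return "regular" if past == stem + "ed" else "irregular"

def consistency(stem, lexicon):
    """Proportion of same-rime neighbors ('friends') that share the
    target verb's past tense pattern (the verb itself excluded)."""
    rime = stem[-3:]  # toy stand-in for a phonological rime code
    target = past_pattern(stem, lexicon[stem])
    neighbors = [s for s in lexicon if s != stem and s[-3:] == rime]
    if not neighbors:
        return 1.0
    friends = [s for s in neighbors
               if past_pattern(s, lexicon[s]) == target]
    return len(friends) / len(neighbors)

print(consistency("sleep", LEXICON))  # 2 of 4 -eep neighbors are irregular: 0.5
print(consistency("beep", LEXICON))   # 1 of 4 -eep neighbors is regular: 0.25
```

On this sketch, verbs from dense, uniform past tense families score near 1, while verbs whose neighbors split between patterns score lower, which is the kind of graded item-level predictor the regression analyses exploit.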
    It is available as supplementary materials with this article for researchers who plan to examine other theoretical questions about past tense inflection, or who hope to select well-controlled and well-examined stimuli for new studies. These data will serve as both a reference and an impetus for further research in the domain of past tense inflection.
    Experiment 1. The first experiment was conducted in order to collect imageability rating norms for the verbs in the PTIP database. Method. Participants: A group of 218 participants were recruited via Amazon's Mechanical Turk (AMT). Materials: The 2,200 words from the PTIP database (see below), plus another 112 words for use in another study, were divided into eight lists of 289 items each. The eight lists were presented as separate jobs in AMT. Procedure: Each participant completed one list of the rating task, which was presented in Adobe Flash and appeared after a consent screen in the AMT job description. The instructions were the same as those used in prior imageability norming work. Results: The ratings were aggregated across participants for each item (excluding "do not know" responses), so that one mean imageability value was calculated for each verb. These values were used in Experiment 2 (see below). The mean rating across all verbs was 4.28 (SD = 0.92), and the mean RT across all verbs was 3,191 ms (SD = 1,695). The overall split-half reliability was r = .80, p < .001.
    Experiment 2. Method. Participants: A group of 113 native English-speaking college students from the Washington University subject pool participated in the study. After eliminating extreme outliers (less than 80% accuracy overall; four participants) or participants whose data were subject to recording error (missing sound files from which to code accuracy; 20 participants), 89 participants contributed to the final database.
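The item-level aggregation and split-half reliability reported for the norming study can be sketched as follows; the data are synthetic and the exact PTIP procedure (e.g., how raters were assigned to halves) may differ:

```python
import random
random.seed(0)

# Synthetic stand-in: 100 items rated by 20 raters on a 1-7 scale.
n_items, n_raters = 100, 20
true_imageability = [random.uniform(1, 7) for _ in range(n_items)]
# Each rating = the item's true value plus rater noise, clipped to the scale.
ratings = [[min(7, max(1, mu + random.gauss(0, 1.2))) for _ in range(n_raters)]
           for mu in true_imageability]

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Per-item means over all raters give the published norms; the split-half
# reliability correlates item means from even- vs. odd-numbered raters.
item_means = [mean(r) for r in ratings]
half_a = [mean(r[0::2]) for r in ratings]
half_b = [mean(r[1::2]) for r in ratings]
print(round(pearson(half_a, half_b), 2))  # high when ratings are reliable
```

A full treatment would typically also apply the Spearman-Brown correction to estimate the reliability of the full-length aggregate.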

    Non stationary Einstein-Maxwell fields interacting with a superconducting cosmic string

    Non-stationary cylindrically symmetric exact solutions of the Einstein-Maxwell equations are derived as single-soliton perturbations of a Levi-Civita metric, by an application of the Alekseev inverse scattering method. We show that the metric derived by L. Witten, interpreted as describing the electrogravitational field of a straight, stationary, conducting wire, may be recovered in the limit of a 'wide' soliton. This leads to the possibility of interpreting the solitonic solutions as representing a non-stationary electrogravitational field exterior to, and interacting with, a thin, straight, superconducting cosmic string. We give a detailed discussion of the restrictions that arise when appropriate energy and regularity conditions are imposed on the matter and fields comprising the string, considered as 'source', the most important being that this 'source' must necessarily have a non-vanishing minimum radius. We show that, as a consequence, it is not possible, except in the stationary case, to uniquely assign a current to the source from a knowledge of the electrogravitational fields outside the source. A discussion of the asymptotic properties of the metrics, the physical meaning of their curvature singularities, as well as that of some of the metric parameters, is also included. Comment: 14 pages, no figures (RevTeX).

    Nonsingular deformations of singular compactifications, the cosmological constant, and the hierarchy problem

    We consider deformations of the singular "global cosmic string" compactifications, known to naturally generate exponentially large scales. The deformations are obtained by allowing a constant-curvature metric on the brane and correspond to a choice of integration constant. We show that there exists a unique value of the integration constant that gives rise to a nonsingular solution. The metric on the brane is dS_4 with an exponentially small value of the expansion parameter. We derive an upper bound on the brane cosmological constant. We find and investigate more general singular solutions ("dilatonic global string" compactifications) and show that they can have nonsingular deformations. We give an embedding of these solutions in type IIB supergravity. There is only one class of supersymmetry-preserving singular dilatonic solutions. We show that they do not have nonsingular deformations of the type considered here. Comment: Final version to appear in Class. Quantum Grav.; references and concluding remarks added, typos corrected. 18 pages, LaTeX, 1 PS figure.

    Lack of Relationship Between Chronic Upper Abdominal Symptoms and Gastric Function in Functional Dyspepsia

    To determine the relationship between gastric function and upper abdominal sensations, we studied 60 functional dyspepsia (FD) patients (43 female). All patients underwent three gastric function tests: the 13C-octanoic acid gastric emptying test, three-dimensional ultrasonography (proximal and distal gastric volume), and the nutrient drink test. Upper abdominal sensations experienced in daily life were scored using questionnaires. Impaired proximal gastric relaxation (23%) and delayed gastric emptying (33%) were highly prevalent in FD patients; however, there was only a small overlap between the two pathophysiologic disorders (5%). No relationship was found between chronic upper abdominal symptoms and gastric function (proximal gastric relaxation, gastric emptying rate, or drinking capacity) (all P > 0.01). Proximal gastric relaxation and gastric emptying rate had no effect on maximum drinking capacity (P > 0.01). The lack of relationship between chronic upper abdominal sensations and gastric function calls into question the role of these pathophysiologic mechanisms in the generation of symptoms.

    Linear low-dose extrapolation for noncancer health effects is the exception, not the rule

    The nature of the exposure-response relationship has a profound influence on risk analyses. Several arguments have been proffered as to why all exposure-response relationships, for both cancer and noncarcinogenic end-points, should be assumed to be linear at low doses. We focused on three arguments that have been put forth for noncarcinogens. First, the general "additivity-to-background" argument proposes that if an agent enhances an already existing disease-causing process, then even small exposures increase disease incidence in a linear manner. This holds only for specific modes of action with properties that would not be expected for most noncancer effects. Second, the "heterogeneity in the population" argument states that variations in sensitivity among members of the target population tend to "flatten out and linearize" the exposure-response curve, but heterogeneity actually tends to broaden, not linearize, the dose-response relationship. Third, it has been argued that a review of epidemiological evidence shows linear or no-threshold effects at low exposures in humans, despite nonlinear exposure-response in the experimental dose range in animal testing for similar endpoints. It is more likely that this is attributable to exposure measurement error than to a true non-threshold association. Assuming that every chemical is toxic at high exposures and linear at low exposures does not comport with modern-day scientific knowledge of biology. There is no compelling evidence-based justification for general low-exposure linearity; rather, case-specific mechanistic arguments are needed.
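The "broaden, not linearize" point can be illustrated numerically. If each individual responds only above a personal threshold and thresholds vary across the population, the population incidence curve is the CDF of the (assumed, here lognormal) threshold distribution: smoothed, but still collapsing much faster than proportionally at doses well below the typical threshold.

```python
import math, random
random.seed(1)

# Assumed illustrative model: individual i shows the effect iff dose > t_i,
# with thresholds t_i lognormal (median 10, log-sd 0.5). Parameters are
# arbitrary and for illustration only.
thresholds = [math.exp(math.log(10) + 0.5 * random.gauss(0, 1))
              for _ in range(100_000)]

def incidence(dose):
    """Population incidence = fraction of thresholds below the dose."""
    return sum(t < dose for t in thresholds) / len(thresholds)

# A linear low-dose model predicts that halving the dose halves the
# incidence; here incidence falls far more steeply than that.
for dose in (20.0, 2.0, 1.0, 0.5):
    print(dose, incidence(dose))
```

With these assumptions, going from dose 2 to dose 1 cuts incidence by orders of magnitude rather than by half: the curve is broadened by heterogeneity yet remains effectively thresholded at low dose.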

    Toxicity Testing in the 21st Century: Defining New Risk Assessment Approaches Based on Perturbation of Intracellular Toxicity Pathways

    The approaches to quantitatively assessing the health risks of chemical exposure have not changed appreciably in the past 50 to 80 years, the focus remaining on high-dose studies that measure adverse outcomes in homogeneous animal populations. This expensive, low-throughput approach relies on conservative extrapolations to relate animal studies to much lower-dose human exposures and is of questionable relevance to predicting risks to humans at their typical low exposures. It makes little use of a mechanistic understanding of the mode of action by which chemicals perturb biological processes in human cells and tissues. An alternative vision, proposed by the U.S. National Research Council (NRC) report Toxicity Testing in the 21st Century: A Vision and a Strategy, called for moving away from traditional high-dose animal studies to an approach based on perturbation of cellular responses using well-designed in vitro assays. Central to this vision are (a) "toxicity pathways" (the innate cellular pathways that may be perturbed by chemicals) and (b) the determination of chemical concentration ranges where those perturbations are likely to be excessive, thereby leading to adverse health effects if present for a prolonged duration in an intact organism. In this paper we briefly review the original NRC report and responses to that report over the past 3 years, and discuss how the change in testing might be achieved in the U.S. and in the European Union (EU). EU initiatives in developing alternatives to animal testing of cosmetic ingredients have run very much in parallel with the NRC report. Moving from current practice to the NRC vision would require using prototype toxicity pathways to develop case studies showing the new vision in action. In this vein, we also discuss how the proposed strategy for toxicity testing might be applied to the toxicity pathways associated with DNA damage and repair.

    A modern network approach to revisiting the positive and negative affective schedule (PANAS) construct validity

    Introduction: The factor structure of the Positive and Negative Affective Schedule (PANAS) is still a topic of debate. There are several reasons why using Exploratory Graph Analysis (EGA) for scale validation is advantageous and can help understand and resolve conflicting results in the factor analytic literature. Objective: The main objective of the present study was to advance knowledge regarding the factor structure underlying the PANAS scores by utilizing the different functionalities of the EGA method. EGA was used to (1) estimate the dimensionality of the PANAS scores, (2) establish the stability of the dimensionality estimate and of the item assignments to the dimensions, and (3) assess the impact of potential redundancies across item pairs on the dimensionality and structure of the PANAS scores. Method: This assessment was carried out across two studies that included two large samples of participants. Results and Conclusion: In sum, the results are consistent with a two-factor oblique structure.
    Affiliations: Flores Kanter, Pablo Ezequiel (Universidad Empresarial Siglo XXI, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina); Garrido, Luis Eduardo (Pontificia Universidad Católica Madre y Maestra, República Dominicana); Moretti, Luciana Sofía (Universidad Empresarial Siglo XXI, Argentina; Pontificia Universidad Católica Madre y Maestra, República Dominicana; Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina); Medrano, Leonardo (Universidad Empresarial Siglo XXI, Argentina; Pontificia Universidad Católica Madre y Maestra, República Dominicana; Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina).
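The core idea of EGA is to estimate a network among items and count the communities a community-detection algorithm finds. Real EGA uses a regularized partial-correlation network (e.g., EBICglasso) with the walktrap algorithm; the sketch below substitutes a much simpler stand-in (a thresholded correlation network whose connected components are taken as "dimensions") on synthetic two-factor data:

```python
import random
random.seed(42)

# Synthetic data: 6 "positive affect" and 6 "negative affect" items,
# each loading 0.8 on its own orthogonal latent factor.
n, items_per_factor = 500, 6
p = 2 * items_per_factor
data = []
for _ in range(n):
    f1, f2 = random.gauss(0, 1), random.gauss(0, 1)
    row = [0.8 * f1 + 0.6 * random.gauss(0, 1) for _ in range(items_per_factor)]
    row += [0.8 * f2 + 0.6 * random.gauss(0, 1) for _ in range(items_per_factor)]
    data.append(row)

def corr(i, j):
    """Pearson correlation between item columns i and j."""
    xi = [row[i] for row in data]
    xj = [row[j] for row in data]
    mi, mj = sum(xi) / n, sum(xj) / n
    cov = sum((a - mi) * (b - mj) for a, b in zip(xi, xj))
    vi = sum((a - mi) ** 2 for a in xi) ** 0.5
    vj = sum((b - mj) ** 2 for b in xj) ** 0.5
    return cov / (vi * vj)

# Network: connect items whose absolute correlation exceeds a cutoff.
CUTOFF = 0.30
adj = {i: {j for j in range(p) if j != i and abs(corr(i, j)) > CUTOFF}
       for i in range(p)}

def n_components(adj):
    """Count connected components (the estimated number of dimensions)."""
    seen, comps = set(), 0
    for start in adj:
        if start in seen:
            continue
        comps += 1
        stack = [start]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(adj[v] - seen)
    return comps

print(n_components(adj))  # expected: 2 dimensions
```

With strong within-factor correlations (about .64 here) and near-zero cross-factor correlations, the network splits into two clusters, mirroring the two-factor result the study reports for the PANAS.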

    POLICY PREFERENCE FORMATION IN LEGISLATIVE POLITICS: STRUCTURES, ACTORS, AND FOCAL POINTS

    This dissertation introduces and tests a model of policy preference formation in legislative politics. Emphasizing a dynamic relationship between structure, agent, and decision-making process, it ties the question of policy choice to the dimensionality of the normative political space and the strategic actions of parliamentary agenda-setters. The model proposes that structural factors, such as ideology, shape policy preferences to the extent that legislative specialists successfully link them to specific policy proposals through the provision of informational focal points. These focal points shift attention toward particular aspects of a legislative proposal, thus shaping the dominant interpretation of its content and consequences and, in turn, individual-level policy preferences. The propositions of the focal point model are tested empirically with data from the European Parliament (EP), using both qualitative methods (interview data, content analyses of parliamentary debates) and quantitative methods (multinomial logit regression analyses of roll-call votes). The findings have implications for our understanding of politics and law-making in the European Union and for the study of legislative decision-making more generally.