
    How many high frequency words of English do Japanese university freshmen 'know'?

    Knowledge of high-frequency vocabulary is essential to language fluency. However, there is more to knowing a word than simply knowing its meaning. Full vocabulary depth knowledge includes not only semantics, but also knowledge of a word's phonology, orthography, collocations, word parts, grammar, constraints on use, concepts and referents, and associations. This research examined the extent of knowledge that a group of Japanese university freshmen have of high-frequency English vocabulary. First, students judged how well they knew each item on a list of 3,000 high-frequency English lemmas. Then, items reported as known were examined to determine whether they functioned as cognates between English and Japanese. Finally, a sample of the items marked as known was tested across the full range of vocabulary depth knowledge. The results showed that while the students are familiar with a majority of high-frequency English vocabulary, their depth of knowledge of these items is shallow. These results should help guide teachers as to which aspects of vocabulary depth they need to concentrate on to make language learning more efficient.

    Is Intuition Enough When Choosing Vocabulary?

    This paper outlines an analytical study whose purpose is to examine and critique the appropriateness of lexical choice in current mass-market language textbooks for Japanese students. This study proceeds from the author's extensive use of textbooks that contained a large amount of low-frequency vocabulary of questionable usefulness. This research project examined the tokens of the Cover to Cover textbook series, determining what percentage fell into the high-frequency vocabulary realm, which words occurred as loan words in Japanese, and which words could be considered known to a majority of incoming university freshmen in Japan. The results showed that the writers' intuition and experience were largely sufficient, in that a large majority of tokens were high-frequency items, but many low-frequency items, loan words, and already-known words served as keywords, thus diminishing their educational value. Non-high-frequency items were then addressed through various treatments to improve the overall validity of the texts.
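
    The core computation described above, the share of a textbook's tokens that falls within a high-frequency band, can be sketched in a few lines of Python. The word-list format, file names, and tokenizer below are illustrative assumptions, not the study's actual materials, and a real analysis would also lemmatize inflected forms (e.g., "running" to "run") before matching, since a surface-form lookup understates coverage.

    # Sketch: estimate what percentage of a textbook's tokens fall into a
    # high-frequency band. The word-list format, file names, and tokenizer
    # are hypothetical stand-ins for the study's actual materials.
    import re

    def load_word_list(path):
        # Assumed format: one lemma per line, ordered by frequency rank.
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}

    def tokenize(text):
        return re.findall(r"[a-z']+", text.lower())

    high_freq = load_word_list("high_frequency_lemmas.txt")           # assumed file
    text = open("cover_to_cover_unit1.txt", encoding="utf-8").read()  # assumed file
    toks = tokenize(text)

    in_band = sum(1 for t in toks if t in high_freq)
    print(f"{in_band}/{len(toks)} tokens "
          f"({100.0 * in_band / len(toks):.1f}%) are high-frequency")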

    The elusive perfect textbook: Cultural sensitivity as a factor in materials selection/modification/creation

    A good textbook can provide a strong base from which to build a course, so instructors should exercise prudence in its selection. One of the many factors that should be considered when making a selection is cultural sensitivity. This study examines the textbooks suggested for Advanced English II Level 5 in the fall 2008 semester at Kansai Gaidai University with regard to cultural sensitivity. It is assumed that a universally ideal textbook does not exist due to the multiple variables that every teacher, group of learners, or course creates. Therefore, the findings are taken one step further to suggest research paths and methods that teachers can use to remedy any shortcomings they find in their own textbooks. The results showed that all of the textbooks had deficiencies that point to trends instructors should be aware of, although some textbooks came closer to the ideal than others. Also of note was content that should be deemed inappropriate for Japanese learners. More importantly, this study sheds light on what instructors can look for when scanning textbooks for cultural sensitivity, thus filling a gap in textbook evaluation that stems from instructor time constraints. Larger implications include points of interest for publishers and materials writers, and self-reflection for instructors who, unbeknownst to themselves, may have been exposing their students to cultural colonialism through the materials they use.

    On the creation of a learner corpus for the purpose of error analysis

    Learners with similar backgrounds have a tendency to make the same types of errors in L2 production. Such errors can be viewed as having the potential to inform pedagogical methodologies, in that they shed light on which features of the L2 are the most problematic for particular learners. Analyzing such errors also provides insight into why these learners tend to make these errors, thus furthering our understanding of how second languages are acquired. This study aimed to create a learner corpus for the purpose of error analysis, to discover which errors occurred most frequently, and to examine why such errors occurred. Various CALL (computer-assisted language learning) methodologies were utilized to create an approximately 85,000-word learner corpus. Errors were corrected and classified, and error analysis was conducted on the most frequent errors found. This analysis revealed that interference from the learners' L1 was the source of the majority of errors, while cultural and metalinguistic knowledge also proved to be at fault for some particular errors. The results of this study should prove valuable for English language teachers and researchers in Japan, in that the most frequent English errors that Japanese learners produce were quantified and discussed. Thus, teachers and researchers can be cognizant of which errors prove to be the most troublesome, and can better understand why they occur in order to help Japanese learners avoid them.
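
    As a concrete illustration of the tallying step, the sketch below counts error tags in an annotated corpus to surface the most frequent categories. The inline tag format <err type="...">...</err> is an assumption made for this example; it is not necessarily the annotation scheme used in the study.

    # Sketch: tally error categories in an error-tagged learner corpus.
    # The <err type="...">...</err> inline format is assumed for
    # illustration; real learner corpora use various annotation schemes.
    import re
    from collections import Counter

    def count_errors(annotated_text):
        return Counter(re.findall(r'<err type="([^"]+)">', annotated_text))

    sample = ('Yesterday I <err type="verb-tense">go</err> to '
              '<err type="article">a</err> school with my friend.')
    for error_type, n in count_errors(sample).most_common():
        print(error_type, n)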

    In-Store Evaluation of Consumer Willingness to Pay for “Farm-Raised” Pre-Cooked Roast Beef: A Case Study

    A choice-based conjoint experiment was used to examine consumer willingness to pay for a farm-raised pre-cooked roast beef product. Consumers were contacted in a grocery store and provided a sample of the pre-cooked product. Findings indicate there is a small but statistically significant willingness-to-pay premium for the farm-raised product, suggesting that some product differentiation may result in higher prices for these products. The study outlines an approach to marketing research.
    Keywords: beef, conjoint, convenience foods, experiments, in-store tests, surveys, Livestock Production/Industries, Marketing
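
    For readers unfamiliar with choice-based conjoint analysis, a willingness-to-pay premium is typically recovered as the negative ratio of an attribute's conditional-logit coefficient to the price coefficient. A minimal sketch with invented coefficients (these are not the estimates reported in the study):

    # Sketch: willingness-to-pay from conditional-logit estimates.
    # WTP for an attribute = -(attribute coefficient) / (price coefficient).
    # The coefficient values are invented for illustration only.
    beta_farm_raised = 0.42   # utility of the "farm-raised" label (assumed)
    beta_price = -0.85        # marginal utility of price in dollars (assumed)

    wtp_premium = -beta_farm_raised / beta_price
    print(f"Estimated WTP premium for farm-raised: ${wtp_premium:.2f}")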

    Corpus Data or Teacher Intuition: Which is More Valuable when Choosing Vocabulary to Teach to Young ESL Learners?

    This study compared the value of corpus data versus teacher intuition when selecting vocabulary to teach to young ESL learners. It revealed that vocabulary chosen using teacher intuition is mostly low-frequency, but that such items are preferable because they have high imageability. It concluded that, for young learners, a combination of 500 high-imageability words chosen using native English speaker intuition and the most frequent 500 words of English is preferable to teaching the most frequent 1,000 words of English. The reason for this was the minimal gain in text coverage that the second most frequent 500 words of English provided in comparison to words chosen with intuition, which had high imageability and thus a lower learning burden. This study showed that such an approach strikes an ideal balance between the practicality of pedagogical goals and the cost/benefit value of vocabulary choices.
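
    The cost/benefit argument above reduces to a text-coverage comparison between two candidate 1,000-word syllabi. The sketch below makes that comparison concrete; the word sets and sample sentence are tiny invented placeholders, not the study's word lists or texts.

    # Sketch: compare text coverage of two candidate 1,000-word syllabi:
    # (a) the most frequent 1,000 words of English, versus
    # (b) the most frequent 500 words plus 500 high-imageability words
    #     chosen with native-speaker intuition.
    # All word sets and the sample sentence are invented placeholders.
    import re

    def coverage(toks, word_set):
        return 100.0 * sum(t in word_set for t in toks) / len(toks)

    freq_500 = {"the", "a", "and", "to", "go", "school", "my", "with"}
    next_500 = {"however", "system", "provide"}        # ranks 501-1000
    intuition_500 = {"elephant", "rainbow", "pizza"}   # high imageability

    sentence = "the elephant and the rainbow go to my school"
    toks = re.findall(r"[a-z']+", sentence.lower())

    print(f"Top 1,000 by frequency:  {coverage(toks, freq_500 | next_500):.1f}%")
    print(f"Top 500 + intuition 500: {coverage(toks, freq_500 | intuition_500):.1f}%")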

    Is native speaker intuition reliable for high-frequency context creation?

    This study determined whether native speaker intuition could be relied upon to produce contextual content that mostly fell into what is considered high-frequency vocabulary. Native speakers wrote over 160,000 tokens' worth of example sentences for high-frequency multi-word units derived from a corpus. The resulting database was examined to determine whether the content added by the native speakers mostly stayed within the high-frequency realm. Results showed that not only did the vast majority of the native speakers' tokens fall into the high-frequency realm, but the percentage that fell into the high-frequency realm dropped by only 0.84 percent in comparison to the multi-word units alone, despite the large amount of data being added. This study highlighted how the intuition of experienced ESL practitioners can be relied upon to produce high-frequency contextual content.
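
    The 0.84-percent figure amounts to a before/after comparison of the same coverage statistic, computed once over the multi-word units alone and once over the units plus their example sentences. A sketch of that comparison follows; the word list and texts are invented, not the study's database.

    # Sketch: measure how adding native-speaker example sentences changes
    # the share of tokens that are high-frequency. The word list and the
    # texts are invented placeholders, not the study's database.
    import re

    HIGH_FREQ = {"take", "care", "of", "look", "after", "i", "will",
                 "your", "while", "you", "are", "away"}

    def hf_share(text):
        toks = re.findall(r"[a-z']+", text.lower())
        return 100.0 * sum(t in HIGH_FREQ for t in toks) / len(toks)

    mwus = "take care of look after"   # multi-word units alone
    with_context = mwus + " I will look after your dog while you are away"

    before, after = hf_share(mwus), hf_share(with_context)
    print(f"MWUs alone: {before:.2f}%  with context: {after:.2f}%  "
          f"drop: {before - after:.2f} points")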

    Measurement of gut permeability using fluorescent tracer agent technology

    The healthy gut restricts macromolecular and bacterial movement across tight junctions, while increased intestinal permeability accompanies many intestinal disorders. Dual sugar absorption tests, which measure intestinal permeability in humans, present challenges. Therefore, we asked if enterally administered fluorescent tracers could ascertain mucosal integrity, because transcutaneous measurement of differentially absorbed molecules could enable specimen-free evaluation of permeability. We induced small bowel injury in rats using high-dose (15 mg/kg), intermediate-dose (10 mg/kg), and low-dose (5 mg/kg) indomethacin. Then, we compared urinary ratios of the enterally administered fluorescent tracers MB-402 and MB-301 to urinary ratios of the sugar tracers lactulose and rhamnose. We also tested the ability of transcutaneous sensors to measure the ratios of absorbed fluorophores. Urinary fluorophore and sugar ratios reflect gut injury in an indomethacin dose-dependent manner. The fluorophores generated smooth curvilinear ratio trajectories with wide dynamic ranges. The more chaotic sugar ratios had narrower dynamic ranges. Fluorophore ratios measured through the skin distinguished indomethacin-challenged rats from same-day control rats. Enterally administered fluorophores can identify intestinal injury in a rat model. Fluorophore ratios are measurable through the skin, obviating drawbacks of dual sugar absorption tests. Pending validation, this technology should be considered for human use.

    Perilipin regulates the thermogenic actions of norepinephrine in brown adipose tissue

    In response to cold, norepinephrine (NE)-induced triacylglycerol hydrolysis (lipolysis) in adipocytes of brown adipose tissue (BAT) provides fatty acid substrates to mitochondria for heat generation (adaptive thermogenesis). NE-induced lipolysis is mediated by protein kinase A (PKA)-dependent phosphorylation of perilipin, a lipid droplet-associated protein that is the major regulator of lipolysis. We investigated the role of perilipin PKA phosphorylation in BAT NE-stimulated thermogenesis using a novel mouse model in which a mutant form of perilipin, lacking all six PKA phosphorylation sites, is expressed in adipocytes of perilipin knockout (Peri KO) mice. Here, we show that despite a normal mitochondrial respiratory capacity, NE-induced lipolysis is abrogated in the interscapular brown adipose tissue (IBAT) of these mice. This lipolytic constraint is accompanied by a dramatic blunting (∼70%) of the in vivo thermal response to NE. Thus, in the presence of perilipin, PKA-mediated perilipin phosphorylation is essential for NE-dependent lipolysis and full adaptive thermogenesis in BAT. In IBAT of Peri KO mice, increased basal lipolysis attributable to the absence of perilipin is sufficient to support a rapid NE-stimulated temperature increase (∼3.0°C) comparable to that in wild-type mice. This observation suggests that one or more NE-dependent mechanisms downstream of perilipin phosphorylation are required to initiate and/or sustain the IBAT thermal response.

    Comparison of Benefit-Risk Assessment Methods for Prospective Monitoring of Newly Marketed Drugs: A Simulation Study

    Objectives: To compare benefit-risk assessment (BRA) methods for determining whether and when sufficient evidence exists to indicate that one drug is favorable over another in prospective monitoring.
    Methods: We simulated prospective monitoring of a new drug (A) versus an alternative drug (B) with respect to two beneficial and three harmful outcomes. We generated data for 1000 iterations of six scenarios and applied four BRA metrics: number needed to treat and number needed to harm (NNT|NNH), incremental net benefit (INB) with maximum acceptable risk, INB with relative-value–adjusted life-years, and INB with quality-adjusted life-years. We determined the proportion of iterations in which the 99% confidence interval for each metric included and excluded the null, and we calculated mean time to alerting.
    Results: With no true difference in any outcome between drugs A and B, the proportion of iterations including the null was lowest for INB with relative-value–adjusted life-years (64%) and highest for INB with quality-adjusted life-years (76%). When drug A was more effective and the drugs were equally safe, all metrics indicated net favorability of A in more than 70% of the iterations. When drug A was safer than drug B, NNT|NNH had the highest proportion of iterations indicating net favorability of drug A (65%). Mean time to alerting was similar among methods across the six scenarios.
    Conclusions: BRA metrics can be useful for identifying net favorability when applied to prospective monitoring of a new drug versus an alternative drug. INB-based approaches performed similarly to one another and outperformed unweighted NNT|NNH approaches. Time to alerting was similar across approaches.
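
    To make the unweighted and weighted metrics concrete, the sketch below computes NNT|NNH and a simple incremental net benefit from invented event rates. The rates and weights are assumptions for illustration; the paper's INB variants differ in how the outcome weights are derived, and its 99% confidence-interval machinery is not reproduced here.

    # Sketch: two of the benefit-risk metrics compared in the study,
    # computed from invented event probabilities for drugs A and B.
    # NNT = 1 / (benefit-rate difference); NNH = 1 / (harm-rate difference).
    # INB here is a simple weighted sum of outcome differences; the paper's
    # variants (maximum acceptable risk, relative-value- and quality-adjusted
    # life-years) differ in how the weights are derived.
    p_benefit = {"A": 0.30, "B": 0.22}   # assumed beneficial-outcome rates
    p_harm = {"A": 0.05, "B": 0.08}      # assumed harmful-outcome rates

    arr_benefit = p_benefit["A"] - p_benefit["B"]   # absolute benefit gain
    arr_harm = p_harm["B"] - p_harm["A"]            # absolute harm reduction

    nnt = 1 / arr_benefit   # patients treated with A per extra benefit
    nnh = 1 / arr_harm      # A is also safer here, so this is the number
                            # needed to treat with A to avoid one harm

    w_benefit, w_harm = 1.0, 2.0        # assumed relative outcome weights
    inb = w_benefit * arr_benefit + w_harm * arr_harm

    print(f"NNT={nnt:.1f}, NNH-equivalent={nnh:.1f}, INB={inb:.3f} (A vs B)")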