
    The relationship between language proficiency and attentional control in Cantonese-English bilingual children: Evidence from Simon, Simon Switching, and working memory tasks

    By administering Simon, Simon switching, and operation-span working memory tasks to Cantonese-English bilingual children who varied in their first-language (L1, Cantonese) and second-language (L2, English) proficiencies, as quantified by standardized vocabulary test performance, the current study examined the effects of L1 and L2 proficiency on attentional control performance. Apart from mean performance, we conducted ex-Gaussian analyses to capture the modal and positive-tail components of participants' reaction time distributions in the Simon and Simon switching tasks. Bilinguals' L2 proficiency was associated with higher scores in the operation span task, a shift of reaction time distributions in incongruent relative to congruent trials (Simon effect in μ), and the tail size of reaction time distributions (τ) regardless of trial type in the Simon task. Bilinguals' L1 proficiency, which was strongly associated with participants' age, showed similar results, except that it was not associated with the Simon effect in μ. In contrast, neither bilinguals' L1 nor L2 proficiency modulated the global or local switch cost in the Simon switching task. After accounting for potential cognitive maturation by partialling out participants' age, only (a) scores in the working memory task, (b) RT in incongruent trials, and (c) the Simon effect in μ in the Simon task could still be predicted by bilinguals' L2 proficiency. Overall, the current findings suggest that bilingual children's L2 proficiency was associated with their conflict resolution and working memory capacity, but not goal maintenance or task-set switching, when they performed cognitive tasks that demanded attentional control. This was not entirely consistent with findings from college-age bilinguals reported in previous studies.
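The ex-Gaussian analysis above models each reaction time as a Gaussian component (mode μ, spread σ) plus an exponential tail (mean τ), so μ tracks shifts of the whole distribution while τ tracks the long slow tail. As an illustration only (the study's actual fitting procedure is not specified here), a minimal stdlib-only method-of-moments sketch:

```python
import math
import random

def exgauss_sample(mu, sigma, tau, n, rng):
    # Each RT = Gaussian(mu, sigma) component + exponential tail with mean tau
    return [rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau) for _ in range(n)]

def exgauss_moments(rts):
    # Method-of-moments estimates: for an ex-Gaussian,
    # mean = mu + tau, variance = sigma^2 + tau^2, third central moment = 2*tau^3
    n = len(rts)
    mean = sum(rts) / n
    var = sum((x - mean) ** 2 for x in rts) / n
    m3 = sum((x - mean) ** 3 for x in rts) / n
    tau = max(m3 / 2.0, 0.0) ** (1.0 / 3.0)
    sigma = math.sqrt(max(var - tau ** 2, 0.0))
    return mean - tau, sigma, tau
```

In practice maximum-likelihood fitting (e.g. via `scipy.stats.exponnorm`) is more common than moments, but the moment identities above make the μ/τ decomposition concrete.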

    Can the Testing Effect for General Knowledge Facts Be Influenced by Distraction due to Divided Attention or Experimentally Induced Anxious Mood?

    Studies on the testing effect have shown that a practice test on study materials leads to better performance in a final test than restudying the materials for the same amount of time. Two experiments were conducted to test how distraction, as triggered by divided attention or experimentally induced anxious mood in the practice phase, could modulate the benefit of testing (vs. restudying) on the learning of interesting and boring general knowledge facts. Two individual difference factors (trait test anxiety and working memory (WM) capacity) were measured. Under divided attention, participants restudied or recalled the missing information in visually presented general knowledge facts while judging whether auditorily presented items were from a pre-specified category. To experimentally induce anxious mood, we instructed participants to view and interpret negative pictures with an anxious music background before and during the practice phase. Immediate and two-day delayed tests were given. Regardless of item type (interesting or boring) or retention interval, the testing effect was not significantly affected by divided (vs. full) attention or anxious (vs. neutral) mood. These results remained unchanged after taking into account the influences of participants' trait test anxiety and WM capacity. However, when analyses were restricted to study materials learnt in the divided attention condition while participants responded accurately to the concurrent distracting task, the testing effect was stronger in the divided attention condition than in the full attention condition. Contrary to previous studies (e.g., Tse and Pu, 2012), there was no WM capacity × trait test anxiety interaction in the overall testing effect. Theoretical and practical implications of these findings are discussed.

    Reproduktionsmanagement in Milchviehbetrieben (Reproduction Management on Dairy Farms)

    This study provides a comprehensive characterization of current management methods of dairy farms, focusing on herd fertility. Relations of management factors to fertility and milk performance are analyzed following a bio-social approach. In 2007 a questionnaire survey including face-to-face interviews and direct observations was conducted in 84 East German dairy farms, mostly located in Brandenburg. Questions referred to housing, stress prevention, and the management of herds, reproduction and personnel. Herd performance data stem from milk performance testing in 2007. Data analysis combined qualitative and quantitative methods. Calving interval (CI) and 305-day milk yield (MY) were used as dependent variables. Mean herd size, CI and MY were, respectively, 306.3 cows (±238.3), 413.2d (±18.73d) and 8555kg (±1132.9kg). CI tended to decrease with increasing MY (r=-0.188, p=0.10). MY increased with rising herd size (r=0.29, p=0.01). Floors were wet and slippery in 71.25% of farms; in these farms CI tended to be longer (+16.6d, p=0.055) than in farms with dry, non-slippery floors. Lying areas were dry and flexible in 29.9% of farms; here MY was higher (+1110kg, p=0.011) than in farms with wet, hard lying areas. A clear assignment of responsibility for heat detection showed a trend towards a shorter CI (-6.3d, p=0.129). Female herd managers with academic qualifications achieved a higher MY than equally qualified men (+752.9kg, p=0.005), with no difference in CI; herds managed by highly qualified women thus showed a better MY:CI ratio. Employee motivation by material and social incentives, or by allocating responsibility to workers and pursuing good communication, was related to a higher MY than motivation by performance pay alone (+ ca. 1000kg, p=0.009); CI remained unaffected. Performance pay had no positive effect on the targeted parameters. The results underscore the need for improved housing and recommend further study of personnel management in dairy farms.
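The associations reported above (e.g. r=-0.188 between CI and MY) are Pearson correlation coefficients. For reference only, a stdlib-only sketch of the computation on made-up numbers (not the study's data):

```python
import math

def pearson_r(xs, ys):
    # Pearson r = covariance / (sd_x * sd_y), computed from deviations
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A negative r, as between CI and MY here, means the two quantities tend to move in opposite directions; the p-value (not computed in this sketch) gauges whether that tendency exceeds chance.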

    Evaluating the Effects of Metalinguistic and Working Memory Training on Reading Fluency in Chinese and English: A Randomized Controlled Trial

    Children traditionally learn to read Chinese characters by rote, and thus stretching children's memory span could possibly improve their reading in Chinese. Nevertheless, 85% of Chinese characters are semantic-phonetic compounds that contain probabilistic information about meaning and pronunciation. Hence, enhancing children's metalinguistic skills might also facilitate reading in Chinese. In the present study, we tested whether training children's metalinguistic skills or their working-memory capacity over 8 weeks would produce reading gains, and whether these gains would be similar in Chinese and English. We recruited 35 second graders in Hong Kong and randomly assigned them to a metalinguistic training group (N = 13), a working-memory training group (N = 10), or a waitlist control group (N = 12). In the metalinguistic training, children were taught to analyze novel Chinese characters into phonetic and semantic radicals and novel English words into onsets and rimes. In the working-memory training, children were trained to recall increasingly long strings of Cantonese or English syllables in correct or reverse order. All children were tested on phonological skills, verbal working memory, and word reading fluency in Chinese and in English before and after training. Analyses of the pre- and post-test data revealed that only the metalinguistic training group, but not the other two groups, showed significant improvement in phonological skills in Chinese and English. Working-memory span in Chinese and English increased from pre- to post-test in the working-memory training group relative to the other two groups. Despite these domain-specific training effects, the two training groups improved similarly in word reading fluency in Chinese and English compared to the control group. Our findings suggest that increased metalinguistic skills and a larger working-memory span appear equally beneficial to reading fluency, and that these effects are similar in Chinese and English.

    Foreign language learning as potential treatment for mild cognitive impairment

    As the number of older adults increases, age-related health issues (both physical and cognitive) and associated costs are expected to increase, placing emotional and financial stress on family members and the health system. Dementia is one of the most devastating and costly diseases that older adults face. The present study aimed to determine whether foreign language learning can improve cognitive outcomes of older adults with mild cognitive impairment (MCI). The objectives are to determine whether foreign language learning is (1) effective in boosting cognitive reserve and promoting healthy cognitive function and (2) superior to other established cognitively stimulating activities such as crossword and logic puzzles.

    Paediatric/young versus adult patients with long QT syndrome

    Introduction: Long QT syndrome (LQTS) is a less prevalent cardiac ion channelopathy than Brugada syndrome in Asia. The present study compared the outcomes between paediatric/young and adult LQTS patients. Methods: This was a population-based retrospective cohort study of consecutive patients diagnosed with LQTS attending public hospitals in Hong Kong. The primary outcome was spontaneous ventricular tachycardia/ventricular fibrillation (VT/VF). Results: A total of 142 LQTS patients (mean onset age = 27±23 years) were included. Arrhythmias other than VT/VF (HR=4.67, 95% CI=1.53 to 14.3, p=0.007), initial VT/VF (HR=3.25, 95% CI=1.29 to 8.16, p=0.012) and Schwartz score (HR=1.90, 95% CI=1.11 to 3.26, p=0.020) were predictive of the primary outcome for the overall cohort, while arrhythmias other than VT/VF (HR=5.41, 95% CI=1.36 to 21.4, p=0.016) and Schwartz score (HR=4.67, 95% CI=1.48 to 14.7, p=0.009) were predictive for the adult subgroup (>25 years old; n=58). A random survival forest model identified initial VT/VF, Schwartz score, initial QTc interval, family history of LQTS, initially asymptomatic presentation and arrhythmias other than VT/VF as the most important variables for risk prediction. Conclusion: Clinical and ECG presentation varies between the paediatric/young and adult LQTS populations. Machine learning models achieved more accurate VT/VF prediction.

    Ventricular tachyarrhythmia risk in paediatric/young vs. adult Brugada syndrome patients: a territory-wide study

    Introduction: Brugada syndrome (BrS) is a cardiac ion channelopathy with a higher prevalence in Asia compared to Western populations. The present study compared the differences in clinical and electrocardiographic (ECG) presentation between paediatric/young (≤25 years old) and adult (>25 years old) BrS patients. Method: This was a territory-wide retrospective cohort study of consecutive BrS patients presenting to public hospitals in Hong Kong. The primary outcome was spontaneous ventricular tachycardia/ventricular fibrillation (VT/VF). Results: The cohort consisted of 550 consecutive patients (median age at initial presentation = 51 ± 23 years; female = 7.3%; follow-up period = 83 ± 80 months), divided into adult (n = 505, mean age at initial presentation = 52 ± 19 years; female = 6.7%; mean follow-up period = 83 ± 80 months) and paediatric/young subgroups (n = 45, mean age at initial presentation = 21 ± 5 years; female = 13.3%; mean follow-up period = 73 ± 83 months). The mean annual VT/VF incidence rates were 17 and 25 cases per 1,000 patient-years, respectively. Multivariate analysis showed that an initial presentation of the type 1 pattern (HR = 1.80, 95% CI = [1.02, 3.15], p = 0.041), initial asymptomatic presentation (HR = 0.26, 95% CI = [0.07, 0.94], p = 0.040) and increased P-wave axis (HR = 0.98, 95% CI = [0.96, 1.00], p = 0.036) were significant predictors of VT/VF for the adult subgroup. Only initial presentation of VT/VF was predictive (HR = 29.30, 95% CI = [1.75, 492.00], p = 0.019) in the paediatric/young subgroup. Conclusion: Clinical and ECG presentation of BrS vary between the paediatric/young and adult populations. Risk stratification and management strategies for younger patients should take these differences into consideration and adopt an individualised approach.

    Comparing the performance of published risk scores in Brugada syndrome: a multi-center cohort study.

    The management of Brugada Syndrome (BrS) patients at intermediate risk of arrhythmic events remains controversial. The present study evaluated the predictive performance of different risk scores in an Asian BrS population and its intermediate risk subgroup. This retrospective cohort study included consecutive patients diagnosed with BrS from January 1, 1997 to June 20, 2020 in Hong Kong. The primary outcome was sustained ventricular tachyarrhythmias (VT/VF). Two novel risk scores and seven machine learning-based models (random survival forest, AdaBoost classifier, Gaussian naïve Bayes, light gradient boosting machine, random forest classifier, gradient boosting classifier and decision tree classifier) were developed. The area under the receiver operating characteristic (ROC) curve (AUC), with 95% confidence intervals, was compared between the different models. The study included 548 consecutive BrS patients (7% female; age at diagnosis: 50±16 years; follow-up: 84±55 months). For the whole cohort, the score developed by Sieira et al. showed the best performance (AUC: 0.806 [0.747-0.865]). A novel risk score was developed using the Sieira score and additional variables significant on univariable Cox regression (AUC: 0.855 [0.808-0.901]); this score achieved an AUC of 0.760 in the intermediate risk subgroup. A simpler score based on non-invasive results only showed a statistically comparable AUC (0.784 [0.724-0.845]), which was improved using random survival forests (AUC: 0.942 [0.913-0.964]). For the intermediate risk subgroup (N=274), a gradient boosting classifier model showed the best performance (AUC: 0.814 [0.791-0.832]). Overall, a simple risk score based on clinical and electrocardiographic variables showed good performance for predicting VT/VF, and was further improved using machine learning-based methods.
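The AUC comparisons above reduce to a rank statistic: the probability that a randomly chosen patient who had an event receives a higher risk score than a randomly chosen event-free patient. A minimal pairwise sketch, on hypothetical labels and scores (not the study's data):

```python
def roc_auc(labels, scores):
    # AUC = P(score_pos > score_neg), counting ties as half-wins
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: 1 = had VT/VF, 0 = event-free
events = [0, 0, 1, 1]
risk = [0.10, 0.40, 0.35, 0.80]
```

This O(n²) pairwise form is equivalent to the Mann-Whitney U statistic; library implementations sort the scores instead for efficiency, and confidence intervals like those quoted above are typically obtained by bootstrapping.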

    A territory-wide study of arrhythmogenic right ventricular cardiomyopathy patients from Hong Kong

    Background: Arrhythmogenic right ventricular cardiomyopathy/dysplasia (ARVC/D) is a hereditary disease characterized by fibrofatty infiltration of the right ventricular myocardium that predisposes affected patients to malignant ventricular arrhythmias (VT/VF), dual-chamber cardiac failure and sudden cardiac death (SCD). The present study aims to investigate the risk of detrimental cardiovascular events in an Asian population of ARVC/D patients, including the incidence of malignant ventricular arrhythmias, new-onset heart failure with reduced ejection fraction (HFrEF), and long-term mortality. Methods and Results: This was a territory-wide retrospective cohort study of patients diagnosed with ARVC/D between 1997 and 2019 in Hong Kong. The study consisted of 109 ARVC/D patients (median age: 61 [46–71] years; 58% male). Of these, 51 patients developed incident VT/VF and 24 developed new-onset HFrEF. Five patients underwent cardiac transplantation, and 14 died during follow-up. Multivariate Cox regression identified prolonged QRS duration as a predictor of VT/VF (p < 0.05). Female gender, prolonged QTc duration, the presence of epsilon waves and T-wave inversion (TWI) in any lead except aVR/V1 predicted new-onset HFrEF (p < 0.05). The presence of epsilon waves, prolonged QRS duration and worsening ejection fraction predicted all-cause mortality (p < 0.05). Clinical scores were developed to predict incident VT/VF, new-onset HFrEF and all-cause mortality, and all were significantly improved by machine learning techniques. Conclusions: Clinical and electrocardiographic parameters are important for assessing prognosis in ARVC/D patients and should be used in tandem to aid risk stratification in the hospital setting.

    Territory-wide cohort study of Brugada syndrome in Hong Kong: predictors of long-term outcomes using random survival forests and non-negative matrix factorisation

    Objectives: Brugada syndrome (BrS) is an ion channelopathy that predisposes affected patients to spontaneous ventricular tachycardia/fibrillation (VT/VF) and sudden cardiac death. The aim of this study is to examine the predictive factors of spontaneous VT/VF. Methods: This was a territory-wide retrospective cohort study of patients diagnosed with BrS between 1997 and 2019. The primary outcome was spontaneous VT/VF. Cox regression was used to identify significant risk predictors. Non-linear interactions between variables (latent patterns) were extracted using non-negative matrix factorisation (NMF) and used as inputs into the random survival forest (RSF) model. Results: This study included 516 consecutive BrS patients (mean age at initial presentation = 50±16 years; male = 92%) with a median follow-up of 86 (IQR: 45–118) months. The cohort was divided into subgroups based on initial disease manifestation: asymptomatic (n=314), syncope (n=159) or VT/VF (n=41). Annualised event rates per person-year were 1.70%, 0.05% and 0.01% for the VT/VF, syncope and asymptomatic subgroups, respectively. Multivariate Cox regression analysis revealed that an initial presentation of VT/VF (HR=24.0, 95% CI=1.21 to 479, p=0.037) and the standard deviation of P-wave duration (HR=1.07, 95% CI=1.00 to 1.13, p=0.044) were significant predictors. The NMF-RSF model showed the best predictive performance compared with the RSF and Cox regression models (precision: 0.87 vs 0.83 vs 0.76; recall: 0.89 vs 0.85 vs 0.73; F1-score: 0.88 vs 0.84 vs 0.74). Conclusions: Clinical history, electrocardiographic markers and investigation results provide important information for risk stratification. Machine learning techniques using NMF and RSF significantly improve overall risk stratification performance.
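The NMF step above factorises a non-negative feature matrix V (patients × variables) into W·H, so each patient is summarised by a few latent patterns that can then feed a downstream survival model. A minimal sketch of the classic Lee-Seung multiplicative updates in NumPy (the study's exact preprocessing, rank, and solver are not given here):

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    # Factorise V ≈ W @ H with all entries non-negative,
    # minimising Frobenius reconstruction error via Lee-Seung updates
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-4
    H = rng.random((rank, m)) + 1e-4
    for _ in range(iters):
        # Multiplicative updates keep W, H non-negative and
        # do not increase the reconstruction error
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

The rows of H are the latent patterns; the rows of W (one per patient) would serve as the low-dimensional inputs to the RSF. In practice `sklearn.decomposition.NMF` provides the same factorisation with better-tuned solvers and regularisation options.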