
    Teaching Math with Confidence: Recommendations for Improving Numeracy from the Lens of Confidence Building

    Despite the vast literature on financial literacy and its related problems, there is still no universally accepted solution, because the main factors causing poor financial literacy are not yet fully understood by researchers or current policy-makers. A possible new approach was identified by Skagerlund et al. (2018), whose research suggested that financial literacy is driven by numeracy (the ability to process and perform basic numerical concepts and calculations) rather than by direct knowledge of financial concepts. Given that numeracy is an effort-based skill, this policy brief provides a list of recommendations for developing numeracy by motivating people to practice and improve their numeracy and mathematical skills so that they have the tools necessary to become financially literate, an approach that may be more effective than creating a dedicated course on financial literacy. The results of the study confirmed that effort is indeed motivated by higher levels of confidence. Furthermore, information, particularly feedback regarding performance, plays a crucial role in shaping future confidence and, by extension, future levels of motivation and effort. Guided by these findings, this brief proposes the following policy recommendations.

    Testing the Relationship Between Confidence and Effort: A Behavioral Finance Perspective on the Problem of Financial Literacy

    This experimental study tested the relationship between confidence and effort, with the ultimate objective of discovering how these factors may influence financial literacy. This was done through a modified version of a slider task and a ball-allocation task. The sample consisted of 85 randomly recruited participants who were primarily approached through social media. A simple OLS regression, together with robustness checks, namely a Tobit model and an instrumental variable (IV) regression model using Tobit estimators, was used to confirm the causal relationship between confidence and effort.
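
    As a rough illustration of the core estimation step described above, the sketch below fits an OLS regression of a hypothetical effort measure on a hypothetical confidence score. The variable names and simulated data are assumptions for demonstration only, not the study's dataset, and the Tobit and IV robustness checks are not reproduced here.

    ```python
    # Minimal sketch (not the authors' code): OLS of effort on confidence.
    # "confidence" and "effort" are hypothetical, simulated variables.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 85                                        # sample size reported in the abstract
    confidence = rng.uniform(0, 10, n)            # hypothetical confidence scores
    effort = 2.0 + 0.5 * confidence + rng.normal(0, 1, n)   # hypothetical effort measure

    X = sm.add_constant(confidence)               # add an intercept term
    ols_fit = sm.OLS(effort, X).fit()
    print(ols_fit.summary())                      # slope ~ estimated effect of confidence on effort
    ```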

    Determinants of wage and employment disparities for TVET and High School graduates

    Technical Vocational Education and Training (TVET) was institutionalized by the Philippine government to fill the gaps left by the higher education system in transitioning students to the formal workforce. However, recent studies suggest that TVET graduates have a difficult time gaining employment and wage increases because of mismatches between skills supply and demand and the devaluation of TVET degrees. The mismatch is observed in the high unemployment rates of TVET graduates and in job vacancies that cannot be filled by these graduates because skills formation is incompatible with job requirements, a pattern evident in several sectors, including ICT, health services, agriculture, and tourism. This paper used Naive Bayesian regression and propensity score matching methods to measure the direction and magnitude of labor-market outcome differentials between TVET and high school graduates, as well as the Blinder-Oaxaca decomposition to measure how much endogenous and exogenous sources explain these wage and employment differentials.
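
    To make the decomposition step concrete, here is a minimal sketch of a two-fold Blinder-Oaxaca decomposition of a mean log-wage gap between two graduate groups on simulated data. The covariates, coefficients, and the choice of the high-school coefficients as the reference are illustrative assumptions, not the paper's specification.

    ```python
    # Two-fold Oaxaca decomposition on simulated data (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(n, beta):
        # columns: intercept, years of experience, urban indicator (all hypothetical)
        X = np.column_stack([np.ones(n), rng.normal(2, 1, n), rng.integers(0, 2, n)])
        y = X @ beta + rng.normal(0, 0.3, n)      # log wage
        return X, y

    X_tvet, y_tvet = simulate(400, np.array([1.8, 0.10, 0.15]))
    X_hs,   y_hs   = simulate(400, np.array([1.6, 0.08, 0.10]))

    b_tvet, *_ = np.linalg.lstsq(X_tvet, y_tvet, rcond=None)   # group-specific OLS fits
    b_hs,   *_ = np.linalg.lstsq(X_hs,   y_hs,   rcond=None)

    gap = y_tvet.mean() - y_hs.mean()
    explained   = (X_tvet.mean(axis=0) - X_hs.mean(axis=0)) @ b_hs   # endowment (characteristics) part
    unexplained = X_tvet.mean(axis=0) @ (b_tvet - b_hs)              # coefficient (returns) part
    print(f"gap={gap:.3f}  explained={explained:.3f}  unexplained={unexplained:.3f}")
    ```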

    Nut production in Bertholletia excelsa across a logged forest mosaic: implications for multiple forest use

    Although many examples of multiple-use forest management may be found in tropical smallholder systems, few studies provide empirical support for the integration of selective timber harvesting with non-timber forest product (NTFP) extraction. Brazil nut (Bertholletia excelsa, Lecythidaceae) is one of the world's most economically important NTFP species, extracted almost entirely from natural forests across the Amazon Basin. An obligate out-crosser, the Brazil nut tree bears flowers that are pollinated by large-bodied bees, a process resulting in a hard round fruit that takes up to 14 months to mature. As many smallholders turn to the financial security provided by timber, Brazil nut fruits are increasingly being harvested in logged forests. We tested the influence of tree- and stand-level covariates (distance to nearest cut stump and local logging intensity) on total nut production at the individual tree level in five recently logged Brazil nut concessions covering about 4,000 ha of forest in Madre de Dios, Peru. Our field team accompanied Brazil nut harvesters during the traditional harvest period (January-April 2012 and January-April 2013) to collect data on fruit production. Three hundred and ninety-nine (approximately 80%) of the 499 trees included in this study were at least 100 m from the nearest cut stump, suggesting that concessionaires avoid logging near adult Brazil nut trees. Yet even for trees on the edge of logging gaps, distance to nearest cut stump and local logging intensity did not have a statistically significant influence on Brazil nut production at the applied logging intensities (typically 1–2 timber trees removed per ha). In one concession where at least 4 trees per ha were removed, however, the logging-intensity covariate yielded a marginally significant P value (0.09), highlighting a potential risk of a drop in nut production at higher intensities. While we do not suggest that logging activities should be completely avoided in Brazil nut-rich forests, low logging intensities should be used when a buffer zone cannot be observed. The sustainability of this integrated management system will ultimately depend on a complex series of socioeconomic and ecological interactions. Yet we submit that our study provides an important initial step in understanding the compatibility of timber harvesting with a high-value NTFP, potentially allowing for diversification of forest-use strategies in Amazonian Peru.
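
    As one way to picture the tree-level analysis described above, the sketch below fits a Poisson regression of simulated per-tree fruit counts on distance to the nearest cut stump and local logging intensity. The model family, covariate ranges, and coefficients are assumptions for illustration; the study's actual model may differ.

    ```python
    # Illustrative tree-level model: fruit counts vs. logging covariates (simulated data).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 499                                           # number of trees in the study
    dist_to_stump = rng.uniform(0, 300, n)            # metres to nearest cut stump (hypothetical)
    logging_intensity = rng.uniform(0, 4, n)          # timber trees removed per ha (hypothetical)

    mu = np.exp(4.0 + 0.0005 * dist_to_stump - 0.01 * logging_intensity)
    fruits = rng.poisson(mu)                          # simulated fruit counts per tree

    X = sm.add_constant(np.column_stack([dist_to_stump, logging_intensity]))
    fit = sm.GLM(fruits, X, family=sm.families.Poisson()).fit()
    print(fit.summary())                              # inspect coefficient signs and p-values
    ```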

    Screening of diatoms producing domoic acid and its derivatives in the Philippines

    Domoic acid is the known causative agent of amnesic shellfish poisoning (ASP). Although there is only one documented ASP case in the world, there is potential for its occurrence in Southeast Asian countries. However, limited information is available on domoic acid-producing diatoms, except for Nitzschia navis-varingica, which is known to produce significant levels of domoic acid. To obtain fundamental data on domoic acid-producing diatoms, screening of Pseudo-nitzschia and Nitzschia species was performed primarily in the Philippines. Two source areas, Manila Bay and the Iba estuary on Luzon Island, were selected for observation of these diatoms. Fifty-eight isolates of Pseudo-nitzschia and 18 isolates of Nitzschia-like diatoms were prepared from Manila Bay and the Iba estuary, respectively. These isolates were cultured and tested for the production of domoic acid and its derivatives. Pseudo-nitzschia strains did not show any signs of domoic acid production. Five of the 18 Nitzschia isolates were confirmed to produce isodomoic acids A and B. Sonication and boiling in a water bath were compared as extraction methods, and both yielded comparable amounts of domoic acid. The stability of domoic acid extracted by boiling was also investigated, and the extract was found to be stable at room temperature for ten days. The results suggest an advantageous and convenient way of preparing and preserving samples for international transport.

    The value of episodic, intensive blood glucose monitoring in non-insulin treated persons with type 2 diabetes: Design of the Structured Testing Program (STeP) Study, a cluster-randomised, clinical trial [NCT00674986]

    Background: The value and utility of self-monitoring of blood glucose (SMBG) in non-insulin-treated T2DM has yet to be clearly determined. Findings from studies in this population have been inconsistent, due mainly to design differences and limitations, including the prescribed frequency and timing of SMBG, the role of the patient and physician in responding to SMBG results, inclusion criteria that may contribute to untoward floor effects, subject compliance, and cross-arm contamination. We have designed an SMBG intervention study that attempts to address these issues. Methods/design: The Structured Testing Program (STeP) study is a 12-month, cluster-randomised, multi-centre clinical trial to evaluate whether poorly controlled (HbA1c ≥ 7.5%), non-insulin-treated T2DM patients will benefit from a comprehensive, integrated physician/patient intervention using structured SMBG in US primary care practices. Thirty-four practices will be recruited and randomly assigned to an active control group (ACG) that receives enhanced usual care or to an enhanced usual care group plus structured SMBG (STG). A total of 504 patients will be enrolled; eligible patients at each site will be randomly selected using a defined protocol. Anticipated attrition of 20% will yield a sample size of at least 204 per arm, which will provide 90% power to detect a difference of at least 0.5% in change from baseline in HbA1c values, assuming a common standard deviation of 1.5%. Differences in the timing and degree of treatment intensification, cost-effectiveness, and changes in patient self-management behaviours, mood, and quality of life (QOL) over time will also be assessed. Analysis of change in HbA1c and other dependent variables over time will be performed using both intent-to-treat and per-protocol analyses. Trial results will be available in 2010. Discussion: The intervention and trial design build upon previous research by emphasizing appropriate and collaborative use of SMBG by both patients and physicians. Utilization of per-protocol and intent-to-treat analyses facilitates a comprehensive assessment of the intervention. Use of practice-site cluster-randomisation reduces the potential for intervention contamination, and the inclusion criterion (HbA1c ≥ 7.5%) reduces the possibility of floor effects. Inclusion of multiple dependent variables allows us to assess the broader impact of the intervention, including changes in patient and physician attitudes and behaviours. Trial Registration: Current Controlled Trials NCT00674986.
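
    A quick back-of-the-envelope check of the stated power assumptions (a 0.5% detectable difference, SD 1.5%, 90% power, two-sided alpha of 0.05), ignoring any cluster-design inflation the trial itself may have applied:

    ```python
    # Standard two-sample normal-approximation sample-size formula (illustrative check).
    from scipy.stats import norm

    delta, sigma = 0.5, 1.5          # detectable HbA1c difference and common SD (%)
    alpha, power = 0.05, 0.90

    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    n_per_arm = 2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2
    print(f"n per arm ~ {n_per_arm:.0f}")   # roughly 190, below the >=204 per arm retained after attrition
    ```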

    Quantification of codon selection for comparative bacterial genomics

    Background: Statistics measuring codon selection seek to compare genes by their sensitivity to selection for translational efficiency, but existing statistics lack a model for testing the significance of differences between genes. Here, we introduce a new statistic for measuring codon selection, the Adaptive Codon Enrichment (ACE). Results: This statistic represents codon usage bias in terms of a probabilistic distribution, quantifying the extent to which preferred codons are over-represented in the gene of interest relative to the mean and variance that would result from stochastic sampling of codons. Expected codon frequencies are derived from the observed codon usage frequencies of a broad set of genes, such that they are likely to reflect non-selective, genome-wide influences on codon usage (e.g. mutational biases). The relative adaptiveness of synonymous codons is deduced from the frequency of codon usage in a pre-selected set of genes relative to the expected frequency. The ACE can predict both transcript abundance during rapid growth and the rate of synonymous substitutions, with accuracy comparable to or greater than existing metrics. We further examine how the composition of reference gene sets affects the accuracy of the statistic, and suggest methods for selecting appropriate reference sets for any genome, including bacteriophages. Finally, we demonstrate that the ACE may naturally be extended to quantify the genome-wide influence of codon selection in a manner that is sensitive to a large fraction of codons in the genome. This reveals substantial variation among genomes, correlated with tRNA gene number, even among groups of bacteria where previously proposed whole-genome measures show little variation. Conclusions: The statistical framework of the ACE allows rigorous comparison of the level of codon selection acting on genes, both within a genome and between genomes.
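
    As a rough sketch of the general idea rather than the published ACE formula, the snippet below scores how strongly an assumed preferred codon is over-represented in a toy gene relative to the mean and variance expected from stochastic sampling at background frequencies.

    ```python
    # Toy codon-enrichment score: z-score of preferred-codon usage vs. a binomial expectation.
    # Background frequencies, the preferred codon, and the gene counts are invented examples.
    from math import sqrt

    background = {"GAA": 0.7, "GAG": 0.3}    # hypothetical genome-wide usage of two synonymous codons
    preferred = "GAG"                         # assumed translationally preferred codon

    def enrichment_z(codon_counts):
        """z-score of preferred-codon count against binomial sampling at background frequency."""
        n = sum(codon_counts.values())
        p = background[preferred]
        expected = n * p
        variance = n * p * (1 - p)
        return (codon_counts.get(preferred, 0) - expected) / sqrt(variance)

    gene_counts = {"GAA": 10, "GAG": 30}      # toy codon counts for one gene
    print(enrichment_z(gene_counts))          # positive value => preferred codon is enriched
    ```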

    MHC Class I Bound to an Immunodominant Theileria parva Epitope Demonstrates Unconventional Presentation to T Cell Receptors

    T cell receptor (TCR) recognition of peptide-MHC class I (pMHC) complexes is a crucial event in the adaptive immune response to pathogens. Peptide epitopes often display a strong dominance hierarchy, resulting in focusing of the response on a limited number of the most dominant epitopes. Such T cell responses may be additionally restricted by particular MHC alleles in preference to others. We have studied this poorly understood phenomenon using Theileria parva, a protozoan parasite that causes an often fatal lymphoproliferative disease in cattle. Despite its antigenic complexity, CD8+ T cell responses induced by infection with the parasite show profound immunodominance, as exemplified by the Tp1 214–224 epitope presented by the common and functionally important MHC class I allele N*01301. We present a high-resolution crystal structure of this pMHC complex, demonstrating that the peptide is presented in a distinctive raised conformation. Functional studies using CD8+ T cell clones show that this impacts significantly on TCR recognition. The unconventional structure is generated by a hydrophobic ridge within the MHC peptide-binding groove, found in a set of cattle MHC alleles. Extremely rare in all other species, this feature is seen in a small group of mouse MHC class I molecules. The data generated in this analysis contribute to our understanding of the structural basis for T cell-dependent immune responses, providing insight into what determines a highly immunogenic pMHC complex, and hence can be of value in the prediction of antigenic epitopes and vaccine design.

    ICD-10 coding algorithms for defining comorbidities of acute myocardial infarction

    BACKGROUND: With the introduction of ICD-10 throughout Canada, it is important to ensure that acute myocardial infarction (AMI) comorbidities employed in risk adjustment methods remain valid and robust. Therefore, we developed ICD-10 coding algorithms for nine AMI comorbidities, examined the validity of the ICD-10 and ICD-9 coding algorithms in detecting these comorbidities, and assessed their performance in predicting mortality. The nine comorbidities that we examined were shock, diabetes with complications, congestive heart failure, cancer, cerebrovascular disease, pulmonary edema, acute renal failure, chronic renal failure, and cardiac dysrhythmias. METHODS: Coders generated a comprehensive list of ICD-10 codes corresponding to each AMI comorbidity. Physicians independently reviewed and determined the clinical relevance of each item on the list. To ensure that the newly developed ICD-10 coding algorithms were valid in recording comorbidities, medical charts were reviewed. After assessing the validity of the ICD-10 algorithms, both ICD-10 and ICD-9 algorithms were applied to a Canadian provincial hospital discharge database to predict in-hospital, 30-day, and 1-year mortality. RESULTS: Compared to chart review data as the criterion standard, ICD-9 and ICD-10 data had similar sensitivities (ranging from 7.1% to 100%) and specificities (above 93.6%) for each of the nine AMI comorbidities studied. The comorbidity frequencies were similar between the ICD-9 and ICD-10 coding algorithms for 49,861 AMI patients in a Canadian province during 1994–2004. The C-statistics for predicting 30-day and 1-year mortality were essentially the same for ICD-9 (0.82) and ICD-10 data (0.81). CONCLUSION: The ICD-10 coding algorithms developed in this study to define AMI comorbidities performed similarly to existing ICD-9 coding algorithms in detecting conditions and in risk adjustment in our sample. However, the ICD-10 coding algorithms should be further validated in external databases.
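
    To illustrate the validation arithmetic reported above, the short sketch below computes sensitivity and specificity of a coded comorbidity flag against chart review from a 2x2 table; the counts are invented for demonstration.

    ```python
    # Sensitivity and specificity from a 2x2 validation table (chart review as criterion standard).
    def sensitivity_specificity(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)     # coded positive among chart-confirmed cases
        specificity = tn / (tn + fp)     # coded negative among chart-confirmed non-cases
        return sensitivity, specificity

    # hypothetical counts for one comorbidity (e.g., congestive heart failure)
    sens, spec = sensitivity_specificity(tp=180, fp=25, fn=40, tn=755)
    print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
    ```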

    The Earliest Evidence of Holometabolan Insect Pupation in Conifer Wood

    Background: The pre-Jurassic record of terrestrial wood borings is poorly resolved, despite body-fossil evidence of insect diversification among xylophilic clades starting in the late Paleozoic. Detailed analysis of borings in petrified wood provides direct evidence of wood utilization by invertebrate animals, which typically comprises feeding behaviors. Methodology/Principal Findings: We describe a U-shaped boring in petrified wood from the Late Triassic Chinle Formation of southern Utah that demonstrates a strong linkage between insect ontogeny and conifer wood resources. Xylokrypta durossi new ichnogenus and ichnospecies is a large excavation in wood that is backfilled with partially digested xylem, creating a secluded chamber. The tracemaker exited the chamber by way of a small vertical shaft. This sequence of behaviors is most consistent with the entrance of a larva followed by pupal quiescence and adult emergence, the hallmarks of holometabolous insect ontogeny. Among the known body-fossil record of Triassic insects, cupedid beetles (Coleoptera: Archostemata) are deemed the most plausible tracemakers of Xylokrypta, based on their body size and modern xylobiotic lifestyle. Conclusions/Significance: This oldest record of pupation in fossil wood provides an alternative interpretation of borings once regarded as evidence for Triassic bees. Instead, Xylokrypta suggests that early archostematan beetles were leaders in exploiting wood substrates well before modern clades of xylophages arose in the late Mesozoic.