
    Fiction Reader: A Case Study Documenting the Impact Reading Fiction has on One Child’s Literacy

    This qualitative case study explores the impact that fiction reading has had on one 10-year-old child's literacy experience and learning. The Common Core State Standards have been adopted across the United States; the Standards prescribe educational milestones for each grade level, K-12. Literature and informational texts are represented throughout the State Standards for English Language Arts. The purpose of this study is to explore and observe one elementary-level student's interactions with literary (fiction) texts in order to learn about the impact fiction reading has had on the child's literacy experience. Within this qualitative study, I inform elementary-level instruction and educate teachers on the implications of using fiction texts in the classroom. By triangulating my data collection with peer-reviewed research, I found that reading fiction has impacted my focus learner in three distinct ways. These three themes became my research findings: fiction is used to teach lessons and morals, fiction is read for pleasure, and fiction is used to aid story sequencing and comprehension.

    Using Bad Learners to find Good Configurations

    Finding the optimally performing configuration of a software system for a given setting is often challenging. Recent approaches address this challenge by learning performance models based on a sample set of configurations. However, building an accurate performance model can be very expensive (and is often infeasible in practice). The central insight of this paper is that exact performance values (e.g. the response time of a software system) are not required to rank configurations and to identify the optimal one. As shown by our experiments, models that are cheap to learn but inaccurate (with respect to the difference between actual and predicted performance) can still be used to rank configurations and hence find the optimal configuration. This novel rank-based approach allows us to significantly reduce the cost (in terms of the number of measurements of sample configurations) as well as the time required to build models. We evaluate our approach with 21 scenarios based on 9 software systems and demonstrate that our approach is beneficial in 16 scenarios; for the remaining 5 scenarios, an accurate model can be built by using very few samples anyway, without the need for a rank-based approach.
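The rank-based idea in this abstract can be sketched in a few lines (the configuration space, response-time function, sample size, and nearest-neighbour surrogate below are all invented for illustration, not taken from the paper): a surrogate that is wildly wrong about absolute response times can still order configurations well enough to single out a good one.

```python
import random

random.seed(0)  # reproducible sampling for the sketch

def true_response_time(cfg):
    # Hypothetical "system": cfg = (cache_mb, threads); unknown to the learner.
    cache, threads = cfg
    return 100.0 / (1 + cache) + 5.0 * abs(threads - 8)

# The full configuration space (16 cache sizes x 16 thread counts).
configs = [(c, t) for c in range(16) for t in range(1, 17)]

# Measure only a small sample -- this is the expensive step we want to minimize.
sample = random.sample(configs, 20)
measured = {cfg: true_response_time(cfg) for cfg in sample}

def surrogate(cfg):
    # Crude, cheap surrogate: predict the value of the nearest measured
    # neighbour (Manhattan distance). Large absolute error, decent ordering.
    nearest = min(measured, key=lambda s: abs(s[0] - cfg[0]) + abs(s[1] - cfg[1]))
    return measured[nearest]

# Rank every configuration by *predicted* performance and take the top one;
# no accurate model of absolute response time is ever built.
best_predicted = min(configs, key=surrogate)
best_actual = min(configs, key=true_response_time)
```

The point of the sketch is that only the ordering induced by `surrogate` matters, so measurement budgets can go to breadth of sampling rather than model accuracy.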

    Resource use during systematic review production varies widely: a scoping review

    Objective: We aimed to map the resource use during systematic review (SR) production and the reasons why steps of SR production are resource intensive, to discover where the largest gain in improving efficiency might be possible. Study design and setting: We conducted a scoping review. An information specialist searched multiple databases (e.g., Ovid MEDLINE, Scopus) and implemented citation-based and grey literature searching. We employed dual and independent screening of records at the title/abstract and full-text levels and dual, independent data extraction. Results: We included 34 studies. Thirty-two reported on resource use—mostly time; four described reasons why steps of the review process are resource intensive. Study selection, data extraction, and critical appraisal seem to be very resource intensive, while protocol development, literature search, or study retrieval take less time. Project management and administration required a large proportion of SR production time. A lack of experience, domain knowledge, collaborative and SR-tailored software, or good communication and management can make SR steps resource intensive. Conclusion: Resource use during SR production varies widely. The areas with the largest resource use are administration and project management, study selection, data extraction, and critical appraisal of studies. Funding: European Commission CA17117; Danube University Krems.

    How to Measure Molecular Forces in Cells: A Guide to Evaluating Genetically-Encoded FRET-Based Tension Sensors

    The ability of cells to sense and respond to mechanical forces is central to a wide range of biological processes and plays an important role in numerous pathologies. The molecular mechanisms underlying cellular mechanotransduction, however, have remained largely elusive because suitable methods to investigate subcellular force propagation were missing. Here, we review recent advances in the development of biosensors that allow molecular force measurements. We describe the underlying principle of currently available techniques and propose a strategy to systematically evaluate new Förster resonance energy transfer (FRET)-based biosensors.
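For background, the physics these sensors rely on is the standard Förster relation (this sketch is textbook FRET, not the review's specific evaluation strategy): efficiency E falls off with the sixth power of the donor-acceptor distance r, E = 1/(1 + (r/R0)^6), so a force that stretches an elastic linker and increases r produces a measurable drop in E.

```python
# Standard FRET efficiency as a function of donor-acceptor distance.
# The Forster radius R0 (distance of 50% efficiency) of 5 nm is a typical
# order of magnitude, not a value from the review.

def fret_efficiency(r_nm, forster_radius_nm=5.0):
    # E = 1 / (1 + (r/R0)^6): sixth-power distance dependence makes FRET
    # exquisitely sensitive in the 2-8 nm range where tension sensors operate.
    return 1.0 / (1.0 + (r_nm / forster_radius_nm) ** 6)
```

By construction, efficiency is exactly 0.5 at r = R0, near 1 well below it, and near 0 well above it, which is why stretching a sensor module by even a few nanometres shifts the FRET signal strongly.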

    Early characterization of the severity and transmissibility of pandemic influenza using clinical episode data from multiple populations

    The potential rapid availability of large-scale clinical episode data during the next influenza pandemic suggests an opportunity for increasing the speed with which novel respiratory pathogens can be characterized. Key intervention decisions will be determined by both the transmissibility of the novel strain (measured by the basic reproductive number R0) and its individual-level severity. The 2009 pandemic illustrated that estimating individual-level severity, as described by the proportion pC of infections that result in clinical cases, can remain uncertain for a prolonged period of time. Here, we use 50 distinct US military populations during 2009 as a retrospective cohort to test the hypothesis that real-time encounter data combined with disease dynamic models can be used to bridge this uncertainty gap. Effectively, we estimated the total number of infections in multiple early-affected communities using the model and divided the known number of clinical cases by that estimate. Joint estimates of severity and transmissibility clustered within a relatively small region of parameter space, with 40 of the 50 populations bounded by pC: 0.0133-0.150 and R0: 1.09-2.16. These fits were obtained despite widely varying incidence profiles: some with spring waves, some with fall waves, and some with both. To illustrate the benefit of pairing rapidly available data with infectious disease models, we simulated a future moderate pandemic strain with pC approximately 10× that of 2009; the results demonstrate that even before the peak had passed in the first affected population, R0 and pC could be well estimated. This study provides a clear reference in this two-dimensional space against which future novel respiratory pathogens can be rapidly assessed and compared with previous pandemics.
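The severity calculation described above reduces to simple arithmetic once a transmission model supplies an infection count. A minimal sketch, assuming the classic SIR final-size relation z = 1 - exp(-R0·z) stands in for the study's full dynamic model, and with an invented population size and case count:

```python
import math

def final_attack_rate(r0, tol=1e-10):
    # Fixed-point iteration on the SIR final-size relation z = 1 - exp(-r0*z):
    # z is the fraction of the population eventually infected for a given R0 > 1.
    z = 0.5
    for _ in range(1000):
        z_next = 1 - math.exp(-r0 * z)
        if abs(z_next - z) < tol:
            break
        z = z_next
    return z

population = 50_000      # hypothetical military community
r0 = 1.4                 # within the 1.09-2.16 band reported for 2009
clinical_cases = 1_000   # hypothetical count from encounter data

# Model supplies infections; dividing observed cases by that estimate gives pC.
infections = population * final_attack_rate(r0)
p_c = clinical_cases / infections
```

With these invented numbers, roughly half the community is infected, so pC comes out near 0.04, inside the band the study reports; the study's real contribution is doing this jointly for R0 and pC from early, incomplete incidence curves.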

    Estimating the Costs of Foundational Public Health Capabilities: A Recommended Methodology

    The Institute of Medicine’s 2012 report on public health financing recommended the convening of expert panels to identify the components and costs of a “minimum package of public health services” that should be available in every U.S. community. The report recommended that this minimum package include a core set of public health programs that target specific, high-priority preventable health problems and risks, along with a set of “foundational public health capabilities” that are deemed necessary to support the successful implementation of public health programs and policies. In response to this recommendation, the Robert Wood Johnson Foundation, in collaboration with the US Centers for Disease Control and Prevention and other national professional associations, formed the Public Health Leadership Forum, an expert consensus panel process to identify a recommended set of core programs and foundational capabilities for the nation. The Forum’s initial charge focused on the specification of foundational public health capabilities. The Foundational Capabilities Workgroup was formed as a part of the Forum to identify and define the elements to be included as foundational capabilities for governmental public health agencies at both state and local levels. The Robert Wood Johnson Foundation asked the National Coordinating Center for Public Health Services and Systems Research based at the University of Kentucky to convene a second expert panel workgroup, the Workgroup on Public Health Cost Estimation, to develop a methodology for estimating the resources required to develop and maintain foundational capabilities by governmental public health agencies at both state and local levels. 
Working in parallel with the Foundational Capabilities Workgroup, this Cost Estimation Workgroup has considered relevant cost-accounting models and cost estimation methodologies, and reviewed related cost estimation studies, in order to make recommendations on an approach for generating first-generation estimates of the costs associated with developing and maintaining foundational capabilities.

    Epidemiology of Aortic Aneurysm Repair in the United States from 1993 to 2003

    The epidemiology of abdominal aortic aneurysm (AAA) disease has been well described over the preceding 50 years. This disease primarily affects elderly males, with smoking, hypertension, and a positive family history contributing to an increased risk of aneurysm formation. The aging population, as well as increased screening in high-risk populations, has led some to suggest that the incidence of AAAs is increasing. The National Inpatient Sample (1993-2003), a nationally representative database, was used in this study to determine trends in mortality following AAA repair in the United States. In addition, the impact of the introduction of less invasive endovascular AAA repair was assessed. Overall rates of treated unruptured and ruptured AAAs remained stable (unruptured: 12 to 15 per 100,000; ruptured: 1 to 3 per 100,000). In 2003, 42.7% of unruptured and 8.8% of ruptured AAAs were repaired through an endovascular approach. In-hospital mortality following unruptured AAA repair continues to decline for open repair (5.3% to 4.7%, P = 0.007). Mortality after elective endovascular AAA repair has also statistically decreased (2.1% to 1.0%, P = 0.024) and remains lower than for open repair. Mortality rates following repair of ruptured AAAs remain high (open: 46.5% to 40.7%, P = 0.01; endovascular: 40.0% to 35.3%, P = 0.823). These data suggest that the number of patients undergoing elective AAA repair has remained relatively stable despite the introduction of less invasive technology. A shift in the treatment paradigm is occurring, with a higher percentage of patients undergoing elective endovascular AAA repair compared to open repair. This shift, at least in the short term, appears justified, as mortality in patients undergoing elective endovascular AAA repair is significantly lower than in patients undergoing open AAA repair.

    In Defense of Wireless Carrier Sense

    Carrier sense is often used to regulate concurrency in wireless medium access control (MAC) protocols, balancing interference protection and spatial reuse. Carrier sense is known to be imperfect, and many improved techniques have been proposed. Is the search for a replacement justified? This paper presents a theoretical model of average-case two-sender carrier sense based on radio propagation theory and Shannon capacity. Analysis using the model shows that carrier sense performance is surprisingly close to optimal for radios with adaptive bitrate. The model suggests that hidden and exposed terminals usually cause modest reductions in throughput rather than dramatic decreases. Finally, it is possible to choose a fixed sense threshold which performs well across a wide range of scenarios, in large part due to the role of the noise floor. Experimental results from an indoor 802.11 testbed support these claims.
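The two-sender comparison can be illustrated numerically (the bandwidth, powers, and noise floor below are hypothetical, and this toy uses plain Shannon capacity rather than the paper's full propagation model): with adaptive bitrate, concurrent interference-limited transmission and polite time-sharing at full SNR can yield strikingly similar aggregate throughput, which is why carrier sense's binary defer/send decision often loses little.

```python
import math

B = 20e6       # channel bandwidth, Hz (802.11-like, hypothetical)
noise = 1e-10  # noise floor at each receiver, W (hypothetical)
sig = 1e-6     # received signal power on each link, W (hypothetical)
interf = 1e-8  # interference from the other sender, W (hypothetical)

def shannon_rate(signal, interference):
    # Adaptive bitrate idealized as Shannon capacity: B * log2(1 + SINR).
    return B * math.log2(1 + signal / (noise + interference))

# Carrier sense says "send": both links run concurrently at the
# interference-limited rate.
concurrent_total = 2 * shannon_rate(sig, interf)

# Carrier sense says "defer": the links alternate, each enjoying the full
# SNR but only half the airtime, so aggregate throughput is one full-SNR rate.
defer_total = shannon_rate(sig, 0.0)
```

For these particular numbers the two totals land within a few percent of each other, echoing the abstract's claim that even a wrong defer/send call usually costs a modest amount of throughput rather than a dramatic one.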

    Recommendations for Urine and Urinary Bladder Collection in Chemical Carcinogenesis Assays with Rodents

    This review describes the technical procedures used to collect and process urine and urinary bladder samples, during and at the end of urinary bladder carcinogenesis assays with small rodents. The applications, advantages, and disadvantages of each method are also discussed.
