
    Polar Bear Population Forecasts: A Public-Policy Forecasting Audit

    The extinction of polar bears by the end of the 21st century has been predicted, and calls have been made to list them as a threatened species under the U.S. Endangered Species Act. The decision on whether or not to list rests upon forecasts of what will happen to the bears over the 21st century. Scientific research on forecasting, conducted since the 1930s, has led to an extensive set of principles (evidence-based procedures) that describe which methods are appropriate under given conditions. The principles of forecasting have been published and are easily available. We assessed polar bear population forecasts in light of these scientific principles. Much research has been published on forecasting polar bear populations. Using an Internet search, we located roughly 1,000 such papers. None of them made reference to the scientific literature on forecasting. We examined references in the nine unpublished government reports that were prepared “…to Support U.S. Fish and Wildlife Service Polar Bear Listing Decision.” The papers did not include references to works on scientific forecasting methodology. Of the nine papers written to support the listing, we judged two to be the most relevant to the decision: Amstrup, Marcot and Douglas (2007), which we refer to as AMD, and Hunter et al. (2007), which we refer to as H6 to represent its six authors. AMD’s forecasts were the product of a complex causal chain. For the first link in the chain, AMD assumed that General Circulation Models (GCMs) are valid. However, GCMs are not valid as a forecasting method and are not reliable for forecasting at the regional level considered by AMD and H6, thus breaking the chain. Nevertheless, we audited their conditional forecasts of what would happen to the polar bear population assuming that the extent of summer sea ice will decrease substantially in the coming decades. AMD could not be rated against 26 relevant principles because the paper did not contain enough information. In all, AMD violated 73 of the 90 forecasting principles we were able to rate. The authors used two unvalidated methods and relied on only one polar bear expert to specify variables, relationships, and inputs into their models. The expert then adjusted the models until the outputs conformed to his expectations. In effect, the forecasts were the opinions of a single expert unaided by forecasting principles. Based on research to date, approaches based on unaided expert opinion are inappropriate for forecasting in situations of high complexity and much uncertainty. Our audit of the second most relevant paper, H6, found that it was also based on faulty forecasting methodology. For example, it extrapolated nearly 100 years into the future on the basis of only five years of data, and even those data were of doubtful validity. In summary, experts’ predictions, unaided by evidence-based forecasting procedures, should play no role in this decision. Without scientific forecasts of a substantial decline of the polar bear population and of net benefits from feasible policies arising from listing polar bears, a decision to list polar bears as threatened or endangered would be irresponsible.
    Keywords: adaptation, bias, climate change, decision making, endangered species, expert opinion, evaluation, evidence-based principles, expert judgment, extinction, forecasting methods, global warming, habitat loss, mathematical models, scientific method, sea ice

    A comparison between traditional and nontraditional elementary schools on the basis of the academic achievement and self-esteem of students and parental perceptions of the education provided

    The purpose of this study was to determine how students who were provided a fundamental elementary school curriculum compared with those who were provided a regular elementary school curriculum in the areas of basic skills and self-concept. The study also compared the perceptions of students' parents to determine if there were differences in their attitudes toward the education provided. The sample consisted of 52 fundamental school students, 52 regular school students, and 50 of their parents. Univariate analysis of variance was employed to test differences between fundamental and regular school students' achievement as measured by the Iowa Test of Basic Skills and the Iowa Test of Educational Development. The t-test was employed to test differences between fundamental and regular school students' achievement in reading and mathematics utilizing district objectives-based tests, and also differences between students' self-concept as measured by the Coopersmith Self-Esteem Inventory. The Chi-Square Test of Independence was employed to test differences between fundamental and regular school students on measures of parental perceptions of the goals, nature, and quality of education provided by the elementary schools. Findings of this study indicated that students who were enrolled in the fundamental school for one year achieved higher on norm-referenced measures in the area of mathematics than their counterparts from the regular schools. Those enrolled in the fundamental school for two years achieved higher composite norm-referenced results than their counterparts. Students enrolled in the fundamental school for four years achieved higher on norm-referenced measures than their counterparts on the reading and mathematics subscales after being exposed to the fundamental program for two years. Findings revealed that no significant differences existed between any of the groups on district objectives-based measures, the overall or any subscale on the Coopersmith Self-Esteem Inventory, or parental responses, with two exceptions. Students exposed to the fundamental school program for four years scored significantly lower than their counterparts on the school-academic subscale. Parents of fundamental school students viewed the regular elementary schools more negatively than did the parents of regular school students.
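
    The group comparisons described above rest on standard tests (an independent-samples t-test and a chi-square test of independence). The sketch below shows how such tests are typically run; the scores and perception counts are invented for illustration and are not the study's data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        fundamental = rng.normal(55, 10, 52)   # hypothetical achievement scores, n = 52
        regular = rng.normal(52, 10, 52)

        # Independent-samples t-test on achievement scores
        t_stat, p_val = stats.ttest_ind(fundamental, regular)
        print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

        # Chi-square test of independence on parental-perception counts
        # (rows: fundamental vs. regular parents; columns: agree / neutral / disagree)
        table = np.array([[30, 12, 8],
                          [22, 15, 13]])
        chi2, p, dof, expected = stats.chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")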

    The Marketing of Agricultural Products in Bowie County, Texas

    This study is being conducted for the purpose of finding out the possibilities for marketing agricultural products in Bowie County, with a view to improving the economic condition of the agricultural producing population of the county. The study will present the problems facing the agricultural worker in his efforts to market the products produced. Another aspect of the study will deal with possible solutions which may alleviate the existing pressure felt in the marketing of agricultural products. This study is limited to the marketing practices of fifty Negro farmers located in Bowie County, Texas, who are engaged in general farming.

    The African American and the California Basic Skills Requirement for Teaching

    This study examines why African Americans have the lowest CBEST passing rate in California, at 60%. Madkins (2011) identified licensure testing as a significant reason why African Americans cannot enter the teaching profession. According to Darling-Hammond et al. (2016), California has an ongoing credentialed-teacher shortage, and the need for teachers of color is even greater. According to the California Department of Education (2021), 60% of the state's educator workforce is White, while the state's multicultural and multilingual student body is only slightly more than 22% White. While licensure testing for teachers is required in all 50 states, it is well documented that such testing negates teacher diversity (Brown, 2005; Goldhaber & Hansen, 2010; Sleeter, 2016), and the research confirms that it effectively curtails the number of African American educators (Behizadeh & Neely, 2018; Ingersoll et al., 2019; Petchauer, 2012). To clarify why the CBEST is so difficult for African Americans, I used narrative inquiry with a counter-narrative framework. The inquiry describes the lived experiences of African American applicants in order to interrogate the CBEST's impact on prospective and current African American teachers in California.

    Quantifying Equivocation for Finite Blocklength Wiretap Codes

    This paper presents a new technique for the analysis and comparison of wiretap codes in the small-blocklength regime over the binary erasure wiretap channel. A major result is the development of Monte Carlo strategies for quantifying a code's equivocation, mirroring techniques used to analyze ordinary error-correcting codes. For this paper, we limit our analysis to coset-based wiretap codes and make several comparisons of different code families at small and medium blocklengths. Our results indicate that there are security advantages to using specific codes at small to medium blocklengths.
    Comment: Submitted to ICC 201
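
    A standard result for coset coding on the erasure wiretap channel is that, conditioned on a given erasure pattern, the eavesdropper's equivocation equals the GF(2) rank of the erased columns of the syndrome-forming matrix, so averaging that rank over randomly drawn erasure patterns estimates the expected equivocation. The sketch below is a minimal Monte Carlo estimator built on that assumption, using a [7,4] Hamming parity-check matrix purely as a stand-in secrecy code; it is not the paper's implementation.

        import numpy as np

        def gf2_rank(mat):
            # Rank of a binary matrix over GF(2) by Gaussian elimination.
            m = (mat.copy() % 2).astype(np.uint8)
            rank = 0
            rows, cols = m.shape
            for col in range(cols):
                pivot = next((r for r in range(rank, rows) if m[r, col]), None)
                if pivot is None:
                    continue
                m[[rank, pivot]] = m[[pivot, rank]]
                for r in range(rows):
                    if r != rank and m[r, col]:
                        m[r] ^= m[rank]
                rank += 1
            return rank

        def mc_equivocation(H, eps, trials=20000, seed=0):
            # Estimate E[H(M|Z)] in bits for coset coding with syndrome matrix H
            # over a binary erasure wiretap channel with erasure probability eps.
            # Per erasure pattern, the equivocation is the GF(2) rank of the
            # columns of H at the erased positions (assumed standard setup).
            rng = np.random.default_rng(seed)
            n = H.shape[1]
            total = 0
            for _ in range(trials):
                erased = rng.random(n) < eps
                total += gf2_rank(H[:, erased])
            return total / trials

        # Stand-in secrecy code: [7,4] Hamming parity-check matrix, i.e. k = 3
        # secret bits per 7 channel uses; the equivocation can be at most 3 bits.
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
        for eps in (0.3, 0.5, 0.7, 0.9):
            print(f"eps={eps}: ~{mc_equivocation(H, eps, trials=5000):.3f} bits")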

    Auditing perspective of the historical development of internal control


    Ecologies of Hope: Understanding Educational Success Among Black Males in an Urban Midwestern City

    The American Psychological Association's Task Force on Resilience and Strength in Black Children and Adolescents (2008) called for resilience frameworks particularly designed to understand African American development. Thus, the present study explores the lives of seven academically successful Black males in an urban midwestern city. Using a Critical Race Theory framework, the researchers center the counterstories of men of color who matriculated through college from a failing high school in a challenging urban community. Using constant comparative analysis, two critical themes emerged: extended family and extended kinship support networks. A synthesis of these themes resulted in an emergent framework entitled Ecology of Hope, which advances resilience theory (APA, 2008) through centering the strengths vested within African American families, community organizations, and social networks.

    Evidence-Based Forecasting for Climate Change

    Following Green, Armstrong and Soon’s (IJF 2009) (GAS) naïve extrapolation, Fildes and Kourentzes (IJF 2011) (F&K) found that each of six more sophisticated, but inexpensive, extrapolation models provided forecasts of global mean temperature for the 20 years to 2007 that were more accurate than the “business as usual” projections provided by the complex and expensive “General Circulation Models” used by the U.N.’s Intergovernmental Panel on Climate Change (IPCC). Their average trend forecast was 0.007°C per year and diminishing, less than a quarter of the IPCC’s 0.030°C-per-year projection. F&K extended previous research by combining forecasts from evidence-based short-term forecasting methods. To further extend this work, we suggest researchers: (1) reconsider causal forces; (2) validate with more and longer-term forecasts; (3) adjust validation data for known biases and use alternative data; and (4) damp forecasted trends to compensate for the complexity and uncertainty of the situation. We have made a start in following these suggestions and found that: (1) uncertainty about causal forces is such that they should be avoided in climate forecasting models; (2) long-term forecasts should be validated using all available data and much longer series that include representative variations in trend; (3) when tested against temperature data collected by satellite, naïve forecasts are more accurate than F&K’s longer-term (11-20 year) forecasts; and (4) progressive damping improves the accuracy of F&K’s forecasts. In sum, while forecasting a trend may improve the accuracy of forecasts for a few years into the future, the improvements rapidly disappear as the forecast horizon lengthens beyond ten years. We conclude that predictions of dangerous manmade global warming and of benefits from climate policies fail to meet the standards of evidence-based forecasting and are not a proper basis for policy decisions.
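
    As a rough illustration of the trend damping in point (4), a damped-trend forecast shrinks each successive year's trend increment by a factor phi, so the projection flattens toward the naïve no-change forecast as the horizon grows. The sketch below is illustrative only: the 0.007°C-per-year trend is the F&K average quoted above, while the starting level of 0.4°C and the damping factor phi = 0.9 are hypothetical.

        def damped_forecast(last_level, trend, horizon, phi=0.9):
            # Add the fitted per-year trend, damped by phi for each extra year ahead.
            # phi = 1 reproduces straight-line extrapolation; small phi approaches
            # the naive no-change forecast.
            damp = sum(phi ** i for i in range(1, horizon + 1))
            return last_level + trend * damp

        for h in (1, 5, 10, 20, 50):
            damped = damped_forecast(0.4, 0.007, h)
            straight = 0.4 + 0.007 * h
            print(f"h={h:2d}  damped={damped:.4f}  straight-line={straight:.4f}  naive=0.4000")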