
    Lifetime of Surface Features and Stellar Rotation: A Wavelet Time-Frequency Approach

    We explore subtle variations in disk-integrated measurements spanning ≲ 18 years of stellar surface magnetism using a newly developed time-frequency gapped wavelet algorithm. We present results based on analysis of the Mount Wilson Ca II H and K emission fluxes in four magnetically active stars (HD 1835 [G2V], HD 82885 [G8IV-V], HD 149661 [K0V], and HD 190007 [K4V]) and on sensitivity tests using artificial data. When the wavelet basis is appropriately modified (i.e., when the time-frequency resolution is optimized), the results are consistent with the existence of spatially localized and long-lived Ca II features (assumed here to be activity regions that tend to recur in narrowly confined latitude bands), especially in HD 1835 and HD 82885. This interpretation is based on the observed persistence of relatively localized Ca II wavelet power at a narrow range of rotational time scales, enduring for as long as ≳ 10 years.
    Comment: to appear in The Astrophysical Journal Letters.
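
    The authors' gapped-wavelet code is not reproduced here; the following is a minimal, generic sketch of the underlying idea, evaluating Morlet wavelet power by direct summation so that gapped, irregular sampling is handled naturally. The synthetic series, rotation period, and wavelet parameters are all illustrative assumptions.

    ```python
    import numpy as np

    # Synthetic stand-in for ~18 years of gapped Ca II flux measurements.
    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 18 * 365.25, 2000))          # observation times [days]
    p_rot = 7.8                                             # assumed rotation period [days]
    flux = np.sin(2 * np.pi * t / p_rot) * np.exp(-0.5 * ((t - 3000) / 1500) ** 2)
    flux += 0.5 * rng.standard_normal(t.size)               # measurement noise

    def morlet_power(t, y, period, t0, omega0=6.0):
        """Wavelet power at one (time, period) point, computed by direct
        summation so gaps in the sampling need no special treatment."""
        s = period * omega0 / (2 * np.pi)                   # Morlet scale for this period
        arg = (t - t0) / s
        w = np.exp(1j * omega0 * arg - 0.5 * arg**2)        # Morlet wavelet
        return np.abs(np.sum(y * np.conj(w))) ** 2 / np.sum(np.abs(w) ** 2)

    # Long-lived surface features would appear as a ridge of power that
    # stays near one rotational period for many years in this map.
    periods = np.linspace(5, 12, 60)
    centers = np.linspace(t.min(), t.max(), 120)
    power = np.array([[morlet_power(t, flux, p, tc) for tc in centers] for p in periods])
    ```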

    Polar Bear Population Forecasts: A Public-Policy Forecasting Audit

    The extinction of polar bears by the end of the 21st century has been predicted, and calls have been made to list them as a threatened species under the U.S. Endangered Species Act. The decision on whether or not to list rests upon forecasts of what will happen to the bears over the 21st century. Scientific research on forecasting, conducted since the 1930s, has led to an extensive set of principles (evidence-based procedures) that describe which methods are appropriate under given conditions. The principles of forecasting have been published and are easily available. We assessed polar bear population forecasts in light of these scientific principles. Much research has been published on forecasting polar bear populations: using an Internet search, we located roughly 1,000 such papers. None of them made reference to the scientific literature on forecasting. We examined the references in the nine unpublished government reports that were prepared "to Support U.S. Fish and Wildlife Service Polar Bear Listing Decision." Those papers did not include references to works on scientific forecasting methodology either. Of the nine papers written to support the listing, we judged two to be the most relevant to the decision: Amstrup, Marcot, and Douglas (2007), which we refer to as AMD, and Hunter et al. (2007), which we refer to as H6 to represent its six authors. AMD's forecasts were the product of a complex causal chain. For the first link in the chain, AMD assumed that General Circulation Models (GCMs) are valid. However, GCMs are not valid as a forecasting method and are not reliable for forecasting at the regional level considered by AMD and H6, thus breaking the chain. Nevertheless, we audited their conditional forecasts of what would happen to the polar bear population assuming that the extent of summer sea ice will decrease substantially in the coming decades. AMD could not be rated against 26 relevant principles because the paper did not contain enough information. In all, AMD violated 73 of the 90 forecasting principles we were able to rate. The authors used two unvalidated methods and relied on only one polar bear expert to specify variables, relationships, and inputs into their models. The expert then adjusted the models until the outputs conformed to his expectations. In effect, the forecasts were the opinions of a single expert unaided by forecasting principles. Based on research to date, approaches based on unaided expert opinion are inappropriate for forecasting in situations with high complexity and much uncertainty. Our audit of the second most relevant paper, H6, found that it was also based on faulty forecasting methodology. For example, it extrapolated nearly 100 years into the future on the basis of only five years of data, and the data for those years were of doubtful validity. In summary, experts' predictions, unaided by evidence-based forecasting procedures, should play no role in this decision. Without scientific forecasts of a substantial decline of the polar bear population and of net benefits from feasible policies arising from listing polar bears, a decision to list polar bears as threatened or endangered would be irresponsible.
    Keywords: adaptation, bias, climate change, decision making, endangered species, expert opinion, evaluation, evidence-based principles, expert judgment, extinction, forecasting methods, global warming, habitat loss, mathematical models, scientific method, sea ice
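
    The audit's objection to extrapolating nearly 100 years ahead from five years of data can be illustrated numerically. The figures below are invented for illustration and come from neither paper; the sketch shows how the spread of a fitted-trend extrapolation explodes with the horizon while a no-change forecast's error stays near the noise level.

    ```python
    import numpy as np

    # Simulate many 5-year samples from a stable (trendless) noisy process,
    # fit a linear trend to each, and extrapolate 100 years ahead.
    rng = np.random.default_rng(1)
    years = np.arange(5.0)
    extrapolations = []
    for _ in range(10_000):
        obs = 100.0 + rng.normal(0.0, 5.0, size=5)        # five noisy observations
        slope, intercept = np.polyfit(years, obs, 1)      # fitted linear trend
        extrapolations.append(intercept + slope * 100.0)  # 100-year extrapolation
    extrapolations = np.asarray(extrapolations)

    # A no-change forecast errs by roughly one noise sigma (5.0 here);
    # the trend extrapolation's spread is an order of magnitude larger.
    print("trend extrapolation std:", extrapolations.std().round(1))
    ```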

    Three-dimensional inversion of corona structure and simulation of solar wind parameters based on the photospheric magnetic field deduced from the Global Oscillation Network Group

    In this research, the Potential Field Source Surface–Wang–Sheeley–Arge (PFSS–WSA) solar wind model is used. This model consists of the Potential Field Source Surface (PFSS) coronal magnetic field extrapolation module and the Wang–Sheeley–Arge (WSA) solar wind velocity module. PFSS is implemented with the POT3D package deployed on the Tianhe-1A supercomputer system. To obtain the three-dimensional (3D) distribution of the coronal magnetic field at different source surface radii (Rss), the model takes as input the Global Oscillation Network Group (GONG) photospheric magnetic field maps for two Carrington rotations (CRs), CR2069 (in 2008) and CR2217 (in 2019), with the source surface placed at Rss = 2Rs, 2.5Rs, and 3Rs. The solar wind velocity, the coronal magnetic field expansion factor, and the minimum angular distance of the open magnetic field lines from the coronal hole boundary are then estimated within the WSA module. The simulated solar wind speed is compared with the speed extrapolated back to the corona from data observed near 1 AU, using the mean square error (MSE), root mean square error (RMSE), and correlation coefficient (CC) as evaluation metrics; the solar wind velocity observed at 1 AU is mapped back to the source surface along the Parker spiral. Comparing the evaluation metrics for the three source surface heights, we conclude that the source surface should be lowered below Rss = 2.5Rs during the low solar activity phase of solar cycle 23.
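
    As a minimal sketch (not the authors' code), the ballistic Parker-spiral back-mapping and the evaluation metrics named above can be written as follows. The constant-radial-speed assumption and the sidereal rotation rate are standard simplifications, not details taken from the paper.

    ```python
    import numpy as np

    AU_KM = 1.495978707e8                      # 1 AU in km
    RS_KM = 6.957e5                            # solar radius in km
    OMEGA_SUN = 2 * np.pi / (25.38 * 86400)    # sidereal rotation rate [rad/s]

    def backmap_longitude(lon_1au_deg, v_kms, rss_rs=2.5):
        """Carrington longitude at the source surface for plasma observed
        at 1 AU, assuming constant radial speed (ballistic approximation).
        The Sun rotates during transit, so the source sits ahead in longitude."""
        travel_time = (AU_KM - rss_rs * RS_KM) / v_kms          # seconds
        return (lon_1au_deg + np.degrees(OMEGA_SUN * travel_time)) % 360.0

    def metrics(model, observed):
        """MSE, RMSE, and correlation coefficient between two speed series."""
        err = np.asarray(model) - np.asarray(observed)
        mse = np.mean(err ** 2)
        return mse, np.sqrt(mse), np.corrcoef(model, observed)[0, 1]
    ```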

    Evidence-Based Forecasting for Climate Change

    Following the naïve extrapolation of Green, Armstrong, and Soon (IJF 2009; GAS), Fildes and Kourentzes (IJF 2011; F&K) found that each of six more sophisticated, but inexpensive, extrapolation models provided forecasts of global mean temperature for the 20 years to 2007 that were more accurate than the "business as usual" projections provided by the complex and expensive General Circulation Models used by the U.N.'s Intergovernmental Panel on Climate Change (IPCC). Their average trend forecast was 0.007°C per year, and diminishing; less than a quarter of the IPCC's 0.030°C-per-year projection. F&K extended previous research by combining forecasts from evidence-based short-term forecasting methods. To further extend this work, we suggest researchers: (1) reconsider causal forces; (2) validate with more and longer-term forecasts; (3) adjust validation data for known biases and use alternative data; and (4) damp forecasted trends to compensate for the complexity and uncertainty of the situation. We have made a start in following these suggestions and found that: (1) uncertainty about causal forces is such that they should be avoided in climate forecasting models; (2) long-term forecasts should be validated using all available data and much longer series that include representative variations in trend; (3) when tested against temperature data collected by satellite, naïve forecasts are more accurate than F&K's longer-term (11-20 year) forecasts; and (4) progressive damping improves the accuracy of F&K's forecasts. In sum, while forecasting a trend may improve the accuracy of forecasts for a few years into the future, the improvements rapidly disappear as the forecast horizon lengthens beyond ten years. We conclude that predictions of dangerous manmade global warming and of benefits from climate policies fail to meet the standards of evidence-based forecasting and are not a proper basis for policy decisions.
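
    Point (4), trend damping, has a standard formulation in the forecasting literature (the Gardner-McKenzie damped trend). The sketch below uses assumed values for the level, trend, and damping parameter; none are taken from the paper.

    ```python
    def damped_trend_forecast(level, trend, horizon, phi=0.9):
        """h-step-ahead forecast: level + trend * (phi + phi^2 + ... + phi^h).
        With phi < 1 the projected trend flattens as the horizon grows,
        approaching level + trend * phi / (1 - phi) in the limit."""
        return level + trend * sum(phi ** i for i in range(1, horizon + 1))

    # Example: a fitted warming trend of 0.007 deg C/yr damped over 20 years
    # adds ~0.055 deg C instead of the undamped 0.14 deg C.
    print(damped_trend_forecast(level=14.0, trend=0.007, horizon=20))
    ```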

    Validity of Climate Change Forecasting for Public Policy Decision Making

    Policymakers need to know whether prediction is possible and, if so, whether any proposed forecasting method will provide forecasts that are substantively more accurate than those from the relevant benchmark method. Inspection of global temperature data suggests that it is subject to irregular variations on all relevant time scales and that the variations during the late 1900s were not unusual. In such a situation, a "no change" extrapolation is an appropriate benchmark forecasting method. We used the U.K. Met Office Hadley Centre's annual average thermometer data from 1850 through 2007 to examine the performance of the benchmark method. The accuracy of forecasts from the benchmark is such that even perfect forecasts would be unlikely to help policymakers: for example, mean absolute errors for the 20- and 50-year horizons were 0.18°C and 0.24°C. We nevertheless demonstrate the use of benchmarking with the example of the Intergovernmental Panel on Climate Change's 1992 linear projection of long-term warming at a rate of 0.03°C per year. The small sample of errors from ex ante projections at 0.03°C per year for 1992 through 2008 was practically indistinguishable from the benchmark errors. Validation for long-term forecasting, however, requires a much longer horizon. Again using the IPCC warming rate for our demonstration, we projected the rate successively over a period analogous to that envisaged in their scenario of exponential CO2 growth: the years 1851 to 1975. The errors from the projections were more than seven times greater than the errors from the benchmark method, and relative errors were larger for longer forecast horizons. Our validation exercise illustrates the importance of determining whether it is possible to obtain forecasts that are more useful than those from a simple benchmark before making expensive policy decisions.
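
    A minimal sketch of the benchmarking procedure (not the authors' code): the rolling mean absolute error of a "no change" forecast over an annual series. The array here is a random-walk placeholder standing in for the HadCRUT-style 1850-2007 annual data, so the printed numbers are illustrative only.

    ```python
    import numpy as np

    def no_change_mae(series, horizon):
        """MAE of forecasting that the value `horizon` years ahead equals today's."""
        return np.abs(series[horizon:] - series[:-horizon]).mean()

    annual_temps = np.cumsum(np.random.default_rng(2).normal(0, 0.1, 158))  # placeholder
    for h in (20, 50):
        print(h, "yr horizon MAE:", round(no_change_mae(annual_temps, h), 3))
    ```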

    Benchmark Forecasts for Climate Change

    We assessed three important criteria of forecastability: simplicity, certainty, and variability. Climate is complex due to many causal variables and their variable interactions, and there is uncertainty about causes, effects, and data. Using evidence-based (scientific) forecasting principles, we determined that a naïve no-change extrapolation method was the appropriate benchmark. To be useful to policy makers, a proposed forecasting method would have to provide forecasts that were substantially more accurate than the benchmark. We calculated benchmark forecasts against the UK Met Office Hadley Centre's annual average thermometer data from 1850 through 2007. For the 20- and 50-year horizons the mean absolute errors were 0.18°C and 0.24°C. The accuracy of forecasts from our naïve model is such that even perfect forecasts would be unlikely to help policy makers. We nevertheless evaluated the Intergovernmental Panel on Climate Change's 1992 forecast of 0.03°C-per-year temperature increases. The small sample of errors from ex ante forecasts for 1992 through 2008 was practically indistinguishable from the naïve benchmark errors. To get a larger sample and evidence on longer horizons, we backcast successively from 1974 to 1850. Averaged over all horizons, the IPCC errors were more than seven times greater than the errors from the benchmark. Relative errors were larger for longer backcast horizons.
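
    The backcasting comparison can be sketched as below. The series is again a random-walk placeholder standing in for the Hadley Centre annual data, so the printed ratio is illustrative and should not be read as the paper's seven-fold result.

    ```python
    import numpy as np

    series = np.cumsum(np.random.default_rng(3).normal(0, 0.1, 158))  # 1850-2007 stand-in
    rate = 0.03                                                       # deg C per year

    ratios = []
    for h in range(1, 126):                    # horizons up to 125 years (1975 back to 1850)
        origins = series[h:]                   # backcast origins
        actuals = series[:-h]                  # observed values h years earlier
        lin_err = np.abs((origins - rate * h) - actuals)  # fixed-rate backcast errors
        nc_err = np.abs(origins - actuals)                # no-change benchmark errors
        ratios.append(lin_err.mean() / nc_err.mean())
    print("mean error ratio (fixed rate vs. no change):", round(float(np.mean(ratios)), 1))
    ```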

    Research on Forecasting for the Manmade Global Warming Alarm: Testimony to Committee on Science, Space and Technology Subcommittee on Energy and Environment on Climate Change: Examining the processes used to create science and policy

    The validity of the manmade global warming alarm requires the support of scientific forecasts of (1) a substantive long-term rise in global mean temperatures in the absence of regulations, (2) serious net harmful effects due to global warming, and (3) cost-effective regulations that would produce net beneficial effects versus alternative policies, including doing nothing. Without scientific forecasts for all three aspects of the alarm, there is no scientific basis to enact regulations. In effect, the warming alarm is like a three-legged stool: each leg needs to be strong. Despite repeated appeals to global warming alarmists, we have been unable to find scientific forecasts for any of the three legs. We drew upon scientific (evidence-based) forecasting principles to audit the procedures used to forecast global mean temperatures by the Intergovernmental Panel on Climate Change (IPCC), leg (1) of the stool. This audit found that the IPCC procedures violated 81% of the 89 relevant forecasting principles. We also audited the forecasting procedures used in two papers that were written to support regulation regarding the protection of polar bears from global warming, leg (3) of the stool. On average, those procedures violated 85% of the 90 relevant principles. The warming alarmists have not demonstrated the predictive validity of their procedures. Instead, their argument for predictive validity is based on the claim that nearly all scientists agree with the forecasts. This count of "votes" by scientists is not only an incorrect tally of scientific opinion; it is also, and most importantly, contrary to the scientific method. We conducted a validation test of the IPCC forecasts that were based on the assumption that there would be no regulations. The errors of the IPCC model's long-term forecasts (for 91 to 100 years into the future) were 12.6 times larger than those from an evidence-based "no change" model. Based on our own analyses and the documented unscientific behavior of global warming alarmists, we concluded that the global warming alarm is the product of an anti-scientific political movement. Having come to this conclusion, we turned to the "structured analogies" method to forecast the likely outcomes of the warming alarmist movement. In our ongoing study we have, to date, identified 26 similar historical alarmist movements. None of the forecasts behind the analogous alarms proved correct. Twenty-five alarms involved calls for government intervention, and the government imposed regulations in 23 cases. None of the 23 interventions was effective, and harm was caused by 20 of them. Our findings on the scientific evidence related to global warming forecasts lead to the following recommendations: (1) end government funding for climate change research; (2) end government funding for research predicated on global warming (e.g., alternative energy, CO2 reduction, habitat loss); (3) end government programs and repeal regulations predicated on global warming; and (4) end government support for organizations that lobby or campaign predicated on global warming.

    Tidal Forcing on the Sun and the 11-year Solar Activity Cycle

    The hypothesis that tidal forces on the Sun are related to modulations of the solar-activity cycle has gained increasing attention. The works proposing physical mechanisms of planetary action via tidal forcing have in common that quasi-alignments between Venus, Earth, and Jupiter (V-E-J configurations) would provide a basic periodicity of ≈ 11.0 years able to synchronize the operation of the solar dynamo with these planetary configurations. Nevertheless, the evidence behind this particular tidal forcing is still controversial. In this context we develop, for the first time, the complete Sun's tide-generating potential (STGP) in terms of a harmonic series, in which the effects of the different planets on the STGP are clearly separated and identified. We use a modification of the spectral analysis method devised by Kudryavtsev (J. Geodesy 77, 829, 2004; Astron. Astrophys. 471, 1069, 2007b) that makes it possible to expand any function of planetary coordinates into a harmonic series over long time intervals. We build a catalog of 713 harmonic terms able to represent the STGP with a high degree of precision. We look for tidal forcings related to V-E-J configurations and specifically for the existence of periodicities around 11.0 years. Although the obtained tidal periods range from ≈ 1000 years to 1 week, we do not find any ≈ 11.0-year period. The V-E-J configurations do not produce any significant tidal term at this or other periods. The Venus tidal interaction is absent in the 11-year spectral band, which is dominated by Jupiter's orbital motion. The planet that contributes the most to the STGP in three-planet configurations, along with Venus and Earth, is Saturn. An ≈ 11.0-year tidal period with a direct physical relevance to the 11-year-like solar-activity cycle is highly improbable.
    Comment: (May 2023) Published in Solar Physics.
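
    A quick back-of-the-envelope check, not taken from the paper: the leading (quadrupole) term of a planet's tide-generating potential on the Sun scales as m/d³. The sketch below ranks the planets by that ratio using standard rounded masses and mean orbital distances, reproducing the well-known ordering in which Jupiter and Venus dominate the static tidal amplitude.

    ```python
    # Masses in Earth masses, mean orbital distances in AU (rounded
    # textbook values), so treat the output as order-of-magnitude only.
    planets = {
        "Mercury": (0.0553, 0.387),
        "Venus":   (0.815, 0.723),
        "Earth":   (1.0, 1.0),
        "Mars":    (0.107, 1.524),
        "Jupiter": (317.8, 5.203),
        "Saturn":  (95.2, 9.537),
    }
    tidal = {name: m / d**3 for name, (m, d) in planets.items()}  # leading-order m/d^3
    ref = tidal["Earth"]
    for name, val in sorted(tidal.items(), key=lambda kv: -kv[1]):
        print(f"{name:8s} {val / ref:6.2f}  (relative to Earth)")
    ```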