Chapter 11 Climate Change Adaptation and African Cities
Drawing upon a variety of empirical and theoretical perspectives, The Urban Climate Challenge provides a hands-on perspective on the political and technical challenges now facing cities and transnational urban networks in the global climate regime. Bringing together experts working in the fields of global environmental governance, urban sustainability and climate change, this volume explores the ways in which cities, transnational urban networks and global policy institutions are repositioning themselves in relation to this changing global policy environment. Focusing on both Northern and Southern experience across the globe, it examines three questions that have strong bearing on the ways in which we understand and assess the changing relationship between cities and the global climate system. The Urban Climate Challenge will be of interest to scholars of urban climate policy, global environmental governance and climate change, and to readers more generally interested in the ways in which cities are now addressing the inter-related challenges of sustainable urban growth and global climate change. Chapter 9 and Chapter 11 of this book are freely available as downloadable Open Access PDFs under a Creative Commons Attribution-Non Commercial-No Derivatives 3.0 license. https://s3-us-west-2.amazonaws.com/tandfbis/rt-files/docs/Open+Access+Chapters/9781138776883_oachapter9.pdf https://s3-us-west-2.amazonaws.com/tandfbis/rt-files/docs/Open+Access+Chapters/9781138776883_oachapter11.pdf
A time series classifier
A time series is a sequence of data measured at successive time intervals. Time series analysis refers to all of the methods employed to understand such data, either to explain the underlying system producing the data or to predict future data points in the series... An evolutionary algorithm is a non-deterministic method of searching a solution space, modeled after biological evolutionary processes. A learning classifier system (LCS) is a form of evolutionary algorithm that operates on a population of mapping rules. We introduce the time series classifier TSC, a new type of LCS that allows for the modeling and prediction of time series data, derived from Wilson's XCSR, an LCS designed for use with real-valued inputs. Our method works by modifying the makeup of the rules in the LCS so that they are suitable for use on a time series... We tested TSC on real-world historical stock data --Abstract, page iii
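As a rough illustration of the rule representation involved, the sketch below implements the interval-style condition matching used by XCSR-derived classifiers over a sliding window of time series values. It is a minimal sketch under stated assumptions, not the TSC implementation; the class names, the fixed window length, and the toy rules are all hypothetical.

    class Rule:
        """An XCSR-style classifier rule: one (center, spread) interval per
        input dimension, plus a predicted action and an evolving fitness."""
        def __init__(self, centers, spreads, action):
            self.centers = centers    # interval midpoints, one per input
            self.spreads = spreads    # interval half-widths
            self.action = action      # predicted class or output
            self.fitness = 0.1        # refined by the evolutionary loop

        def matches(self, window):
            """A rule matches when every value falls inside its interval."""
            return all(abs(x - c) <= s
                       for x, c, s in zip(window, self.centers, self.spreads))

    def match_set(population, window):
        """Collect all rules whose intervals cover the current window."""
        return [r for r in population if r.matches(window)]

    # Toy usage: a sliding window of three normalized returns.
    rules = [Rule([0.0, 0.0, 0.0], [0.5, 0.5, 0.5], action="hold"),
             Rule([0.2, 0.3, 0.4], [0.1, 0.1, 0.1], action="buy")]
    print([r.action for r in match_set(rules, [0.1, -0.2, 0.3])])  # ['hold']

In a full LCS, the matching rules would form a match set whose predictions are fused, with fitness updates and a genetic algorithm evolving the intervals over time.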
Short-Range Forecasting of COVID-19 During Early Onset at County, Health District, and State Geographic Levels Using Seven Methods: Comparative Forecasting Study
BACKGROUND:
Forecasting methods rely on trends and averages of prior observations to forecast COVID-19 case counts. COVID-19 forecasts have received much media attention, and numerous platforms have been created to inform the public. However, forecasting effectiveness varies by geographic scope and is affected by changing assumptions in behaviors and preventative measures in response to the pandemic. Due to time requirements for developing a COVID-19 vaccine, evidence is needed to inform short-term forecasting method selection at county, health district, and state levels.
OBJECTIVE:
COVID-19 forecasts keep the public informed and contribute to public policy. As such, proper understanding of forecasting purposes and outcomes is needed to advance knowledge of health statistics for policy makers and the public. Using publicly available real-time data provided online, we aimed to evaluate the performance of seven forecasting methods utilized to forecast cumulative COVID-19 case counts. Forecasts were evaluated based on how well they forecast 1, 3, and 7 days forward when utilizing 1-, 3-, 7-, or all prior-day cumulative case counts during early virus onset. This study provides an objective evaluation of the forecasting methods to identify forecasting model assumptions that contribute to lower error in forecasting COVID-19 cumulative case growth. This information benefits professionals, decision makers, and the public relying on the data provided by short-term case count estimates at varied geographic levels.
METHODS:
We created 1-, 3-, and 7-day forecasts at the county, health district, and state levels using (1) a naïve approach, (2) Holt-Winters (HW) exponential smoothing, (3) a growth rate approach, (4) a moving average (MA) approach, (5) an autoregressive (AR) approach, (6) an autoregressive moving average (ARMA) approach, and (7) an autoregressive integrated moving average (ARIMA) approach. Forecasts relied on Virginia's 3464 historical county-level cumulative case counts from March 7 to April 22, 2020, as reported by The New York Times. Statistically significant results were identified using 95% CIs of the median absolute error (MdAE) and median absolute percentage error (MdAPE) metrics of the resulting 216,698 forecasts.
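For readers unfamiliar with the simpler baselines, the following Python sketch shows the naïve and moving-average forecasts with a configurable look-back length. It is an illustrative approximation of such methods in general, not the study's code, and the example counts are hypothetical.

    import numpy as np

    def naive_forecast(counts, horizon):
        """Naive method: carry the last observed count forward."""
        return np.repeat(counts[-1], horizon)

    def moving_average_forecast(counts, horizon, lookback=3):
        """MA method: forecast with the mean of the last `lookback`
        observations, rolled forward iteratively for multi-day horizons."""
        history = list(counts)
        forecasts = []
        for _ in range(horizon):
            forecasts.append(np.mean(history[-lookback:]))
            history.append(forecasts[-1])
        return np.array(forecasts)

    cumulative = [10, 14, 19, 25, 32]  # hypothetical cumulative case counts
    print(naive_forecast(cumulative, 3))           # [32 32 32]
    print(moving_average_forecast(cumulative, 3))  # 3-day mean rolled forward

Note that applied to cumulative counts, the MA forecast reverts toward the mean of the look-back window rather than extrapolating growth, which is exactly the stationary-mean assumption discussed in the results.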
RESULTS:
The next-day MA forecast with a 3-day look-back length obtained the lowest MdAE (median 0.67, 95% CI 0.49-0.84; P<.001) and statistically significantly differed from 39 of 59 alternatives (66%) to 53 of 59 alternatives (90%) at each geographic level at a significance level of .01. For short-range forecasting, methods assuming stationary means of prior days' counts outperformed methods assuming weakly stationary or nonstationary means. MdAPE results revealed statistically significant differences across geographic levels.
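The two error metrics are straightforward to compute; a minimal sketch follows, assuming nonzero actual counts for MdAPE (the example values are hypothetical).

    import numpy as np

    def mdae(actual, forecast):
        """Median absolute error over a set of forecasts."""
        return np.median(np.abs(np.asarray(actual) - np.asarray(forecast)))

    def mdape(actual, forecast):
        """Median absolute percentage error; assumes actual counts are nonzero."""
        actual = np.asarray(actual, dtype=float)
        return np.median(np.abs((actual - np.asarray(forecast)) / actual)) * 100

    actual, forecast = [32, 35, 40], [31.5, 36.0, 38.0]  # hypothetical values
    print(mdae(actual, forecast))   # 1.0
    print(mdape(actual, forecast))  # ~2.86 (percent)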
CONCLUSIONS:
For short-range COVID-19 cumulative case count forecasting at the county, health district, and state levels during early onset, the following were found: (1) the MA method was effective for forecasting 1-, 3-, and 7-day cumulative case counts; (2) exponential growth was not the best representation of case growth during early virus onset when the public was aware of the virus; and (3) geographic resolution was a factor in the selection of forecasting methods.
Application of One-, Three-, and Seven-Day Forecasts During Early Onset on the COVID-19 Epidemic Dataset Using Moving Average, Autoregressive, Autoregressive Moving Average, Autoregressive Integrated Moving Average, and Naïve Forecasting Methods
The coronavirus disease 2019 (COVID-19) spread rapidly across the world after its appearance in December 2019. This data set creates one-, three-, and seven-day forecasts of the COVID-19 pandemic's cumulative case counts at the county, health district, and state geographic levels for the state of Virginia. Forecasts are created over the first 46 days of reported COVID-19 cases using the cumulative case count data provided by The New York Times as of April 22, 2020. From this historical data, one, three, seven, and all days prior to the forecast start date are used to generate the forecasts. Forecasts are created using: (1) a naïve approach; (2) Holt-Winters exponential smoothing (HW); (3) growth rate (Growth); (4) moving average (MA); (5) autoregressive (AR); (6) autoregressive moving average (ARMA); and (7) autoregressive integrated moving average (ARIMA). Median absolute error (MdAE) and median absolute percentage error (MdAPE) metrics are computed for each forecast to evaluate it against existing historical data. These error metrics are aggregated to assess which combination of forecast method, forecast length, and look-back length fits best, based on the lowest aggregated error at each geographic level. The data set comprises an R Project file, four R source code files, all 1,329,404 generated short-range forecasts, MdAE and MdAPE error metric data for each forecast, copies of the input files, and the generated comparison tables. All code and data files are included to ensure transparency and facilitate replicability and reproducibility. The package opens directly in RStudio through the R Project file, which removes the need to set path locations for the folders contained within the data set and simplifies setup. The data set provides two avenues for reproducing results: (1) use the provided code to generate the forecasts from scratch and then run the analyses; or (2) load the saved forecast data and run the analyses on the stored data. Code annotations provide the instructions needed for both routes. The data can be used to generate the same set of forecasts and error metrics for any US state by altering the state parameter within the source code. Users can also generate health district forecasts for any other state by providing a file that maps each county within a state to its respective health district. The source code can also be connected to the most up-to-date version of The New York Times COVID-19 dataset, allowing forecasts to be generated up to the most recently reported data to facilitate near real-time forecasting.
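The published package itself is written in R; purely as an illustration of the kind of input preparation involved, here is a Python sketch that pulls county-level cumulative counts for one state from The New York Times data repository. The raw-file URL and column names are assumptions to verify against the current repository layout.

    import pandas as pd

    # Raw-file URL and column names are assumptions to verify against the repo.
    URL = ("https://raw.githubusercontent.com/nytimes/"
           "covid-19-data/master/us-counties.csv")

    def state_cumulative_counts(state="Virginia", end_date="2020-04-22"):
        df = pd.read_csv(URL, parse_dates=["date"])
        df = df[(df["state"] == state) & (df["date"] <= end_date)]
        # One row per county per day; the `cases` column is already cumulative.
        return df.pivot_table(index="date", columns="county",
                              values="cases", aggfunc="first").fillna(0)

    counts = state_cumulative_counts()
    print(counts.shape)  # (days, counties) matrix of cumulative case counts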
Augmenting Bottom-Up Metamodels with Predicates
Metamodeling refers to modeling a model. There are two metamodeling approaches for ABMs: (1) top-down and (2) bottom-up. The top-down approach enables users to decompose high-level mental models into behaviors and interactions of agents. In contrast, the bottom-up approach constructs a relatively small, simple model that approximates the structure and outcomes of a dataset gathered from the runs of an ABM. The bottom-up metamodel makes the behavior of the ABM comprehensible and exploratory analyses feasible. For most users, the construction of a bottom-up metamodel entails: (1) creating an experimental design, (2) running the simulation for all cases specified by the design, (3) collecting the inputs and output in a dataset, and (4) applying first-order regression analysis to find a model that effectively estimates the output. Unfortunately, the sums of input variables employed by first-order regression analysis give the impression that one can compensate for one component of the system by improving some other component, even if such substitution is inadequate or invalid. As a result, the metamodel can be misleading. We address these deficiencies with an approach that: (1) automatically generates Boolean conditions that highlight when substitutions and tradeoffs among variables are valid, and (2) augments the bottom-up metamodel with the conditions to improve validity and accuracy. We evaluate our approach using several established agent-based simulations.
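A minimal sketch of the underlying idea follows, with entirely hypothetical data: a first-order regression metamodel is fit separately in each region defined by a Boolean predicate, so the fitted trade-off between inputs is only asserted where it is valid. This illustrates predicate-augmented regression in general, not the authors' condition-generation algorithm.

    import numpy as np

    # Hypothetical ABM experiment: two input parameters per run, one output.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(200, 2))
    # x1 only affects the output above a threshold, a regime that a single
    # first-order regression would smear across the whole input space.
    y = 2.0 * X[:, 0] + 3.0 * X[:, 1] * (X[:, 1] > 0.5) + rng.normal(0, 0.1, 200)

    def fit_first_order(X, y):
        """Ordinary least squares for y ~ b0 + b1*x0 + b2*x1."""
        A = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    # Augment with a Boolean predicate: fit one first-order model per regime,
    # so trade-offs between x0 and x1 are only claimed where they are valid.
    predicate = X[:, 1] > 0.5
    models = {True: fit_first_order(X[predicate], y[predicate]),
              False: fit_first_order(X[~predicate], y[~predicate])}

    def metamodel(x0, x1):
        b0, b1, b2 = models[x1 > 0.5]
        return b0 + b1 * x0 + b2 * x1

    print(metamodel(0.3, 0.8), metamodel(0.3, 0.2))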
Predicting Pilot Error in NextGen: Pilot Performance Modeling and Validation Efforts
We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of the broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness and pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and on the relevance of the modeling and validation efforts to NextGen technology and procedures.
Altitude Exposure at 1800 m Increases Haemoglobin Mass in Distance Runners
The influence of low natural altitudes (<2000 m) on erythropoietic adaptation is currently unclear, with current recommendations indicating that such low altitudes may be insufficient to stimulate significant increases in haemoglobin mass (Hbmass). As such, the purpose of this study was to determine the influence of 3 weeks of live high, train high (LHTH) exposure at low natural altitude (i.e. 1800 m) on Hbmass, red blood cell count and iron profile. A total of 16 elite or well-trained runners were assigned to either a LHTH (n = 8) or CONTROL (n = 8) group. Venous blood samples were drawn prior to, at 2 weeks, and at 3 weeks following exposure. Hbmass was measured in duplicate via carbon monoxide rebreathing prior to exposure and at 2 and 3 weeks following exposure. The percentage change in Hbmass from baseline was significantly greater in the LHTH group than in the CONTROL group at 2 weeks (3.1% vs 0.4%; p = 0.01) and 3 weeks (3.0% vs -1.1%; p < 0.02) following exposure. Haematocrit was greater in LHTH than CONTROL at 2 (p = 0.01) and 3 weeks (p = 0.04) following exposure. No significant interaction effect was observed for haemoglobin concentration (p = 0.06), serum ferritin (p = 0.43), transferrin (p = 0.52) or reticulocyte percentage (p = 0.16). The results of this study indicate that three weeks of natural classic (i.e. LHTH) low altitude exposure (1800 m) results in a significant increase in the Hbmass of elite distance runners, which is likely due to the continuous exposure to hypoxia.
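The headline result is a percentage change from baseline computed per athlete before group comparison; a trivial sketch with hypothetical values follows.

    def pct_change_from_baseline(baseline_g, followup_g):
        """Percentage change in haemoglobin mass relative to baseline."""
        return (followup_g - baseline_g) / baseline_g * 100

    # Hypothetical runner: 850 g at baseline, 876 g after 2 weeks at 1800 m.
    print(round(pct_change_from_baseline(850.0, 876.0), 1))  # 3.1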
Role of the Srs2-Rad51 Interaction Domain in Crossover Control in Saccharomyces cerevisiae.
Saccharomyces cerevisiae Srs2, in addition to its well-documented antirecombination activity, has been proposed to play a role in promoting synthesis-dependent strand annealing (SDSA). Here we report the identification and characterization of an SRS2 mutant with a single amino acid substitution (srs2-F891A) that specifically affects the Srs2 pro-SDSA function. This residue is located within the Srs2-Rad51 interaction domain and embedded within a protein sequence resembling a BRC repeat motif. The srs2-F891A mutation leads to a complete loss of interaction with Rad51 as measured through yeast two-hybrid analysis and a partial loss of interaction as determined through protein pull-down assays with purified Srs2, Srs2-F891A, and Rad51 proteins. Even though previous work has shown that internal deletions of the Srs2-Rad51 interaction domain block Srs2 antirecombination activity in vitro, the Srs2-F891A mutant protein, despite its weakened interaction with Rad51, exhibits no measurable defect in antirecombination activity in vitro or in vivo. Surprisingly, srs2-F891A shows a robust shift from noncrossover to crossover repair products in a plasmid-based gap repair assay, but not in an ectopic physical recombination assay. Our findings suggest that the Srs2 C-terminal Rad51 interaction domain is more complex than previously thought, containing multiple interaction sites with unique effects on Srs2 activity.
Signal processing for estimating energy expenditure of elite athletes using triaxial accelerometers
Fitness development of elite athletes requires an understanding of physiological factors such as athlete energy expenditure (EE). For athletes involved in football at the elite level, it is necessary to understand the energy demands during competition in order to develop training regimes. By identifying an appropriate EE estimator in triaxial accelerometer data, in conjunction with identifying sources of inter-athlete variance in that estimator, signal processing was developed to extract the estimator. In this system, low-power signal processing was implemented to extract both the EE estimator and other information of physiological and statistical interest.
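As one example of the kind of low-power estimator commonly extracted from triaxial accelerometry, the sketch below computes ENMO (Euclidean norm minus one g). ENMO is a widely used activity/EE proxy, not necessarily the estimator developed in this work, and the sample values are hypothetical.

    import numpy as np

    def enmo(ax, ay, az):
        """Euclidean norm minus one g, a common activity/EE proxy for
        triaxial acceleration sampled in units of g; negative values
        (sensor noise at rest) are clipped to zero."""
        magnitude = np.sqrt(np.asarray(ax)**2 + np.asarray(ay)**2
                            + np.asarray(az)**2)
        return np.clip(magnitude - 1.0, 0.0, None)

    # Hypothetical short burst of samples for an accelerating athlete.
    ax, ay, az = [0.1, 0.3, 0.2, 0.1], [0.0, 0.1, 0.2, 0.1], [1.0, 1.1, 1.3, 1.0]
    print(enmo(ax, ay, az).mean())  # mean ENMO over the burst, in g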