3 research outputs found

    Environmental Inequality Dataset

    The Disaggregated RSEI model data (also known as RSEI-GM, or Geographic Microdata), version 2.3.4, was downloaded from the Amazon Web Services repository created by EPA. The RSEI-GM provides detailed air model results from EPA’s Risk-Screening Environmental Indicators (RSEI) model. The results include chemical concentration, toxicity-weighted concentration, and score, calculated for each 810-meter-square grid cell in a 49-km circle around the emitting facility, for every year from 1988 through 2014. The data can be used to examine trends in air pollution from industrial facilities over time and across geographies. To allow evaluation of toxicity-weighted concentration over time, we used only the core chemicals and industries that have been consistently required to report since 1988. Thus, we filtered the RSEI-GM disaggregated dataset to exclude chemicals and industries added after 1988. Crosswalks provided by EPA to translate data from the RSEI grid-cell system to U.S. census block geographies were used to combine RSEI results with census data and aggregate the results to the census tract level. We used the Longitudinal Tract Database created by Brown University, which adjusts data from previous years to 2010 census tract boundaries, to supply census demographic information for the years 1989 through 2004. We used the 2006-2010 American Community Survey (ACS) for the years 2005 through 2009 and the 2010-2014 ACS for the years 2010 through 2014. These data were then used to calculate measures of environmental inequality. The ZIP file with the dataset is available below under Additional Files.
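
    A minimal sketch of this filtering and aggregation pipeline, written in Python with pandas, is shown below. The file names, column names, and crosswalk layout are hypothetical placeholders, not the actual RSEI-GM or EPA crosswalk schemas.

    import pandas as pd

    # RSEI-GM microdata: one row per (grid cell, facility, chemical, year).
    grid = pd.read_csv("rsei_gm_disaggregated.csv")

    # Keep only the core chemicals and industries that have been reportable since
    # 1988, so toxicity-weighted concentrations are comparable across years.
    core_chems = set(pd.read_csv("core_chemicals_1988.csv")["chemical_id"])
    core_naics = set(pd.read_csv("core_industries_1988.csv")["industry_code"])
    grid = grid[grid["chemical_id"].isin(core_chems)
                & grid["industry_code"].isin(core_naics)]

    # EPA crosswalk: fraction of each grid cell's area falling in each census block.
    xwalk = pd.read_csv("grid_to_block_crosswalk.csv")  # grid_cell_id, block_fips, weight

    # Allocate toxicity-weighted concentration to blocks, then roll up to tracts
    # (the tract FIPS is the first 11 digits of the 15-digit block FIPS).
    merged = grid.merge(xwalk, on="grid_cell_id")
    merged["toxconc_share"] = merged["toxconc"] * merged["weight"]
    merged["tract_fips"] = merged["block_fips"].astype(str).str.zfill(15).str[:11]
    tract = (merged.groupby(["tract_fips", "year"], as_index=False)["toxconc_share"]
                   .sum()
                   .rename(columns={"toxconc_share": "toxconc"}))

    # `tract` can now be joined to LTDB or ACS demographic tables on tract_fips to
    # compute measures of environmental inequality.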

    Search-based optimization for compiler machine-code generation

    Compilation encompasses many steps. Parsing turns the input program into a more manageable syntax tree. Verification ensures that the program makes some semblance of sense. Finally, code generation transforms the internal abstract program representation into an executable program. Compilers strive to produce the best possible programs, so optimizations are applied at nearly every level of compilation. Instruction Scheduling, one of the last compilation tasks, is part of code generation: it replaces the internal graph representation of the program with an instruction sequence, and the scheduler should produce a sequence that the hardware can execute quickly. Considering that Instruction Scheduling is an NP-Complete optimization problem, it is interesting that schedules are usually generated by a greedy, heuristic algorithm called List Scheduling. Given search-based algorithms' successes in other NP-Complete optimization domains, we ask whether search-based algorithms can be applied to Instruction Scheduling to generate superior schedules without unacceptably increasing compilation time. To answer this question, we formulate a problem description that captures practical scheduling constraints. We show that this problem is NP-Complete given modest requirements on the actual hardware. We adapt three different search algorithms to Instruction Scheduling in order to show that search is an effective Instruction Scheduling technique. The schedules generated by our algorithms are generally shorter than those generated by List Scheduling. Search-based scheduling does take more time, but the increases are acceptable for some compilation domains.
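
    The abstract contrasts search-based scheduling with the standard greedy heuristic; the sketch below shows a plain List Scheduling loop in Python for a single-issue machine, which is an assumed simplification of the thesis's actual problem formulation. The priority function (for example, critical-path height) is also an assumed, though conventional, choice.

    def list_schedule(succs, latency, priority):
        """Greedy list scheduling over a dependency DAG.
        succs: instruction -> list of dependent instructions
        latency: instruction -> cycles until its result is available
        priority: instruction -> heuristic rank (e.g. critical-path height)"""
        preds_left = {n: 0 for n in succs}           # unscheduled predecessor counts
        for n, deps in succs.items():
            for d in deps:
                preds_left[d] += 1
        earliest = {n: 0 for n in succs}             # cycle when operands are ready
        unscheduled = set(succs)
        schedule, cycle = [], 0
        while unscheduled:
            # Ready = all predecessors scheduled and operand latencies satisfied.
            ready = [n for n in unscheduled
                     if preds_left[n] == 0 and earliest[n] <= cycle]
            if ready:
                n = max(ready, key=lambda x: priority[x])   # greedy heuristic pick
                schedule.append((cycle, n))
                unscheduled.remove(n)
                for d in succs[n]:
                    earliest[d] = max(earliest[d], cycle + latency[n])
                    preds_left[d] -= 1
            cycle += 1                               # single-issue: one slot per cycle
        return schedule

    # Example: two loads feeding an add whose result is stored.
    succs = {"load1": ["add"], "load2": ["add"], "add": ["store"], "store": []}
    latency = {"load1": 3, "load2": 3, "add": 1, "store": 1}
    priority = {"load1": 5, "load2": 4, "add": 2, "store": 1}
    print(list_schedule(succs, latency, priority))
    # -> [(0, 'load1'), (1, 'load2'), (4, 'add'), (5, 'store')]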

    Green Driver: AI in a Microcosm

    The Green Driver app is a dynamic routing application for GPS-enabled smartphones. Green Driver combines client GPS data with real-time traffic light information provided by cities to determine optimal routes in response to driver route requests. Routes are optimized with respect to travel time, with the intention of saving the driver both time and fuel, and rerouting can occur if warranted. During a routing session, client phones communicate with a centralized server that both collects GPS data and processes route requests. All relevant data are anonymized and saved to databases for analysis; statistics are calculated from the aggregate data and fed back to the routing engine to improve future routing. Analyses can also be performed to discern driver trends: where do drivers tend to go, how long do they stay, when and where does traffic congestion occur, and so on. The system uses a number of techniques from the field of artificial intelligence. We apply a variant of A* search for solving the stochastic shortest path problem in order to find optimal driving routes through a network of roads given light-status information. We also use dynamic programming and hidden Markov models to determine the progress of a driver through a network of roads from GPS data and light-status data. The Green Driver system is currently deployed for testing in Eugene, Oregon, and is scheduled for large-scale deployment in Portland, Oregon, in Spring 2011.
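
    As a concrete illustration of the routing component, the sketch below runs textbook A* in Python over a small road graph weighted by expected edge travel times. This is a deterministic simplification, under assumed graph and heuristic values, of the stochastic shortest-path search with light-status information that the deployed system performs.

    import heapq

    def a_star(graph, heuristic, start, goal):
        """graph: node -> list of (neighbor, expected travel time in seconds);
        heuristic: node -> admissible lower bound on travel time to goal."""
        best_g = {start: 0.0}                        # cheapest known cost to each node
        frontier = [(heuristic[start], start, [start])]
        while frontier:
            f, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, best_g[node]
            if f - heuristic[node] > best_g[node]:   # stale queue entry, skip it
                continue
            for nbr, cost in graph.get(node, []):
                g = best_g[node] + cost
                if g < best_g.get(nbr, float("inf")):
                    best_g[nbr] = g
                    heapq.heappush(frontier, (g + heuristic[nbr], nbr, path + [nbr]))
        return None, float("inf")

    # Toy road network: expected seconds between intersections.
    roads = {"A": [("B", 30), ("C", 90)], "B": [("C", 40)], "C": []}
    h = {"A": 60, "B": 40, "C": 0}                   # straight-line-time lower bounds
    print(a_star(roads, h, "A", "C"))                # -> (['A', 'B', 'C'], 70)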