
    Strategies for Prioritizing Needs for Accelerated Construction after Hazard Events

    There is a need for rapid and responsive infrastructure repair and construction after natural disasters such as hurricanes, wildfires, and tornadoes. These events often shut down basic infrastructure systems, as experienced recently in several Region 6 states as well as in other states around the country. Accelerated construction practices are often used in these situations to speed up the traditional, and often slow, project delivery process. After a natural disaster, however, many different types of transportation infrastructure components need inspection, rehabilitation, or reconstruction, and transportation agencies are challenged with prioritizing these accelerated projects. This study conducted an extensive literature review of current accelerated construction methods, infrastructure prioritization practices, and institutional barriers. Interviews were conducted with professionals from the transportation industry, spanning both private and public services. Significant input from the railroad industry was used to compare how private and public transportation systems respond after disasters. The survey results were used to quantify the importance of the accelerated methods and prioritization criteria, and to identify the barriers to implementing a prioritization model. Lastly, the collected data were used to develop a decision support tool for prioritizing needs for accelerated construction after disaster events, specifically the hurricanes and flooding that commonly affect Region 6.
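    As a rough illustration of what such a decision support tool computes, the sketch below ranks hypothetical post-disaster projects with a weighted-criteria score. The criteria, weights, and project data are invented placeholders standing in for the survey-derived values; only the scoring-and-ranking pattern reflects the approach described above.

```python
# Minimal sketch of a weighted-scoring prioritization tool for post-disaster
# projects. All criteria, weights, and project data below are hypothetical
# illustrations, not values from the study.

PROJECTS = [
    # (name, detour length in miles, average daily traffic,
    #  on an emergency route?, damage severity 0-1)
    ("I-10 bridge span", 42.0, 55_000, True, 0.9),
    ("FM-1960 culvert", 3.5, 8_000, False, 0.6),
    ("US-59 overpass", 12.0, 30_000, True, 0.4),
]

# Survey-derived importance weights would replace these placeholders.
WEIGHTS = {"detour": 0.2, "traffic": 0.3, "emergency": 0.3, "damage": 0.2}

def score(project):
    """Combine normalized criteria into a single priority score."""
    _, detour, adt, emergency, damage = project
    return (WEIGHTS["detour"] * min(detour / 50.0, 1.0)
            + WEIGHTS["traffic"] * min(adt / 60_000, 1.0)
            + WEIGHTS["emergency"] * (1.0 if emergency else 0.0)
            + WEIGHTS["damage"] * damage)

# Highest-scoring projects are candidates for accelerated delivery first.
for p in sorted(PROJECTS, key=score, reverse=True):
    print(f"{p[0]:<20} priority = {score(p):.2f}")
```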

    Runtime-assisted optimizations in the on-chip memory hierarchy

    Following Moore's Law, the number of transistors on chip has been increasing exponentially, which has led to the increasing complexity of modern processors. As a result, the efficient programming of such systems has become more difficult. Many programming models have been developed to address this issue. Of particular interest are task-based programming models that employ simple annotations to define parallel work in an application. The information available at the level of the runtime systems associated with these programming models offers great potential for improving hardware design. Moreover, due to technological limitations, Moore's Law is predicted to eventually come to an end, so novel paradigms are necessary to maintain the current performance improvement trends. The main goal of this thesis is to exploit the knowledge about a parallel application available at the runtime system level to improve the design of the on-chip memory hierarchy. The coupling of the runtime system and the microprocessor enables a better hardware design without hurting programmability.

    The first contribution is a set of insertion policies for shared last-level caches that exploit information about tasks and task data dependencies. The intuition behind this proposal revolves around the observation that parallel threads exhibit different memory access patterns; even within the same thread, accesses to different variables often follow distinct patterns. The proposed policies insert cache lines into different logical positions depending on the dependency type and task type to which the corresponding memory request belongs.

    The second proposal optimizes the execution of reductions, defined as a programming pattern that combines input data to form the resulting reduction variable. This is achieved with a runtime-assisted technique for performing reductions in the processor's cache hierarchy. The proposal's goal is to be a universally applicable solution regardless of the reduction variable's type, size, and access pattern. On the software level, the programming model is extended to let a programmer specify the reduction variables for tasks, as well as the desired cache level where a certain reduction will be performed. The source-to-source compiler and the runtime system are extended to translate and forward this information to the underlying hardware. On the hardware level, private and shared caches are equipped with functional units and the accompanying logic to perform reductions at the cache level. This design avoids unnecessary data movements to the core and back, as the data is operated on where it resides.

    The third contribution is a runtime-assisted prioritization scheme for memory requests inside the on-chip memory hierarchy. The proposal is based on the notion of a critical path in the context of parallel codes and the known fact that accelerating critical tasks reduces the execution time of the whole application. In the context of this work, task criticality is observed at the level of the task type, as this enables simple annotation by the programmer. The acceleration of critical tasks is achieved by prioritizing the corresponding memory requests in the microprocessor.
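    The following sketch illustrates the flavor of the first contribution: a dependency-aware insertion policy for one set of a last-level cache, written in Python for readability. The mapping from dependency type to insertion position (task inputs near the MRU end, streaming outputs near the LRU end) is an illustrative assumption, not the thesis's exact policy.

```python
# Minimal sketch of a dependency-aware insertion policy for one set of a
# shared last-level cache. The INSERT_POS mapping is an assumed illustration.

from collections import deque

WAYS = 8
# Position 0 = MRU end, WAYS-1 = LRU end. Inputs of tasks are inserted near
# MRU to be retained; streaming outputs near LRU so they evict quickly.
INSERT_POS = {"in": 0, "inout": 1, "out": WAYS - 2}

class CacheSet:
    def __init__(self):
        self.lines = deque(maxlen=WAYS)  # index 0 = MRU

    def access(self, tag, dep_type):
        if tag in self.lines:            # hit: promote to MRU
            self.lines.remove(tag)
            self.lines.appendleft(tag)
            return "hit"
        if len(self.lines) == WAYS:      # miss on a full set: evict LRU line
            self.lines.pop()
        # Insert at a logical position chosen by the request's dependency type.
        pos = min(INSERT_POS[dep_type], len(self.lines))
        self.lines.insert(pos, tag)
        return "miss"

s = CacheSet()
for tag, dep in [(0x10, "in"), (0x20, "out"), (0x10, "in")]:
    print(hex(tag), dep, s.access(tag, dep))
```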

    Disruption Response Support For Inland Waterway Transportation

    Motivated by the critical role of the inland waterways in the United States' transportation system, this dissertation research focuses on pre- and post-disruption response support when the inland waterway navigation system is disrupted by a natural or manmade event. Following a comprehensive literature review, four research contributions are achieved. The first research contribution formulates and solves a cargo prioritization and terminal allocation problem (CPTAP) that minimizes the total value loss of the disrupted barge cargoes on the inland waterway transportation system. It is tailored for maritime transportation stakeholders whose disaster response plans seek to mitigate negative economic and societal impacts. A genetic algorithm (GA)-based heuristic is developed and tested to solve realistically sized instances of CPTAP. The second research contribution develops and examines a tabu search (TS) heuristic as an improved solution approach to CPTAP. Different from the GA's population-based search, the TS heuristic uses local search to find improved solutions to CPTAP in less computation time. The third research contribution assesses cargo value decreasing rates (CVDRs) through a Value-Focused Thinking based methodology. The CVDR is a vital parameter for cargo prioritization modeling in general, and for the CPTAP model for inland waterways developed here in particular. The fourth research contribution develops a multi-attribute decision model based on the Analytic Hierarchy Process that integrates tangible and intangible factors in prioritizing cargo after an inland waterway disruption. This contribution allows for the consideration of subjective, qualitative attributes in addition to the purely quantitative CPTAP approach explored in the first two research contributions.
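    A minimal GA sketch for a CPTAP-style assignment is shown below. The cargo values, decay rates, and service times are made-up illustrations, and the greedy earliest-free-terminal decoding is an assumed simplification; the dissertation's actual model has richer constraints.

```python
# Minimal GA sketch for cargo prioritization and terminal allocation in the
# spirit of CPTAP. All data and the decoding scheme are illustrative.

import random

random.seed(42)

VALUE = [100, 80, 60, 90, 70, 50]              # cargo value at disruption time
DECAY = [0.05, 0.02, 0.04, 0.08, 0.01, 0.03]   # fraction of value lost per hour
HOURS = [4, 3, 5, 4, 2, 3]                     # unloading time per cargo
N_TERMINALS = 2

def loss(perm):
    """Decode a priority order: each cargo goes to the earliest-free
    terminal; its value loss grows with its completion time."""
    free = [0.0] * N_TERMINALS
    total = 0.0
    for c in perm:
        t = free.index(min(free))
        free[t] += HOURS[c]
        total += VALUE[c] * DECAY[c] * free[t]
    return total

def crossover(a, b):
    """Order crossover (OX): keep a slice of parent a, fill the rest in
    parent b's order so the child stays a valid permutation."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    rest = [c for c in b if c not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

pop = [random.sample(range(len(VALUE)), len(VALUE)) for _ in range(30)]
for _ in range(100):
    pop.sort(key=loss)
    elite = pop[:10]                           # keep the best orderings
    pop = elite + [crossover(*random.sample(elite, 2)) for _ in range(20)]
    for ind in pop[10:]:                       # swap mutation on offspring
        if random.random() < 0.3:
            x, y = random.sample(range(len(ind)), 2)
            ind[x], ind[y] = ind[y], ind[x]

best = min(pop, key=loss)
print("best unloading order:", best, "value loss:", round(loss(best), 1))
```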

    Optimal Retrofit Strategy Design for Highway Bridges Under Seismic Hazards: A Case Study of Charleston, SC

    A significant number of US highway bridges are inadequate for seismic loads and could be seriously damaged or collapse during a relatively small earthquake. On the most recent infrastructure report card from the American Society of Civil Engineers (ASCE), one-third of the bridges in the United States are deemed structurally deficient. To improve this situation, at-risk bridges must be identified and evaluated, and effective retrofitting programs must be implemented to reduce their seismic vulnerabilities. In practice, the Federal Highway Administration uses the expected damage method and the indices method to assess the condition of bridges. These methods compare the severity of expected damage for each at-risk bridge, and the bridges with the highest expected damage receive the highest priority for retrofitting. However, these methods ignore the crucial effect of the traffic network on a highway bridge's importance: bridge failures, or even capacity reductions, may redistribute the traffic of the entire network. This research develops a new retrofit strategy decision scheme for highway bridges under seismic hazards and seamlessly integrates scenario-based seismic analysis of the bridges and the traffic network into the proposed optimization modeling framework. A full spectrum of bridge retrofit strategies is considered, based on explicit structural assessment for each seismic damage state. A simplified four-bridge network is used to validate the model, and a modified version of the validated model is then applied to the bridge network in Charleston, SC to illustrate the applicability of the model. The results of the case study justify the importance of taking a system viewpoint in the retrofit strategy decision process and the benefit of using the developed model in the retrofit decision-making process.
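    The sketch below contrasts expected annual loss for one bridge before and after a retrofit under a scenario-based view, which is the kind of bridge-by-bridge comparison the expected damage method makes. All probabilities and cost ratios are illustrative placeholders, and the traffic-network redistribution that motivates this research is deliberately omitted.

```python
# Minimal sketch of scenario-based expected-loss comparison for one bridge.
# Damage-state probabilities and repair-cost ratios are invented placeholders.

STATES = ["slight", "moderate", "extensive", "collapse"]
REPAIR_COST = {"slight": 0.05, "moderate": 0.25,
               "extensive": 0.75, "collapse": 1.0}   # x replacement cost

SCENARIOS = [
    # (annual probability, damage-state distribution as built,
    #  damage-state distribution after retrofit)
    (0.010, [0.3, 0.4, 0.2, 0.1], [0.50, 0.40, 0.08, 0.02]),
    (0.002, [0.1, 0.3, 0.4, 0.2], [0.30, 0.40, 0.25, 0.05]),
]

def expected_annual_loss(which):
    """Sum over scenarios of P(scenario) * expected repair cost within it."""
    return sum(p * sum(q * REPAIR_COST[s] for q, s in zip(dists[which], STATES))
               for p, *dists in SCENARIOS)

as_built, retrofitted = expected_annual_loss(0), expected_annual_loss(1)
print(f"expected annual loss (x replacement cost): "
      f"{as_built:.4f} -> {retrofitted:.4f}")
print(f"annual benefit of retrofit: {as_built - retrofitted:.4f}")
```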

    Large-scale Asset Renewal Optimization: GAs + Segmentation versus Advanced Mathematical Tools

    Capital renewal is an essential decision in sustaining the serviceability of infrastructure. Effectively allocating limited renewal funds among numerous asset components represents a large-scale combinatorial optimization problem that is difficult to solve. While various mathematical optimization techniques have been presented in the published literature, they are not very effective in handling the complexity and enormous calculations associated with large-scale problems. More recently, evolutionary techniques such as genetic algorithms (GAs) have been introduced for finding near-optimum solutions to large-scale problems. Experimenting with this technique on asset renewal problems has revealed that GA performance rapidly degrades with problem size. For instance, previous research by Hegazy and Elhakeem (2010) could improve fund allocation for only a portion of the total existing components (a maximum of 8,000 asset components), with optimization performance degrading as the number of components increased. To address larger-scale problems, this research investigates both evolutionary and advanced mathematical optimization techniques, with the goal of handling models consisting of at least 50,000 asset components. To enhance the performance of GAs on large-scale optimization problems, three aspects were considered: (1) examining different problem formulations, such as integer, one-shot binary, and step-wise binary formulations; (2) experimenting with commercial GA-based tools; and (3) introducing an innovative segmentation method that handles groups of smaller problems separately and then integrates the results. To identify the best segmentation method, similarity-based segmentation was compared to random segmentation and was found to have superior performance. Based on the results of numerous experiments with different problem sizes, and on comparison with the previous results obtained by Hegazy and Elhakeem (2010) from the same prototype used in this study, the GAs + Segmentation approach is found to handle a problem size of 50,000 components with better solution quality (an improved optimum solution) and no noticeable degradation of optimization performance as the problem size increases. In addition to evolutionary algorithms, the performance of one of the advanced mathematical programming tools (GAMS) and its powerful optimization engine (CPLEX) is investigated. For the mathematical representation of the asset renewal problem, the best formulation is selected with regard to the definitions of easy-to-solve integer programming (IP) formulations. To reduce internal calculations, the GAMS mathematical model is coded to interact with the original spreadsheet data using GAMS data exchange (GDX) files. Based on the experiments, using advanced mathematical tools with strong (easy-to-solve) IP formulations improved solution quality even further compared to GAs + Segmentation. In conclusion, this research investigated both evolutionary and advanced mathematical optimization techniques for handling very large-scale asset renewal problems and introduced effective models for solving such problems. The developed models represent a major innovative step towards achieving large cost savings, optimizing decisions, and justifying fund allocation decisions for infrastructure asset management. While the focus of this research is on educational buildings, the developed optimization models can be adapted to various large-scale asset management problems.
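    The segmentation idea can be sketched as follows: group similar components, solve each small group separately within a proportional budget, then merge the selections. In the sketch below the component data and similarity key are invented, and a trivial benefit/cost greedy stands in for the per-segment GA run.

```python
# Minimal sketch of similarity-based segmentation for large-scale renewal
# allocation. Data, the similarity key, and the per-segment solver are all
# illustrative stand-ins for the method described above.

import random
from collections import defaultdict

random.seed(0)

# (component id, type, condition 0-100, renewal cost, benefit of renewal)
COMPONENTS = [(i,
               random.choice(["roof", "hvac", "plumbing"]),
               random.randint(10, 90),
               random.uniform(5, 50),
               random.uniform(10, 100)) for i in range(50_000)]

BUDGET = 200_000.0

# Similarity-based segmentation: components of the same type and condition
# band land in the same segment, so each sub-problem is homogeneous.
segments = defaultdict(list)
for comp in COMPONENTS:
    _, ctype, cond, _, _ = comp
    segments[(ctype, cond // 25)].append(comp)

selected, spent = [], 0.0
# Budget is split proportionally to segment size; each segment is solved
# independently (a GA run per segment in the full method, a greedy here),
# and the segment-level selections are integrated into one plan.
for seg in segments.values():
    seg_budget = BUDGET * len(seg) / len(COMPONENTS)
    for comp in sorted(seg, key=lambda c: c[4] / c[3], reverse=True):
        if comp[3] <= seg_budget:
            seg_budget -= comp[3]
            spent += comp[3]
            selected.append(comp[0])

print(f"renewed {len(selected)} of {len(COMPONENTS)} components, "
      f"spent {spent:,.0f}")
```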

    Element-Based Multi-Objective Optimization Methodology Supporting a Transportation Asset Management Framework for Bridge Planning and Programming

    The Moving Ahead for Progress in the 21st Century Act (MAP-21) mandates the development of a risk-based transportation asset management plan and the use of a performance-based approach in transportation planning and programming. This research introduces a systematic element-based multi-objective optimization (EB-MOO) methodology integrated into a goal-driven transportation asset management framework to (1) improve bridge management, (2) support state departments of transportation in their transition efforts to comply with the MAP-21 requirements, (3) determine short- and long-term intervention strategies and funding requirements, and (4) facilitate trade-offs between funding levels and performance. The proposed methodology focuses on one transportation asset class (i.e., bridges) and is structured around five modules: (1) a data processing module, (2) an improvement module, (3) an element-level optimization module, (4) a bridge-level optimization module, and (5) a network-level optimization module. To overcome computer memory and processing time limitations, the methodology relies on three distinct screening processes: (1) element deficiency screening, (2) alternative feasibility screening, and (3) solution superiority screening. The methodology deploys an independent deterioration model (a Weibull/Markov model) to predict performance and a life-cycle cost model to estimate life-cycle costs and benefits. Life-cycle (LC) alternatives (series of element improvement actions) are generated based on a new simulation arrangement for three distinct improvement types: (1) maintenance, repair, and rehabilitation (preservation); (2) functional improvement; and (3) replacement. An LC activity profile is constructed separately for each LC alternative action path. The methodology consists of three levels of optimization assessment based on the Pareto optimality concept: (1) an element-level optimization, to identify optimal or near-optimal element intervention actions for each deficient element (poor condition state) of a candidate bridge; (2) a bridge-level optimization, to identify combinations of optimal or near-optimal element intervention actions for a candidate bridge; and (3) a network-level optimization, following either a top-down or bottom-up approach, to identify sets of optimal or near-optimal element intervention actions for a network of bridges. A robust metaheuristic genetic algorithm (the Non-dominated Sorting Genetic Algorithm II [NSGA-II]) is deployed to handle the large size of the multi-objective optimization problems. A MATLAB-based tool prototype was developed to test concepts, demonstrate effectiveness, and communicate benefits. Several examples of unconstrained and constrained scenarios were established for implementing the methodology using the tool prototype. The results reveal the capability of the proposed EB-MOO methodology to generate high-quality Pareto optimal or near-optimal solutions, predict performance, and determine appropriate intervention actions and funding requirements. The five modules collectively provide a systematic process for the development and evaluation of improvement programs and transportation plans. Trade-offs between Pareto optimal or near-optimal solutions facilitate identifying the best investment strategies that address short- and long-term goals and objective priorities.
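    The Pareto-optimality test underlying this trade-off analysis can be shown in a few lines: keep only the intervention plans for which no other plan is both cheaper and better performing. The candidate plans below are invented for illustration; Python stands in for the MATLAB prototype.

```python
# Minimal sketch of the Pareto (non-dominated) filter used conceptually in
# multi-objective trade-off analysis. Plans and numbers are illustrative.

# (plan name, life-cycle cost, predicted network condition index 0-100)
PLANS = [
    ("do nothing",      0.0, 42.0),
    ("preserve only",  12.0, 60.0),
    ("preserve+widen", 30.0, 74.0),
    ("replace worst",  55.0, 80.0),
    ("replace+widen",  58.0, 79.0),   # dominated by "replace worst"
]

def dominates(a, b):
    """a dominates b if it costs no more and performs no worse, and is
    strictly better in at least one objective."""
    return (a[1] <= b[1] and a[2] >= b[2]) and (a[1] < b[1] or a[2] > b[2])

# The Pareto front: plans not dominated by any other plan.
pareto = [p for p in PLANS if not any(dominates(q, p) for q in PLANS)]
for name, cost, perf in sorted(pareto, key=lambda p: p[1]):
    print(f"{name:<16} cost={cost:>5.1f}  condition={perf:.1f}")
```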