Provably good race detection that runs in parallel
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 93-98). A multithreaded parallel program that is intended to be deterministic may exhibit nondeterminism due to bugs called determinacy races. A key capability of race detectors is to determine whether one thread executes logically in parallel with another thread or whether the threads must operate in series. This thesis presents two algorithms, one serial and one parallel, to maintain the series-parallel (SP) relationships "on the fly" for fork-join multithreaded programs. For a fork-join program with T1 work and a critical-path length of T∞, the serial SP-maintenance algorithm runs in O(T1) time. The parallel algorithm executes in the nearly optimal O(T1/P + PT∞) time when run on P processors using an efficient scheduler. These SP-maintenance algorithms can be incorporated into race detectors to obtain a provably good race detector that runs in parallel. This thesis describes an efficient parallel race detector that I call the Nondeterminator-3. For a fork-join program with T1 work, critical-path length T∞, and v shared memory locations, the Nondeterminator-3 runs in O(T1/P + PT∞ lg P + min{(T1 lg P)/P, vT∞ lg P}) expected time, when run on P processors and using an efficient scheduler.

by Jeremy T. Fineman. S.M.
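The SP queries at the heart of such a race detector can be illustrated offline. Below is a minimal Python sketch (not the thesis's on-the-fly data structure, which maintains these orderings incrementally as the program runs) of the English/Hebrew-ordering idea: two leaves of an SP parse tree operate in series iff they appear in the same relative order in both traversals.

```python
# Sketch: deciding series/parallel relationships between the leaves
# (threads) of an SP parse tree via two orderings. In the "English"
# ordering, children of every node are visited left to right; in the
# "Hebrew" ordering, children of P-nodes are visited right to left.
# Leaf a is in series with leaf b iff they appear in the same relative
# order in BOTH orderings; they are logically parallel iff the two
# orderings disagree.

class Node:
    def __init__(self, kind, children=None, name=None):
        self.kind = kind              # 'S', 'P', or 'leaf'
        self.children = children or []
        self.name = name              # leaf label

def ordering(node, hebrew, out):
    if node.kind == 'leaf':
        out.append(node.name)
        return
    kids = node.children
    if hebrew and node.kind == 'P':   # reverse only parallel children
        kids = list(reversed(kids))
    for child in kids:
        ordering(child, hebrew, out)

def sp_relation(root, a, b):
    eng, heb = [], []
    ordering(root, False, eng)
    ordering(root, True, heb)
    same = (eng.index(a) < eng.index(b)) == (heb.index(a) < heb.index(b))
    return 'series' if same else 'parallel'

# u ; (v || w) ; x  -- v and w are forked in parallel between u and x
leaf = lambda n: Node('leaf', name=n)
tree = Node('S', [leaf('u'), Node('P', [leaf('v'), leaf('w')]), leaf('x')])
```

Here `sp_relation(tree, 'v', 'w')` reports `'parallel'`, while `'u'` precedes every other leaf in series.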
Large-scale mixed integer optimization approaches for scheduling airline operations under irregularity
Perhaps no single industry has benefited more from advancements in computation, analytics, and optimization than the airline industry. Operations Research (OR) is now ubiquitous in the way airlines develop their schedules, price their itineraries, manage their fleet, route their aircraft, and schedule their crew. These problems, among others, are well known to industry practitioners and academics alike and arise within the context of the planning environment, which takes place well in advance of the date of departure. One salient feature of the planning environment is that decisions are made in a frictionless setting that does not consider perturbations to an existing schedule. Airline operations, however, are rife with disruptions caused by factors such as convective weather, aircraft failure, air traffic control restrictions, network effects, and other irregularities. Substantially less work in the OR community has examined the real-time operational environment.
While problems in the planning and operational environments are similar from a mathematical perspective, the complexity of the operational environment is exacerbated by two factors. First, decisions must be made in as close to real time as possible; unlike the planning phase, decision-makers do not have hours to return a decision. Second, there is a host of operational considerations arising from complex rules mandated by regulatory agencies such as the Federal Aviation Administration (FAA), airline requirements, and union rules. Such restrictions often make finding even a feasible set of re-scheduling decisions an arduous task, let alone the global optimum.
The goals and objectives of this thesis are found in Chapter 1. Chapter 2 provides an overview of airline operations and the current practices of disruption management employed at most airlines. Both the causes and the costs associated with irregular operations are surveyed. The role of the airline Operations Control Center (OCC), which serves as the real-time decision-making environment, is also discussed, as understanding it is important for the body of this work.
Chapter 3 introduces an optimization-based approach to solve the Airline Integrated Recovery (AIR) problem that simultaneously makes re-scheduling decisions for the operating schedule, aircraft routings, crew assignments, and passenger itineraries. The methodology is validated using real-world industrial data from a U.S. hub-and-spoke regional carrier, and we show how the proposed approach can dominate the incumbent sequential approach in a way that is amenable to the operational constraints imposed by a real-time decision-making environment.
Computational effort is central to the efficacy of any algorithm deployed in a real-time decision-making environment such as an OCC. The latter two chapters illustrate various methods shown to expedite more traditional large-scale optimization methods applicable to a wide family of optimization problems, including the AIR problem. Chapter 4 shows how delayed constraint generation and column generation may be used simultaneously through the use of alternate polyhedra that verify whether or not a given cut, generated from a subset of variables, remains globally valid.
While Benders' decomposition is a well-known algorithm for solving problems exhibiting a block structure, one possible drawback is slow convergence. Expediting Benders' decomposition has been explored in the literature through model reformulation, improved bounds, and cut selection strategies, but little attention has been given to strengthening a standard cut. Chapter 5 examines four methods by which convergence may be accelerated: an affine transformation into the interior of the feasible set, generating a split cut induced by a standard Benders' inequality, sequential lifting, and superadditive lifting over a relaxation of a multi-row system. It is shown that the first two methods yield the most promising results within the context of an AIR model.

PhD. Committee Co-Chair: Clarke, John-Paul; Committee Co-Chair: Johnson, Ellis; Committee Member: Ahmed, Shabbir; Committee Member: Clarke, Michael; Committee Member: Nemhauser, George
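Chapter 5's starting point, the standard Benders' loop, can be sketched on a toy problem (hypothetical data, not the AIR model): the relaxed master is solved over the optimality cuts accumulated so far, the subproblem evaluates the true recourse cost at the incumbent, and the violated cut is added until the bounds meet.

```python
# Toy sketch of standard Benders' decomposition. We minimize
# c*y + Q(y) over integer y in [0, 10], where the subproblem value
# function Q(y) = max_i (a_i - b_i*y) is convex piecewise linear, so
# each evaluated piece yields a valid optimality cut
# theta >= a_i - b_i*y for the master problem.

A = [(8.0, 1.0), (5.0, 0.5), (3.0, 0.25)]  # pieces (a_i, b_i) of Q(y)
c = 0.6

def subproblem(y):
    """Evaluate Q(y) and report which piece is tight (the cut to add)."""
    vals = [a - b * y for a, b in A]
    q = max(vals)
    return q, vals.index(q)

def benders(tol=1e-9, max_iters=20):
    cuts = []                         # indices of pieces added so far
    upper = float('inf')
    y_hat = 0
    for _ in range(max_iters):
        # Relaxed master: min c*y + theta s.t. theta >= a_i - b_i*y
        # for i in cuts; solved by brute force over y (tiny toy model).
        # With no cuts yet we use theta = 0 (valid here since Q >= 0).
        best = None
        for y in range(11):
            theta = max((A[i][0] - A[i][1] * y for i in cuts), default=0.0)
            obj = c * y + theta
            if best is None or obj < best[0]:
                best = (obj, y)
        lower, y_hat = best
        q, i = subproblem(y_hat)          # true recourse cost at y_hat
        upper = min(upper, c * y_hat + q)
        if upper - lower <= tol:          # master bound meets incumbent
            break
        cuts.append(i)                    # add the violated Benders cut
    return y_hat, upper
```

On this instance the loop converges in four iterations to y = 6 with objective 5.6; the cut-strengthening methods the chapter studies aim to reduce exactly this iteration count.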
Hydrology in Water Resources Management
This book is a collection of 12 papers describing the role of hydrology in water resources management. The papers can be divided according to their area of focus into 1) modeling of hydrological processes, 2) use of modern techniques in hydrological analysis, 3) impact of human pressure and climate change on water resources, and 4) hydrometeorological extremes. Belonging to the first area are the presentation of a new Muskingum flood routing model, a new tool to perform frequency analysis of maximum precipitation of a specified duration via the so-called PMAXTP model (Precipitation MAXimum Time (duration) Probability), the modeling of interception processes, and the use of a rainfall-runoff GR2M model to calculate monthly runoff. For the second area, groundwater potential was evaluated using a model of multi-influencing factors in which the parameters were optimized using geoprocessing tools in a geographical information system (GIS) in combination with satellite altimeter data, and hydrological data were reanalyzed to simulate overflow transport using the Nordic Sea as an example. Presented for the third area are a water balance model for comparing water resources with the needs of water users, the idea of adaptive water management, and the impacts of climate change and anthropogenic activities on runoff in a catchment located in the western Himalayas of Pakistan. The last area includes a spatiotemporal analysis of rainfall variability with regard to drought hazard and the use of the copula function for the meteorological analysis of drought.
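For context on the first area, the classical linear Muskingum scheme that a new routing model would build on can be sketched as follows (illustrative numbers, not data from any of the papers):

```python
# Sketch of textbook linear Muskingum flood routing. Channel storage
# is modeled as S = K*(x*I + (1-x)*O), which gives the recurrence
# O2 = C0*I2 + C1*I1 + C2*O1 for routing an inflow hydrograph I
# through a reach to obtain the outflow hydrograph O.

def muskingum_route(inflow, K, x, dt, O0=None):
    """Route an inflow hydrograph; K and dt share the same time unit."""
    denom = 2 * K * (1 - x) + dt
    c0 = (dt - 2 * K * x) / denom
    c1 = (dt + 2 * K * x) / denom
    c2 = (2 * K * (1 - x) - dt) / denom
    assert abs(c0 + c1 + c2 - 1.0) < 1e-9   # coefficients sum to 1
    # Start from steady state (outflow = inflow) unless O0 is given.
    out = [inflow[0] if O0 is None else O0]
    for i1, i2 in zip(inflow, inflow[1:]):
        out.append(c0 * i2 + c1 * i1 + c2 * out[-1])
    return out

# Triangular inflow hydrograph (m^3/s), 6 h steps, K = 12 h, x = 0.2
inflow = [10, 40, 80, 60, 40, 25, 15, 10, 10]
outflow = muskingum_route(inflow, K=12, x=0.2, dt=6)
```

As expected for a storage reach, the routed peak is attenuated (well below the 80 m³/s inflow peak) and arrives later.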
Management of Knowledge Representation Standards Activities
This report describes the efforts undertaken over the last two years to identify the issues underlying the current difficulties in sharing and reuse, and a community-wide initiative to overcome them. First, we discuss four bottlenecks to sharing and reuse, present a vision of a future in which these bottlenecks have been ameliorated, and describe the efforts of the initiative's four working groups to address these bottlenecks. We then address the supporting technology and infrastructure that is critical to enabling this vision of the future. Finally, we consider topics of longer-range interest by reviewing some of the research issues raised by our vision.
Developing a support for FPGAs in the Controller parallel programming model
Heterogeneous computing appears to be the solution for achieving ever-faster supercomputers capable of solving bigger and more complex problems in different fields of knowledge. To that end, it integrates accelerators with different architectures capable of exploiting the features of problems from different perspectives, thus achieving higher performance.
FPGAs are reconfigurable hardware, i.e., it is possible to modify them after manufacture. This allows great flexibility and maximum adaptability to the given problem. In addition, they have very low power consumption. All these advantages come at the cost of more difficult programming with the error-prone HDLs (Hardware Description Languages), such as Verilog or VHDL, and the requirement of advanced knowledge of digital electronics. In recent years the main FPGA vendors have concentrated their efforts on developing HLS (High Level Synthesis) tools that allow FPGAs to be programmed with C-like high-level programming languages. This has favoured their adoption by the HPC community and their integration into new supercomputers. However, the programmer still has to take care of aspects such as the management of command queues, launch parameters, and data transfers.
The Controller model is a library that eases the management of coordination, communication, and kernel-launching details on hardware accelerators. It transparently exploits their native or vendor-specific programming models, namely OpenCL and CUDA, thus enabling the potential performance obtained by using them in a compiler-agnostic way. It is intended to let the programmer use the different available hardware resources in combination in heterogeneous environments.
This work extends the Controller model through the development of a backend that allows the integration of FPGAs, keeping the changes to the user-facing interface to a minimum. The experimental results validate that a significant decrease in programming effort is achieved compared to the native OpenCL implementation. Similarly, a high overlap between computation and communication and a negligible overhead due to the use of the library are attained.

Grado en Ingeniería Informática
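The kind of facade the abstract describes might look like the following hypothetical sketch (invented names, not the actual Controller API): a controller object hides buffer transfers and kernel launches behind a uniform interface, and the backend (CUDA, OpenCL, FPGA) is swappable without touching user code.

```python
# Hypothetical sketch of a Controller-style facade. A Backend hides
# command-queue setup, launch parameters, and host/device transfers;
# the Controller coordinates them behind one generic run() call.

class Backend:
    def transfer_in(self, name, data): ...
    def launch(self, kernel, grid): ...
    def transfer_out(self, name): ...

class CPUBackend(Backend):
    """Stand-in backend that runs 'kernels' as plain Python callables."""
    def __init__(self):
        self.mem = {}                      # simulated device memory
    def transfer_in(self, name, data):
        self.mem[name] = list(data)        # host -> device copy
    def launch(self, kernel, grid):
        for i in range(grid):              # one 'work-item' per index
            kernel(self.mem, i)
    def transfer_out(self, name):
        return self.mem[name]              # device -> host copy

class Controller:
    """Coordinates transfers and launches; the backend is swappable."""
    def __init__(self, backend):
        self.backend = backend
    def run(self, kernel, n, **buffers):
        for name, data in buffers.items():
            self.backend.transfer_in(name, data)
        self.backend.launch(kernel, n)
        return {name: self.backend.transfer_out(name) for name in buffers}

def saxpy(mem, i, a=2.0):
    mem['y'][i] = a * mem['x'][i] + mem['y'][i]

ctrl = Controller(CPUBackend())
out = ctrl.run(saxpy, 4, x=[1, 2, 3, 4], y=[10, 20, 30, 40])
```

Swapping `CPUBackend` for an OpenCL- or FPGA-backed implementation is exactly the kind of change the abstract's new backend makes possible while the user-facing `run()` stays the same.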
Considering stakeholders’ preferences for scheduling slots in capacity constrained airports
Airport slot scheduling has attracted the attention of researchers as a capacity management tool at congested airports. Recent research work has employed multi-objective approaches for scheduling slots at coordinated airports. However, the central question of how to select a commonly accepted airport schedule remains open. The various participating stakeholders may have multiple and sometimes conflicting objectives stemming from their decision-making needs. This complex decision environment renders the identification of a commonly accepted solution rather difficult. In this presentation, we propose a multi-criteria decision-making technique that incorporates the priorities and preferences of the stakeholders in order to determine the best compromise solution.
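A standard way to pick such a compromise (shown here with TOPSIS and hypothetical numbers; the abstract does not name the authors' specific technique) is to score each candidate schedule by its weighted closeness to an ideal point across the stakeholders' criteria.

```python
# Illustrative compromise selection with TOPSIS. Rows are candidate
# slot schedules; columns are stakeholder criteria, e.g. total slot
# displacement (minimize), airline fairness (maximize), and passenger
# delay (minimize). Weights and data below are invented for the sketch.
import math

def topsis(matrix, weights, benefit):
    """Return closeness-to-ideal scores in [0, 1]; higher is better."""
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(v * v for v in col)) for col in cols]
    scored = [[w * v / n for v, n, w in zip(row, norms, weights)]
              for row in matrix]
    cols = list(zip(*scored))
    # Ideal takes the best value per criterion, anti-ideal the worst.
    ideal = [max(c) if b else min(c) for c, b in zip(cols, benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(cols, benefit)]
    out = []
    for row in scored:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, worst)
        out.append(d_neg / (d_pos + d_neg))
    return out

schedules = [   # displacement, fairness, passenger delay
    [120, 0.80, 900],
    [150, 0.95, 700],
    [100, 0.60, 1200],
]
scores = topsis(schedules, weights=[0.4, 0.3, 0.3],
                benefit=[False, True, False])
best = scores.index(max(scores))
```

Changing the `weights` vector is where stakeholder priorities and preferences enter; different weightings can select different Pareto-efficient schedules as the compromise.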
- …