
    Applications of Biological Integrity within the National Wildlife Refuge System Region 5

    The passage of the National Wildlife Refuge System Improvement Act of 1997 (NWRSIA) and the subsequent implementation of 601 FW 3: Biological Integrity, Diversity and Environmental Health Policy (hereafter, the “Integrity Policy”) represented a groundbreaking paradigm shift for refuge management. NWRSIA set forth a “mission for the System, and clear standards for its management, use, planning, and growth” (US Fish and Wildlife Service, 1999) by uniting the eclectic mix of refuges nationwide under the same mission: “to administer a national network of lands and waters for the conservation, management, and where appropriate, restoration of the fish, wildlife, and plant resources and their habitats within the United States for the benefit of present and future generations of Americans” (NWRSIA 1997). The act goes on to say that the Secretary of the Interior must “ensure that the biological integrity, diversity, and environmental health of the System are maintained for the benefit of present and future generations of Americans” (US Fish and Wildlife Service, 1999). NWRSIA legally formalized the concept of biological integrity as a refuge management objective but failed to define it. As a result, field experts and refuge managers struggle to discern applications of the biological integrity concept. Given the difficulties inherent in defining biological integrity, and the ambiguities involved in applying the concept to refuge management, examining how the concept is being applied on local refuges reveals valuable information about its practicality. Ultimately, for the biological integrity concept to shape refuge management, some of the ambiguity surrounding its definition and application must be removed. With outside influences such as surrounding land use, invasive species, and climate change altering refuges' ecological trajectories, biological integrity, as currently defined by the US Fish and Wildlife Service, proves to be an unattainable goal.

    Towards Dynamic Vehicular Clouds

    Motivated by the success of conventional cloud computing, Vehicular Clouds were introduced as groups of vehicles whose corporate computing, sensing, communication, and physical resources can be coordinated and dynamically allocated to authorized users. One of the attributes that sets Vehicular Clouds apart from conventional clouds is resource volatility. As vehicles enter and leave the cloud, new computing resources become available while others depart, creating a volatile environment in which reasoning about fundamental performance metrics becomes very challenging. The goal of this thesis is to design an architecture and model for a dynamic Vehicular Cloud built on top of vehicles moving on highways. We present our envisioned architecture for a dynamic Vehicular Cloud, consisting of vehicles moving on the highways and multiple communication stations installed along the highway, and investigate the feasibility of such systems. The dynamic Vehicular Cloud is based on two-way communication between vehicles and the stations. We provide a communication protocol for vehicle-to-infrastructure communications enabling a dynamic Vehicular Cloud. We explain the structure of the proposed protocol in detail and then provide analytical predictions and simulation results to investigate the accuracy of our design and predictions. Just as in conventional clouds, job completion time ranks high among the fundamental quantitative performance figures of merit. In general, predicting job completion time requires full knowledge of the probability distributions of the intervening random variables. More often than not, however, the data center manager does not know these distribution functions. Instead, using accumulated empirical data, she may be able to estimate the first moments of these random variables. Yet, getting a handle on the expected job completion time is a very important problem that must be addressed.
With this in mind, another contribution of this thesis is to offer easy-to-compute approximations of job completion time in a dynamic Vehicular Cloud involving vehicles on a highway. We assume only estimates of the first moment of the time it takes the job to execute without any overhead attributable to the workings of the Vehicular Cloud. A comprehensive set of simulations has shown that our approximations are very accurate. As mentioned, a major difference between the conventional cloud and the Vehicular Cloud is the availability of the computational nodes. The vehicles, which are the Vehicular Cloud's computational resources, arrive and depart at random times, and this characteristic may cause failures in executing jobs and interruptions in ongoing services. To handle these interruptions, once a vehicle is ready to leave the Vehicular Cloud, if the vehicle is running a job, the job and all intermediate data stored by the departing vehicle must be migrated to an available vehicle in the Vehicular Cloud.
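The flavor of such a first-moment approximation can be sketched as follows. The functional form, names, and numbers below are assumptions of this sketch, not the thesis's actual formulas: it simply charges a fixed expected migration overhead for each expected departure of the vehicle running the job.

```python
# Hypothetical first-moment approximation of job completion time in a
# Vehicular Cloud: each departure of the hosting vehicle forces a job
# migration, which adds a fixed expected overhead. Illustrative only.

def approx_completion_time(mean_exec, mean_residency, mean_migration):
    """Estimate expected job completion time from first moments only.

    mean_exec      -- expected bare execution time of the job
    mean_residency -- expected time a vehicle remains in the cloud
    mean_migration -- expected overhead of one job migration
    """
    # Expected number of forced migrations while the job runs.
    expected_migrations = mean_exec / mean_residency
    return mean_exec + expected_migrations * mean_migration

# A job needing 100 s of compute, hosted on vehicles that stay ~50 s each,
# with ~5 s of overhead per migration: roughly 2 migrations are expected.
print(approx_completion_time(100.0, 50.0, 5.0))  # → 110.0
```

The appeal of such approximations is exactly what the abstract describes: only first moments are needed, not the full distributions of residency and execution times.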

    Essays on the Economic Analysis of Transportation Systems

    This dissertation consists of four essays on the economic analysis of transportation systems. In the first chapter, the conventional disaggregate travel demand model -- a probability model of choice among multiple modes, generally called random utility maximization (RUM) -- is expanded into a model of the count of mode choices. The extended travel demand model is derived from general economic theory -- maximizing instantaneous utility over the time horizon, subject to a budget constraint -- and can capture the dynamic behavior of countable travel demand. Because the model is for countable dependent variables, it rests on a more realistic set of assumptions for explaining travel demand than the RUM model. An empirical test of the theoretical model was performed using a toll facility user survey in the New York City area. The results show that the theoretical model explains more than 50 percent of the trip frequency behavior observed among New York City toll facility users. Travel demand for facility users increases with household employment, household vehicle count, and employer payment for tolls, and decreases with travel time, road pricing, travel distance, and mass transit access. In the second chapter, we perform a statistical comparison of driving travel demand on toll facilities between Electronic Toll Collection (ETC) users, as a treatment group, and non-users, as a control group, in order to examine the effect of ETC on travel demand on toll facilities. The data used for the comparison is a user survey of the ten toll bridges and tunnels in New York City, containing individual users' travel attributes and demographic characteristics as well as their frequency of toll facility usage; the data thus allow us to compare the travel demand of tag holders and non-holders of E-ZPass, the Electronic Toll Collection system of the Northeastern United States.
We find that the estimated difference in travel demand between E-ZPass users and non-users is biased due to model misspecification and sample selection, and E-ZPass has no statistically significant effect on travel demand after controlling for possible sources of bias. In the third chapter, we develop a parallel sparse matrix-transpose-matrix multiplication algorithm using the outer product of row vectors. The outer product algorithm works with the compressed sparse row (CSR) form of a matrix, and as such it does not require a transposition operation prior to performing the multiplication. In addition, since the parallel implementation of the outer product algorithm decomposes a matrix by rows, it imposes no additional restrictions with respect to matrix size and shape. We particularly focus on applying this technique to rectangular matrices with many more rows than columns, which arise when performing statistical analysis on large-scale data. We test the outer product algorithm on randomly generated matrices. We then apply it to compute descriptive statistics of the New York City taxicab data, originally given as a 140.56-Gbyte file. The performance measures of the test and application show that the outer product algorithm is effective and performs well on large-scale matrix multiplication in a parallel computing environment. In the last chapter, I develop a taxi market mechanism design model that demonstrates the role of a regulated taxi fare system in taxi drivers' route choice behavior. In this model, a fare system is imposed by a taxi market authority in recognition of the asymmetric information -- in this case, about the road network and traffic conditions -- between passengers and drivers, and taxi trip demand differs and is uncertain across origins and destinations. I derive a prediction from the model showing that drivers have an incentive to make trips longer than optimal when they carry a passenger.
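The outer-product idea of the third chapter can be sketched in a few lines: since A^T A equals the sum of the outer products of A's rows with themselves, each row can be consumed directly from CSR storage, with no explicit transpose, and rows can be distributed across processes. This is a minimal single-process illustration with names of my own choosing, not the dissertation's implementation:

```python
# Sketch: compute A^T * A as a sum of row-wise outer products, reading rows
# straight out of CSR storage so no transposition is ever formed.
import numpy as np
from scipy.sparse import csr_matrix, random as sprandom

def ata_by_outer_products(A):
    """Accumulate A^T A one row at a time (dense result, for clarity)."""
    n = A.shape[1]
    result = np.zeros((n, n))
    for i in range(A.shape[0]):
        # Nonzero entries of row i, taken directly from the CSR arrays.
        start, end = A.indptr[i], A.indptr[i + 1]
        cols, vals = A.indices[start:end], A.data[start:end]
        # The outer product touches only this row's nonzero columns.
        result[np.ix_(cols, cols)] += np.outer(vals, vals)
    return result

# Tall-and-skinny sparse matrix, the shape the chapter targets.
A = sprandom(200, 30, density=0.05, format="csr", random_state=0)
assert np.allclose(ata_by_outer_products(A), (A.T @ A).toarray())
```

Because each term of the sum depends on a single row, a parallel version can hand disjoint row blocks to different workers and reduce their partial results, which is consistent with the row decomposition the abstract describes.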

    Safety and Environmental Design Consideration in the Use of Commercial Electronic Variable-Message Signage

    This study reviews existing reported research and experience regarding the use of commercial electronic variable-message signs (CEVMS), and evaluates research findings and methods in terms of their implications for highway safety and environmental design. Aspects of CEVMS design and use that are capable of adversely affecting highway safety and/or environmental quality are identified and discussed in terms of the adequacy of existing research and experience to permit formulation of quantified standards for safe and environmentally compatible use. This report notes, with illustrations, the principal forms of variable-message signage developed for official traffic control and informational use, and the major forms of variable-message signage utilizing electronic processes or remote control for display of commercial advertising and public service information in roadside sites. Studies of highway safety aspects of outdoor advertising that are based on analysis of accident data are evaluated, and reasons for apparent conflicts in their findings are discussed. Studies of highway safety aspects of outdoor advertising generally, and CEVMS specifically, based on human factors research and dealing with distraction and the attentional demands of driving tasks are discussed in relation to issues involved in the development of standards.

    Scalability of dynamic traffic assignment

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2009. Includes bibliographical references (p. 163-174).

    This research develops a systematic approach to analyzing the computational performance of Dynamic Traffic Assignment (DTA) models and provides solution techniques to improve their scalability for on-line applications on large-scale networks. DTA models for real-time use provide short-term predictions of network status and generate route guidance for travelers. The computational performance of such systems is a critical concern. Existing methodologies, which have limited capabilities for online large-scale applications, use single-processor configurations that are less scalable and rely primarily on trade-offs that sacrifice accuracy for improved computational efficiency. In the proposed scalable methodology, algorithmic analyses are first used to identify the system bottlenecks for large-scale problems. Our analyses show that the computation time of DTA systems for a given time interval depends largely on a small set of parameters: the number of origin-destination (OD) pairs, the number of sensors, the number of vehicles, the size of the network, and the number of time-steps used by the simulator. Scalable approaches are then developed to resolve the bottlenecks. A constrained generalized least-squares solution enabling efficient use of the sparse-matrix property is applied to dynamic OD estimation, replacing the Kalman-filter solution and other full-matrix algorithms. Parallel simulation with an adaptive network decomposition framework is proposed to achieve better load balancing and improved efficiency. A synchronization-feedback mechanism is designed to ensure the consistency of traffic dynamics across processors while keeping communication overheads minimal. The proposed methodology is implemented in DynaMIT, a state-of-the-art DTA system.
Profiling studies are used to validate the algorithmic analysis of the system bottlenecks. The new system is evaluated on two real-world networks under various scenarios. Empirical results of the case studies show that the proposed OD estimation algorithm is insensitive to an increase in the number of OD pairs or sensors, and the computation time is reduced from minutes to a few seconds. The parallel simulation is found to maintain output as accurate as the sequential simulation, and with adaptive load balancing it considerably speeds up the network models even under non-recurrent incident scenarios. The results demonstrate the practical nature of the methodology and its scalability to large-scale real-world problems.

    by Yang Wen. Ph.D.
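The OD estimation step can be illustrated with a toy bounded least-squares problem: sensor counts y are modeled as A x, where the sparse assignment matrix A maps OD flows onto sensors. The matrix values, variable names, and the use of SciPy's `lsq_linear` here are illustrative assumptions for the sketch, not DynaMIT's actual constrained generalized least-squares solver:

```python
# Toy OD estimation: recover non-negative OD flows x from sensor counts
# y = A x, where A is a sparse sensor-to-OD assignment matrix.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.optimize import lsq_linear

# 5 sensors, 3 OD pairs; A[s, k] = fraction of OD flow k crossing sensor s.
A = csr_matrix(np.array([[1.0, 0.0, 0.0],
                         [1.0, 1.0, 0.0],
                         [0.0, 1.0, 1.0],
                         [0.0, 0.0, 1.0],
                         [0.5, 0.0, 0.5]]))
x_true = np.array([100.0, 40.0, 60.0])   # made-up "true" OD flows
y = A @ x_true                           # simulated, noise-free sensor counts

# Non-negativity bounds keep the estimated OD flows physically meaningful;
# the solver exploits the sparsity of A directly.
res = lsq_linear(A, y, bounds=(0.0, np.inf))
print(np.round(res.x, 3))
```

Scalability in this formulation comes from never densifying A: sensors each see only a handful of OD pairs, so the assignment matrix stays extremely sparse as the network grows.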

    The Late Ordovician Biogeochemical Carbon Cycle

    The isotopic composition of carbonate carbon (δ13Ccarb) is one of the best tools for understanding the biogeochemical carbon cycle through Earth history. δ13Ccarb is also used to chemostratigraphically correlate coeval strata. This dissertation has three main foci that all utilize δ13Ccarb as the common data type. The geologic interval investigated was the Late Ordovician (458-444 Ma), with emphasis on the Guttenberg δ13C excursion, a globally correlated, positive ~3‰ event that is ~400 kyr in duration. In the first topic, we evaluate post-depositional alteration (i.e., diagenesis) of δ13Ccarb signals. In the second topic, we reconstruct sea level change using lithostratigraphic and δ13Ccarb chemostratigraphic correlations. In the third topic, we use box models to constrain the source of the Guttenberg δ13Ccarb excursion.
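The logic of such box models can be illustrated with the standard steady-state isotope mass balance, in which the carbon entering the ocean-atmosphere system is partitioned between organic and carbonate burial. The parameter values below are generic textbook-style assumptions, not results from this dissertation; they simply show how a modest increase in the organic burial fraction yields an excursion-sized positive shift in δ13Ccarb:

```python
# Minimal steady-state carbon-isotope mass balance:
#   delta_in = f_org * (delta_carb - eps) + (1 - f_org) * delta_carb
# Solving for delta_carb gives the one-liner below. Values are generic
# assumptions (mantle-like input of -5 permil, ~25 permil photosynthetic
# fractionation), used only to illustrate the mechanism.

def delta13c_carb(f_org, delta_in=-5.0, eps=25.0):
    """Steady-state carbonate delta13C (permil) for organic burial fraction f_org."""
    return delta_in + f_org * eps

baseline = delta13c_carb(f_org=0.20)    # near-zero background value
excursion = delta13c_carb(f_org=0.32)   # roughly a +3 permil shift
print(baseline, excursion)
```

Raising f_org from 0.20 to 0.32 shifts steady-state δ13Ccarb by about 3‰, which is one candidate mechanism a box model can test against the Guttenberg record; transient models additionally track how long the reservoir takes to reach the new steady state.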

    Simulation and optimization model for the construction of electrical substations

    Electrical substations are among the most complex construction projects. An electrical substation is an auxiliary station of an electricity generation, transmission, and distribution system where voltage is transformed from high to low, or the reverse, using transformers. Construction of an electrical substation includes civil works and electromechanical works. The scope of civil works includes the construction of several buildings/components divided into parallel and overlapping working phases that require a variety of resources, are generally quite costly, and consume a considerable amount of time. Construction of substations therefore faces complicated time-cost-resource optimization problems. At the same time, the construction industry has grown progressively more competitive over the years, creating a persistent need to find approaches that enhance construction performance. To address these challenges, this dissertation takes the initial steps and introduces a simulation and optimization model for the execution processes of civil works for an electrical substation, with an Excel database file for input data entry. The input data include the bill of quantities, maximum available resources, production rates, unit costs of resources, and indirect cost. The model is built in AnyLogic software using the discrete event simulation method. The model is divided into three zones working in parallel with each other. Each zone includes a group of buildings related to the same construction area. Each zone model describes the execution process schedule for each building in the zone, the time consumed, the percentage utilization of equipment and manpower crews, the amount of materials consumed, and the total direct and indirect cost. The model is then optimized, mainly to minimize the project duration, using a parameter variation experiment and a genetic algorithm Java code implemented on the AnyLogic platform.
The model uses allocated resource parameters as decision variables and available resources as constraints. The model is verified on real case studies in Egypt, and sensitivity analysis studies are incorporated. The model is also validated using a real case study and proves its efficiency by attaining a 10.25% reduction in model time units between the simulation and optimization experiments and a 4.7% reduction in total cost. Also, comparing the optimization results with the actual data of the case study, the model attains reductions in time and cost of 13.6% and 6.3%, respectively. An analysis to determine the effect of each resource on cost reduction is also presented.
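The optimization idea -- resource allocations as decision variables, a fixed resource budget as the constraint, project duration as the fitness -- can be sketched with a minimal genetic algorithm. The work quantities, crew budget, and GA settings below are hypothetical stand-ins (the dissertation's model runs a Java GA against an AnyLogic discrete-event simulation, not this closed-form duration):

```python
# Minimal GA sketch: split a fixed crew budget across three parallel zones
# to minimize project duration (the slowest zone finishes last). All
# numbers are hypothetical; the real fitness comes from simulation runs.
import random

WORK = [120.0, 200.0, 80.0]   # assumed work units per zone
CREWS = 20                    # assumed total crews available (the constraint)

def duration(alloc):
    """Project duration = slowest zone; a zone with no crews never finishes."""
    return max(w / c if c > 0 else float("inf") for w, c in zip(WORK, alloc))

def random_alloc(rng):
    """Random split of the crew budget into three positive parts."""
    cuts = sorted(rng.sample(range(1, CREWS), 2))
    return [cuts[0], cuts[1] - cuts[0], CREWS - cuts[1]]

def evolve(generations=200, pop_size=30, seed=1):
    rng = random.Random(seed)
    pop = [random_alloc(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=duration)
        survivors = pop[: pop_size // 2]      # selection: keep the fittest half
        children = []
        for _ in range(pop_size - len(survivors)):
            child = list(rng.choice(survivors))
            # Mutation: move one crew between zones, keeping the budget fixed.
            i, j = rng.sample(range(3), 2)
            if child[i] > 1:
                child[i] -= 1
                child[j] += 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=duration)

best = evolve()
print(best, duration(best))
```

With these toy numbers the optimum is to allocate crews in proportion to each zone's work (6, 10, and 4 crews, giving a duration of 20 time units); the GA converges toward that balance while never exceeding the crew budget, mirroring how the dissertation's decision variables are searched under resource constraints.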

    Architecture and urban design as influences on the communication of place and experience in graphic design

    Most architects and urban designers are challenged to design schemas and structures that create a particular experience and sense of place. It is through the manipulation and design of actual three-dimensional spaces that they are able to achieve this. How, then, is a three-dimensional experience of a place conveyed in two dimensions? Distilling an actual experience into a graphic solution can be exceptionally challenging, but graphic designers may need to accomplish this for particular clients. Examining the ideologies and methodologies of architecture and urban design may offer new and thoughtful approaches for graphic interpretations of three-dimensional experiences. This thesis first examines how a sense of place is created by architecture and urban design solutions through careful considerations related to culture, history, community, and environment. The realm of actual places exists in three dimensions rather than two. However, there are many instances when it is beneficial to distill three-dimensional experiences into two-dimensional formats (e.g., tourism materials, cookbooks, school catalogues) to help visually and verbally summarize and communicate an environment or experience to an audience. This study draws parallels from architecture and urban design to the field of graphic design, to establish ways in which these goals can be effectively communicated through a graphic design solution.