A formulation of the relaxation phenomenon for lane changing dynamics in an arbitrary car following model
Lane changing dynamics are an important part of traffic microsimulation and
are vital for modeling weaving sections and merge bottlenecks. However, there
is often much more emphasis placed on car following and gap acceptance models,
whereas lane changing dynamics such as tactical, cooperation, and relaxation
models receive comparatively little attention. This paper develops a general
relaxation model which can be applied to an arbitrary parametric or
nonparametric microsimulation model. The relaxation model modifies car
following dynamics after a lane change, when vehicles can be far from
equilibrium. Relaxation prevents car following models from reacting too
strongly to the changes in space headway caused by lane changing, leading to
more accurate and realistic simulated trajectories. We also show that
relaxation is necessary for correctly simulating traffic breakdown with
realistic values of capacity drop.
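A minimal sketch of how such a relaxation term could wrap an arbitrary car-following model (here an illustrative Intelligent Driver Model; the linear decay, the relaxation time tau, and all parameter values are assumptions for illustration, not the paper's calibrated formulation):

```python
def idm_accel(v, dv, s, v0=30.0, T=1.5, a=1.5, b=2.0, s0=2.0):
    """Intelligent Driver Model acceleration (illustrative parameter values).

    v: speed, dv: approach rate to leader, s: space headway (all SI units)."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * (a * b) ** 0.5))
    return a * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)

class Relaxation:
    """Linearly decaying additive adjustment to the perceived headway.

    After a lane change the follower's headway jumps by delta_s; instead of
    letting the car-following model react to the jump at once, it is fed the
    perceived gap s + r(t), where r decays from delta_s to 0 over tau seconds.
    """
    def __init__(self, delta_s, tau=15.0):
        self.delta_s = delta_s  # headway jump at the lane change (m)
        self.tau = tau          # relaxation time (s), a hypothetical value
        self.t = 0.0            # time elapsed since the lane change

    def perceived_gap(self, s, dt):
        r = self.delta_s * max(0.0, 1.0 - self.t / self.tau)
        self.t += dt
        return s + r
```

The wrapper is model-agnostic: any parametric or nonparametric follower that consumes a headway can be called with `perceived_gap(s, dt)` in place of the raw headway during the relaxation window.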
Optimal Ventilation Control in Complex Urban Tunnels with Multi-Point Pollutant Discharge
We propose an optimal ventilation control model for complex urban vehicular tunnels with distributed pollutant discharge points.
The control problem is formulated as a nonlinear integer program that aims to minimize ventilation energy cost while meeting
multiple air quality control requirements inside the tunnel and at discharge points. Based on the steady-state solutions to tunnel
aerodynamics equations, we propose a reduced form model for air velocities as explicit functions of ventilation decision variables
and traffic density. A compact parameterization of this model helps to show that tunnel airflows can be estimated using standard
linear regression techniques. The steady-state pollutant dispersion model is then incorporated for the derivation of optimal
pollutant discharge control strategies. A case study of a new urban tunnel in Hangzhou, China demonstrates that the scheduling of
fan operations based on the proposed optimization model can effectively achieve different air quality control objectives under
varying traffic intensity.

U.S. Department of Transportation 69A355174711
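The reduced-form idea, air velocity as an explicit function of ventilation decisions and traffic density fitted by standard linear regression, can be sketched as follows (the two-predictor linear form, the coefficients, and the data are hypothetical placeholders, not the paper's model):

```python
def ols(X, y):
    """Ordinary least squares via the normal equations, for tiny systems.

    X is a list of rows [1, x1, x2, ...]; returns the coefficient vector."""
    n, k = len(X), len(X[0])
    # Build A = X^T X and b = X^T y
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Hypothetical reduced form: v_air ≈ b0 + b1 * n_fans + b2 * traffic_density
rows = [[1, 0, 10], [1, 2, 10], [1, 4, 30], [1, 6, 50], [1, 3, 20]]  # [1, fans, veh/km]
v_air = [0.5 + 0.3 * f + 0.02 * d for _, f, d in rows]  # synthetic velocities (m/s)
b0, b1, b2 = ols(rows, v_air)
```

With airflow expressed this way, the fan-scheduling problem reduces to searching over integer fan counts subject to air-quality constraints evaluated through the fitted linear model.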
Twenty-five years of random asset exchange modeling
The last twenty-five years have seen the development of a significant
literature within the subfield of econophysics which attempts to model economic
inequality as an emergent property of stochastic interactions among ensembles
of agents. In this article, the literature surrounding this approach to the
study of wealth and income distributions, henceforth the "random asset
exchange" literature following the terminology of Sinha (2003), is thoroughly
reviewed for the first time. The foundational papers of Dragulescu and
Yakovenko (2000), Chakraborti and Chakrabarti (2000), and Bouchaud and Mezard
(2000) are discussed in detail, and principal canonical models within the
random asset exchange literature are established. The most common variations
upon these canonical models are enumerated, and significant papers within each
kind of modification are introduced. The successes of such models, as well as
the limitations of their underlying assumptions, are discussed, and it is
argued that the literature should move in the direction of more explicit
representations of economic structure and processes to acquire greater
explanatory power.
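A minimal simulation in the spirit of the basic Dragulescu and Yakovenko (2000) model illustrates the approach (the trade-size rule and all parameters are one common modeling choice, not a definitive specification; at equilibrium the wealth distribution approaches a Boltzmann-Gibbs exponential):

```python
import random

def random_exchange(n_agents=1000, steps=200000, m0=100.0, seed=42):
    """Basic random asset exchange: at each step a random ordered pair trades
    a random amount; debt is not allowed, and total money is conserved."""
    rng = random.Random(seed)
    money = [m0] * n_agents
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        dm = rng.uniform(0.0, m0)   # trade size: a modeling choice
        if money[i] >= dm:          # no-debt constraint
            money[i] -= dm
            money[j] += dm
    return money
```

Despite identical agents and symmetric rules, repeated stochastic exchange produces a persistently unequal distribution, which is the emergent-inequality phenomenon the literature studies.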
Global Geolocated Realtime Data of Interfleet Urban Transit Bus Idling
Urban transit bus idling is a contributor to ecological stress, economic
inefficiency, and medically hazardous health outcomes due to emissions. The
global accumulation of this frequent pattern of undesirable driving behavior is
enormous. To measure its scale, we propose GRD-TRT-BUF-4I (Ground Truth
Buffer for Idling), an extensible, realtime detection system that records
the geolocation and idling duration of urban transit bus fleets
internationally. Using live vehicle locations from General Transit Feed
Specification (GTFS) Realtime, the system detects approximately 200,000 idling
events per day from over 50 cities across North America, Europe, Oceania, and
Asia. This realtime data was created to dynamically serve operational
decision-making and fleet management to reduce the frequency and duration of
idling events as they occur, as well as to capture their cumulative effects.
Civil and transportation engineers, urban planners, epidemiologists,
policymakers, and other stakeholders might find this useful for emissions
modeling, traffic management, route planning, and other urban sustainability
efforts at a variety of geographic and temporal scales.

Comment: 34 pages, 12 figures, 36 tables, 100 data sources (including links).
Under review at Nature Scientific Data.
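GTFS Realtime vehicle positions are timestamped (lat, lon) fixes, so a simple stationarity heuristic can flag idling candidates (this is an illustrative sketch, not the paper's GRD-TRT-BUF-4I detection logic; the radius and minimum-duration thresholds are hypothetical):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def idling_events(track, radius_m=15.0, min_idle_s=60):
    """Flag idling: a run of fixes staying within radius_m of an anchor fix
    for at least min_idle_s. track = [(unix_ts, lat, lon), ...], time-sorted.
    Returns (start_ts, duration_s) tuples; thresholds are hypothetical."""
    events = []
    i = 0
    while i < len(track):
        j = i
        while (j + 1 < len(track)
               and haversine_m(track[i][1], track[i][2],
                               track[j + 1][1], track[j + 1][2]) <= radius_m):
            j += 1
        dur = track[j][0] - track[i][0]
        if dur >= min_idle_s:
            events.append((track[i][0], dur))
            i = j + 1
        else:
            i += 1
    return events
```

A production system would additionally filter scheduled layovers, GPS jitter, and stale feeds, but the core event definition (position unchanged beyond a duration threshold) is as above.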
Variance Reduction for Score Functions Using Optimal Baselines
Many problems involve the use of models which learn probability distributions
or incorporate randomness in some way. In such problems, because computing the
true expected gradient may be intractable, a gradient estimator is used to
update the model parameters. When the model parameters directly affect a
probability distribution, the gradient estimator will involve score function
terms. This paper studies baselines, a variance reduction technique for score
functions. Motivated primarily by reinforcement learning, we derive for the
first time an expression for the optimal state-dependent baseline, the baseline
which results in a gradient estimator with minimum variance. Although we show
that there exist examples where the optimal baseline may be arbitrarily better
than a value function baseline, we find that the value function baseline
usually performs similarly to an optimal baseline in terms of variance
reduction. Moreover, the value function can also be used for bootstrapping
estimators of the return, leading to additional variance reduction. Our results
give new insight and justification for why value function baselines and the
generalized advantage estimator (GAE) work well in practice.
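The effect of a baseline can be seen on a toy score-function estimator. Below, x ~ N(theta, 1) and f(x) = x^2 (both arbitrary choices for illustration), and we compare the second moment of the per-sample estimator (f(x) - b) * d/dtheta log p(x) with no baseline, an E[f]-style baseline, and the optimal constant baseline b* = E[f g^2] / E[g^2], a constant stand-in for the paper's state-dependent optimal baseline:

```python
import random

def simulate(theta=1.0, n=20000, seed=0):
    """Monte Carlo comparison of baselines for the score-function estimator
    of d/dtheta E[f(x)], x ~ N(theta, 1), f(x) = x**2.
    The score is d/dtheta log p(x) = (x - theta)."""
    rng = random.Random(seed)
    xs = [rng.gauss(theta, 1.0) for _ in range(n)]
    f = [x * x for x in xs]
    g = [x - theta for x in xs]  # score function values

    # Optimal constant baseline: b* = E[f g^2] / E[g^2]
    b_star = sum(fi * gi * gi for fi, gi in zip(f, g)) / sum(gi * gi for gi in g)

    def second_moment(b):
        """Sample second moment of the estimator (f - b) * g."""
        return sum(((fi - b) * gi) ** 2 for fi, gi in zip(f, g)) / n

    b_value = sum(f) / n  # E[f]-style baseline, analogous to a value function
    return b_star, second_moment(0.0), second_moment(b_value), second_moment(b_star)
```

Because b* minimizes the quadratic second moment over constant baselines, it never does worse than either alternative on the same sample; the interesting empirical question, as the abstract notes, is how close the value-style baseline typically comes.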
Cost benefit analysis of various California renewable portfolio standard targets: is a 33% RPS optimal?
Renewable Portfolio Standards (RPSs) require that a certain fraction of the electricity generated for a given region be produced from renewable resources. California's RPS mandates that by 2020, 33% of the electricity sold in the state must be generated from renewables. Such mandates have important implications for the electricity sector as well as for society as a whole. In this paper, we estimate the costs and benefits of varying 2020 California RPS targets on electricity prices, greenhouse gas (GHG) emissions, criteria pollutant emissions, the electricity generation mix, the labor market, renewable investment decisions, and social welfare. We have extended the RPS Calculator model, developed by Energy and Environmental Economics (E3) Inc., to account for distributions of fuel and generation costs, to incorporate demand functions, and to estimate the effects of RPS targets on GHG emissions, criteria pollutant emissions, and employment. The results of our modeling provide the following policy insights: (1) the average 2020 electricity price increases as the RPS target rises, with values ranging up to $0.175/kWh (2008 dollars) across the 20% to 50% RPS targets; (2) the 33% and 50% RPS targets decrease GHG emissions by about 17.6 and 35.8 million metric tons of carbon dioxide equivalent (MMTCO2e), respectively, relative to the 20% RPS; (3) the GHG emission reduction costs of the RPS options are high ($94 per ton) relative to results from policy options other than RPS or prices that are common in the carbon markets; and (4) a lower target (e.g., a 27% RPS) provides higher social welfare than the 33% RPS (mandate) under low and moderate CO2 social costs (lower than $35/ton), while a higher RPS target (e.g., 50%) is more beneficial when using high CO2 social costs or with rapid renewable technology diffusion.
However, under all studied scenarios, the mandated 33% RPS for California would not provide the best cost-benefit values among the possible targets and would not maximize the net social benefit objective.
Star-galaxy separation in the AKARI NEP Deep Field
Context: It is crucial to develop a method for classifying objects detected
in deep surveys at infrared wavelengths. We specifically need a method to
separate galaxies from stars using only the infrared information to study the
properties of galaxies, e.g., to estimate the angular correlation function,
without introducing any additional bias. Aims: We aim to separate stars and
galaxies in the data from the AKARI North Ecliptic Pole (NEP) Deep survey
collected in nine AKARI/IRC bands from 2 to 24 μm that cover the near-
and mid-infrared wavelengths (hereafter NIR and MIR). We plan to estimate the
correlation function for NIR and MIR galaxies from a sample selected according
to our criteria in future research. Methods: We used support vector machines
(SVM) to study the distribution of stars and galaxies in AKARI's multicolor
space. We defined the training samples of these objects by calculating their
infrared stellarity parameter (sgc). We created the most efficient classifier
and then tested it on the whole sample. We confirmed the developed separation
with auxiliary optical data obtained by the Subaru telescope and by creating
Euclidean normalized number count plots. Results: We obtain a 90% accuracy in
pinpointing galaxies and 98% accuracy for stars in infrared multicolor space
with the infrared SVM classifier. The source counts and comparison with the
optical data (with a consistency of 65% for selecting stars and 96% for
galaxies) confirm that our star/galaxy separation methods are reliable.
Conclusions: The infrared classifier derived with the SVM method based on
infrared sgc-selected training samples proves to be very efficient and
accurate in selecting stars and galaxies in deep surveys at infrared
wavelengths carried out without any previous target object selection.

Comment: 8 pages, 8 figures.
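A linear SVM trained by Pegasos-style sub-gradient descent illustrates the separation idea on synthetic two-color data (the paper's classifier is an SVM built on nine-band AKARI colors with sgc-selected training labels; the features, the synthetic clusters, and the linear kernel here are illustrative stand-ins):

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient training of a linear SVM (hinge loss).

    y in {-1, +1}. A simplified stand-in for a full (kernel) SVM trainer."""
    rng = random.Random(seed)
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    t = 0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            w = [(1.0 - eta * lam) * wj for wj in w]  # regularization shrink
            if margin < 1:  # hinge-loss sub-gradient step on margin violations
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Synthetic two-color training set: "stars" (-1) vs "galaxies" (+1)
rng = random.Random(1)
stars = [[rng.gauss(0.2, 0.1), rng.gauss(0.1, 0.1)] for _ in range(20)]
galaxies = [[rng.gauss(1.5, 0.1), rng.gauss(1.2, 0.1)] for _ in range(20)]
X, y = stars + galaxies, [-1] * 20 + [1] * 20
w, b = train_linear_svm(X, y)
accuracy = sum(predict(w, b, x) == yi for x, yi in zip(X, y)) / len(X)
```

On real survey data one would cross-validate, use the full nine-band color space, and typically a nonlinear kernel, but the decision-boundary idea is the same.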