Making the Traffic Operations Case for Congestion Pricing: Operational Impacts of Congestion Pricing
Congestion begins when an excess of vehicles occupies a segment of roadway at a given time, resulting in speeds that are significantly slower than normal, or 'free flow,' speeds. Congestion often means stop-and-go traffic. The transition occurs when vehicle density (the number of vehicles per mile in a lane) exceeds a critical level. Once traffic enters a congested state, recovery to free flow is lengthy, and delay continues to accumulate during the recovery process. The breakdown in speed and flow greatly impedes the efficient operation of the freeway system, resulting in economic, mobility, environmental, and safety problems. Freeways are designed to function as access-controlled highways characterized by uninterrupted traffic flow, so references to freeway performance relate primarily to the quality of traffic flow as experienced by users of the freeway. The maximum flow, or capacity, of a freeway segment is reached while traffic is still moving freely. As a result, freeways are most productive when they carry capacity flows at about 60 mph; lower speeds impose delay, and locations where speed and flow break down form bottlenecks. Bottlenecks may be caused by physical disruptions, such as a reduced number of lanes, a change in grade, or an on-ramp with a short merge lane. This type of bottleneck occurs on a predictable, or 'recurrent,' basis at the same time of day and day of week. Recurrent congestion accounts for 45% of total congestion and stems primarily from bottlenecks (40%) and inadequate signal timing (5%). Nonrecurring congestion results from crashes, work zone disruptions, adverse weather conditions, and special events that create surges in demand, and accounts for over 55% of experienced congestion. Figure 1.1 shows that nonrecurring congestion is composed of traffic incidents (25%), severe weather (15%), work zones (10%), and special events (5%).
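The critical-density transition described above can be illustrated with the classic Greenshields linear speed-density model. This model and its parameter values are an assumption for illustration; the report does not specify a particular traffic-flow model.

```python
# Illustrative Greenshields model: speed falls linearly with density,
# so flow (density * speed) peaks at a critical density, beyond which
# adding vehicles reduces throughput -- the onset of congestion.

def greenshields(k, v_free=60.0, k_jam=160.0):
    """Speed (mph) and flow (veh/hr/lane) at density k (veh/mi/lane).

    v_free and k_jam are hypothetical free-flow speed and jam density.
    """
    v = v_free * (1.0 - k / k_jam)   # speed declines linearly with density
    q = k * v                        # flow = density * speed
    return v, q

# Flow is maximized at half the jam density (the critical density):
k_crit = 160.0 / 2
v_crit, q_max = greenshields(k_crit)
# Past k_crit, flow falls even as density rises -- congestion has begun.
```

The model's key qualitative point matches the text: maximum throughput occurs while traffic is still moving well below jam density.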
Between 1995 and 2005, average peak-period traveler delay, measured as hours spent in traffic per year, grew by 22%, as the national average rose from 36 hours to 44 hours. Peak delay per traveler grew by one-third in medium-size urban areas over the 10-year period. The traffic engineering community has developed an arsenal of integrated tools to mitigate the impacts of congestion on freeway throughput and performance, including pricing of capacity to manage travel demand. Congestion pricing is a strategy that dynamically matches demand with available capacity. A congestion price is a user fee equal to the added cost imposed on other travelers as a result of the last traveler's entry into the highway network. The concept is based on the idea that motorists should pay for the additional congestion they create when entering a congested road. It calls for fees that vary with the level of congestion, with the price mechanism applied to make travelers more fully aware of the congestion externality they impose on other travelers and the system itself. The operational rationales for instituting pricing strategies are to improve the efficiency of operations in a corridor and/or to better manage congestion. To this end, the objectives of this project were to: (1) better understand and quantify the impacts of congestion pricing strategies on traffic operations through the study of actual projects, and (2) better understand and quantify those impacts through the use of modeling and other analytical methods. Specifically, the project was to identify credible analytical procedures that FHWA can use to quantify the impacts of various congestion pricing strategies on traffic flow (throughput) and congestion.
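The definition above, a fee equal to the cost the marginal driver imposes on everyone else, can be made concrete with a marginal external cost calculation. The BPR-style delay function and all parameter values below are illustrative assumptions, not taken from the report.

```python
# Sketch of a congestion price as marginal external cost: the toll equals
# (number of affected travelers) * (extra delay per traveler caused by one
# more vehicle) * (value of time). The BPR delay form is an assumption.

def bpr_time(q, t0=10.0, cap=2000.0, alpha=0.15, beta=4):
    """Travel time (min) on a link at flow q (veh/hr), BPR functional form."""
    return t0 * (1.0 + alpha * (q / cap) ** beta)

def congestion_toll(q, vot=0.30, t0=10.0, cap=2000.0, alpha=0.15, beta=4):
    """Marginal external cost in dollars: q * dt/dq * value of time ($/min)."""
    dt_dq = t0 * alpha * beta * q ** (beta - 1) / cap ** beta  # min per extra veh
    return q * dt_dq * vot

# At light flows the toll is near zero; near capacity it rises steeply,
# which is exactly the "fees vary with the level of congestion" behavior.
```

Because the derivative term grows with the fourth power of flow here, the computed toll is negligible off-peak and substantial at capacity, mirroring how priced facilities vary their rates.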
The Effects of Hunger Marketing on Scarcity Products
Hunger marketing is a marketing strategy in which goods suppliers deliberately limit product supply to create excess demand. This research primarily investigated the differing impacts of Jordan shoes (a tangible good) and travel to the Maldives (an intangible good) on the variables across the various dimensions of hunger marketing. It also focused on the relationships among hunger marketing; knowledge-exchange motivation, opportunities, and ability; involvement; epistemic value; purchase intention; word of mouth (WOM); and the interference (moderating) effects of Jordan shoes, travel to the Maldives, and financial status. The research adopted structural equation modelling (SEM) to construct the research framework. The researchers collaborated with a survey company to distribute the questionnaires, of which 975 were recovered. Analytical methods such as factor loadings, t-values, average variance extracted (AVE), and Cronbach's α were employed to verify that the collected data were consistent with the hypotheses set by the aims of this research. The results could provide academic value for research related to hunger marketing.
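One of the reliability checks named in the abstract, Cronbach's α, can be computed directly from item scores. The questionnaire items and respondent data below are invented for illustration; the paper's actual survey data are not shown here.

```python
# Cronbach's alpha: internal-consistency reliability of a multi-item scale,
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores).

def cronbach_alpha(items):
    """items: list of equal-length score lists, one list per questionnaire item."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1.0 - sum(var(it) for it in items) / var(totals))

# Three hypothetical items answered by five respondents on a 1-5 scale:
scores = [[4, 5, 3, 4, 2],
          [4, 4, 3, 5, 2],
          [5, 5, 2, 4, 3]]
alpha = cronbach_alpha(scores)  # values above ~0.7 are conventionally acceptable
```

AVE and factor loadings would come from the fitted SEM measurement model itself; α is the piece that can be computed from raw scores alone.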
Transferring 2001 National Household Travel Survey
Policy makers rely on transportation statistics, including data on personal travel behavior, to formulate strategic transportation policies and to improve the safety and efficiency of the U.S. transportation system. Data on personal travel trends are needed to examine the reliability, efficiency, capacity, and flexibility of the Nation's transportation system to meet current demands and to accommodate future demand. These data are also needed to assess the feasibility and efficiency of alternative congestion-mitigating technologies (e.g., high-speed rail, magnetically levitated trains, and intelligent vehicle and highway systems); to evaluate the merits of alternative transportation investment programs; and to assess the energy-use and air-quality impacts of various policies. To address these data needs, the U.S. Department of Transportation (USDOT) initiated an effort in 1969 to collect detailed data on personal travel. The 1969 survey was the first Nationwide Personal Transportation Survey (NPTS). The survey was conducted again in 1977, 1983, 1990, 1995, and 2001. Data on daily travel were collected in 1969, 1977, 1983, 1990, and 1995. In 2001, the survey was renamed the National Household Travel Survey (NHTS), and it collected data on both daily travel and long-distance trips. The 2001 survey was sponsored by three USDOT agencies: the Federal Highway Administration (FHWA), the Bureau of Transportation Statistics (BTS), and the National Highway Traffic Safety Administration (NHTSA). The primary objective of the survey was to collect trip-based data on the nature and characteristics of personal travel so that relationships between the characteristics of personal travel and the demographics of the traveler could be established. Commercial and institutional travel was not part of the survey.
Due to the survey's design, data in the NHTS survey series were not recommended for estimating travel statistics for categories smaller than the combination of Census division (e.g., New England, Middle Atlantic, and Pacific), MSA size, and the availability of rail. Extrapolating NHTS data within small geographic areas risks producing, and subsequently using, unreliable estimates. For example, if a planning agency in City X of State Y estimates travel rates and other travel characteristics based on survey data collected from the NHTS sample households located in City X of State Y, the agency could end up using unreliable estimates in its planning process. This limitation typically grows more severe as the size of the area decreases. That said, the NHTS contains a wealth of information that could support statistical inferences about small geographic areas with a pre-determined level of statistical certainty. The question then becomes whether a method can be developed that integrates the NHTS data with other data to estimate key travel characteristics for small geographic areas, such as Census tracts and transportation analysis zones, and whether this method can outperform other, competing methods.
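One simple instance of the transfer idea posed above is to borrow nationally estimated trip *rates* per household category from the NHTS and apply them to a small area's own household counts. All numbers and category definitions below are invented; real transfer methods involve more covariates and formal uncertainty estimates.

```python
# Minimal sketch of rate-based transfer: national survey rates applied to
# local census counts, rather than using the area's own sparse survey sample.

# Hypothetical NHTS-derived daily person-trip rates by household size:
nhts_trip_rate = {1: 3.5, 2: 6.8, 3: 9.1, 4: 11.6}

# Hypothetical census counts of households by size in one tract:
tract_households = {1: 220, 2: 310, 3: 140, 4: 90}

def tract_daily_trips(rates, counts):
    """Transfer estimate: sum over categories of (national rate * local count)."""
    return sum(rates[c] * counts[c] for c in counts)

trips = tract_daily_trips(nhts_trip_rate, tract_households)
```

The method sidesteps the small-sample problem the paragraph describes, at the cost of assuming the national rates hold locally, which is precisely the assumption a competing-methods comparison would need to test.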
Final Report for Phase II Study: Prototyping the Sketch Planning Visualization Tool for Non-Motorized Travel
Supply Chain Based Solution to Prevent Fuel Tax Evasion: Proof of Concept Final Report
The goal of this research was to provide a proof-of-concept (POC) system for preventing non-taxable (non-highway diesel use) or low-taxable (jet fuel) petrochemical products from being blended with taxable fuel products, and for preventing taxable fuel products from cross-jurisdiction evasion. The research addressed the need to validate the legitimacy of individual loads, offloads, and movements by integrating and validating, on a near-real-time basis, information from global positioning system (GPS), valve, level, and fuel-marker sensors.
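The cross-validation step described above can be sketched as a rule that accepts a declared offload only when independent sensor streams agree. Every field name, threshold, and value below is invented for illustration; the POC system's actual record formats and checks are not described at this level in the abstract.

```python
# Hypothetical offload validation: a declared delivery is legitimate only if
# (1) the GPS fix lies inside the declared terminal's geofence,
# (2) the valve sensor shows the valve was open, and
# (3) the tank level drop matches the declared volume within tolerance.

def validate_offload(event, tol_gal=50.0):
    measured_drop = event["level_before_gal"] - event["level_after_gal"]
    in_geofence = event["gps_dist_to_terminal_mi"] <= 0.25
    volume_ok = abs(measured_drop - event["declared_gal"]) <= tol_gal
    return in_geofence and event["valve_open"] and volume_ok

event = {
    "declared_gal": 7500.0,
    "level_before_gal": 8000.0,
    "level_after_gal": 480.0,   # 7520 gal actually left the tank
    "valve_open": True,
    "gps_dist_to_terminal_mi": 0.1,
}
ok = validate_offload(event)    # accepted: within the 50-gal tolerance
```

A declared volume that diverges from the measured level drop, or an offload outside the geofence, would flag the movement for review, which is the evasion pattern the system aims to catch.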
Freight Analysis Framework Version 5 (FAF5) Base Year 2017 Data Development Technical Report
DE-AC05-00OR22725
The Freight Analysis Framework (FAF) integrates data from a variety of sources to create a comprehensive national picture of freight movements among states and major metropolitan areas by all modes of transportation. The latest release in this data series is FAF5, the fifth generation of the FAF, benchmarked on the 2017 Commodity Flow Survey (CFS). Except for FAF1, which provided estimates of truck, rail, and water tonnage for calendar year 1998, the later generations of the FAF (FAF2 through FAF5) were built on their benchmark-year CFS data for 2002, 2007, 2012, and 2017, respectively. The FAF is produced under a partnership between the Bureau of Transportation Statistics (BTS) and the Federal Highway Administration (FHWA). As the major data product of the FAF program, the FAF regional database provides a national picture of freight flows to, from, and within the United States (among regions and states), by commodity and mode, for the base year as well as for forecasts up to 30 years into the future at 5-year intervals. Additional FAF data products include the FAF network flows database, in which truck movements are routed onto the national highway network, estimates of annual projections, and a synchronized historical data series. This report is a technical document describing the data sources and modeling methodologies applied in building the FAF5 base-year 2017 regional database, released as FAF5.0 in February 2021. The FAF5 base-year database serves as the base for developing forecasts and for assigning truck flows to the highway network; it will likewise be used as the base for generating FAF5 annual estimates.
In addition to this report, users are encouraged to refer to the FAF5 User's Guide, which provides basic information about the data, including definitions of the data attributes and information on how to access the data and tools, as well as a detailed data dictionary and code tables.
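The structure of the regional database described above, flows by origin, destination, commodity, and mode, lends itself to simple aggregation queries. The record layout and values below are invented for illustration; consult the FAF5 User's Guide for the actual attribute names and code tables.

```python
# Hypothetical FAF-style regional flow records and a typical aggregation:
# total tonnage by mode for all flows leaving one origin region.

faf_rows = [
    {"origin": "TN", "dest": "GA", "mode": "Truck", "ktons": 1200.0},
    {"origin": "TN", "dest": "GA", "mode": "Rail",  "ktons": 300.0},
    {"origin": "TN", "dest": "TX", "mode": "Truck", "ktons": 800.0},
    {"origin": "GA", "dest": "TN", "mode": "Truck", "ktons": 950.0},
]

def tons_by_mode(rows, origin):
    """Sum thousand-tons by mode for all flows leaving `origin`."""
    out = {}
    for r in rows:
        if r["origin"] == origin:
            out[r["mode"]] = out.get(r["mode"], 0.0) + r["ktons"]
    return out

flows = tons_by_mode(faf_rows, "TN")   # {'Truck': 2000.0, 'Rail': 300.0}
```

The same grouping pattern extends to commodity, year, and trade type, which is how the forecast and network-flow products are derived from the regional base.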
Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)
In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy.
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as specific markers for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
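The steady-state-versus-flux distinction drawn above is commonly operationalized by measuring a marker such as LC3-II with and without a lysosomal inhibitor: the *difference* between the two estimates how much material was being degraded. The densitometry values below are invented for illustration.

```python
# Flux estimate: blocking lysosomal degradation unmasks how much marker was
# being turned over. Same steady-state level can hide very different flux.

def autophagic_flux(marker_with_inhibitor, marker_without_inhibitor):
    """Difference in marker signal when degradation is blocked vs. not."""
    return marker_with_inhibitor - marker_without_inhibitor

# Two hypothetical conditions with identical steady-state LC3-II (1.8):
flux_a = autophagic_flux(3.6, 1.8)  # high flux: inhibitor doubles the signal
flux_b = autophagic_flux(1.9, 1.8)  # low flux: blocking degradation adds little
# Equal autophagosome levels, very different autophagic activity -- the
# exact trap the guidelines warn against.
```

This is why the guidelines stress that more autophagosomes do not necessarily mean more autophagy: only the inhibitor-revealed difference reports on throughput.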
Temporary Losses of Highway Capacity and Impacts on Performance: Phase 2
Traffic congestion and its impacts significantly affect the nation's economic performance and the public's quality of life. In most urban areas, travel demand routinely exceeds highway capacity during peak periods. In addition, events such as crashes, vehicle breakdowns, work zones, adverse weather, railroad crossings, large trucks loading/unloading in urban areas, and other factors such as toll collection facilities and sub-optimal signal timing cause temporary capacity losses, often worsening the conditions on already congested highway networks. The impacts of these temporary capacity losses include delay, reduced mobility, and reduced reliability of the highway system. They can also cause drivers to re-route or reschedule trips. Such information is vital to formulating sound public policies for the highway infrastructure and its operation. In response to this need, Oak Ridge National Laboratory, sponsored by the Federal Highway Administration (FHWA), made an initial attempt to provide nationwide estimates of the capacity losses and delay caused by temporary capacity-reducing events (Chin et al. 2002). This study, called the Temporary Loss of Capacity (TLC) study, estimated capacity loss and delay on freeways and principal arterials resulting from fatal and non-fatal crashes, vehicle breakdowns, and adverse weather, including snow, ice, and fog. In addition, it estimated capacity loss and delay caused by sub-optimal signal timing at intersections on principal arterials. It also included rough estimates of capacity loss and delay on Interstates due to highway construction and maintenance work zones. Capacity loss and delay were estimated for calendar year 1999, except for work zone estimates, which were estimated for May 2001 to May 2002 due to data availability limitations.
Prior to the first phase of this study, which was completed in May of 2002, no nationwide estimates of temporary losses of highway capacity by type of capacity-reducing event had been made. This report describes the second phase of the TLC study (TLC2). TLC2 improves upon the first study by expanding the scope to include delays from rain, toll collection facilities, railroad crossings, and commercial truck pickup and delivery (PUD) activities in urban areas. It includes estimates of work zone capacity loss and delay for all freeways and principal arterials, rather than for Interstates only. It also includes improved estimates of delays caused by fog, snow, and ice, which are based on data not available during the initial phase of the study. Finally, computational errors involving crash and breakdown delay in the original TLC report are corrected.
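The link between a temporary capacity loss and the resulting delay can be sketched with deterministic queuing: a queue grows while demand exceeds the reduced capacity and dissipates after capacity is restored, and total delay is the area under the queue-length curve. This is a textbook simplification offered for illustration; the TLC studies' actual estimation procedures are more detailed, and all values below are invented.

```python
# Deterministic queuing sketch of delay from one capacity-reducing event.
# Queue length over time traces a triangle: it grows at (demand - reduced
# capacity) during the event, then shrinks at (normal capacity - demand).

def incident_delay(demand, reduced_cap, normal_cap, duration_hr):
    """Total vehicle-hours of delay from one temporary capacity loss."""
    if demand <= reduced_cap:
        return 0.0                                       # no queue forms
    peak_queue = (demand - reduced_cap) * duration_hr    # vehicles queued
    clear_time = peak_queue / (normal_cap - demand)      # hours to dissipate
    # Delay = area of the queue-length triangle over its full base.
    return 0.5 * peak_queue * (duration_hr + clear_time)

# A 30-minute incident: demand 4000 veh/hr, capacity cut from 6000 to 3000.
d = incident_delay(4000.0, 3000.0, 6000.0, 0.5)   # 187.5 vehicle-hours
```

Scaling such per-event estimates by nationwide event frequencies is, in spirit, how event-type delay totals like those in the TLC studies are assembled.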
Document type: Report