
    Evaluation of Coordinated Ramp Metering (CRM) Implemented By Caltrans

    Coordinated ramp metering (CRM) is a critical component of smart freeway corridors that rely on real-time traffic data from ramps and the freeway mainline to improve decision-making by motorists and Traffic Management Center (TMC) personnel. CRM uses an algorithm that considers real-time traffic volumes on the freeway mainline and ramps and adjusts the metering rates on the ramps accordingly for optimal flow along the entire corridor. Improving capacity through smart corridors is less costly and easier to deploy than freeway widening, which carries high right-of-way acquisition and construction costs. Nevertheless, conversion to smart corridors still represents a sizable investment for public agencies, and in the U.S. there have been limited evaluations of smart corridors in general, and CRM in particular, based on real operational data. This project examined the recent Smart Corridor implementations on Interstate 80 (I-80) in the Bay Area and State Route 99 (SR-99) in Sacramento using travel time reliability measures, efficiency measures, and a before-and-after safety evaluation based on the Empirical Bayes (EB) approach. As such, this evaluation represents the most complete before-and-after evaluation of such systems to date. The reliability measures include the buffer index, planning time, and measures from the literature that account for both the skew and width of the travel time distribution. For efficiency, the study estimates the ratio of vehicle miles traveled (VMT) to vehicle hours traveled (VHT). The study contextualizes the before-and-after comparisons of efficiency and reliability by computing the same measures for control corridors from the same regions (I-280 in District 4 and I-5 in District 3) that did not have CRM implemented. The results show an improvement in freeway operation based on the efficiency data; post-CRM implementation, the travel time reliability measures do not show a similar improvement. The report also provides a counterfactual estimate of expected crashes in the post-implementation period, which can be compared with the actual number of crashes in the “after” period to evaluate effectiveness.
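
    As a concrete illustration of the measures named above, the short sketch below computes a buffer index, a planning time index, and a VMT/VHT ratio from a set of corridor travel time observations. The variable names, the 95th-percentile convention, the free-flow travel time input, and the sample numbers are assumptions made for illustration, not values or definitions taken from the report.

    import numpy as np

    def reliability_and_efficiency(travel_times_min, free_flow_min, vmt, vht):
        """Illustrative corridor performance measures (assumed definitions).

        travel_times_min : observed corridor travel times (minutes)
        free_flow_min    : assumed free-flow travel time (minutes)
        vmt, vht         : total vehicle miles traveled and vehicle hours traveled
        """
        tt = np.asarray(travel_times_min, dtype=float)
        mean_tt = tt.mean()
        p95_tt = np.percentile(tt, 95)

        # Buffer index: extra buffer a traveler needs relative to the mean trip.
        buffer_index = (p95_tt - mean_tt) / mean_tt
        # Planning time index: 95th-percentile trip relative to free flow.
        planning_time_index = p95_tt / free_flow_min
        # Efficiency: VMT-to-VHT ratio, an average-speed proxy (mph).
        vmt_per_vht = vmt / vht
        return buffer_index, planning_time_index, vmt_per_vht

    # Hypothetical numbers, for illustration only.
    bi, pti, eff = reliability_and_efficiency(
        travel_times_min=[18, 20, 22, 25, 35, 19, 21], free_flow_min=15,
        vmt=1_200_000, vht=24_000)
    print(f"buffer index={bi:.2f}, planning time index={pti:.2f}, VMT/VHT={eff:.1f} mph")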

    Delay-oriented active queue management in TCP/IP networks

    Internet-based applications and services are pervading everyday life, and the growing popularity of real-time, time-critical and mission-critical applications sets new challenges for the Internet community. The requirement for reducing response time, and therefore for latency control, is increasingly emphasized. This thesis seeks to reduce queueing delay through active queue management. While mathematical studies and simulations reveal complex trade-offs among performance indices such as throughput, packet loss ratio and delay, this thesis aims to find an improved active queue management algorithm that emphasizes delay control without trading away much throughput or packet loss ratio. The thesis observes that in TCP/IP networks, packet loss ratio is a major reflection of congestion severity or load. With a properly functioning active queue management algorithm, traffic load will in general push the feedback system to an equilibrium point in terms of packet loss ratio and throughput. Queue length, on the other hand, is a determinant of system delay performance while having only a slight influence on that equilibrium. This observation suggests the possibility of reducing delay while keeping throughput and packet loss ratio relatively unchanged. The thesis also observes that queue length fluctuation reflects both load changes and natural fluctuation in the arriving bit rate. Monitoring queue length alone cannot distinguish between the two or identify congestion status; yet identifying this difference is crucial for finding situations where the average queue size, and hence queueing delay, can be properly controlled and reasonably reduced. However, many existing active queue management algorithms only monitor queue length, and their control policies are based solely on this measurement. The novel finding of this work is that the distribution of the arriving bit rate across all sources contains information that can better indicate congestion status and that correlates with traffic burstiness. The thesis develops a simple and scalable way to measure its two most important characteristics, namely the mean and the variance of the arriving rate distribution. The measuring mechanism is based on the zombie list originally proposed and deployed in Stabilized RED to estimate the number of flows and identify misbehaving flows; this thesis modifies the original zombie list mechanism so that it can measure these additional variables. Based on these additional measurements, the thesis proposes a novel modification to the RED algorithm. It uses a robust adaptive mechanism to ensure that the system reaches proper equilibrium operating points in terms of packet loss ratio and queueing delay under various loads, and it identifies congestion states where traffic is less bursty and adapts RED parameters to reduce the average queue size, and hence queueing delay, accordingly. Using the ns-2 simulation platform, the thesis simulates a single bottleneck link, an important and common scenario such as a home access or SoHo network. Simulation results indicate that there are complex trade-off relationships among throughput, packet loss ratio and delay, and that within these relationships delay can be substantially reduced while the trade-offs on throughput and packet loss ratio are negligible. Simulation results also show that the proposed active queue management algorithm can identify circumstances where traffic is less bursty and actively reduce queueing delay with hardly noticeable sacrifice in throughput and packet loss ratio. In conclusion, this approach enables the application of adaptive techniques to more RED parameters, including those affecting queue occupancy and hence queueing delay. The modification to RED is scalable and does not introduce additional protocol overhead; in general it brings the benefit of substantially reduced delay at the cost of limited processing overhead and negligible degradation in throughput and packet loss ratio. However, the new algorithm has only been tested on responsive flows and a single bottleneck scenario; its effectiveness with a mix of responsive and non-responsive flows, and in more complex network topologies, is left for future work.
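
    To make the mechanism being modified concrete, here is a minimal sketch of the classic RED drop decision, with a hypothetical hook that relaxes the queue thresholds when the measured arrival-rate variance suggests traffic is not bursty. The rate estimator interface, the threshold values, and the adaptation rule are illustrative assumptions, not the thesis's actual zombie-list mechanism or parameterization.

    import random

    class AdaptiveRedQueue:
        """Sketch of a RED drop decision with an assumed burstiness-driven adaptation."""

        def __init__(self, min_th=5, max_th=15, max_p=0.1, wq=0.002):
            self.min_th, self.max_th = min_th, max_th   # queue-length thresholds (packets)
            self.max_p, self.wq = max_p, wq             # max drop probability, EWMA weight
            self.avg = 0.0                              # EWMA of the queue length

        def adapt(self, rate_mean, rate_var):
            # Hypothetical rule: when arrival-rate variance is small relative to the
            # mean (traffic is less bursty), lower the thresholds to shrink the
            # average queue and hence queueing delay.
            if rate_mean > 0 and rate_var / (rate_mean ** 2) < 0.05:
                self.min_th, self.max_th = 3, 9
            else:
                self.min_th, self.max_th = 5, 15

        def should_drop(self, queue_len):
            # Update the exponentially weighted moving average of the queue length.
            self.avg = (1 - self.wq) * self.avg + self.wq * queue_len
            if self.avg < self.min_th:
                return False
            if self.avg >= self.max_th:
                return True
            # Drop probability grows linearly between the two thresholds.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            return random.random() < p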

    General aviation environment

    The background, development, and relationships among economic factors, airworthiness, costs, and environmental protection are examined. Government regulations for airports, air agencies, aircraft, and airmen are reviewed.

    Improving the Performance of Internet Data Transport

    With the explosion of the World Wide Web, the Internet infrastructure faces new challenges in providing high performance for data traffic. First, it must be able to provide a fair share of congested link bandwidth to every flow. Second, since web traffic is inherently interactive, it must minimize the delay for data transfer. Recent studies have shown that queue management algorithms such as Tail Drop, RED and Blue are deficient in providing high-throughput, low-delay paths for a data flow. Two major shortcomings of the current algorithms are that they allow TCP flows to become synchronized, and thus require large buffers during congestion to sustain high throughput, and that they allow unfair bandwidth usage by TCP flows with shorter round-trip times. We propose algorithms using multiple queues and discard policies with hysteresis at bottleneck routers to address both of these issues. Using ns-2 simulations, we show that these algorithms can significantly outperform RED and Blue, especially at smaller buffer sizes. Using multiple queues raises two new concerns: scalability and the excess memory bandwidth used by dropping packets that have already been queued. We propose and evaluate an architecture using Bloom filters to evenly distribute flows among queues to improve scalability. We have also developed new intelligent packet discard algorithms that discard packets on arrival and achieve performance close to that of policies that may discard packets that have already been queued. Finally, we propose better methods for evaluating the performance of fair-queueing methods. In the current literature, fair-queueing methods are evaluated based on their worst-case performance. This can exaggerate the differences among algorithms, since the worst-case behavior depends on the precise timing of packet arrivals. This work seeks to understand what happens under more typical circumstances.
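
    The dissertation's exact Bloom-filter construction is not reproduced here; the sketch below shows one plausible hash-based flow-to-queue assignment (several hash functions per flow, pick the least-loaded candidate queue, remember the choice so packets stay in order), with an ordinary dictionary standing in for the Bloom-filter state to keep the example short. All names and the policy itself are illustrative assumptions.

    import hashlib

    class FlowToQueueMapper:
        """Illustrative hash-based assignment of flows to a small set of queues."""

        def __init__(self, num_queues=8, num_hashes=3):
            self.num_queues = num_queues
            self.num_hashes = num_hashes
            self.queue_load = [0] * num_queues   # packets currently enqueued per queue
            self.assignment = {}                 # flow id -> queue index (stand-in state)

        def _candidates(self, flow_id):
            # Derive several independent hash values from the flow identifier.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{flow_id}".encode()).digest()
                yield int.from_bytes(digest[:4], "big") % self.num_queues

        def enqueue(self, flow_id):
            # Reuse an existing assignment; otherwise pick the least-loaded candidate.
            q = self.assignment.get(flow_id)
            if q is None:
                q = min(self._candidates(flow_id), key=lambda c: self.queue_load[c])
                self.assignment[flow_id] = q
            self.queue_load[q] += 1
            return q

    mapper = FlowToQueueMapper()
    print(mapper.enqueue(("10.0.0.1", 34567, "10.0.0.2", 80, "tcp")))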

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources required by today's cloud computing needs. A typical datacenter is made up of thousands of servers connected by a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider across a variety of traffic control mechanisms. We discuss various aspects of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems.
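
    To ground the prioritization and scheduling mechanisms mentioned above, here is a toy sketch of strict-priority dequeueing across the traffic classes the abstract names (interactive, deadline-bound, long-running). The class names, the strict-priority policy, and the starvation caveat are assumptions chosen for illustration, not a scheme taken from the paper.

    from collections import deque

    # Assumed ordering: interactive first, then deadline traffic, then long-running flows.
    PRIORITY_ORDER = ["interactive", "deadline", "long_running"]

    class StrictPriorityScheduler:
        """Toy strict-priority scheduler over per-class FIFO queues."""

        def __init__(self):
            self.queues = {cls: deque() for cls in PRIORITY_ORDER}

        def enqueue(self, traffic_class, packet):
            self.queues[traffic_class].append(packet)

        def dequeue(self):
            # Always serve the highest-priority non-empty class; lower classes only
            # get the link when higher ones are idle (risking starvation, which is
            # why real deployments often combine priorities with rate limits).
            for cls in PRIORITY_ORDER:
                if self.queues[cls]:
                    return cls, self.queues[cls].popleft()
            return None

    sched = StrictPriorityScheduler()
    sched.enqueue("long_running", "bulk-1")
    sched.enqueue("interactive", "rpc-1")
    print(sched.dequeue())  # ('interactive', 'rpc-1')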

    Potential of PM-selected components to induce oxidative stress and root system alteration in a plant model organism

    Over recent years, various acellular assays have been used to evaluate the oxidative potential (OP) of particulate matter (PM), that is, the capacity of PM to generate reactive oxygen species (ROS) and reactive nitrogen species (RNS) in biological systems. However, the relationships between OP and the toxicological effects of PM on living organisms are still largely unknown. This study assesses the effects of selected atmospheric PM components (brake dust - BD, pellet ash - PA, road dust - RD, certified urban dust NIST1648a - NIST, soil dust - S, coke dust - C and Saharan dust - SD) on the development of the model plant A. thaliana, with emphasis on their capacity to induce oxidative stress and root morphology alteration. Before growing A. thaliana in the presence of the PM components, each dust was chemically characterized and its OP was tested through dithiothreitol (DTT), ascorbic acid (AA) and 2′,7′-dichlorofluorescin (DCFH) assays. After the exposure, element bioaccumulation in the A. thaliana seedlings, i.e., in roots and shoots, was determined, and both morphological and oxidative stress analyses were performed on the roots. The results indicated that, except for SD and S, all the tested dusts affected A. thaliana root system morphology, with the strongest effects from the dusts with the highest OP (BD, PA and NIST). Principal component analysis (PCA) revealed correlations among the OP of the dusts, element bioaccumulation and root morphology alteration, identifying the dust-associated elements most responsible for the effects on the plant. Lastly, histochemical analyses of NO and O2•− content and distribution confirmed that BD, PA and NIST induce oxidative stress in A. thaliana, reflecting the high OP of these dusts and ultimately leading to cell membrane lipid peroxidation.

    ATP-sensitive potassium channel subcellular trafficking during ischemia, reperfusion, and preconditioning

    Ischemic preconditioning is an endogenous cardioprotective mechanism in which short periods of ischemia and reperfusion provide protection against a subsequent ischemic event. Early mechanistic studies showed ATP-sensitive potassium (KATP) channels to play an important role in ischemic preconditioning. KATP channels link intracellular energy metabolism to membrane excitability and contractility. It is thought that KATP channels provide a cardioprotective role during ischemia by inducing action potential shortening, reducing excessive Ca^2+ influx, and preventing arrhythmias. However, the mechanisms by which KATP channels protect during ischemic preconditioning are not known. In this study, we investigated a novel potential mechanism in which alterations in subcellular KATP channel trafficking during ischemia and ischemic preconditioning may result in altered surface channel density and, therefore, a greater degree of cardioprotection. In optimizing our experiments, we compared various antibodies for their specificity and sensitivity in detecting channel subunits by immunoblotting. In addition, we examined the effects of varying salt concentrations during tissue homogenization to determine the optimal conditions for protein isolation, and we examined the effect of heating the samples prior to SDS-PAGE on the detection of channel proteins by immunoblotting. The subcellular trafficking of some membrane proteins is altered by ischemia; for example, the glucose transporter Glut4 translocates from endosomal compartments to the sarcolemma (Sun, Nguyen, DeGrado, Schwaiger, & Brosius, 1994). Conflicting data exist regarding the effects of ischemia on KATP channel subcellular trafficking and the regulation of KATP channel surface density (Edwards et al., 2009; Bao, Hadjiolova, Coetzee, & Rindler, 2011). We therefore sought to test the hypothesis that KATP channels are internalized from the surface of cardiomyocytes to endosomal compartments during ischemia, and that this internalization can be reduced and/or reversed by ischemic preconditioning. We subjected isolated Langendorff-perfused mouse hearts to ischemia, reperfusion, or ischemic preconditioning and measured the density of KATP channels in the sarcolemmal and endosomal compartments. We also determined the degree of injury by staining heart slices with triphenyltetrazolium chloride and compared infarct sizes between hearts subjected to ischemia and ischemic preconditioning. Our data demonstrate that KATP channels are indeed internalized during ischemia and that reperfusion leads to a slow recovery of surface KATP channel density. Interestingly, ischemic preconditioning reduced the size of infarcts induced by ischemia and also prevented the ischemia-induced decrease in KATP channel surface density, thereby contributing to cardioprotection.