65 research outputs found
Characterizing Interstate Crash Rates Based on Traffic Congestion Using Probe Vehicle Data
Crash reduction factors are widely used by engineers for prioritizing safety investments. Work zones are routinely analyzed by the length and duration of queues. Queue detection warning technology has been growing in availability and reliability in recent years. However, there is sparse literature on the impact of freeway queueing on crash rates. This paper analyzes three years of crash data and crowdsourced probe vehicle data to classify crashes as being associated with queueing conditions or free-flow conditions. In 2014, only 1.2% of the distance-weighted hours of operation of Indiana interstates operated at or under 45 mph. A three-year study on Indiana interstates indicates that commercial vehicles were involved in over 87% of back-of-queue fatal crashes, compared with 39% of all fatal crashes during free-flow conditions. A new measure of crash rate was developed to account for the presence and duration of queues: crashes per mile-hour of congestion. The congested crash rate on all Indiana interstates in 2014 was found to be 24 times greater than the uncongested crash rate. When the data were separated into rural and urban categories, the congested crash rate was 23 times the uncongested rate in rural areas and 21 times the uncongested rate in urban areas. Queues were found to be present for five minutes or longer prior to approximately 90% of congestion crashes in 2014. In the longer term, these findings underscore the importance of developing technology that can warn motorists of traffic queues.
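The exposure-normalized rate described above can be sketched as a simple computation. All numbers below are hypothetical illustrations, not figures from the paper; only the ~24x ratio is chosen to match the order of magnitude reported.

```python
# Illustrative sketch of "crashes per mile-hour of congestion".
# All inputs are made-up example values, not data from the study.

def crash_rate(crashes, mile_hours):
    """Crashes per mile-hour of exposure (congested or uncongested)."""
    return crashes / mile_hours

# Hypothetical annual totals for a freeway network:
congested_crashes = 300
congested_mile_hours = 5_000        # miles of queue x hours queued
uncongested_crashes = 2_000
uncongested_mile_hours = 800_000    # remaining mile-hours of operation

congested_rate = crash_rate(congested_crashes, congested_mile_hours)
uncongested_rate = crash_rate(uncongested_crashes, uncongested_mile_hours)
ratio = congested_rate / uncongested_rate
print(f"congested/uncongested crash-rate ratio: {ratio:.0f}x")  # 24x
```

Normalizing by mile-hours of congestion rather than vehicle-miles traveled is what lets the metric isolate the risk attributable to the presence and duration of queues.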
Performance Ranking of Arterial Corridors Using Travel Time and Travel Time Reliability Metrics
Performance measures are important for managing transportation systems and demonstrating accountability. This session presents a scalable methodology for analyzing arterial travel times, taking into account both the central tendency of the travel time and its reliability. Findings will be presented from a pilot analysis that was carried out for 28 arterials comprising a total of 341 signalized intersections across the state of Indiana.
Evaluation of Mobile Advanced Road Weather Information Sensor (MARWIS) by Lufft for Indiana Winter Road Operations
The students of the Field Infrastructure Assessment course evaluated the Mobile Advanced Road Weather Information Sensor by Lufft on behalf of the Indiana Department of Transportation. The device is mounted on a vehicle, takes roadway condition measurements 100 times per second, and aggregates the measurements into one-second intervals. The data are transmitted in real time to the cloud. The class specifically evaluated the following measurements: road condition, surface temperature, and friction. It was found that the MARWIS temperature reading was systematically 4°F lower than the probe readings. Skid test empirical results were consistent with the MARWIS friction readings. It was also found, using two devices on the back of the test vehicle, that the friction in the wheel track was greater than the friction in the center of the lane, as expected. Lastly, the "chemically wet" road condition was not consistently observed by MARWIS where expected.
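The 100 Hz-to-one-second aggregation described above can be sketched as a block average. The function name and sample values are illustrative assumptions; the actual MARWIS aggregation scheme may differ (e.g., it may use something other than a mean).

```python
# Minimal sketch of aggregating 100 Hz raw readings into one-second
# intervals by block averaging. Data values are made up.

def aggregate_to_seconds(samples, rate_hz=100):
    """Average consecutive blocks of `rate_hz` samples into 1 s means."""
    return [
        sum(samples[i:i + rate_hz]) / rate_hz
        for i in range(0, len(samples) - rate_hz + 1, rate_hz)
    ]

raw = [30.0] * 100 + [32.0] * 100   # two seconds of surface temperatures
print(aggregate_to_seconds(raw))    # [30.0, 32.0]
```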
Identifying Effects and Applications of Fixed and Variable Speed Limits
In Indiana, distracted driving and unexpected queues have led to an increase in the number of back-of-queue crashes, particularly on the approach to work zones. This report presents new strategies for the assessment of both transportation safety and traffic operations using crowdsourced probe vehicle data and a speed laser vehicle re-identification scheme. This report concludes by recommending strategies for the placement of variable speed limits (VSL) adjacent to work zones and suggestions for future research.
Research priorities to address the global burden of chronic obstructive pulmonary disease (COPD) in the next decade
Background: The global prevalence of chronic obstructive pulmonary disease (COPD) has increased markedly in recent decades. Given the scarcity of resources available to address global health challenges and the relative underinvestment in respiratory medicine, it is important to define research priorities for COPD globally. In this paper, we aim to identify a ranked set of COPD research priorities that need to be addressed in the next 10 years to substantially reduce the global impact of COPD. Methods: We adapted the Child Health and Nutrition Research Initiative (CHNRI) methodology to identify global COPD research priorities. Results: 62 experts contributed 230 research ideas, which were scored by 34 researchers according to six pre-defined criteria: answerability, effectiveness, feasibility, deliverability, burden reduction, and equity. The top-ranked research priority was the need for new effective strategies to support smoking cessation. Of the top 20 overall research priorities, six were focused on feasible and cost-effective pulmonary rehabilitation delivery and access, particularly in primary/community care and low-resource settings. Three of the top 10 overall priorities called for research on improved screening and accurate diagnostic methods for COPD in low-resource primary care settings. Further ideas that drew support involved a better understanding of risk factors for COPD, development of effective training programmes for health workers and physicians in low-resource settings, and evaluation of novel interventions to encourage physical activity. Conclusions: The experts agreed that the most pressing feasible research questions to address in the next decade for COPD reduction concerned prevention, diagnosis, and rehabilitation of COPD, especially in low-resource settings. The largest gains should be expected in low- and middle-income country (LMIC) settings, where the large majority of COPD deaths occur. Research priorities identified by this systematic international process should inform and motivate policymakers, funders, and researchers to support and conduct research to reduce the global burden of COPD.
Effect of Hydrocortisone on Mortality and Organ Support in Patients With Severe COVID-19: The REMAP-CAP COVID-19 Corticosteroid Domain Randomized Clinical Trial.
Importance: Evidence regarding corticosteroid use for severe coronavirus disease 2019 (COVID-19) is limited. Objective: To determine whether hydrocortisone improves outcome for patients with severe COVID-19. Design, Setting, and Participants: An ongoing adaptive platform trial testing multiple interventions within multiple therapeutic domains, for example, antiviral agents, corticosteroids, or immunoglobulin. Between March 9 and June 17, 2020, 614 adult patients with suspected or confirmed COVID-19 were enrolled and randomized within at least 1 domain following admission to an intensive care unit (ICU) for respiratory or cardiovascular organ support at 121 sites in 8 countries. Of these, 403 were randomized to open-label interventions within the corticosteroid domain. The domain was halted after results from another trial were released. Follow-up ended August 12, 2020. Interventions: The corticosteroid domain randomized participants to a fixed 7-day course of intravenous hydrocortisone (50 mg or 100 mg every 6 hours) (n = 143), a shock-dependent course (50 mg every 6 hours when shock was clinically evident) (n = 152), or no hydrocortisone (n = 108). Main Outcomes and Measures: The primary end point was organ support-free days (days alive and free of ICU-based respiratory or cardiovascular support) within 21 days, where patients who died were assigned -1 day. The primary analysis was a bayesian cumulative logistic model that included all patients enrolled with severe COVID-19, adjusting for age, sex, site, region, time, assignment to interventions within other domains, and domain and intervention eligibility. Superiority was defined as the posterior probability of an odds ratio greater than 1 (threshold for trial conclusion of superiority >99%).
Results: After excluding 19 participants who withdrew consent, there were 384 patients (mean age, 60 years; 29% female) randomized to the fixed-dose (n = 137), shock-dependent (n = 146), and no (n = 101) hydrocortisone groups; 379 (99%) completed the study and were included in the analysis. The mean age for the 3 groups ranged between 59.5 and 60.4 years; most patients were male (range, 70.6%-71.5%); mean body mass index ranged between 29.7 and 30.9; and patients receiving mechanical ventilation ranged between 50.0% and 63.5%. For the fixed-dose, shock-dependent, and no hydrocortisone groups, respectively, the median organ support-free days were 0 (IQR, -1 to 15), 0 (IQR, -1 to 13), and 0 (IQR, -1 to 11) days (composed of 30%, 26%, and 33% mortality rates and 11.5, 9.5, and 6 median organ support-free days among survivors). The median adjusted odds ratio and bayesian probability of superiority were 1.43 (95% credible interval, 0.91-2.27) and 93% for fixed-dose hydrocortisone, respectively, and were 1.22 (95% credible interval, 0.76-1.94) and 80% for shock-dependent hydrocortisone compared with no hydrocortisone. Serious adverse events were reported in 4 (3%), 5 (3%), and 1 (1%) patients in the fixed-dose, shock-dependent, and no hydrocortisone groups, respectively. Conclusions and Relevance: Among patients with severe COVID-19, treatment with a 7-day fixed-dose course of hydrocortisone or shock-dependent dosing of hydrocortisone, compared with no hydrocortisone, resulted in 93% and 80% probabilities of superiority with regard to the odds of improvement in organ support-free days within 21 days. However, the trial was stopped early and no treatment strategy met prespecified criteria for statistical superiority, precluding definitive conclusions. Trial Registration: ClinicalTrials.gov Identifier: NCT02735707
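The organ support-free days endpoint used in both trials above can be sketched per patient. This is a simplified reading of the definition in the abstract (death within the horizon scores -1; survivors score the days free of support within 21 days); the function name and the assumption that survivors remain alive through the full horizon are my own simplifications.

```python
# Sketch of the organ support-free days (OSFD) composite endpoint:
# days alive and free of ICU-based respiratory/cardiovascular support
# within 21 days, with patients who died assigned -1.
# Simplifying assumption: survivors are alive for the full 21 days.

def organ_support_free_days(died, support_days, horizon=21):
    """Return the OSFD score for one patient."""
    if died:
        return -1
    return horizon - min(support_days, horizon)

print(organ_support_free_days(died=True, support_days=5))    # -1
print(organ_support_free_days(died=False, support_days=10))  # 11
print(organ_support_free_days(died=False, support_days=0))   # 21
```

Assigning -1 to deaths places mortality below every surviving outcome on a single ordinal scale, which is why the primary analyses model the endpoint with a cumulative logistic (ordinal) regression rather than a linear one.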
Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19
IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19.
Objective To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19.
DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non-critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022).
INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days.
MAIN OUTCOMES AND MEASURES The primary outcome was organ support-free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes.
RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support-free days among critically ill patients was 10 (-1 to 16) in the ACE inhibitor group (n = 231), 8 (-1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support-free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively).
CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes.
TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT02735707
- …