66 research outputs found

    Impact of data aggregation approaches on the relationships between operating speed and traffic safety

    The impact of operating speed on traffic crash occurrence has been a controversial topic in the traffic safety discipline, as some studies reported a positive association whereas others indicated a negative relationship between speed and crashes. Two major issues thought to account for such conflicting findings are the application of inappropriate statistical methods and the use of sample datasets with varying levels of aggregation. The main objective of this study is therefore to investigate the impacts of data aggregation schemes on the relationships between operating speed and traffic safety. Three aggregation approaches were examined: (1) a segment-based dataset in which crashes are grouped by roadway segment, (2) a scenario-based dataset in which crashes are aggregated by traffic operating scenario, and (3) a disaggregated crash-level dataset consisting of information from individual crashes. The first two aggregation approaches were used to examine the relationships between operating speed and crash frequency using Bayesian random-effects negative binomial models. The third, disaggregated crash risk analysis was conducted using Bayesian random-effects logistic regression models. From the modeling results, it was concluded that the scenario-based approach shared its findings with the disaggregated crash risk analysis: both identified a U-shaped relationship between operating speed and crash occurrence. The commonly adopted segment-based aggregation approach, however, revealed a monotonic negative relationship between speed and crash frequency. The implications of the different analysis results and their potential applications to speed management systems are discussed.
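    The three aggregation schemes can be sketched with a toy example; the field names and values below are illustrative placeholders, not from the paper's dataset:

```python
from collections import defaultdict

# Hypothetical crash records; field names and values are illustrative only.
crashes = [
    {"segment": 1, "scenario": "free-flow", "speed": 92},
    {"segment": 1, "scenario": "congested", "speed": 41},
    {"segment": 2, "scenario": "free-flow", "speed": 88},
    {"segment": 2, "scenario": "congested", "speed": 37},
    {"segment": 2, "scenario": "congested", "speed": 45},
    {"segment": 3, "scenario": "free-flow", "speed": 95},
]

def aggregate(records, key):
    """Group crash records by `key` and summarize count and mean speed."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key]].append(rec["speed"])
    return {k: {"crashes": len(v), "mean_speed": sum(v) / len(v)}
            for k, v in groups.items()}

by_segment = aggregate(crashes, "segment")    # (1) segment-based dataset
by_scenario = aggregate(crashes, "scenario")  # (2) scenario-based dataset
# (3) disaggregated dataset: `crashes` itself, one record per crash
```

    The same raw records thus yield three datasets of very different granularity, which is why the fitted speed-safety relationship can differ across them.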

    How to determine an optimal threshold to classify real-time crash-prone traffic conditions?

    One proactive approach to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is one of the essential steps of real-time crash prediction: once a crash risk evaluation model has produced the probability of a crash occurring given a specific traffic condition, the threshold provides the cut-off point on that posterior probability that separates potential crash warnings from normal traffic conditions. There is, however, a dearth of research on how to effectively determine an optimal threshold; the few studies that chose a threshold did so with subjective methods, and only when discussing the predictive performance of their models. Subjective methods cannot automatically identify the optimal thresholds under different traffic and weather conditions in real applications, so a theoretical method for selecting the threshold value is needed to avoid subjective judgments. The purpose of this study is to provide such a theoretical method for automatically identifying the optimal threshold. Considering the random effects of variable factors across roadway segments, a mixed logit model was used to develop the crash risk evaluation model and evaluate the crash risk. Cross-entropy, between-class variance, and other theories were investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods against several evaluation criteria. The results indicate that (i) the mixed logit model achieves good performance, and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to those criteria. This method can automatically identify thresholds in crash prediction by minimizing the cross-entropy between the original dataset, with its continuous probabilities of crash occurrence, and the binarized dataset obtained after applying the threshold to separate potential crash warnings from normal traffic conditions.
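    To illustrate the minimum cross-entropy idea, the sketch below applies a Li-style minimum cross-entropy thresholding criterion to a set of predicted crash probabilities. This is a stand-in formulation under our own assumptions, not the paper's exact estimator: each side of the candidate threshold is represented by its group mean, and the threshold minimizing the resulting cross-entropy is selected.

```python
import math

def cross_entropy(probs, t):
    """Li-style cross-entropy between the continuous probabilities and a
    two-level (binarized) representation split at threshold `t`."""
    lo = [p for p in probs if p < t]
    hi = [p for p in probs if p >= t]
    if not lo or not hi:          # degenerate split: no separation at all
        return float("inf")
    m_lo, m_hi = sum(lo) / len(lo), sum(hi) / len(hi)
    return (sum(p * math.log(p / m_lo) for p in lo)
            + sum(p * math.log(p / m_hi) for p in hi))

def optimal_threshold(probs, candidates):
    """Pick the candidate threshold minimizing the cross-entropy."""
    return min(candidates, key=lambda t: cross_entropy(probs, t))

# Hypothetical model outputs: low-risk and high-risk traffic conditions.
probs = [0.05, 0.08, 0.10, 0.70, 0.80, 0.90]
best = optimal_threshold(probs, [0.2, 0.4, 0.6, 0.8])
```

    A threshold falling between the two clusters makes each group tight around its mean, driving the cross-entropy toward zero, which is why it is selected over a cut that splits the high-risk cluster.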

    A Bayesian Tobit quantile regression approach for naturalistic longitudinal driving capability assessment

    No full text
    © 2020 Elsevier Ltd Given the severity of the traffic safety problem, tremendous efforts have been devoted to identifying crash contributing factors so that safety improvement countermeasures can be developed and implemented. Study findings attribute the majority of crash occurrences to driving behaviors, among which inadequate driving capability is a key factor. A number of studies have therefore been conducted on techniques for assessing and improving driving capability. However, conventional assessment approaches, such as driving license exams and vehicle insurance quotes, focus only on basic driving skill evaluations or aggregated driving style classifications, and fail to quantify driving capability from a safety perspective with respect to complex driving scenarios. In this study, a novel longitudinal driving capability assessment and ranking approach was developed with naturalistic driving data. Two Responsibility-Sensitive Safety (RSS) based driving capability indicators, covering risk exposure and risk severity, were first proposed. Bayesian Tobit quantile regression (BTQR) models were then introduced to explore the relationships between the driving capability indicators and trip-level characteristics covering travel features, operational conditions, and roadway characteristics. The modeling results indicate that nighttime driving and higher average speed lead to higher longitudinal collision risk and severity. The BTQR models also revealed varying factor significance across quantile levels; for instance, driving duration is only significant at high quantiles of the driving capability indicators, implying that duration only affects drivers with large longitudinal risk exposures and strong close-following tendencies. Furthermore, case studies demonstrated how to deploy the developed model to obtain relative longitudinal driving capability rankings. Finally, model applications to commercial fleet safety management and to comparing autonomous vehicles' longitudinal driving behaviors with those of human drivers are discussed.
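    For context, the RSS longitudinal safety rule (Shalev-Shwartz et al.) defines a minimum safe following gap from which exposure-type indicators can be derived. A sketch of that published formula follows; the default parameter values are illustrative assumptions, not the paper's calibration:

```python
def rss_min_gap(v_rear, v_front, rho=1.0, a_max=2.0, b_min=4.0, b_max=8.0):
    """RSS minimum safe longitudinal gap in meters.
    v_rear, v_front: speeds of following/leading vehicle (m/s);
    rho: response time (s); a_max: max acceleration of the follower;
    b_min: follower's minimum (comfortable) braking; b_max: leader's
    maximum braking (all m/s^2). Defaults are illustrative only."""
    v_resp = v_rear + rho * a_max  # follower's speed after the response time
    gap = (v_rear * rho                      # distance covered while reacting
           + 0.5 * a_max * rho ** 2          # plus worst-case acceleration
           + v_resp ** 2 / (2 * b_min)       # follower's braking distance
           - v_front ** 2 / (2 * b_max))     # minus leader's braking distance
    return max(0.0, gap)                     # a negative gap is clamped to 0
```

    Comparing the actual following gap against `rss_min_gap` over a trip gives a natural measure of how often, and by how much, a driver accepts unsafe longitudinal spacing.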

    A marginalized random effects hurdle negative binomial model for analyzing refined-scale crash frequency data

    No full text
    Crash frequency prediction models have been an important subject of safety research, unveiling relationships between crash occurrences and their influencing factors. Recently, hourly refined-scale crash frequency analysis has become attractive since it allows time-varying explanatory information (e.g. traffic volume and operating speed) to be introduced. However, crash frequency data with short time intervals raise the analytical issues of excessive zeros and unobserved heterogeneity. In this study, a marginalized random effects hurdle negative binomial (MREHNB) model was developed, in which the hurdle modelling structure handles the excessive zeros and site-specific random effect terms capture the factors associated with unobserved heterogeneity. Moreover, the marginalized inference approach was introduced here for the first time to obtain marginal mean inference for the overall population rather than subject-specific estimates. Empirical analyses were conducted on data from the Shanghai urban expressway system, and the MREHNB model was compared with the HNB (hurdle negative binomial) and REHNB (random effects hurdle negative binomial) models. In terms of goodness-of-fit, the REHNB and MREHNB models showed substantial improvement over the HNB model, while there was no distinct difference between the REHNB and MREHNB models. As for the estimated parameters, however, the MREHNB model provided better inference precision. Furthermore, the MREHNB model yielded interesting findings on crash contributing factors: for example, higher ratios of local vehicles within the traffic volume increase the probability of crash occurrence, and a non-linear relationship was found between traffic volume and crash frequency, with moderate volume levels holding the highest crash occurrence probability. Finally, in-depth analyses of the modeling results and the modeling technique are discussed.
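    The hurdle structure can be illustrated with a minimal probability mass function: a point mass at zero, plus a zero-truncated negative binomial for the positive counts. The parameterization below (size `r`, success probability `p`) is chosen for illustration and is not necessarily the paper's:

```python
from math import lgamma, exp, log

def nb_pmf(y, r, p):
    """Negative binomial pmf P(Y = y) with size r and success prob p,
    computed on the log scale for numerical stability."""
    return exp(lgamma(y + r) - lgamma(r) - lgamma(y + 1)
               + r * log(p) + y * log(1.0 - p))

def hurdle_nb_pmf(y, pi0, r, p):
    """Hurdle NB: P(0) = pi0 is modeled separately (the 'hurdle'), and
    positive counts follow the NB truncated to exclude zero."""
    if y == 0:
        return pi0
    return (1.0 - pi0) * nb_pmf(y, r, p) / (1.0 - nb_pmf(0, r, p))
```

    Because the zero probability is a free parameter, the hurdle model can match any observed share of zero-crash hours while the truncated NB handles overdispersion in the positive counts.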

    First reported case of ANCA-associated vasculitis induced by oxaliplatin, capecitabine, and trastuzumab

    No full text
    A 68-year-old male undergoing XELOX plus trastuzumab therapy for gastric cancer developed proteinuria, hematuria, and a progressive increase in creatinine after 3 months. Subsequently, the patient also experienced hemoptysis and nasal bleeding. Chest CT examination showed pulmonary hemorrhage, and MRI of the nasopharynx ruled out nasopharyngeal cancer recurrence. MPO and PR3 were elevated, and renal biopsy confirmed ANCA-associated vasculitis affecting the lungs, kidneys, and nasopharynx. Based on a review of the patient's medical history and medication, the ANCA-associated vasculitis is believed to have been caused by the XELOX plus trastuzumab chemotherapy, although it is difficult to confirm which specific drug was responsible. After the XELOX plus trastuzumab chemotherapy was stopped and glucocorticoids and cyclophosphamide were given, the patient's pulmonary hemorrhage and nasal bleeding stopped, the lung lesions were absorbed, and renal function improved. The patient later developed another pulmonary infection, and tNGS indicated Legionella pneumophila and pulmonary tuberculosis infection. Despite anti-infection treatment, the steroid dose was rapidly reduced. Ultimately, the patient gave up treatment and died.

    The mechanism for the ligand-independent basal activity of PAC1 dimers (A) and the ligand-dependent activation of PAC1 (B).

    No full text
    (A) In the ligand-free situation, disturbance of the plasma membrane induces endocytosis of PAC1 dimers, which triggers the basal activity of PAC1 dimers through the Wnt/β-catenin signaling pathway to protect the cells against apoptosis. (B) In the ligand-dependent manner, binding of PAC1 ligands disrupts the dimerization of PAC1 and induces internalization of PAC1 monomers, which inhibits the basal activity of PAC1 dimers.

    The ligand independent activity of PAC1 and M-PAC1 against serum withdrawal induced apoptosis.

    No full text
    (A) The remaining cell viabilities of PAC1-CHO, M-PAC1-CHO and pcDNA-CHO cells 48 h after serum withdrawal. When the data were plotted as a percentage of the initial cell viability without serum withdrawal, PAC1-CHO showed a remaining cell viability (57.34±5.91%) significantly higher than that of M-PAC1-CHO (36.96±6.85%) or pcDNA-CHO (37.89±7.11%) (*, P<0.01, PAC1-CHO vs. pcDNA-CHO and M-PAC1-CHO). (B) The intracellular caspase-3 activities after serum withdrawal. The responses of pcDNA-CHO were considered not to result from PAC1 because pcDNA-CHO did not express PAC1 or PACAP; therefore, all data were plotted as fold changes relative to pcDNA-CHO. PAC1-CHO had significantly lower caspase-3 activity than M-PAC1-CHO or pcDNA-CHO (*, P<0.01, PAC1-CHO vs. pcDNA-CHO and M-PAC1-CHO), whereas there was no significant difference between M-PAC1-CHO and pcDNA-CHO. (C) The intracellular Bcl-2 levels after serum withdrawal. When the data were plotted as fold changes relative to pcDNA-CHO, PAC1-CHO showed a Bcl-2 level about 2-fold that of M-PAC1-CHO or pcDNA-CHO (*, P<0.01, PAC1-CHO vs. pcDNA-CHO and M-PAC1-CHO). (D) Detection of β-catenin, cyclin D1 and c-myc levels in PAC1-CHO, M-PAC1-CHO and pcDNA-CHO cells by western blotting. The western blotting results and statistical analysis showed that the levels of β-catenin, cyclin D1 and c-myc (two targets of β-catenin) in PAC1-CHO cells were significantly higher than those in M-PAC1-CHO or pcDNA-CHO cells (*, P<0.01, PAC1-CHO vs. pcDNA-CHO and M-PAC1-CHO). These findings indicate that overexpression of wild-type PAC1 endowed CHO cells with anti-apoptotic activity against serum withdrawal, suggesting that PAC1 has ligand-independent basal activity whereas M-PAC1 does not, and that Wnt/β-catenin signaling is involved in the anti-apoptotic activity of PAC1-CHO. The data are presented as the means ± S.E. of three independent experiments.