
    Exploring differences in injury severity between occupant groups involved in fatal rear-end crashes: A correlated random parameter logit model with mean heterogeneity

    Rear-end crashes are among the most common crash types, and passenger cars involved in them frequently sustain severe outcomes. However, no study has investigated how injury severity differs between occupant groups when cars are involved as the following and leading vehicles in rear-end crashes. This investigation therefore compares the key factors affecting injury severity between the front-car and rear-car occupant groups in rear-end crashes. First, data are extracted from the Fatality Analysis Reporting System (FARS) for two types of rear-end crashes from 2017 to 2019, with passenger cars as the rear-end (striking) and rear-ended (struck) vehicles. A likelihood ratio test reveals a significant injury severity difference between the front-car and rear-car occupant groups. The front-car and rear-car occupant groups are then modeled with the correlated random parameter logit model with heterogeneity in means (CRPLHM) and the random parameter logit model with heterogeneity in means (RPLHM), respectively. The significant factors include occupant position, driver age, overturn, and vehicle type. For instance, the driving and front-right positions significantly increase the probability of severe injury when the car is struck by another vehicle, and large-truck-strikes-car crashes tend to cause more severe outcomes than car-strikes-large-truck crashes. This study provides insight into the mechanisms of occupant injury severity in rear-end crashes and proposes effective countermeasures to mitigate crash severity, such as implementing stricter seat belt laws, improving streetlight coverage, and strengthening car drivers' emergency response abilities.
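
    Full CRPLHM/RPLHM estimation is involved, but the core idea above — a logit coefficient that varies randomly across occupants, with its mean shifted by another covariate ("heterogeneity in means") — can be sketched with a simulated log-likelihood. All variable names and toy data below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_ll(beta_mean, beta_sd, theta, x, z, y, n_draws=200):
    """Simulated log-likelihood for a binary mixed logit with one random
    coefficient whose mean shifts with a covariate z (heterogeneity in means).
    beta_i = beta_mean + theta * z_i + beta_sd * draw, draws ~ N(0, 1).
    """
    draws = rng.standard_normal(n_draws)                # Halton draws in practice
    beta_i = beta_mean + theta * z[:, None] + beta_sd * draws[None, :]
    p_sev = 1.0 / (1.0 + np.exp(-beta_i * x[:, None]))  # P(severe | draw)
    p_obs = np.where(y[:, None] == 1, p_sev, 1.0 - p_sev)
    return np.log(p_obs.mean(axis=1)).sum()             # average over draws, then log

# toy data: x = seating-position indicator, z = standardized driver age
n = 500
x = rng.integers(0, 2, n).astype(float)
z = rng.standard_normal(n)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-0.8 * x))).astype(int)
ll = simulated_ll(beta_mean=0.8, beta_sd=0.5, theta=0.2, x=x, z=z, y=y)
```

    In practice this function would be maximized over (beta_mean, beta_sd, theta), and a correlated specification would additionally estimate a Cholesky factor linking the random parameters.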

    Investigating the spatial heterogeneity of factors influencing speeding-related crash severities using correlated random parameter order models with heterogeneity-in-means

    Speeding has been acknowledged as a critical determinant of crash risk and the resulting injury severities. This paper demonstrates that severe speeding-related crashes within the state of Pennsylvania show a spatial clustering trend, and four crash datasets are extracted from four hotspot districts. Two log-likelihood ratio (LR) tests were conducted to determine whether speeding-related crashes classified by hotspot district should be modeled separately; the results suggest that separate modeling is necessary. To capture unobserved heterogeneity, four correlated random parameter order models with heterogeneity in means are employed to explore the factors contributing to the severity of crashes involving at least one speeding vehicle. Overall, the findings show that several indicators exhibit spatial instability, including hit-pedestrian crashes, head-on crashes, speed limits, work zones, dark lighting conditions, rural areas, older drivers, running stop signs, and running red lights. By contrast, drunk driving, exceeding the speed limit, and being unbelted are relatively stable spatially across the four district models. This paper provides insights into preventing speeding-related crashes and can facilitate the development of corresponding crash injury mitigation policies.
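
    The LR transferability check described above can be sketched as follows. The log-likelihood values and parameter counts are hypothetical, and in practice the statistic is compared against a chi-square critical value at the stated degrees of freedom:

```python
def lr_test_transferability(ll_pooled, ll_separate, k_per_model):
    """LR test for whether crash subsets should be modeled separately.
    LR = -2 * [LL(pooled) - sum of LL(separate)], chi-square distributed,
    with df = extra parameters used by the separate models (assuming each
    model, pooled or separate, estimates k_per_model parameters).
    """
    lr = -2.0 * (ll_pooled - sum(ll_separate))
    df = k_per_model * len(ll_separate) - k_per_model
    return lr, df

# hypothetical log-likelihoods: one pooled model vs. four district models
lr, df = lr_test_transferability(
    ll_pooled=-4210.5,
    ll_separate=[-1050.2, -1033.7, -1041.9, -1060.1],
    k_per_model=12,
)
# lr is then compared against the chi-square critical value at df
```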

    EUCLIA - Exploring the UV/optical continuum lag in active galactic nuclei. I. a model without light echoing

    The tight inter-band correlation and the lag-wavelength relation among the UV/optical continua of active galactic nuclei have been firmly established. They are usually understood within the widespread reprocessing scenario; however, the implied inter-band lags are generally too small. The scenario is also challenged by new evidence: X-ray reprocessing yields too much high-frequency UV/optical variation and fails to reproduce the observed timescale-dependent color variations among the {\it Swift} lightcurves of NGC 5548. In a different manner, we demonstrate that an upgraded inhomogeneous accretion disk model, whose local {\it independent} temperature fluctuations are subject to a speculated {\it common} large-scale temperature fluctuation, can intrinsically generate the tight inter-band correlation and lag across the UV/optical, in good agreement with several observational properties of NGC 5548, including the timescale-dependent color variation. The emergent lag results from the {\it differential regression capability} of local temperature fluctuations when responding to the large-scale fluctuation. An average propagation speed as large as $\gtrsim 15\%$ of the speed of light may be required for this common fluctuation. Several potential physical mechanisms for such propagations are discussed. Our phenomenological scenario may shed new light on understanding the UV/optical continuum variations of active galactic nuclei.
    Comment: 18 pages, 8 figures. ApJ accepted. Further comments are very welcome
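
    The lag mechanism — local fluctuations with different regression timescales responding to one slow common fluctuation — can be illustrated with a toy first-order (AR(1)/OU-like) response model. Everything here (the timescales, the sinusoidal common signal, the lag estimator) is an illustrative assumption, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1.0, 20000
t = np.arange(n) * dt
common = np.sin(2 * np.pi * t / 2000.0)      # slow common large-scale fluctuation

def respond(tau, noise=0.0):
    """Local temperature fluctuation relaxing toward the common fluctuation
    with regression timescale tau (a discrete first-order response)."""
    T = np.zeros(n)
    for i in range(1, n):
        T[i] = T[i-1] + (common[i-1] - T[i-1]) * dt / tau + noise * rng.standard_normal()
    return T

fast = respond(tau=20.0)    # zone with strong regression capability
slow = respond(tau=200.0)   # zone with weak regression capability

def peak_lag(a, b, max_lag=600):
    """Lag (in steps) at which corr(a[t], b[t+k]) peaks; positive means b trails a."""
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.corrcoef(a[max_lag:-max_lag], b[max_lag + k : n - max_lag + k])[0, 1]
          for k in lags]
    return int(lags[int(np.argmax(cc))])

lag = peak_lag(fast, slow)  # the slower-responding series emerges lagged
```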

    Cross Aggregation Transformer for Image Restoration

    Recently, the Transformer architecture has been introduced into image restoration to replace the convolutional neural network (CNN), with surprising results. Given the high computational complexity of Transformers with global attention, some methods use a local square window to limit the scope of self-attention. However, these methods lack direct interaction among different windows, which limits the establishment of long-range dependencies. To address this issue, we propose a new image restoration model, the Cross Aggregation Transformer (CAT). The core of our CAT is Rectangle-Window Self-Attention (Rwin-SA), which applies horizontal and vertical rectangle window attention in different heads in parallel to expand the attention area and aggregate features across different windows. We also introduce the Axial-Shift operation for interaction between different windows. Furthermore, we propose the Locality Complementary Module to complement the self-attention mechanism, which incorporates the inductive biases of CNNs (e.g., translation invariance and locality) into the Transformer, enabling global-local coupling. Extensive experiments demonstrate that our CAT outperforms recent state-of-the-art methods on several image restoration applications. The code and models are available at https://github.com/zhengchen1999/CAT.
    Comment: Accepted to NeurIPS 2022. Code is available at https://github.com/zhengchen1999/CA
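
    The rectangle-window idea can be sketched as a partition step: the same feature map is split into wide windows for some heads and tall windows for others, so the two head groups see different attention areas. This NumPy sketch shows only the window partition, not the attention itself, and the shapes are illustrative:

```python
import numpy as np

def rect_window_partition(x, wh, ww):
    """Split a feature map (H, W, C) into non-overlapping wh x ww rectangle
    windows, returning (num_windows, wh*ww, C) token groups."""
    H, W, C = x.shape
    assert H % wh == 0 and W % ww == 0
    x = x.reshape(H // wh, wh, W // ww, ww, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, wh * ww, C)

feat = np.arange(8 * 8 * 3, dtype=float).reshape(8, 8, 3)
horizontal = rect_window_partition(feat, wh=2, ww=8)  # wide windows for one head group
vertical   = rect_window_partition(feat, wh=8, ww=2)  # tall windows for another group
```

    Self-attention would then run independently inside each window; shifting the partition grid along one axis (the Axial-Shift operation) lets tokens cross window boundaries between consecutive blocks.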

    An intrinsic link between long-term UV/optical variations and X-ray loudness in quasars

    Observations have shown that the UV/optical variation amplitude of quasars depends on several physical parameters, including luminosity, Eddington ratio, and likely also black hole mass. Identifying new factors that correlate with the variation is essential to probe the underlying physical processes. Combining ~ten-year-long quasar light curves from SDSS Stripe 82 with X-ray data from Stripe 82X, we build a sample of X-ray detected quasars to investigate the relation between UV/optical variation amplitude ($\sigma_{rms}$) and X-ray loudness. We find that quasars with more intense X-ray radiation (compared to the bolometric luminosity) are more variable in the UV/optical. This correlation remains highly significant after excluding the effects of other parameters, including luminosity, black hole mass, Eddington ratio, redshift, and rest-frame wavelength (i.e., through partial correlation analyses). We further find that the intrinsic link between X-ray loudness and UV/optical variation grows gradually more prominent on longer timescales (up to 10 years in the observed frame) but tends to disappear at timescales < 100 days. This suggests a slow, long-term underlying physical process. The X-ray reprocessing paradigm, in which UV/optical variation is produced by a variable central X-ray emission illuminating the accretion disk, is thus disfavored. The discovery points to an interesting scheme in which both the X-ray corona heating and the UV/optical variation in quasars are closely associated with magnetic disc turbulence, and the innermost disc turbulence (where corona heating occurs) correlates with the slower turbulence at larger radii (where the UV/optical emission is produced).
    Comment: 9 pages, 4 figures, 1 table, accepted by Ap
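
    A partial correlation analysis of the kind mentioned above removes a control variable (e.g., luminosity) from both quantities before correlating the residuals. The toy data below is synthetic and only illustrates why a raw correlation can vanish once a shared driver is regressed out:

```python
import numpy as np

def partial_corr(x, y, controls):
    """Correlation between x and y after regressing the control variables
    out of both via least squares (with an intercept)."""
    Z = np.column_stack([np.ones(len(x))] + list(controls))
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(2)
lum = rng.standard_normal(1000)                      # stand-in for luminosity
xray = 0.8 * lum + 0.5 * rng.standard_normal(1000)   # toy X-ray loudness
sigma = 0.8 * lum + 0.5 * rng.standard_normal(1000)  # toy variability amplitude

r_raw = np.corrcoef(xray, sigma)[0, 1]       # large, driven entirely by lum
r_partial = partial_corr(xray, sigma, [lum]) # near zero once lum is removed
```

    The paper's finding is the opposite situation: the X-ray loudness vs. variability correlation survives after controlling for luminosity, mass, Eddington ratio, redshift, and wavelength, indicating an intrinsic link.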

    Hierarchical Integration Diffusion Model for Realistic Image Deblurring

    Diffusion models (DMs) have recently been introduced into image deblurring and have exhibited promising performance, particularly in terms of detail reconstruction. However, a diffusion model requires a large number of inference iterations to recover the clean image from pure Gaussian noise, which consumes massive computational resources. Moreover, the distribution synthesized by the diffusion model is often misaligned with the target results, leading to restrictions on distortion-based metrics. To address these issues, we propose the Hierarchical Integration Diffusion Model (HI-Diff) for realistic image deblurring. Specifically, we perform the DM in a highly compact latent space to generate a prior feature for the deblurring process. The deblurring process itself is implemented by a regression-based method to obtain better distortion accuracy, while the highly compact latent space ensures the efficiency of the DM. Furthermore, we design a hierarchical integration module to fuse the prior into the regression-based model at multiple scales, enabling better generalization in complex blurry scenarios. Comprehensive experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods. Code and trained models are available at https://github.com/zhengchen1999/HI-Diff.
    Comment: Code is available at https://github.com/zhengchen1999/HI-Dif
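
    The multi-scale fusion step can be sketched as a single cross-attention pass in which image features at each scale query a small set of latent prior tokens. This NumPy sketch is an illustrative simplification under assumed shapes, not the HI-Diff implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

def integrate_prior(feat, prior, proj):
    """Fuse a compact latent prior (n_tokens, d) into a feature map
    (H, W, C) via one cross-attention step: features query prior tokens."""
    H, Wd, C = feat.shape
    q = feat.reshape(-1, C)                  # queries from image features
    k = v = prior @ proj                     # project prior tokens to dim C
    attn = q @ k.T / np.sqrt(C)
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)  # softmax over prior tokens
    return (q + attn @ v).reshape(H, Wd, C)  # residual fusion

prior = rng.standard_normal((16, 32))        # 16 latent tokens from the DM
proj = rng.standard_normal((32, 32)) / np.sqrt(32)
scales = [rng.standard_normal((64 // s, 64 // s, 32)) for s in (1, 2, 4)]
fused = [integrate_prior(f, prior, proj) for f in scales]
```

    Because the prior is only 16 tokens here, the attention cost stays small at every scale, which is the point of running the DM in a compact latent space.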

    Image Super-Resolution with Text Prompt Diffusion

    Image super-resolution (SR) methods typically model degradation to improve reconstruction accuracy in complex and unknown degradation scenarios. However, extracting degradation information from low-resolution images is challenging, which limits model performance. To boost image SR performance, one feasible approach is to introduce additional priors. Inspired by advances in multi-modal methods and text-prompt image processing, we introduce text prompts to image SR to provide degradation priors. Specifically, we first design a text-image generation pipeline to integrate text into the SR dataset through a text degradation representation and a degradation model. The text representation discretizes the degradation via a binning method to describe it abstractly; this keeps the text flexible and user-friendly. Meanwhile, we propose PromptSR to realize text-prompt SR. PromptSR utilizes a pre-trained language model (e.g., T5 or CLIP) to enhance restoration. We train the model on the generated text-image dataset. Extensive experiments indicate that introducing text prompts into SR yields excellent results on both synthetic and real-world images. Code is available at: https://github.com/zhengchen1999/PromptSR.
    Comment: Code is available at https://github.com/zhengchen1999/PromptS
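
    The binning-based text representation can be sketched as mapping continuous degradation parameters onto a few discrete labels. The bin edges, labels, and prompt format below are hypothetical, not those used by PromptSR:

```python
def degradation_to_text(blur_sigma, noise_level, downscale):
    """Map continuous degradation parameters to discrete text bins — a
    hypothetical version of a binning-based text representation."""
    def bin_label(value, edges, labels):
        # return the label of the first bin whose upper edge covers value
        for edge, label in zip(edges, labels):
            if value <= edge:
                return label
        return labels[-1]

    blur = bin_label(blur_sigma, [0.5, 2.0], ["no", "light", "heavy"])
    noise = bin_label(noise_level, [5, 20], ["no", "light", "heavy"])
    return f"{blur} blur, {noise} noise, downscale x{downscale}"

prompt = degradation_to_text(blur_sigma=1.2, noise_level=25, downscale=4)
# -> "light blur, heavy noise, downscale x4"
```

    A text encoder (e.g., T5 or CLIP) would then embed such prompts, giving the SR network an abstract but user-editable degradation prior.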

    NeutronOrch: Rethinking Sample-based GNN Training under CPU-GPU Heterogeneous Environments

    Graph Neural Networks (GNNs) have demonstrated outstanding performance in various applications. Existing frameworks utilize CPU-GPU heterogeneous environments to train GNN models and integrate mini-batch and sampling techniques to overcome the GPU memory limitation. In CPU-GPU heterogeneous environments, sample-based GNN training can be divided into three steps: sample, gather, and train. Existing GNN systems use different task-orchestrating methods to place each step on the CPU or GPU. After extensive experiments and analysis, we find that existing task-orchestrating methods fail to fully utilize heterogeneous resources, limited by inefficient CPU processing or GPU resource contention. In this paper, we propose NeutronOrch, a system for sample-based GNN training that incorporates a layer-based task-orchestrating method and ensures balanced utilization of the CPU and GPU. NeutronOrch decouples the training process by layer and pushes the training task of the bottom layer down to the CPU, which significantly reduces the computational load and memory footprint of GPU training. To avoid inefficient CPU processing, NeutronOrch offloads only the training of frequently accessed vertices to the CPU and lets the GPU reuse their embeddings with bounded staleness. Furthermore, NeutronOrch provides a fine-grained pipeline design for the layer-based task-orchestrating method, fully overlapping different tasks on heterogeneous resources while strictly guaranteeing bounded staleness. Experimental results show that, compared with state-of-the-art GNN systems, NeutronOrch achieves up to 11.51x performance speedup.
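
    The bounded-staleness reuse of CPU-computed embeddings can be sketched with a small cache keyed by vertex and tagged with the iteration at which the embedding was produced; the interface below is illustrative, not NeutronOrch's actual API:

```python
class StaleEmbeddingCache:
    """CPU-computed bottom-layer embeddings for hot vertices, reusable on
    the GPU side while no more than `bound` training iterations old."""

    def __init__(self, bound):
        self.bound = bound
        self.store = {}  # vertex -> (iteration_produced, embedding)

    def put(self, vertex, iteration, emb):
        self.store[vertex] = (iteration, emb)

    def get(self, vertex, iteration):
        entry = self.store.get(vertex)
        if entry is None or iteration - entry[0] > self.bound:
            return None  # too stale: recompute instead of reusing
        return entry[1]

cache = StaleEmbeddingCache(bound=3)
cache.put(vertex=7, iteration=10, emb=[0.1, 0.2])
reused = cache.get(7, iteration=12)   # 2 iterations old -> reuse
expired = cache.get(7, iteration=14)  # 4 iterations old -> None, recompute
```

    Only frequently accessed vertices would be cached this way, so the CPU work pays off across many mini-batches while the staleness bound keeps convergence guarantees intact.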