
    Correlating sparse sensing for large-scale traffic speed estimation: A Laplacian-enhanced low-rank tensor kriging approach

    Traffic speed is central to characterizing the fluidity of the road network. Many transportation applications rely on it, such as real-time navigation, dynamic route planning, and congestion management. Rapid advances in sensing and communication techniques make traffic speed detection easier than ever. However, due to the sparse deployment of static sensors or the low penetration of mobile sensors, the detected speeds are incomplete and far from network-wide use. In addition, sensors are prone to errors or missing data for various reasons, so the speeds they report can be highly noisy. These drawbacks call for effective techniques to recover credible estimates from the incomplete data. In this work, we first identify the issue as a spatiotemporal kriging problem and propose a Laplacian-enhanced low-rank tensor completion (LETC) framework featuring both low-rankness and multi-dimensional correlations for large-scale traffic speed kriging under limited observations. To be specific, three types of speed correlation, namely temporal continuity, temporal periodicity, and spatial proximity, are carefully chosen and simultaneously modeled by three different forms of graph Laplacian: the temporal graph Fourier transform, generalized temporal consistency regularization, and diffusion graph regularization. We then design an efficient solution algorithm, using several effective numerical techniques, to scale the proposed model up to network-wide kriging. Through experiments on two public million-scale traffic speed datasets, we find that the proposed LETC achieves state-of-the-art kriging performance even under low observation rates, while saving more than half of the computing time compared with baseline methods. Some insights into spatiotemporal traffic data modeling and kriging at the network level are provided as well.
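
    As a rough illustration of the kind of objective such a framework optimizes, the following is a simplified matrix-form sketch under assumed notation, not the paper's exact tensor formulation:

        \[
        \min_{X}\; \|X\|_{*}
        \;+\; \lambda_{s}\,\mathrm{tr}\!\left(X^{\top} L_{s} X\right)
        \;+\; \lambda_{t}\,\mathrm{tr}\!\left(X L_{t} X^{\top}\right)
        \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(X) = \mathcal{P}_{\Omega}(Y),
        \]

    where $Y$ holds the partially observed speeds, $\Omega$ indexes the observed entries, $L_{s}$ is a spatial (diffusion) graph Laplacian, $L_{t}$ a temporal-consistency Laplacian, and $\lambda_{s}$, $\lambda_{t}$ are illustrative weights; the actual LETC model operates on a third-order tensor and additionally applies a temporal graph Fourier transform.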

    Nexus sine qua non: Essentially connected neural networks for spatial-temporal forecasting of multivariate time series

    Modeling and forecasting multivariate time series not only facilitates the decision making of practitioners, but also deepens our scientific understanding of the underlying dynamical systems. Spatial-temporal graph neural networks (STGNNs) have emerged as powerful predictors and have become the de facto models for learning spatiotemporal representations in recent years. However, existing STGNN architectures tend to be complicated, stacking a series of elaborate layers; the resulting models can be redundant or opaque, which poses great challenges to their complexity and scalability. Such concerns prompt us to re-examine the design of modern STGNNs and identify the core principles that contribute to a powerful and efficient neural predictor. Here we present a compact predictive model that is fully defined by a dense encoder-decoder and a message-passing layer, powered by node identifications, without any complex sequential modules, e.g., TCNs, RNNs, and Transformers. Empirical results demonstrate that a simple and elegant model with the proper inductive bias can compare favorably with elaborately designed state-of-the-art models, while being much more interpretable and computationally efficient for spatial-temporal forecasting problems. We hope our findings open new horizons for future studies revisiting the design of more concise neural forecasting architectures.
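
    To make the described design concrete, here is a minimal PyTorch sketch of a predictor built from only a dense encoder, one message-passing layer, learnable node-identification embeddings, and a dense decoder; the layer sizes, the specific aggregation, and all names are illustrative assumptions rather than the authors' released code.

        # Minimal sketch: dense encoder + one message-passing step + dense decoder,
        # with learnable node-identification embeddings. Shapes and names are illustrative.
        import torch
        import torch.nn as nn

        class CompactSTPredictor(nn.Module):
            def __init__(self, n_nodes, in_steps, out_steps, hidden=64, id_dim=16):
                super().__init__()
                self.node_id = nn.Parameter(0.1 * torch.randn(n_nodes, id_dim))
                self.encoder = nn.Sequential(nn.Linear(in_steps + id_dim, hidden), nn.ReLU(),
                                             nn.Linear(hidden, hidden))
                self.msg = nn.Linear(hidden, hidden)      # single message-passing layer
                self.decoder = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                             nn.Linear(hidden, out_steps))

            def forward(self, x, adj):
                # x: (batch, n_nodes, in_steps); adj: (n_nodes, n_nodes), row-normalized
                ids = self.node_id.expand(x.size(0), -1, -1)
                h = self.encoder(torch.cat([x, ids], dim=-1))        # per-node encoding
                m = torch.einsum('ij,bjh->bih', adj, self.msg(h))    # neighbor aggregation
                return self.decoder(torch.cat([h, m], dim=-1))       # (batch, n_nodes, out_steps)

    A forward pass on, say, an input of shape (32, 207, 12) with a 207-node adjacency returns a (32, 207, out_steps) forecast; the point is that no TCN, RNN, or Transformer module is involved.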

    Towards better traffic volume estimation: Tackling both underdetermined and non-equilibrium problems via a correlation-adaptive graph convolution network

    Traffic volume is an indispensable ingredient for providing fine-grained information for traffic management and control. However, due to the limited deployment of traffic sensors, obtaining full-scale volume information is far from easy. Existing works on this topic primarily focus on improving the overall estimation accuracy of a particular method and ignore the underlying challenges of volume estimation, and therefore perform poorly on some critical tasks. This paper studies two key problems in traffic volume estimation: (1) underdetermined traffic flows caused by undetected movements, and (2) non-equilibrium traffic flows arising from congestion propagation. Here we demonstrate a graph-based deep learning method that offers a data-driven, model-free, and correlation-adaptive approach to tackle these issues and perform accurate network-wide traffic volume estimation. In particular, to quantify the dynamic and nonlinear relationships between traffic speed and volume for the estimation of underdetermined flows, a speed-pattern-adaptive adjacency matrix based on graph attention is developed and integrated into the graph convolution process to capture non-local correlations between sensors. To measure the impact of non-equilibrium flows, a temporally masked and clipped attention mechanism combined with a gated temporal convolution layer is customized to capture time-asynchronous correlations between upstream and downstream sensors. We then evaluate our model on a real-world highway traffic volume dataset and compare it with several benchmark models. The proposed model achieves high estimation accuracy even under a 20% sensor coverage rate and significantly outperforms the baselines, especially at underdetermined and non-equilibrium flow locations. Furthermore, comprehensive quantitative model analyses are carried out to justify the model designs.
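
    As a sketch of the idea of a speed-pattern-adaptive adjacency (an assumption about the mechanism, not the paper's exact attention design), one can score sensor pairs from embeddings of their recent speed patterns and feed the resulting row-normalized matrix into a graph convolution:

        # Hedged sketch: attention-derived, speed-adaptive adjacency plus one
        # graph-convolution step. All layer sizes and names are illustrative.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SpeedAdaptiveAdjacency(nn.Module):
            def __init__(self, speed_steps, emb_dim=32):
                super().__init__()
                self.proj = nn.Linear(speed_steps, emb_dim)

            def forward(self, speed):
                # speed: (n_sensors, speed_steps) recent speed pattern of every sensor
                e = self.proj(speed)                               # (n_sensors, emb_dim)
                scores = e @ e.t() / e.size(-1) ** 0.5             # pairwise attention scores
                return F.softmax(scores, dim=-1)                   # row-normalized adjacency

        def graph_conv(features, adj, weight):
            # features: (n_sensors, f_in); weight: (f_in, f_out)
            return torch.relu(adj @ features @ weight)

    Because the adjacency is recomputed from current speed patterns, sensors that are far apart but behave similarly can still exchange information, which is the non-local correlation mentioned above.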

    Temporal-spatial Correlation Attention Network for Clinical Data Analysis in Intensive Care Unit

    In recent years, medical information technology has made it possible for electronic health records (EHRs) to store fairly complete clinical data, bringing health care into the era of "big data". However, medical data are often sparse and strongly correlated, which makes many clinical problems hard to solve effectively. The rapid development of deep learning in recent years has provided opportunities for using big data in healthcare. In this paper, we propose a temporal-spatial correlation attention network (TSCAN) to handle clinical prediction problems such as predicting mortality, predicting length of stay, detecting physiologic decline, and classifying phenotypes. Based on the design of the attention mechanism, our approach can effectively remove irrelevant items in the clinical data and irrelevant time points according to the task at hand, so as to obtain more accurate predictions. Our method can also identify key clinical indicators of important outcomes that can be used to improve treatment options. Our experiments use data from the publicly available Medical Information Mart for Intensive Care (MIMIC-IV) database. Finally, we achieve performance gains of 2.0\% over other state-of-the-art prediction methods, reaching 90.7\% on mortality prediction and 45.1\% on length-of-stay prediction. The source code can be found at \url{https://github.com/yuyuheintju/TSCAN}.
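
    A rough sketch of task-dependent attention over time steps and clinical variables (an assumed structure for illustration; the TSCAN architecture in the repository above may differ):

        # Illustrative sketch: temporal attention plus per-variable attention used to
        # pool an EHR sequence into one vector for a prediction head (e.g. mortality).
        import torch
        import torch.nn as nn

        class TemporalVariableAttentionPool(nn.Module):
            def __init__(self, n_vars):
                super().__init__()
                self.time_score = nn.Linear(n_vars, 1)               # weight per time step
                self.var_score = nn.Parameter(torch.zeros(n_vars))   # weight per variable
                self.head = nn.Linear(n_vars, 1)                     # task-specific logit

            def forward(self, x, mask):
                # x: (batch, time, n_vars); mask: (batch, time), 1 for valid steps
                a_t = self.time_score(x).squeeze(-1).masked_fill(mask == 0, -1e9)
                a_t = torch.softmax(a_t, dim=1)                      # temporal attention
                pooled = torch.einsum('bt,btv->bv', a_t, x)          # weighted sum over time
                a_v = torch.softmax(self.var_score, dim=0)           # variable attention
                return self.head(pooled * a_v)                       # (batch, 1)

    The attention weights a_t and a_v are also what one would inspect to read off which time points and which clinical indicators drive a given prediction.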

    Comparison of Rooting Strategies to Explore Rock Fractures for Shallow Soil-Adapted Tree Species with Contrasting Aboveground Growth Rates: A Greenhouse Microcosm Experiment

    For tree species adapted to shallow soil environments, rooting strategies that efficiently explore rock fractures are important because soil water depletion occurs frequently. However, two questions have rarely been tested: (a) to what extent do shallow soil-adapted species rely on exploring rock fractures, and (b) what outcomes result from drought stress. Therefore, based on the expectation that early development of roots into deep soil layers comes at the cost of aboveground growth, seedlings of three tree species (Cyclobalanopsis glauca, Delavaya toxocarpa, and Acer cinnamomifolium) with distinct aboveground growth rates were selected from a typical shallow soil region. In a greenhouse experiment that mimics the basic features of shallow soil environments, 1-year-old seedlings were transplanted into simulated microcosms of shallow soil overlying fractured bedrock. Root biomass allocation, leaf physiological activities, and leaf δ13C values were investigated and compared for two treatments: regular irrigation and repeated cycles of drought stress. Our results show that the three species differed in their rooting strategies when encountering rock fractures; however, these strategies were not closely related to the aboveground growth rate. For the slowest-growing seedling, C. glauca, the percentages of root mass in the fractures, as well as in the interface soil layer between soil and bedrock, increased significantly under both treatments, indicating a specialized rooting strategy that facilitated the exploration of rock fractures. Early investment in deep root growth was likely critical to the establishment of this drought-vulnerable species. For the intermediate-growing A. cinnamomifolium, the percentages of root mass in the bedrock and interface soil layers were relatively low and showed no obvious change under either treatment; this limited need to explore rock fractures was compensated by a conservative water use strategy. For the fast-growing D. toxocarpa, the percentages of root mass in the bedrock and interface layers increased simultaneously under drought conditions, but not under irrigated conditions; this drought-induced rooting plasticity was associated with drought avoidance by this species. Although root development might have been affected by the simulated microcosm, the contrasting results among the three species indicate that efficient use of rock fractures is not a necessary or specialized strategy of shallow soil-adapted species. The establishment and persistence of these species relied on the mutual complementation between their species-specific rooting strategies and drought adaptations.

    Research on the Heating of Deicing Fluid in a New Reshaped Coiled Tube

    Aircraft ground deicing operations are essential to ensuring civil flight safety in winter. The helically coiled tube is an important heat exchanger in Chinese deicing fluid heating systems. To improve deicing efficiency, this research focuses on enhancing the heat transfer of deicing fluid in the tube. Based on the field synergy principle, a new reshaped tube (TCHC) is designed with ring-rib convexes on the inner wall. The deicing fluid is a high-viscosity ethylene-glycol-based mixture. Because viscosity follows a power-function relation with temperature, the high viscosity has a negative influence on heat transfer. The number of ring-ribs and the inlet velocity are two key parameters for heat transfer performance. For both water and ethylene glycol, the outlet temperature rises as the number of ring-ribs increases, up to a certain limit. However, increasing the velocity reduces the heating time, which results in a lower outlet temperature. A heating experiment on the original tube is conducted, and the error between experiment and simulation is less than 5%. The outlet temperature of the TCHC increases by 3.76%. As a result, the TCHC efficiently promotes the coordination of the velocity and temperature fields by changing the velocity field, and thus enhances the heat transfer of the high-viscosity deicing fluid.
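
    For intuition about the power-function relation mentioned above, a hedged illustrative form (the symbols are assumptions; the paper's fitted coefficients are not reproduced here) is

        \[
        \mu(T) = \mu_{\mathrm{ref}} \left(\frac{T}{T_{\mathrm{ref}}}\right)^{-n}, \qquad n > 0,
        \]

    so viscosity drops sharply as the fluid heats up, which is why coordinating the velocity and temperature fields near the wall matters for heating the cold, highly viscous fluid.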

    Deep Reinforcement Learning Framework for Thoracic Diseases Classification via Prior Knowledge Guidance

    The chest X-ray is often utilized for diagnosing common thoracic diseases. In recent years, many approaches have been proposed to handle the problem of automatic diagnosis based on chest X-rays. However, the scarcity of labeled data for related diseases still poses a huge challenge to accurate diagnosis. In this paper, we focus on the thorax disease diagnosis problem and propose a novel deep reinforcement learning framework that introduces prior knowledge to direct the learning of the diagnostic agent and whose model parameters can be continuously updated as the data increase, like a person's learning process. In particular, 1) prior knowledge can be learned from a model pre-trained on old data or on similar data from other domains, which effectively reduces the dependence on target-domain data, and 2) the reinforcement learning framework makes the diagnostic agent as exploratory as a human being, improving the accuracy of diagnosis through continuous exploration. The method can also effectively address model learning in the few-shot data setting and improve the generalization ability of the model. Finally, our approach's performance was demonstrated on the well-known NIH ChestX-ray14 and CheXpert datasets, where we achieved competitive results. The source code can be found here: \url{https://github.com/NeaseZ/MARL}
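
    One plausible way prior knowledge from a pre-trained model can guide the agent is a KL penalty that keeps the agent's label distribution near the prior's prediction; this is a hedged sketch under assumed names, not the loss used in the repository above:

        # Hedged sketch: REINFORCE-style policy term plus a KL penalty toward a
        # pre-trained prior classifier. Names and the weighting beta are illustrative.
        import torch
        import torch.nn.functional as F

        def guided_loss(agent_logits, prior_logits, actions, reward, beta=0.1):
            # agent_logits, prior_logits: (batch, n_diseases); actions, reward: (batch,)
            log_p_agent = F.log_softmax(agent_logits, dim=-1)
            p_prior = F.softmax(prior_logits, dim=-1).detach()       # frozen prior knowledge
            chosen = log_p_agent.gather(1, actions.unsqueeze(-1)).squeeze(-1)
            policy_loss = -(reward * chosen).mean()                  # encourage rewarded diagnoses
            kl_to_prior = F.kl_div(log_p_agent, p_prior, reduction='batchmean')
            return policy_loss + beta * kl_to_prior

    As labeled data accumulate, the same objective can simply be re-optimized from the current parameters, which matches the continual-update behavior described above.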

    Photometric Metallicity Calibration with SDSS and SCUSS and its Application to distant stars in the South Galactic Cap

    Based on SDSS g, r and SCUSS (South Galactic Cap u-band Sky Survey) u photometry, we develop a photometric calibration for estimating stellar metallicity from $u-g$ and $g-r$ colors, using the SDSS spectra of 32,542 F- and G-type main-sequence stars that cover almost 3700 deg$^{2}$ in the south Galactic cap. The rms scatter of the photometric metallicity residuals relative to the spectrum-based metallicity is 0.14 dex for $g-r<0.4$ and 0.16 dex for $g-r>0.4$. Owing to the deeper and more accurate SCUSS $u$-band magnitudes, the estimate can be used down to the faint magnitude of $g=21$. This application range of the photometric metallicity calibration is wide enough for it to be used to study the metallicity distribution of distant stars. In this study, we select the Sagittarius (Sgr) stream and its neighboring field halo stars in the south Galactic cap to study their metallicity distributions. We find that the Sgr stream at the cylindrical Galactocentric coordinates $R\sim 19$ kpc, $|z|\sim 14$ kpc exhibits a relatively rich metallicity distribution, and that the neighboring field halo stars in our studied fields can be modeled by a two-Gaussian model, with peaks at [Fe/H]$=-1.9$ and [Fe/H]$=-1.5$. Comment: 8 pages, 7 figures, accepted for publication in MNRAS
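
    A minimal sketch of how such a color-based calibration can be constructed (the polynomial order and terms are assumptions; the paper's adopted functional form may differ):

        # Hedged sketch: fit a second-order polynomial in (u-g, g-r) to spectroscopic
        # [Fe/H] of training stars, then apply it to photometry-only stars.
        import numpy as np

        def design_matrix(ug, gr):
            # second-order polynomial terms in the two colors
            return np.column_stack([np.ones_like(ug), ug, gr, ug * gr, ug ** 2, gr ** 2])

        def fit_calibration(ug, gr, feh_spec):
            A = design_matrix(ug, gr)
            coeffs, *_ = np.linalg.lstsq(A, feh_spec, rcond=None)
            return coeffs

        def photometric_feh(ug, gr, coeffs):
            return design_matrix(ug, gr) @ coeffs

    The rms of feh_spec - photometric_feh(ug, gr, coeffs) on held-out stars is the analogue of the 0.14-0.16 dex scatter quoted above.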

    Study on the irradiation effect of mechanical properties of RPV steels using crystal plasticity model

    In this paper, a body-centered cubic (BCC) crystal plasticity model based on microscopic dislocation mechanisms is introduced and numerically implemented. The model is coupled with irradiation effects by tracking dislocation loop evolution on each slip system. On the basis of the model, uniaxial tensile tests of unirradiated and irradiated RPV steel (taking Chinese A508-3 as an example) at different temperatures are simulated, and the simulation results agree well with the experimental results. Furthermore, crystal plasticity damage is introduced into the model, and the damage behavior before and after irradiation is studied. The results indicate that the model is an effective tool for studying the effects of irradiation and temperature on the mechanical properties and damage behavior.
    Keywords: Crystal plasticity, Dislocation evolution, Irradiation effect, Damage, RPV steel
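
    A hedged sketch of how irradiation typically enters such a dislocation-based model (the symbols and the hardening form are assumptions, not necessarily this paper's exact constitutive law): the shear rate on slip system $\alpha$ follows a power-law flow rule, and the slip resistance gains a dispersed-barrier term from irradiation-induced loops,

        \[
        \dot{\gamma}^{\alpha} = \dot{\gamma}_{0}\left|\frac{\tau^{\alpha}}{g^{\alpha}}\right|^{n}\operatorname{sign}\!\left(\tau^{\alpha}\right),
        \qquad
        g^{\alpha} = \alpha_{d}\,\mu b\,\sqrt{\sum_{\beta} h^{\alpha\beta}\rho^{\beta}}
        \;+\; \alpha_{l}\,\mu b\,\sqrt{N_{l}\, d_{l}},
        \]

    where $\rho^{\beta}$ are dislocation densities and $N_{l}$, $d_{l}$ are the number density and diameter of irradiation-induced loops; tracking how $N_{l}$ decreases as loops are absorbed by gliding dislocations is one way to couple loop evolution to each slip system, in the spirit described above.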