Human Mobility Trends during the COVID-19 Pandemic in the United States
In March 2020, COVID-19 was declared a pandemic, and it continues to
threaten public health. This global health crisis imposes limitations on daily
movements, which have disrupted every sector of our society. Understanding
public reactions to the virus and the non-pharmaceutical interventions should
be of great help to fight COVID-19 in a strategic way. We aim to provide
tangible evidence of the human mobility trends by comparing the day-by-day
variations across the U.S. Large-scale public mobility at an aggregated level
is observed by leveraging mobile device location data and the measures related
to social distancing. Our study captures spatial and temporal heterogeneity as
well as the sociodemographic variations regarding the pandemic propagation and
the non-pharmaceutical interventions. All adopted mobility metrics capture
decreased public movement after the national emergency declaration. The
population staying home has increased in all states and becomes more stable,
with a smaller range of fluctuation, after the stay-at-home orders. There exists
overall mobility heterogeneity between the income or population density groups.
The public responded actively to in-state confirmed cases by voluntarily
staying home more, while the stay-at-home orders stabilized the variations.
The study suggests that public mobility trends conform to the government
message urging people to stay home. We anticipate that our data-driven analysis
offers integrated perspectives and serves as evidence to raise public awareness
and, consequently, reinforce the importance of social distancing while
assisting policymakers.
Comment: 11 pages, 9 figures
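The central metric described above, the share of devices staying home per day, can be sketched as follows. This is an illustrative toy, not the study's actual pipeline; the record fields (`date`, `stayed_home`) are hypothetical stand-ins for aggregated mobile device location data.

```python
from collections import defaultdict

# Hypothetical sketch: daily share of devices staying home, computed from
# aggregated per-device records. Field names are illustrative only.
def share_staying_home(records):
    """records: iterable of dicts with 'date' (str) and 'stayed_home' (bool)."""
    totals = defaultdict(int)
    home = defaultdict(int)
    for r in records:
        totals[r["date"]] += 1
        if r["stayed_home"]:
            home[r["date"]] += 1
    # fraction of observed devices that stayed home, per day
    return {d: home[d] / totals[d] for d in totals}

daily = share_staying_home([
    {"date": "2020-03-01", "stayed_home": False},
    {"date": "2020-03-01", "stayed_home": True},
    {"date": "2020-04-01", "stayed_home": True},
    {"date": "2020-04-01", "stayed_home": True},
])
```

Tracking this fraction day by day is what reveals the post-declaration increase and the reduced fluctuation after stay-at-home orders.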
GNNHLS: Evaluating Graph Neural Network Inference via High-Level Synthesis
With the ever-growing popularity of Graph Neural Networks (GNNs), efficient
GNN inference is gaining tremendous attention. Field-Programmable Gate Arrays
(FPGAs) are a promising execution platform due to their fine-grained
parallelism, low-power consumption, reconfigurability, and concurrent
execution. Even better, High-Level Synthesis (HLS) tools bridge the gap between
the non-trivial FPGA development efforts and rapid emergence of new GNN models.
In this paper, we propose GNNHLS, an open-source framework to comprehensively
evaluate GNN inference acceleration on FPGAs via HLS, containing a software
stack for data generation and baseline deployment, and FPGA implementations of
6 well-tuned GNN HLS kernels. We evaluate GNNHLS on 4 graph datasets with
distinct topologies and scales. The results show that GNNHLS achieves up to
50.8x speedup and 423x energy reduction relative to the CPU baselines. Compared
with the GPU baselines, GNNHLS achieves up to 5.16x speedup and 74.5x energy
reduction.
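The kernels GNNHLS targets implement message passing. A minimal pure-Python sketch of one mean-aggregation GNN layer is shown below, purely to illustrate the computation pattern; real HLS kernels operate on FPGA memory buffers, and the scalar `weight` is a toy stand-in for a learned weight matrix.

```python
# Toy sketch of one GNN message-passing layer (mean aggregation + ReLU),
# the computation pattern behind GNN inference kernels. Illustrative only.
def gnn_layer(features, adj, weight):
    """features: node -> list[float]; adj: node -> list of neighbor ids;
    weight: scalar stand-in for a learned weight matrix."""
    out = {}
    for v, neigh in adj.items():
        if neigh:
            # mean of neighbor features, dimension by dimension
            agg = [sum(features[u][i] for u in neigh) / len(neigh)
                   for i in range(len(features[v]))]
        else:
            agg = features[v][:]
        out[v] = [max(0.0, weight * x) for x in agg]  # ReLU activation
    return out

h = gnn_layer({0: [1.0], 1: [3.0], 2: [5.0]},
              {0: [1, 2], 1: [0], 2: [0]},
              weight=1.0)
```

The irregular, neighbor-dependent memory access in the aggregation loop is exactly what makes FPGA fine-grained parallelism attractive for this workload.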
Pre-training on Synthetic Driving Data for Trajectory Prediction
Accumulating substantial volumes of real-world driving data proves pivotal in
the realm of trajectory forecasting for autonomous driving. Given the heavy
reliance of current trajectory forecasting models on data-driven methodologies,
we aim to tackle the challenge of learning general trajectory forecasting
representations under limited data availability. We propose to augment both HD
maps and trajectories and apply pre-training strategies on top of them.
Specifically, we take advantage of graph representations of HD-map and apply
vector transformations to reshape the maps, to easily enrich the limited number
of scenes. Additionally, we employ a rule-based model to generate trajectories
based on augmented scenes; thus enlarging the trajectories beyond the collected
real ones. To foster the learning of general representations within this
augmented dataset, we comprehensively explore the different pre-training
strategies, including extending the concept of a Masked AutoEncoder (MAE) for
trajectory forecasting. Extensive experiments demonstrate the effectiveness of
our data expansion and pre-training strategies, which outperform the baseline
prediction model by large margins, e.g. 5.04%, 3.84% and 8.30% on the
respective evaluation metrics.
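The vector-transformation map augmentation described above can be sketched with simple 2D polyline operations. This is a hedged illustration of the idea (rotating and mirroring lane polylines to enrich a limited scene set); the paper's actual graph representation and transform set may differ.

```python
import math

# Hedged sketch: augmenting HD-map polylines with vector transformations
# (rotation and mirroring) to enrich a limited number of scenes.
def rotate(polyline, theta):
    """Rotate 2D points by theta radians about the origin."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in polyline]

def mirror_x(polyline):
    """Reflect 2D points across the x-axis."""
    return [(x, -y) for x, y in polyline]

lane = [(0.0, 0.0), (1.0, 0.0)]  # a toy lane centerline
augmented = [lane, rotate(lane, math.pi / 2), mirror_x(lane)]
```

Trajectories generated by a rule-based model on these reshaped maps would then enlarge the training set beyond the collected real data, as the abstract describes.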
Comparing levonorgestrel intrauterine system versus hysteroscopic resection in patients with postmenstrual spotting related to a niche in the caesarean scar (MIHYS NICHE trial): Protocol of a randomised controlled trial
Funding: This work was supported by the National Key Research and Development Programme (2018YFC1002102), the Research Project of Shanghai Health and Fitness Commission (201940012, 20184Y0344), Shanghai Municipal Key Clinical Specialty (shslczdzk01802), Medical Engineering Cross Funds from Shanghai Jiao Tong University (YG2017QN38, ZH2018QNA36, YG2021ZD31), the Medical Innovation Research Project of the 2020 'Science and Technology Innovation Action Plan' of the Shanghai Science and Technology Commission (20Y11907700), and the Clinical Science and Technology Innovation Project of the Shanghai Hospital Development Center (SHDC22020216).
BoostTree and BoostForest for Ensemble Learning
Bootstrap aggregating (Bagging) and boosting are two popular ensemble
learning approaches, which combine multiple base learners to generate a
composite model for more accurate and more reliable performance. They have been
widely used in biology, engineering, healthcare, etc. This article proposes
BoostForest, which is an ensemble learning approach using BoostTree as base
learners and can be used for both classification and regression. BoostTree
constructs a tree model by gradient boosting. It achieves high randomness
(diversity) by sampling its parameters randomly from a parameter pool, and
selecting a subset of features randomly at node splitting. BoostForest further
increases the randomness by bootstrapping the training data in constructing
different BoostTrees. BoostForest outperformed four classical ensemble learning
approaches (Random Forest, Extra-Trees, XGBoost and LightGBM) on 34
classification and regression datasets. Remarkably, BoostForest has only one
hyper-parameter (the number of BoostTrees), which can be easily specified. Our
code is publicly available, and the proposed ensemble learning framework can
also be used to combine many other base learners.
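The BoostForest recipe, bootstrap the training data and draw each base learner's hyper-parameters at random from a pool, can be sketched as below. This is a minimal structural sketch: `fit_constant_learner` is a hypothetical placeholder standing in for the paper's gradient-boosted BoostTree, and the parameter pool here holds only a toy `depth` value.

```python
import random

# Hedged sketch of the BoostForest ensemble structure: bootstrap sampling
# plus randomly drawn hyper-parameters per base learner, averaged at
# prediction time. The base learner below is a hypothetical stand-in.
def fit_constant_learner(data, depth):
    """Toy base learner: predicts the mean target of its bootstrap sample."""
    mean = sum(y for _, y in data) / len(data)
    return lambda x: mean

def boost_forest(data, n_trees, param_pool, fit_fn, seed=0):
    rng = random.Random(seed)
    learners = []
    for _ in range(n_trees):
        boot = [rng.choice(data) for _ in data]   # bootstrap the training data
        depth = rng.choice(param_pool["depth"])   # random draw from the pool
        learners.append(fit_fn(boot, depth))
    # average the base learners' predictions
    return lambda x: sum(f(x) for f in learners) / len(learners)

model = boost_forest([(0, 1.0), (1, 3.0)], n_trees=10,
                     param_pool={"depth": [2, 3, 4]},
                     fit_fn=fit_constant_learner)
```

The single user-facing hyper-parameter, `n_trees`, mirrors the abstract's claim that only the number of BoostTrees needs specifying.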
Domain decomposition approach for parallel improvement of tetrahedral meshes
Presently, a tetrahedral mesher based on the Delaunay triangulation approach may outperform a tetrahedral improver based on local smoothing and flip operations by nearly one order of magnitude in terms of computing time. Parallelization is a feasible way to speed up the improver and enable it to handle large-scale meshes. In this study, a novel domain decomposition approach is proposed for parallel mesh improvement. It analyses the dual graph of the input mesh to build an inter-domain boundary that avoids small dihedral angles and poorly shaped faces. Consequently, the parallel improver can fit this boundary without compromising mesh quality. Meanwhile, the new method does not involve any inter-processor communication and therefore runs very efficiently. A parallel pre-processing pipeline that combines the proposed improver with existing parallel surface and volume meshers can prepare a quality mesh containing hundreds of millions of elements in minutes. Experiments are presented to show that the developed system is robust and applicable to models of a complexity level encountered in industry.
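The dihedral angle mentioned above is the standard element-quality measure such a decomposition boundary must protect. A hedged sketch of computing one dihedral angle of a tetrahedron is given below; real improvers use more elaborate quality functions, and this is illustration only.

```python
import math

# Hedged sketch: the dihedral angle along one edge of a tetrahedron, the
# quality measure the decomposition boundary avoids degrading.
def _sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def _cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def _dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def _norm(u):
    return math.sqrt(_dot(u, u))

def dihedral(a, b, c, d):
    """Angle in degrees along edge ab, between faces abc and abd."""
    n1 = _cross(_sub(b, a), _sub(c, a))  # normal of face abc
    n2 = _cross(_sub(b, a), _sub(d, a))  # normal of face abd
    cosang = _dot(n1, n2) / (_norm(n1) * _norm(n2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

# corner tetrahedron: faces abc (z=0) and abd (y=0) meet at a right angle
ang = dihedral((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
```

Scanning the minimum such angle over all six edges of each tetrahedron gives the kind of per-element quality score the improver tries to raise.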
Open X-Embodiment: Robotic learning datasets and RT-X models
Large, high-capacity models trained on diverse datasets have shown remarkable success in efficiently tackling downstream applications. In domains from NLP to Computer Vision, this has led to a consolidation of pretrained models, with general pretrained backbones serving as a starting point for many applications. Can such a consolidation happen in robotics? Conventionally, robotic learning methods train a separate model for every application, every robot, and even every environment. Can we instead train a "generalist" X-robot policy that can be adapted efficiently to new robots, tasks, and environments? In this paper, we provide datasets in standardized data formats and models to make it possible to explore this possibility in the context of robotic manipulation, alongside experimental results that provide an example of effective X-robot policies. We assemble a dataset from 22 different robots collected through a collaboration between 21 institutions, demonstrating 527 skills (160,266 tasks). We show that a high-capacity model trained on this data, which we call RT-X, exhibits positive transfer and improves the capabilities of multiple robots by leveraging experience from other platforms. The project website is robotics-transformer-x.github.io.