94 research outputs found
Shaping Online Dialogue: Examining How Community Rules Affect Discussion Structures on Reddit
Community rules play a key part in enabling or constraining the behaviors of
members in online communities. However, little is known about whether and to
what degree changing rules actually affects community dynamics. In this paper,
we seek to understand how these behavior-governing rules shape the
interactions between users, as well as the structure of their discussions.
Using the top communities on Reddit (i.e., subreddits), we first contribute a
taxonomy of behavior-based rule categories across Reddit. Then, we use a
network analysis perspective to discover how changing the implementation of
different rule categories affects subreddits' user interaction and discussion
networks over a 1.5-year period. Our study finds several significant effects,
including greater clustering among users when subreddits add rules focused on
structural regulation, and, surprisingly, more interaction between users when
allowable content is restricted. Our findings contribute to research on
proactive moderation through rule setting, and offer valuable insights for
online community designers and moderators seeking to achieve desired
community dynamics.
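The clustering effect reported above comes from standard network measures. As a loose illustration (not the authors' code), a local clustering coefficient over a toy user-interaction graph can be computed as follows; the graph and values are invented for the example:

```python
from itertools import combinations

def clustering_coefficient(graph, node):
    """Fraction of a node's neighbour pairs that are themselves connected.

    `graph` is an undirected adjacency dict: node -> set of neighbours.
    """
    neighbours = graph[node]
    k = len(neighbours)
    if k < 2:
        return 0.0
    links = sum(1 for u, v in combinations(neighbours, 2) if v in graph[u])
    return 2 * links / (k * (k - 1))

# Toy user-interaction network: an edge means two users replied to each other.
interactions = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
print(clustering_coefficient(interactions, "a"))  # 1 of 3 neighbour pairs linked -> 1/3
```

Rising values of this coefficient over time would indicate the tighter user clustering the abstract associates with structural-regulation rules.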
A sEMG-based shared control system with no-target obstacle avoidance for omnidirectional mobile robots
We propose a novel shared control strategy for mobile robots based on surface electromyography (sEMG) signals in a human-robot interaction setting. For safety, an obstacle avoidance scheme is introduced into the shared control system as collision avoidance guidance. The motion of the mobile robot is the resultant of compliant motion control and obstacle avoidance. In compliant-motion mode, the sEMG signals obtained from the operator's forearms are transformed into human commands that control the moving direction and linear velocity of the mobile robot, respectively. When the mobile robot is blocked by obstacles, the motion mode switches to obstacle avoidance. To address the obstacle avoidance problem without a specific target, we develop a no-target Bug (NT-Bug) algorithm that guides the mobile robot around obstacles and back to the command line. In addition, the commanded moving direction given by the operator is taken into consideration during obstacle avoidance to plan a smoother and safer path for the mobile robot. A model predictive controller is employed to minimize tracking errors. Experiments demonstrate the effectiveness of the proposed shared control strategy and the NT-Bug algorithm.
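The two-mode hand-over the abstract describes can be sketched as a single decision step. This is not the authors' implementation: the function name, the 90-degree evasive turn, and the speed values are invented for the sketch, and the real system uses the NT-Bug algorithm plus model predictive control rather than this rule:

```python
def shared_control_step(semg_direction, semg_speed, obstacle_ahead):
    """One decision step of a two-mode shared controller (illustrative only).

    In compliant-motion mode the operator's sEMG-derived direction and speed
    drive the robot directly; when an obstacle blocks the commanded path,
    control hands over to an avoidance behaviour standing in for the NT-Bug
    algorithm described in the abstract.
    """
    if obstacle_ahead:
        # Obstacle-avoidance mode: steer around the obstacle; the real NT-Bug
        # algorithm would then return the robot to the commanded line.
        return {"mode": "avoidance", "direction": semg_direction + 90.0, "speed": 0.2}
    # Compliant-motion mode: follow the human command directly.
    return {"mode": "compliant", "direction": semg_direction, "speed": semg_speed}

print(shared_control_step(0.0, 0.5, obstacle_ahead=False))  # compliant motion
print(shared_control_step(0.0, 0.5, obstacle_ahead=True))   # avoidance takes over
```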
Machine Learning Enabled Prediction of Mechanical Properties of Tungsten Disulfide Monolayer
As a two-dimensional transition metal dichalcogenide material, tungsten disulfide (WS2) has attracted considerable research interest, and its mechanical properties play an important role in practical applications. Here the mechanical properties of h-WS2 and t-WS2 monolayers in the armchair and zigzag directions are evaluated using molecular dynamics (MD) simulations and a machine learning (ML) technique. We focus mainly on the effects of chirality, system size, temperature, strain rate, and random vacancy defects on mechanical properties, including fracture strain, fracture strength, and Young’s modulus. We find that the mechanical properties of h-WS2 surpass those of t-WS2 due to the different coordination spheres of the transition metal atoms. We also observe that fracture strain, fracture strength, and Young’s modulus decrease as temperature and vacancy defect ratio increase. The random forest (RF) supervised ML algorithm is employed to model the correlations between the different impact factors and target outputs. A total of 3600 MD simulations are performed to generate the training and testing dataset for the ML model. The mechanical properties of WS2 (i.e., the target outputs) can be predicted using the trained model given different input features, such as WS2 type, chirality, temperature, strain rate, and defect ratio. The mean square errors of the ML predictions are orders of magnitude smaller than the actual values of each property, indicating that the RF model is well trained.
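The simulation-to-prediction workflow above can be sketched in miniature. This is not the paper's random forest: a nearest-neighbour regressor stands in for it, and the feature rows and property values below are invented rather than taken from the 3600 MD runs:

```python
import math

# Hypothetical rows of the kind of table the MD runs would produce:
# (temperature_K, strain_rate, defect_ratio) -> Young's modulus (arbitrary units).
train = [
    ((100.0, 1e9, 0.00), 150.0),
    ((300.0, 1e9, 0.00), 140.0),
    ((300.0, 1e9, 0.02), 120.0),
    ((500.0, 1e9, 0.02), 105.0),
]

def predict_1nn(features):
    """Nearest-neighbour regressor standing in for the paper's random forest."""
    return min(train, key=lambda row: math.dist(row[0], features))[1]

def mse(pairs):
    """Mean square error over (prediction, target) pairs."""
    return sum((p - t) ** 2 for p, t in pairs) / len(pairs)

# Held-out condition: the trained model predicts the property from the features.
test_set = [((300.0, 1e9, 0.01), 130.0)]
preds = [(predict_1nn(x), y) for x, y in test_set]
print(mse(preds))  # squared gap between predicted and simulated modulus
```

The paper's quality criterion, an MSE orders of magnitude below the property's own scale, would be checked against exactly this kind of held-out comparison.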
SHERF: Generalizable Human NeRF from a Single Image
Existing Human NeRF methods for reconstructing 3D humans typically rely on
multiple 2D images from multi-view cameras or monocular videos captured from
fixed camera views. However, in real-world scenarios, human images are often
captured from random camera angles, presenting challenges for high-quality 3D
human reconstruction. In this paper, we propose SHERF, the first generalizable
Human NeRF model for recovering animatable 3D humans from a single input image.
SHERF extracts and encodes 3D human representations in canonical space,
enabling rendering and animation from free views and poses. To achieve
high-fidelity novel view and pose synthesis, the encoded 3D human
representations should capture both global appearance and local fine-grained
textures. To this end, we propose a bank of 3D-aware hierarchical features,
including global, point-level, and pixel-aligned features, to facilitate
informative encoding. Global features enhance the information extracted from
the single input image and complement the information missing from the partial
2D observation. Point-level features provide strong clues of 3D human
structure, while pixel-aligned features preserve more fine-grained details. To
effectively integrate the 3D-aware hierarchical feature bank, we design a
feature fusion transformer. Extensive experiments on THuman, RenderPeople,
ZJU_MoCap, and HuMMan datasets demonstrate that SHERF achieves state-of-the-art
performance, with better generalizability for novel view and pose synthesis.
Comment: Accepted by ICCV 2023. Project webpage: https://skhu101.github.io/SHERF
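The three feature types in SHERF's hierarchical bank can be illustrated at toy scale. The sketch below only gathers and concatenates a global feature, a point-level feature from the nearest canonical vertex, and a pixel-aligned feature sampled at the query point's image projection; the real model uses learned encoders and a fusion transformer, and every name and number here is invented:

```python
import math

def encode_query_point(point, global_feat, canon_verts, point_feats,
                       image_feats, focal=500.0):
    """Assemble the three hierarchical features for one 3D query point.

    A loose stand-in for SHERF's feature bank: gather and concatenate only,
    with no learned components.
    """
    # Point-level feature: copied from the nearest canonical-space vertex.
    nearest = min(range(len(canon_verts)),
                  key=lambda i: math.dist(canon_verts[i], point))
    # Pixel-aligned feature: project the point into the image and sample there.
    x, y, z = point
    u, v = int(focal * x / z), int(focal * y / z)
    pixel_feat = image_feats.get((u, v), [0.0])
    return list(global_feat) + list(point_feats[nearest]) + list(pixel_feat)

feat = encode_query_point(
    point=(0.1, 0.0, 2.0),
    global_feat=[0.5, 0.5],                          # whole-image summary
    canon_verts=[(0.0, 0.0, 2.0), (1.0, 0.0, 2.0)], # canonical body vertices
    point_feats=[[0.1], [0.9]],                      # per-vertex features
    image_feats={(25, 0): [0.7]},                    # sparse pixel features
)
print(feat)  # [0.5, 0.5, 0.1, 0.7]
```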
Zolly: Zoom Focal Length Correctly for Perspective-Distorted Human Mesh Reconstruction
Since single-view RGB images in the wild are hard to calibrate, existing 3D
human mesh reconstruction (3DHMR) methods either use a constant large focal
length or estimate one from the background environment context, and thus
cannot address the torso, limb, hand, or face distortion caused by
perspective camera projection when the camera is close to the human body.
Such naive focal length assumptions harm this task through incorrectly
formulated projection matrices. To solve this, we propose Zolly, the first
3DHMR method focusing on perspective-distorted images. Our approach begins by
analysing the cause of perspective distortion, which we find is mainly the
position of the human body relative to the camera center. We propose a new
camera model and a novel 2D representation, termed the distortion image,
which describes the 2D dense distortion scale of the human body. We then
estimate the distance from distortion-scale features rather than
environment-context features. Afterwards, we integrate the distortion feature
with image features to reconstruct the body mesh. To formulate the correct
projection matrix and locate the human body position, we use perspective and
weak-perspective projection losses simultaneously. Since existing datasets
cannot support this task, we propose the first synthetic dataset, PDHuman,
and extend two real-world datasets tailored for this task, all containing
perspective-distorted human images. Extensive experiments show that Zolly
outperforms existing state-of-the-art methods on both perspective-distorted
datasets and the standard benchmark (3DPW).
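The contrast between the two projection losses rests on basic projection geometry. A minimal sketch (focal length and depths invented for illustration) shows why a weak-perspective model loses the close-range distortion that Zolly targets:

```python
def perspective(X, Z, f=1000.0):
    """Full perspective projection: image coordinate depends on each point's depth."""
    return f * X / Z

def weak_perspective(X, Z_ref, f=1000.0):
    """Weak-perspective projection: every point shares one reference depth."""
    return f * X / Z_ref

# A 0.2 m lateral offset at torso depth vs. at a hand 0.4 m closer
# to a camera only 1.0 m away.
torso = perspective(0.2, 1.0)
hand = perspective(0.2, 0.6)
print(torso, hand)                  # 200.0 vs ~333.3: strong distortion up close
print(weak_perspective(0.2, 1.0))   # 200.0 for both points: distortion is lost
```

Under full perspective, the nearer hand projects much farther from the image center than the torso; a weak-perspective model maps both identically, which is why close-range images break the usual assumptions.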
Towards a Learner-Centered Explainable AI: Lessons from the learning sciences
In this short paper, we argue for a refocusing of XAI around human learning
goals. Drawing upon approaches and theories from the learning sciences, we
propose a framework for the learner-centered design and evaluation of XAI
systems. We illustrate our framework through an ongoing case study in the
context of AI-augmented social work.
Comment: 7 pages, 2 figures
SynBody: Synthetic Dataset with Layered Human Models for 3D Human Perception and Modeling
Synthetic data has emerged as a promising source for 3D human research as it
offers low-cost access to large-scale human datasets. To advance the diversity
and annotation quality of human models, we introduce a new synthetic dataset,
SynBody, with three appealing features: 1) a clothed parametric human model
that can generate a diverse range of subjects; 2) the layered human
representation that naturally offers high-quality 3D annotations to support
multiple tasks; 3) a scalable system for producing realistic data to facilitate
real-world tasks. The dataset comprises 1.2M images with corresponding accurate
3D annotations, covering 10,000 human body models, 1,187 actions, and various
viewpoints. The dataset includes two subsets for human pose and shape
estimation as well as human neural rendering. Extensive experiments on SynBody
indicate that it substantially enhances both SMPL and SMPL-X estimation.
Furthermore, the incorporation of layered annotations offers a valuable
training resource for investigating Human Neural Radiance Fields (NeRF).
Comment: Accepted by ICCV 2023. Project webpage: https://synbody.github.io
SMPLer-X: Scaling Up Expressive Human Pose and Shape Estimation
Expressive human pose and shape estimation (EHPS) unifies body, hands, and
face motion capture with numerous applications. Despite encouraging progress,
current state-of-the-art methods still depend largely on a confined set of
training datasets. In this work, we investigate scaling up EHPS towards the
first generalist foundation model (dubbed SMPLer-X), with up to ViT-Huge as the
backbone and training with up to 4.5M instances from diverse data sources. With
big data and the large model, SMPLer-X exhibits strong performance across
diverse test benchmarks and excellent transferability to even unseen
environments. 1) For the data scaling, we perform a systematic investigation on
32 EHPS datasets, including a wide range of scenarios that a model trained on
any single dataset cannot handle. More importantly, capitalizing on insights
obtained from the extensive benchmarking process, we optimize our training
scheme and select datasets that lead to a significant leap in EHPS
capabilities. 2) For the model scaling, we take advantage of vision
transformers to study the scaling law of model sizes in EHPS. Moreover, our
finetuning strategy turns SMPLer-X into specialist models, allowing them to
achieve further performance boosts. Notably, our foundation model SMPLer-X
consistently delivers state-of-the-art results on seven benchmarks such as
AGORA (107.2 mm NMVE), UBody (57.4 mm PVE), EgoBody (63.6 mm PVE), and EHF
(62.3 mm PVE without finetuning).
Comment: Homepage: https://caizhongang.github.io/projects/SMPLer-X/
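The PVE figures quoted above are per-vertex errors. A minimal sketch of that metric, assuming predicted and ground-truth mesh vertices are given in millimetres (the toy meshes below are invented):

```python
import math

def pve_mm(pred_verts, gt_verts):
    """Per-vertex error: mean Euclidean distance between predicted and
    ground-truth mesh vertices, in millimetres (assuming inputs in mm)."""
    assert len(pred_verts) == len(gt_verts)
    return sum(math.dist(p, g) for p, g in zip(pred_verts, gt_verts)) / len(pred_verts)

# Two-vertex toy meshes, purely for illustration.
pred = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
gt = [(3.0, 4.0, 0.0), (10.0, 0.0, 12.0)]
print(pve_mm(pred, gt))  # (5 + 12) / 2 = 8.5 mm
```

Benchmark variants such as NMVE add normalization on top of this basic distance, but the averaged per-vertex distance is the core quantity.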
Identification and validation of SERPINE1 as a prognostic and immunological biomarker in pan-cancer and in ccRCC
Background: SERPINE1, a serine protease inhibitor involved in the regulation of the plasminogen activation system, was recently identified as a cancer-related gene. However, its clinical significance and potential mechanisms in pan-cancer remain obscure. Methods: Pan-cancer multi-omics data from public datasets, including The Cancer Genome Atlas (TCGA) and Genotype-Tissue Expression (GTEx), and online web tools were used to analyze the expression of SERPINE1 in different cancers and its correlation with prognosis, genetic alteration, DNA promoter methylation, biological processes, immunoregulator expression levels, immune cell infiltration into tumors, tumor mutation burden (TMB), microsatellite instability (MSI), immunotherapy response, and drug sensitivity. Further, two single-cell databases, Tumor Immune Single-cell Hub 2 (TISCH2) and CancerSEA, were used to explore the expression and potential roles of SERPINE1 at the single-cell level. The aberrant expression of SERPINE1 was further verified in clear cell renal cell carcinoma (ccRCC) through qRT-PCR of clinical patient samples, validation in independent cohorts from the Gene Expression Omnibus (GEO) database, and proteomic validation using the Clinical Proteomic Tumor Analysis Consortium (CPTAC) database. Results: The expression of SERPINE1 was dysregulated in cancers and enriched in endothelial cells and fibroblasts. Copy number amplification and low DNA promoter methylation could be partly responsible for high SERPINE1 expression. High SERPINE1 expression was associated with poor prognosis in 21 cancers. Gene set enrichment analysis (GSEA) indicated SERPINE1 involvement in the immune response and tumor malignancy. SERPINE1 expression was also associated with the expression of several immunoregulators and with immune cell infiltration, and could play an immunosuppressive role. In addition, SERPINE1 was found to be related to TMB, MSI, immunotherapy response, and sensitivity to several drugs in cancers.
Finally, the high expression of SERPINE1 in ccRCC was verified using qRT-PCR on patient samples, six independent GEO cohorts, and proteomic data from the CPTAC database. Conclusion: The present study revealed that SERPINE1 is aberrantly expressed in various cancers and is associated with cancer immunity and tumor malignancy, providing novel insights for individualized cancer treatment.
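One of the correlation analyses described above, relating SERPINE1 expression to immune cell infiltration, can be sketched with a plain Spearman rank correlation; the per-sample values below are invented and deliberately monotone, not drawn from TCGA:

```python
def rank(values):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical per-sample values: expression vs. an immune-infiltration score.
expression = [1.2, 3.4, 2.2, 5.1, 4.0]
infiltration = [0.10, 0.35, 0.20, 0.60, 0.55]
print(spearman(expression, infiltration))  # 1.0: perfectly monotone toy data
```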
Concept for a Future Super Proton-Proton Collider
Following the discovery of the Higgs boson at the LHC, new large colliders
are being studied by the international high-energy community to explore Higgs
physics in detail and new physics beyond the Standard Model. In China, a
two-stage circular collider project, CEPC-SPPC, is proposed, with the first
stage, CEPC (Circular Electron Positron Collider, a so-called Higgs factory),
focused on Higgs physics, and the second stage, SPPC (Super Proton-Proton
Collider), focused on new physics beyond the Standard Model. This paper
discusses this second stage.
Comment: 34 pages, 8 figures, 5 tables