Transit Use for Single-parent Households: Evidence from Maryland
Single parents face unique transportation barriers. Although helping single parents obtain private vehicles (e.g., through car donation programs) is a potential solution, the high expense of maintaining and operating a vehicle may impose a heavy financial burden on single-parent families and constrain their ability to access opportunities and services. In contrast, public transit can be a more accessible and affordable transportation mode for single-parent families. This study examined the association between public transit use and single parenthood using 2017 National Household Travel Survey and American Community Survey data for Maryland, United States. Using zero-inflated negative binomial (ZINB) regression, we found that single parents used transit more than the average resident, and that census block groups with more single-parent families had more transit commuters, holding other demographic and socioeconomic variables constant. This association was stronger in large metropolitan and urban areas than the state average. The findings highlight the vital role of public transit in single parents' daily travel. We discuss policy implications for helping single parents access opportunities and services.
A study of energy correction for the electron beam data in the BGO ECAL of the DAMPE
The DArk Matter Particle Explorer (DAMPE) is an orbital experiment aiming at
searching for dark matter indirectly by measuring the spectra of photons,
electrons and positrons originating from deep space. The BGO electromagnetic
calorimeter is one of the key sub-detectors of the DAMPE, which is designed for
high energy measurement with a large dynamic range from 5 GeV to 10 TeV. In
this paper, several energy-correction methods are investigated in order to
reconstruct the primary energy of the incident electrons, with different
methods applied in the appropriate energy ranges. Results from Geant4
simulations and beam-test data (taken at CERN) are presented.
A Systematic Evaluation of Federated Learning on Biomedical Natural Language Processing
Language models (LMs) like BERT and GPT have revolutionized natural language
processing (NLP). However, privacy-sensitive domains, particularly the medical
field, face challenges to train LMs due to limited data access and privacy
constraints imposed by regulations like the Health Insurance Portability and
Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR).
Federated learning (FL) offers a decentralized solution that enables
collaborative learning while ensuring the preservation of data privacy. In this
study, we systematically evaluate FL in medicine across a range of biomedical
NLP tasks, LMs, and corpora. Our results show that: 1) FL models consistently
outperform LMs trained on individual clients' data and sometimes match the
model trained with pooled data; 2) with a fixed amount of total data, LMs
trained using FL with more clients exhibit inferior performance, but
pre-trained transformer-based models exhibit greater resilience; 3) LMs
trained using FL perform nearly on par with the model trained with pooled
data when clients' data are IID distributed, while exhibiting visible gaps
with non-IID data. Our code is available at:
https://github.com/PL97/FedNLP
Comment: Accepted by KDD 2023 Workshop FL4Data-Minin
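The core server-side step of the federated learning setup this abstract evaluates is parameter aggregation; the canonical form is FedAvg, a weighted average of client models by local dataset size. This is a minimal sketch of that aggregation rule, not the repository's implementation; the function name and the flat parameter vectors are assumptions for illustration.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    # Server-side aggregation: average the clients' parameter vectors,
    # weighting each client by the size of its local dataset.
    total = sum(client_sizes)
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# Two hypothetical clients with unequal amounts of local data:
# the larger client (300 samples) pulls the average toward its parameters.
global_params = fedavg(
    [np.array([1.0, 2.0]), np.array([3.0, 6.0])],
    [100, 300],
)
# global_params -> [2.5, 5.0]
```

With more clients splitting a fixed data pool, each local model sees less data before aggregation, which is consistent with the abstract's observation that performance degrades as the number of clients grows.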
AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections
Previous animatable 3D-aware GANs for human generation have primarily focused
on either the human head or full body. However, head-only videos are relatively
uncommon in real life, and full body generation typically does not deal with
facial expression control and still has challenges in generating high-quality
results. Towards applicable video avatars, we present an animatable 3D-aware
GAN that generates portrait images with controllable facial expression, head
pose, and shoulder movements. It is a generative model trained on unstructured
2D image collections without using 3D or video data. For the new task, we base
our method on the generative radiance manifold representation and equip it with
learnable facial and head-shoulder deformations. A dual-camera rendering and
adversarial learning scheme is proposed to improve the quality of the generated
faces, which is critical for portrait images. A pose deformation processing
network is developed to generate plausible deformations for challenging regions
such as long hair. Experiments show that our method, trained on unstructured 2D
images, can generate diverse and high-quality 3D portraits with desired control
over different properties.
Comment: SIGGRAPH Asia 2023. Project Page: https://yuewuhkust.github.io/AniPortraitGAN
ePointDA: An End-to-End Simulation-to-Real Domain Adaptation Framework for LiDAR Point Cloud Segmentation
Due to its robust and precise distance measurements, LiDAR plays an important
role in scene understanding for autonomous driving. Training deep neural
networks (DNNs) on LiDAR data requires large-scale point-wise annotations,
which are time-consuming and expensive to obtain. Instead, simulation-to-real
domain adaptation (SRDA) trains a DNN using unlimited synthetic data with
automatically generated labels and transfers the learned model to real
scenarios. Existing SRDA methods for LiDAR point cloud segmentation mainly
employ a multi-stage pipeline and focus on feature-level alignment. They
require prior knowledge of real-world statistics and ignore the pixel-level
dropout noise gap and the spatial feature gap between different domains. In
this paper, we propose a novel end-to-end framework, named ePointDA, to address
the above issues. Specifically, ePointDA consists of three modules:
self-supervised dropout noise rendering, statistics-invariant and
spatially-adaptive feature alignment, and transferable segmentation learning.
The joint optimization enables ePointDA to bridge the domain shift at the
pixel-level by explicitly rendering dropout noise for synthetic LiDAR and at
the feature-level by spatially aligning the features between different domains,
without requiring the real-world statistics. Extensive experiments adapting
from synthetic GTA-LiDAR to real KITTI and SemanticKITTI demonstrate the
superiority of ePointDA for LiDAR point cloud segmentation.
Comment: Accepted by AAAI 202
Curriculum CycleGAN for Textual Sentiment Domain Adaptation with Multiple Sources
Sentiment analysis of user-generated reviews or comments on products and
services in social networks can help enterprises to analyze the feedback from
customers and take corresponding actions for improvement. To mitigate
large-scale annotations on the target domain, domain adaptation (DA) provides
an alternate solution by learning a transferable model from other labeled
source domains. Existing multi-source domain adaptation (MDA) methods either
fail to extract some discriminative features in the target domain that are
related to sentiment, neglect the correlations of different sources and the
distribution difference among different sub-domains even in the same source, or
cannot reflect the varying optimal weighting during different training stages.
In this paper, we propose a novel instance-level MDA framework, named
curriculum cycle-consistent generative adversarial network (C-CycleGAN), to
address the above issues. Specifically, C-CycleGAN consists of three
components: (1) pre-trained text encoder which encodes textual input from
different domains into a continuous representation space, (2) intermediate
domain generator with curriculum instance-level adaptation which bridges the
gap across source and target domains, and (3) task classifier trained on the
intermediate domain for final sentiment classification. C-CycleGAN transfers
source samples at instance-level to an intermediate domain that is closer to
the target domain with sentiment semantics preserved and without losing
discriminative features. Further, our dynamic instance-level weighting
mechanisms can assign the optimal weights to different source samples in each
training stage. We conduct extensive experiments on three benchmark datasets
and achieve substantial gains over state-of-the-art DA approaches. Our source
code is released at: https://github.com/WArushrush/Curriculum-CycleGAN.
Comment: Accepted by WWW 202