318 research outputs found
Inactivation and Survival of Bacteriophage Φ6 on Tyvek Suits
Healthcare providers encounter a wide range of hazards on the job, including exposure to infectious diseases, so protecting them from occupational infection is critical. Healthcare workers use personal protective equipment (PPE) to decrease the risk of infection during patient care. For high-risk diseases like Ebola, Tyvek suits are coverall suits that protect the body and reduce the risk of body fluid exposure. However, a person removing a contaminated suit may also be exposed to virus. Previous studies have shown that enveloped viruses can survive on different types of surfaces, so the objective of this study was to determine the inactivation of bacteriophage Φ6, a surrogate for enveloped human viruses, on the surface of Tyvek suits at two relative humidity (RH) levels, 40% and 60%, at 22°C. The results showed that the inactivation rate of the virus was higher at 40% RH than at 60% RH: a ~3 log10 (99.9%) reduction in infectious virus was reached after 6 hours at 40% RH, whereas ~3 log10 (99.9%) inactivation took 9 hours at 60% RH. This suggests that enveloped viruses can survive on the surface of Tyvek suits for more than 6 hours and should be considered a potential contamination risk when suits are removed after use.
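Assuming first-order (log-linear) decay, the inactivation rate constants implied by the reported reduction times follow from k = ln(10) · (log10 reduction) / t. This is a back-of-the-envelope calculation based only on the figures in the abstract, not an analysis from the study itself:

```python
import math

def decay_rate(log10_reduction, hours):
    """First-order inactivation rate constant (per hour) implied by a
    given log10 titer reduction over a time span."""
    return log10_reduction * math.log(10) / hours

k_40 = decay_rate(3.0, 6.0)   # ~3 log10 reduction in 6 h at 40% RH
k_60 = decay_rate(3.0, 9.0)   # ~3 log10 reduction in 9 h at 60% RH

print(f"k at 40% RH: {k_40:.3f} /h")   # ≈ 1.151 /h
print(f"k at 60% RH: {k_60:.3f} /h")   # ≈ 0.768 /h
```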
Tackling the Non-IID Issue in Heterogeneous Federated Learning by Gradient Harmonization
Federated learning (FL) is a privacy-preserving paradigm for collaboratively
training a global model from decentralized clients. However, the performance of
FL is hindered by non-independent and identically distributed (non-IID) data
and device heterogeneity. In this work, we revisit this key challenge through
the lens of gradient conflicts on the server side. Specifically, we first
investigate the gradient conflict phenomenon among multiple clients and reveal
that stronger heterogeneity leads to more severe gradient conflicts. To tackle
this issue, we propose FedGH, a simple yet effective method that mitigates
local drifts through Gradient Harmonization. This technique projects one
gradient vector onto the orthogonal plane of the other within conflicting
client pairs. Extensive experiments demonstrate that FedGH consistently
enhances multiple state-of-the-art FL baselines across diverse benchmarks and
non-IID scenarios. Notably, FedGH yields more significant improvements in
scenarios with stronger heterogeneity. As a plug-and-play module, FedGH can be
seamlessly integrated into any FL framework without requiring hyperparameter
tuning.
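The projection step described in the abstract (projecting one gradient onto the orthogonal plane of the other within a conflicting pair) can be sketched as follows; the function name and the toy pairing are assumptions, not the paper's server-side implementation:

```python
import numpy as np

def harmonize(g_i, g_j):
    """If two client gradients conflict (negative inner product),
    project g_i onto the plane orthogonal to g_j; otherwise leave it
    unchanged. A sketch of the Gradient Harmonization idea."""
    dot = np.dot(g_i, g_j)
    if dot < 0:
        g_i = g_i - (dot / np.dot(g_j, g_j)) * g_j
    return g_i

g1 = np.array([1.0, 0.0])
g2 = np.array([-1.0, 1.0])       # conflicts with g1 (dot = -1)
g1_h = harmonize(g1, g2)
print(np.dot(g1_h, g2))          # ~0: the conflicting component is removed
```

Non-conflicting pairs (non-negative inner product) pass through unchanged, so the projection only activates where heterogeneity actually causes interference.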
Patch-based 3D Natural Scene Generation from a Single Example
We target a 3D generative model for general natural scenes that are typically
unique and intricate. The lack of necessary volumes of training data, together
with the difficulty of ad hoc designs in the presence of varying scene
characteristics, renders existing setups intractable. Inspired by classical
patch-based image models, we advocate for synthesizing 3D scenes at the patch
level, given a single example. At the core of this work lie key algorithmic
designs w.r.t. the scene representation and the generative patch
nearest-neighbor module, which address the unique challenges arising from
lifting the classical 2D patch-based framework to 3D generation. These design
choices, on a
collective level, contribute to a robust, effective, and efficient model that
can generate high-quality general natural scenes with both realistic geometric
structure and visual appearance, in large quantities and varieties, as
demonstrated upon a variety of exemplar scenes.
Comment: 23 pages, 26 figures, accepted by CVPR 2023. Project page:
http://weiyuli.xyz/Sin3DGen
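The patch nearest-neighbor step at the core of such methods can be illustrated in 2D (the paper lifts the idea to 3D scene representations). The function name and the brute-force matching below are simplified assumptions, not the paper's implementation:

```python
import numpy as np

def patch_nearest_neighbors(target, exemplar, p=3):
    """For every pxp patch of `target`, return the best-matching pxp
    patch from `exemplar` under L2 distance. A 2D toy version of the
    generative patch nearest-neighbor idea."""
    def patches(img):
        h, w = img.shape
        return np.array([img[i:i + p, j:j + p].ravel()
                         for i in range(h - p + 1)
                         for j in range(w - p + 1)])
    tp, ep = patches(target), patches(exemplar)
    # pairwise squared distances between all target and exemplar patches
    d2 = ((tp[:, None, :] - ep[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)          # nearest exemplar patch per target patch
    return ep[idx]

rng = np.random.default_rng(0)
ex = rng.random((8, 8))              # toy "exemplar"
noisy = ex + 0.05 * rng.random((8, 8))
matched = patch_nearest_neighbors(noisy, ex)   # shape: (36, 9) patch vectors
```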
Example-based Motion Synthesis via Generative Motion Matching
We present GenMM, a generative model that "mines" as many diverse motions as
possible from a single or few example sequences. In stark contrast to existing
data-driven methods, which typically require long offline training time, are
prone to visual artifacts, and tend to fail on large and complex skeletons,
GenMM inherits the training-free nature and the superior quality of the
well-known Motion Matching method. GenMM can synthesize a high-quality motion
within a fraction of a second, even with highly complex and large skeletal
structures. At the heart of our generative framework lies the generative motion
matching module, which uses bidirectional visual similarity as the
generative cost function for motion matching, and operates in a multi-stage
framework to progressively refine a random guess using exemplar motion matches.
In addition to diverse motion generation, we show the versatility of our
generative framework by extending it to a number of scenarios that are not
possible with motion matching alone, including motion completion, key
frame-guided generation, infinite looping, and motion reassembly. Code and data
for this paper are at https://wyysf-98.github.io/GenMM/
Comment: SIGGRAPH 2023. Project page: https://wyysf-98.github.io/GenMM/,
Video: https://www.youtube.com/watch?v=lehnxcade4
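Bidirectional visual similarity rewards a generation whose patches all come from the exemplar (coherence) while also covering every exemplar patch (completeness). A 1D-over-time sketch of such a cost, with the function name and window scheme as assumptions rather than GenMM's exact formulation:

```python
import numpy as np

def bidirectional_similarity(gen, ex, p=4):
    """Bidirectional dissimilarity between a generated sequence `gen` and
    an exemplar `ex` (frames x features): lower is better, 0 means every
    generated window exists in the exemplar and vice versa."""
    def windows(x):
        return np.array([x[i:i + p].ravel() for i in range(len(x) - p + 1)])
    g, e = windows(gen), windows(ex)
    d2 = ((g[:, None, :] - e[None, :, :]) ** 2).sum(-1)
    coherence = d2.min(axis=1).mean()     # each generated window has a source
    completeness = d2.min(axis=0).mean()  # each exemplar window is used
    return coherence + completeness

ex = np.sin(np.linspace(0, 6, 32))[:, None]   # toy 1-DoF "motion" signal
print(bidirectional_similarity(ex.copy(), ex))  # 0.0 for identical sequences
```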
MoCo-Flow: Neural Motion Consensus Flow for Dynamic Humans in Stationary Monocular Cameras
Synthesizing novel views of dynamic humans from stationary monocular cameras
is a popular scenario. This is particularly attractive as it does not require
static scenes, controlled environments, or specialized hardware. In contrast to
techniques that exploit multi-view observations to constrain the modeling,
given a single fixed viewpoint only, the problem of modeling the dynamic scene
is significantly more under-constrained and ill-posed. In this paper, we
introduce Neural Motion Consensus Flow (MoCo-Flow), a representation that
models the dynamic scene using a 4D continuous time-variant function. The
proposed representation is learned via an optimization that models the dynamic
scene by minimizing the rendering error over all observed images. At the
heart of our work lies a novel optimization formulation, which is constrained
by a motion consensus regularization on the motion flow. We extensively
evaluate MoCo-Flow on several datasets that contain human motions of varying
complexity, and compare, both qualitatively and quantitatively, to several
baseline methods and variants of our methods. Pretrained model, code, and data
will be released for research purposes upon paper acceptance.
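The objective described above, rendering error constrained by a motion consensus regularizer, can be sketched as a composite loss. The function name, the quadratic regularizer, and the weighting are assumptions for illustration, not MoCo-Flow's exact formulation:

```python
import numpy as np

def moco_objective(rendered, observed, flow, consensus_flow, lam=0.1):
    """Toy composite loss: photometric rendering error plus a
    motion-consensus term pulling per-point motion flow toward a shared
    consensus flow."""
    photometric = ((rendered - observed) ** 2).mean()
    consensus = ((flow - consensus_flow) ** 2).mean()
    return photometric + lam * consensus

obs = np.zeros((4, 4, 3))            # a tiny "observed frame"
flow = np.ones((4, 4, 2))            # per-pixel motion flow
loss = moco_objective(obs, obs, flow, flow)
print(loss)                          # 0.0 when render and flow both agree
```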
sEMG-Based Continuous Estimation of Finger Kinematics via Large-Scale Temporal Convolutional Network
- …