Regulation of Microglial Activation in Stroke in Aged Mice: A Translational Study
Numerous neurochemical changes occur with aging, and stroke mainly affects the elderly. Our previous study found that interferon regulatory factor 5 (IRF5) and interferon regulatory factor 4 (IRF4) regulate neuroinflammation in young stroke mice. However, whether the IRF5-IRF4 regulatory axis has the same effect in aged brains is not known. In this study, aged (18-20-month-old) microglial IRF5 or IRF4 conditional knockout (CKO) mice were subjected to a 60-min middle cerebral artery occlusion (MCAO). Stroke outcomes were quantified at 3 d after MCAO. Flow cytometry and ELISA were performed to evaluate microglial activation and immune responses. We found that aged microglia express higher levels of IRF5 and lower levels of IRF4 than young microglia after stroke. IRF5 CKO aged mice had improved stroke outcomes, whereas worse outcomes were seen in IRF4 CKO mice vs. their flox controls. IRF5 CKO aged microglia had significantly lower levels of IL-1β and CD68 than controls, whereas significantly higher levels of IL-1β and TNF-α were seen in IRF4 CKO vs. control microglia. Plasma levels of TNF-α and MIP-1α were decreased in IRF5 CKO vs. flox aged mice, and IL-1β/IL-6 levels were increased in IRF4 CKO vs. controls. Levels of the anti-inflammatory cytokines IL-4 and IL-10 were higher in IRF5 CKO and lower in IRF4 CKO aged mice vs. their flox controls. IRF5 and IRF4 signaling drives microglial pro- and anti-inflammatory responses, respectively; microglial IRF5 is detrimental and IRF4 beneficial for aged mice in stroke. The IRF5-IRF4 axis is a promising target for developing new, effective therapeutic strategies for cerebral ischemia.
Computational Optics Meet Domain Adaptation: Transferring Semantic Segmentation Beyond Aberrations
Semantic scene understanding with Minimalist Optical Systems (MOS) in mobile
and wearable applications remains a challenge due to the corrupted imaging
quality induced by optical aberrations. However, previous works focus only on
improving the subjective imaging quality through computational optics, i.e.,
Computational Imaging (CI) techniques, ignoring their feasibility for semantic
segmentation. In this paper, we are the first to investigate Semantic
Segmentation under Optical Aberrations (SSOA) of MOS. To benchmark SSOA, we construct
Virtual Prototype Lens (VPL) groups through optical simulation, generating
Cityscapes-ab and KITTI-360-ab datasets under different behaviors and levels of
aberrations. We investigate SSOA from an unsupervised domain adaptation
perspective to address the scarcity of labeled aberration data in real-world
scenarios. Further, we propose Computational Imaging Assisted Domain Adaptation
(CIADA) to leverage prior knowledge of CI for robust performance in SSOA. Based
on our benchmark, we conduct experiments on the robustness of state-of-the-art
segmenters against aberrations. In addition, extensive evaluations of possible
solutions to SSOA reveal that CIADA achieves superior performance under all
aberration distributions, paving the way for the applications of MOS in
semantic scene understanding. Code and dataset will be made publicly available
at https://github.com/zju-jiangqi/CIADA.
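As a rough illustration of how aberrated training data can be synthesized, the sketch below degrades a clean image by convolving it with a synthetic point spread function (PSF). The Gaussian PSF and the function names are illustrative assumptions; the paper's Virtual Prototype Lens simulation models real lens behaviors and is far more detailed.

```python
# Minimal sketch (not the authors' simulation pipeline): degrade a clean
# Cityscapes-style image with a synthetic, spatially invariant PSF to mimic
# optical aberrations. The Gaussian-blob PSF is an illustrative assumption.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size: int = 15, sigma: float = 2.5) -> np.ndarray:
    """Toy PSF standing in for an aberrated lens response."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def apply_aberration(image: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Convolve each channel of an HxWx3 float image in [0, 1] with the PSF."""
    blurred = np.stack(
        [fftconvolve(image[..., c], psf, mode="same") for c in range(image.shape[-1])],
        axis=-1,
    )
    return np.clip(blurred, 0.0, 1.0)

if __name__ == "__main__":
    clean = np.random.rand(256, 512, 3)          # placeholder for a real frame
    degraded = apply_aberration(clean, gaussian_psf())
    print(degraded.shape)
```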
STGIN: Spatial-Temporal Graph Interaction Network for Large-scale POI Recommendation
In Location-Based Services, Point-Of-Interest (POI) recommendation plays a
crucial role in both user experience and business opportunities. Graph neural
networks have been proven effective in providing personalized POI
recommendation services. However, there are still two critical challenges.
First, existing graph models attempt to capture users' diversified interests
through a unified graph, which limits their ability to express interests in
various spatial-temporal contexts. Second, the efficiency limitations of graph
construction and graph sampling in large-scale systems make it difficult to
adapt quickly to new real-time interests. To tackle the above challenges, we
propose a novel Spatial-Temporal Graph Interaction Network. Specifically, we
construct subgraphs of spatial, temporal, spatial-temporal, and global views
respectively to precisely characterize the user's interests in various
contexts. In addition, we design an industry-friendly framework to track the
user's latest interests. Extensive experiments on the real-world dataset show
that our method outperforms state-of-the-art models. This work has been
successfully deployed on a large e-commerce platform, delivering a 1.1% CTR and
6.3% RPM improvement.
Comment: Accepted by CIKM 202
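For intuition only, the sketch below splits user-POI interactions into spatial, temporal, spatial-temporal, and global edge sets that could back the separate subgraph views. The bucketing rules (geohash prefix length, coarse time-of-day slots) and all names are illustrative assumptions, not the deployed STGIN pipeline.

```python
# Minimal sketch, not the production system: group user-POI interaction edges
# by spatial, temporal, spatial-temporal, and global context keys.
from collections import defaultdict
from typing import NamedTuple

class Interaction(NamedTuple):
    user: str
    poi: str
    geohash: str   # coarse location code of the interaction (assumed field)
    hour: int      # hour of day, 0-23

def build_views(logs: list[Interaction], geo_prefix: int = 4):
    views = {k: defaultdict(set) for k in ("spatial", "temporal", "st", "global")}
    for rec in logs:
        region = rec.geohash[:geo_prefix]                    # spatial bucket
        slot = "morning" if rec.hour < 12 else "evening"     # temporal bucket
        views["spatial"][region].add((rec.user, rec.poi))
        views["temporal"][slot].add((rec.user, rec.poi))
        views["st"][(region, slot)].add((rec.user, rec.poi))
        views["global"]["all"].add((rec.user, rec.poi))
    return views

logs = [Interaction("u1", "p9", "wx4g0b", 9), Interaction("u1", "p3", "wx4g0b", 20)]
print({k: dict(v) for k, v in build_views(logs).items()})
```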
Effects of tunable, 3D-bioprinted hydrogels on human brown adipocyte behavior and metabolic function
Obesity and its related health complications cause billions of dollars in healthcare costs annually in the United States, and safe, long-lasting anti-obesity approaches have yet to be developed. Harnessing brown adipose tissue (BAT) is a promising approach, as it consumes fat for energy expenditure. However, the effect of the microenvironment on human thermogenic brown adipogenesis, and how to generate BAT of clinically relevant size and function, are still unknown. In our current study, we evaluated the effects of endothelial growth medium exposure on brown adipogenesis of human brown adipose progenitors (BAP). We found that pre-exposing BAP to angiogenic factors promoted brown adipogenic differentiation and metabolic activity. We further 3D bioprinted brown and white adipose progenitors within hydrogel-based bioink with controllable physicochemical properties and evaluated the cell responses in 3D bioprinted environments. We used soft, stiff, and stiff-porous constructs to encapsulate the cells. All three types had high cell viability and allowed for varying levels of function for both white and brown adipocytes. We found that the soft hydrogel constructs promoted white adipogenesis, while the stiff-porous hydrogel constructs improved both white and brown adipogenesis and were the optimal condition for promoting brown adipogenesis. Consistently, stiff-porous hydrogel constructs showed higher metabolic activities than stiff hydrogel constructs, as assessed by 2-deoxy glucose uptake (2-DOG) and oxygen consumption rate (OCR). These findings show that the physicochemical environment affects brown adipogenesis and metabolic function, and that further tuning could optimize their function. Our results also demonstrate that 3D bioprinting of brown adipose tissues with clinically relevant size and metabolic activity has the potential to be a viable option in the treatment of obesity and type 2 diabetes.
RoboCoDraw: Robotic Avatar Drawing with GAN-based Style Transfer and Time-efficient Path Optimization
Robotic drawing has become increasingly popular as an entertainment and
interactive tool. In this paper we present RoboCoDraw, a real-time
collaborative robot-based drawing system that draws stylized human face
sketches interactively in front of human users, using Generative Adversarial
Network (GAN)-based style transfer and Random-Key Genetic Algorithm
(RKGA)-based path optimization. The proposed RoboCoDraw system takes
a real human face image as input, converts it to a stylized avatar, then draws
it with a robotic arm. A core component of this system is our proposed
AvatarGAN, which generates a cartoon avatar face image from a real human face.
AvatarGAN is trained with unpaired face and avatar images only, and it
generates avatar images with much better likeness to the input human faces
than the vanilla CycleGAN. After the avatar image is generated, it
is fed to a line extraction algorithm and converted to sketches. An RKGA-based
path optimization algorithm is applied to find a time-efficient robotic drawing
path to be executed by the robotic arm. We demonstrate the capability of
RoboCoDraw on various face images using a lightweight, safe collaborative robot
UR5.
Comment: Accepted by AAAI202
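To illustrate the random-key idea, the sketch below orders drawing strokes with a generic Random-Key Genetic Algorithm so that pen travel between strokes is short. It is a hedged re-implementation of the standard RKGA scheme under made-up stroke data and hyperparameters, not the authors' optimizer.

```python
# Minimal RKGA sketch for stroke ordering; stroke data and rates are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Each stroke is (start_xy, end_xy); the cost of an ordering is the total
# pen-up travel from one stroke's end point to the next stroke's start point.
strokes = [(rng.random(2), rng.random(2)) for _ in range(20)]

def travel_cost(order: np.ndarray) -> float:
    return sum(
        np.linalg.norm(strokes[a][1] - strokes[b][0])
        for a, b in zip(order[:-1], order[1:])
    )

def decode(keys: np.ndarray) -> np.ndarray:
    # Random keys -> permutation: sorting the keys yields a stroke order.
    return np.argsort(keys)

def rkga(pop_size=60, generations=200, elite_frac=0.2, mutant_frac=0.1):
    n = len(strokes)
    pop = rng.random((pop_size, n))
    n_elite, n_mut = int(pop_size * elite_frac), int(pop_size * mutant_frac)
    for _ in range(generations):
        fitness = np.array([travel_cost(decode(ind)) for ind in pop])
        pop = pop[np.argsort(fitness)]                 # best individuals first
        elites = pop[:n_elite]
        mutants = rng.random((n_mut, n))               # fresh random immigrants
        children = []
        for _ in range(pop_size - n_elite - n_mut):
            elite = elites[rng.integers(n_elite)]
            other = pop[rng.integers(pop_size)]
            mask = rng.random(n) < 0.7                 # biased uniform crossover
            children.append(np.where(mask, elite, other))
        pop = np.vstack([elites, mutants, children])
    fitness = np.array([travel_cost(decode(ind)) for ind in pop])
    best = decode(pop[np.argmin(fitness)])
    return best, travel_cost(best)

order, cost = rkga()
print(order, round(cost, 3))
```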
Chromatin Dynamics: Chromatin Remodeler, Epigenetic Modification and Diseases
Gene transcription patterns are regulated in response to extracellular stimuli and intracellular developmental programs. Recent studies have shown that chromatin dynamics, which include nucleosome dynamics and histone modification, play a crucial role in gene expression. Chromatin dynamics are regulated by chromatin-modifying enzymes, including chromatin remodeling complexes, and by histone posttranslational modifications. Multiple studies have shown that dysregulation of chromatin dynamics and aberrant histone modifications result in the occurrence of various diseases and cancers. Moreover, frequent mutations and chromosomal aberrations in the genes associated with subunits of the chromatin remodeling complexes have been detected in various cancer types. In this review, we highlight the current understanding of the orchestration of nucleosome positioning and histone modification, and the importance of these properly regulated dynamics. We also discuss the consequences of aberrant chromatin dynamics, which result in disease progression, and provide insights for potential clinical applications.
Minimalist and High-Quality Panoramic Imaging with PSF-aware Transformers
High-quality panoramic images with a Field of View (FoV) of 360-degree are
essential for contemporary panoramic computer vision tasks. However,
conventional imaging systems come with sophisticated lens designs and heavy
optical components. This disqualifies their usage in many mobile and wearable
applications where thin and portable, minimalist imaging systems are desired.
In this paper, we propose a Panoramic Computational Imaging Engine (PCIE) to
address minimalist and high-quality panoramic imaging. With less than three
spherical lenses, a Minimalist Panoramic Imaging Prototype (MPIP) is
constructed based on the design of the Panoramic Annular Lens (PAL), but with
low-quality imaging results due to aberrations and small image plane size. We
propose two pipelines, i.e. Aberration Correction (AC) and Super-Resolution and
Aberration Correction (SR&AC), to solve the image quality problems of MPIP,
with imaging sensors of small and large pixel size, respectively. To provide a
universal network for the two pipelines, we leverage the information from the
Point Spread Function (PSF) of the optical system and design a PSF-aware
Aberration-image Recovery Transformer (PART), in which the self-attention
calculation and feature extraction are guided via PSF-aware mechanisms. We
train PART on synthetic image pairs from simulation and put forward the PALHQ
dataset to fill the gap of real-world high-quality PAL images for low-level
vision. A comprehensive variety of experiments on synthetic and real-world
benchmarks demonstrates the impressive imaging results of PCIE and the
effectiveness of plug-and-play PSF-aware mechanisms. We further deliver
heuristic experimental findings for minimalist and high-quality panoramic
imaging. Our dataset and code will be available at
https://github.com/zju-jiangqi/PCIE-PART.
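As a hedged sketch of one plausible PSF-aware mechanism, the module below conditions convolutional features on a flattened PSF descriptor via FiLM-style per-channel scale and shift. It is not the PART architecture; the layer sizes and the use of a flattened PSF are assumptions for illustration.

```python
# Minimal sketch of PSF-conditioned feature modulation, not the paper's PART.
import torch
import torch.nn as nn

class PSFModulation(nn.Module):
    def __init__(self, psf_dim: int, channels: int):
        super().__init__()
        # Map a flattened PSF descriptor to per-channel scale and shift.
        self.to_scale_shift = nn.Sequential(
            nn.Linear(psf_dim, 2 * channels), nn.SiLU(),
            nn.Linear(2 * channels, 2 * channels),
        )

    def forward(self, feats: torch.Tensor, psf: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W), psf: (B, psf_dim)
        scale, shift = self.to_scale_shift(psf).chunk(2, dim=-1)
        scale = scale.unsqueeze(-1).unsqueeze(-1)
        shift = shift.unsqueeze(-1).unsqueeze(-1)
        return feats * (1 + scale) + shift

feats = torch.randn(2, 64, 32, 32)
psf = torch.randn(2, 15 * 15)        # e.g. a flattened 15x15 PSF kernel (assumed)
print(PSFModulation(15 * 15, 64)(feats, psf).shape)
```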
Fusing Monocular Images and Sparse IMU Signals for Real-time Human Motion Capture
Either RGB images or inertial signals have been used for the task of motion
capture (mocap), but combining them is a new and interesting topic. We
believe the two modalities are complementary and together can solve the inherent
difficulties of using a single input modality, including occlusions, extreme
lighting/texture, and out-of-view for visual mocap and global drifts for
inertial mocap. To this end, we propose a method that fuses monocular images
and sparse IMUs for real-time human motion capture. Our method contains a dual
coordinate strategy to fully explore the IMU signals with different goals in
motion capture. To be specific, besides one branch transforming the IMU signals
to the camera coordinate system to combine with the image information, there is
another branch to learn from the IMU signals in the body root coordinate system
to better estimate body poses. Furthermore, a hidden state feedback mechanism
is proposed for both branches to compensate for their own drawbacks in
extreme input cases. Thus our method can easily switch between the two kinds of
signals or combine them in different cases to achieve a robust mocap.
Quantitative and qualitative results demonstrate that by delicately
designing the fusion method, our technique significantly outperforms the
state-of-the-art vision, IMU, and combined methods on both global orientation
and local pose estimation. Our code is available for research at
https://shaohua-pan.github.io/robustcap-page/.
Comment: Accepted by SIGGRAPH ASIA 2023.
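The dual coordinate strategy can be pictured as expressing the same IMU orientation in two frames: the camera frame (to fuse with image cues) and the body-root frame (to estimate body pose). The minimal sketch below performs only that frame change with made-up calibration matrices; it is not the paper's calibration or fusion procedure.

```python
# Minimal sketch of re-expressing one IMU orientation in two target frames.
import numpy as np

def to_frame(R_frame_from_world: np.ndarray, R_world_sensor: np.ndarray) -> np.ndarray:
    """Re-express a sensor orientation (given in the world frame) in another frame."""
    return R_frame_from_world @ R_world_sensor

# Fake data: one IMU orientation in the world frame, plus two assumed calibrations.
R_world_sensor = np.eye(3)
R_cam_from_world = np.array([[0, -1, 0], [0, 0, -1], [1, 0, 0]], dtype=float)
R_root_from_world = np.eye(3)

R_cam_sensor = to_frame(R_cam_from_world, R_world_sensor)    # camera-frame branch
R_root_sensor = to_frame(R_root_from_world, R_world_sensor)  # body-root-frame branch
print(R_cam_sensor, R_root_sensor, sep="\n")
```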