Astaxanthin protects against MPP+-induced oxidative stress in PC12 cells via the HO-1/NOX2 axis
BACKGROUND: Although the etiology of Parkinson's disease (PD) remains unclear, increasing evidence has shown that oxidative stress plays an important role in its pathogenesis and in that of other neurodegenerative disorders. NOX2, a cytochrome subunit of NADPH oxidase (NOX), transports electrons across the plasma membrane to generate ROS, contributing to both physiological and pathological processes. Heme oxygenase-1 (HO-1) can be rapidly induced by oxidative stress and other noxious stimuli in the brain and other tissues. Astaxanthin (ATX), a carotenoid with antioxidant activity 100–1000 times greater than that of vitamin E, is a candidate neuroprotectant. The present study investigated the neuroprotective effects of ATX against MPP+-induced oxidative stress in PC12 cells. RESULTS: MPP+ significantly decreased cell viability (MTT assay) in a concentration-dependent manner. Hemin, SnPPIX and ATX did not exhibit any cytotoxic effects on PC12 cells. Pretreatment with ATX (5, 10, 20 μM) decreased intracellular ROS production in the MPP+ group by 13.06%, 22.13%, and 27.86%, respectively. MPP+ increased NOX2, NRF2 and HO-1 protein expression compared with the control (p < 0.05). Co-treatment with hemin or ATX suppressed NOX2 expression (p < 0.01) and greatly increased NRF2 and HO-1 expression (p < 0.01). MPP+ treatment up-regulated both NOX2 (p < 0.01) and HO-1 (p < 0.01) mRNA levels. Co-treatment with hemin or ATX significantly increased HO-1 mRNA levels (p < 0.01) and decreased NOX2 mRNA levels (p < 0.01). MPP+ increased NOX2 and HO-1 expression, with considerable fluorescence extending from the perinuclear region toward the periphery; this was attenuated by DPI. Co-treatment with hemin or ATX significantly up-regulated HO-1 expression and decreased NOX2 expression, with fluorescence intensity stronger than in the control and MPP+ groups. CONCLUSIONS: ATX suppresses MPP+-induced oxidative stress in PC12 cells via the HO-1/NOX2 axis.
ATX should be strongly considered as a potential neuroprotectant and adjuvant therapy for patients with Parkinson's disease.
Simple and Efficient Heterogeneous Graph Neural Network
Heterogeneous graph neural networks (HGNNs) have a powerful capability to embed
rich structural and semantic information of a heterogeneous graph into node
representations. Existing HGNNs inherit many mechanisms from graph neural
networks (GNNs) over homogeneous graphs, especially the attention mechanism and
the multi-layer structure. These mechanisms bring excessive complexity, but
few studies have examined whether they are actually effective on heterogeneous graphs.
This paper conducts an in-depth and detailed study of these mechanisms and
proposes Simple and Efficient Heterogeneous Graph Neural Network (SeHGNN). To
easily capture structural information, SeHGNN pre-computes the neighbor
aggregation using a light-weight mean aggregator, which reduces complexity by
removing overused neighbor attention and avoiding repeated neighbor aggregation
in every training epoch. To better utilize semantic information, SeHGNN adopts
the single-layer structure with long metapaths to extend the receptive field,
as well as a transformer-based semantic fusion module to fuse features from
different metapaths. As a result, SeHGNN exhibits the characteristics of simple
network structure, high prediction accuracy, and fast training speed. Extensive
experiments on five real-world heterogeneous graphs demonstrate the superiority
of SeHGNN over state-of-the-art methods in both accuracy and training speed.
Comment: Accepted by AAAI 202
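The pre-computation idea above can be illustrated with a minimal sketch (not the authors' code): the mean of each node's neighbor features along one metapath is computed once, before training, so every epoch reuses the cached result instead of re-aggregating neighbors.

```python
# Minimal sketch (assumed data layout, not SeHGNN's implementation):
# pre-compute mean neighbor aggregation once, outside the training loop.

def precompute_mean_aggregation(features, neighbors):
    """features: node id -> feature vector (list of floats);
    neighbors: node id -> list of neighbor ids along one metapath."""
    aggregated = {}
    for node, nbrs in neighbors.items():
        if not nbrs:
            # Isolated node: aggregate to a zero vector of the same dimension.
            aggregated[node] = [0.0] * len(features[node])
            continue
        dim = len(features[nbrs[0]])
        mean = [0.0] * dim
        for n in nbrs:
            for d in range(dim):
                mean[d] += features[n][d]
        aggregated[node] = [v / len(nbrs) for v in mean]
    return aggregated

features = {0: [1.0, 2.0], 1: [3.0, 4.0], 2: [5.0, 6.0]}
neighbors = {0: [1, 2], 1: [0], 2: []}
cached = precompute_mean_aggregation(features, neighbors)
print(cached[0])  # mean of nodes 1 and 2 -> [4.0, 5.0]
```

Because this aggregation has no learnable parameters, it can be computed once offline; the trainable layers then operate only on the cached per-metapath vectors, which is where the claimed speedup comes from.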
CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenarios
This paper focuses on the challenge of answering questions in scenarios that
are composed of rich and complex dynamic audio-visual components. Although
existing Multimodal Large Language Models (MLLMs) can respond to audio-visual
content, these responses are sometimes ambiguous and fail to describe specific
audio-visual events. To overcome this limitation, we introduce CAT, which
enhances MLLMs in three ways: 1) besides directly bridging audio and
video, we design a clue aggregator that aggregates question-related clues in
dynamic audio-visual scenarios to enrich the detailed knowledge required by
large language models; 2) CAT is trained on a mixed multimodal dataset,
allowing direct application in audio-visual scenarios; notably, we collect an
audio-visual joint instruction dataset named AVinstruct to further enhance
the capacity of CAT to model cross-semantic correlations; 3) we propose AI-assisted
ambiguity-aware direct preference optimization, a strategy specialized in
retraining the model to favor non-ambiguous responses and to improve its
ability to localize specific audio-visual objects. Extensive experimental
results demonstrate that CAT outperforms existing methods on multimodal tasks,
especially in Audio-Visual Question Answering (AVQA) tasks. The codes and the
collected instructions are released at https://github.com/rikeilong/Bay-CAT
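For context, the direct preference optimization (DPO) objective that the abstract's ambiguity-aware variant builds on can be sketched as follows. This is the standard DPO loss (Rafailov et al., 2023), not the paper's modified version; the log-probability inputs are hypothetical summed token log-probabilities of the preferred (w) and dispreferred (l) responses under the policy and a frozen reference model.

```python
import math

# Standard DPO loss sketch (assumption: illustrates the base objective only,
# not CAT's AI-assisted ambiguity-aware variant).
def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # Implicit reward margin between preferred and dispreferred responses,
    # measured relative to the reference model and scaled by beta.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log(sigmoid(margin)): small when the policy prefers the chosen response.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

loss = dpo_loss(-10.0, -12.0, -11.0, -11.0)
print(round(loss, 3))  # margin = 0.2, loss = -log(sigmoid(0.2)) ≈ 0.598
```

In the ambiguity-aware setting described above, the preferred response would be the non-ambiguous one, so minimizing this loss pushes the model away from vague descriptions of audio-visual events.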
Rethinking Efficiency and Redundancy in Training Large-scale Graphs
Large-scale graphs are ubiquitous in real-world scenarios and can be processed
by Graph Neural Networks (GNNs) to generate representations for downstream
tasks. Given the abundant information and complex topology of a large-scale
graph, we argue that redundancy exists in such graphs and degrades
training efficiency. Unfortunately, limited model scalability severely restricts
the efficiency of training large-scale graphs via vanilla GNNs. Despite recent
advances in sampling-based training methods, sampling-based GNNs generally
overlook the redundancy issue, so training these models on large-scale graphs
still takes an intolerably long time. We therefore propose to drop redundancy and
improve the efficiency of training GNNs on large-scale graphs by rethinking the
inherent characteristics of a graph.
In this paper, we propose a once-for-all method, termed DropReef,
to drop the redundancy in large-scale graphs. Specifically, we first conduct
preliminary experiments to explore potential redundancy in large-scale graphs.
Next, we present a metric to quantify the neighbor heterophily of all nodes in
a graph. Based on both experimental and theoretical analysis, we reveal the
redundancy in a large-scale graph, i.e., nodes with high neighbor heterophily
and a large number of neighbors. We then propose DropReef to detect and drop
the redundancy in large-scale graphs once and for all, helping reduce the
training time while ensuring no sacrifice in the model accuracy. To demonstrate
the effectiveness of DropReef, we apply it to recent state-of-the-art
sampling-based GNNs for training large-scale graphs, owing to the high
accuracy of such models. With DropReef, the training efficiency of these
models can be greatly improved. DropReef is highly compatible and is performed
offline, benefiting current and future state-of-the-art sampling-based GNNs
to a significant extent.
Comment: 11 Pages
A Survey of Graph Pre-processing Methods: From Algorithmic to Hardware Perspectives
Graph-related applications have experienced significant growth in academia
and industry, driven by the powerful representation capabilities of graphs.
However, efficiently executing these applications faces various challenges,
such as load imbalance, random memory access, etc. To address these challenges,
researchers have proposed various acceleration systems, including software
frameworks and hardware accelerators, all of which incorporate graph
pre-processing (GPP). GPP serves as a preparatory step before the formal
execution of applications, involving techniques such as sampling and reordering.
However, GPP execution often remains overlooked, as the primary focus is
directed towards enhancing graph applications themselves. This oversight is
concerning, especially considering the explosive growth of real-world graph
data, where GPP becomes essential and even dominates system running overhead.
Furthermore, GPP methods exhibit significant variations across devices and
applications due to high customization. Unfortunately, no comprehensive work
systematically summarizes GPP. To address this gap and foster a better
understanding of GPP, we present a comprehensive survey dedicated to this area.
We propose a double-level taxonomy of GPP, considering both algorithmic and
hardware perspectives. By surveying relevant works, we illustrate our
taxonomy and conduct a thorough analysis and summary of diverse GPP techniques.
Lastly, we discuss challenges in GPP and potential future directions.
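One concrete GPP technique mentioned above, reordering, can be illustrated with a minimal sketch: degree-based node reordering relabels high-degree nodes with small contiguous ids to improve cache locality during traversal. This is a deliberately simple illustration; production systems use more elaborate schemes (e.g., community-aware or locality-aware orderings).

```python
# Minimal sketch of one common GPP technique: degree-based node reordering.
# (Illustrative only; not taken from any specific system in the survey.)

def degree_reorder(adjacency):
    """adjacency: node id -> list of neighbor ids.
    Returns a mapping old id -> new id, highest degree first."""
    order = sorted(adjacency, key=lambda v: len(adjacency[v]), reverse=True)
    return {old: new for new, old in enumerate(order)}

adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 2]}
mapping = degree_reorder(adj)
print(mapping[1])  # highest-degree node (degree 3) gets id 0
```

Because such a pass runs once before the application proper, its cost is exactly the kind of pre-processing overhead the survey argues deserves systematic study.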