Detecting Slow Wave Sleep Using a Single EEG Signal Channel
Background: In addition to the cost and complexity of processing multiple signal channels, manual sleep staging is also tedious, time-consuming, and error-prone. The aim of this paper is to propose an automatic slow wave sleep (SWS) detection method that uses only one channel of the electroencephalography (EEG) signal.
New Method: The proposed approach distinguishes itself from previous automatic sleep staging methods by using three specially designed feature groups. The first feature group characterizes the waveform pattern of the EEG signal. The remaining two feature groups are developed to resolve the difficulties caused by interpersonal EEG signal differences.
Results and Comparison with Existing Methods: The proposed approach was tested on 1,003 subjects, and the SWS detection results show a kappa coefficient of 0.66, an accuracy of 0.973, a sensitivity of 0.644, and a positive predictive value of 0.709. After excluding sleep apnea patients and subjects older than 55 years, the SWS detection results improved to a kappa coefficient of 0.76, an accuracy of 0.963, a sensitivity of 0.758, and a positive predictive value of 0.812.
Conclusions: With the newly developed signal features, this study proposed and tested a single-channel EEG-based SWS detection method. The effectiveness of the proposed approach was demonstrated by applying it to detect the SWS of 1,003 subjects. Our test results show that a low SWS ratio and sleep apnea can degrade SWS detection performance. The results also show that a large and accurately staged sleep dataset is of great importance when developing automatic sleep staging methods.
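As a rough illustration of the single-channel idea (not the paper's actual feature groups or classifier), the sketch below scores each 30 s epoch by its delta-band power ratio, a common proxy for slow-wave activity, and thresholds it. The sampling rate, threshold, and feature choice are placeholder assumptions.

```python
# Hypothetical single-channel SWS detector: delta-band (0.5-4 Hz) power ratio per
# 30 s epoch, thresholded. The paper's three feature groups and classifier are
# more elaborate; this is an illustrative stand-in, not the authors' method.
import numpy as np
from scipy.signal import welch

FS = 100            # sampling rate of the EEG channel, Hz (assumed)
EPOCH_SEC = 30      # standard sleep-staging epoch length

def delta_power_ratio(epoch: np.ndarray, fs: int = FS) -> float:
    """Fraction of total 0.5-30 Hz power that falls in the 0.5-4 Hz delta band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=4 * fs)
    total = psd[(freqs >= 0.5) & (freqs <= 30.0)].sum()
    delta = psd[(freqs >= 0.5) & (freqs <= 4.0)].sum()
    return float(delta / total) if total > 0 else 0.0

def detect_sws(eeg: np.ndarray, fs: int = FS, threshold: float = 0.55) -> np.ndarray:
    """Return a boolean SWS label for each 30 s epoch of a single-channel recording."""
    samples_per_epoch = fs * EPOCH_SEC
    n_epochs = len(eeg) // samples_per_epoch
    epochs = eeg[: n_epochs * samples_per_epoch].reshape(n_epochs, samples_per_epoch)
    ratios = np.array([delta_power_ratio(e, fs) for e in epochs])
    return ratios >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_eeg = rng.standard_normal(FS * EPOCH_SEC * 10)   # 10 epochs of noise
    print(detect_sws(fake_eeg))
```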
Constraining the nuclear symmetry energy and properties of neutron star from GW170817 by Bayesian analysis
Based on the distribution of tidal deformabilities and component masses of the binary neutron star merger GW170817, parametric equations of state (EOSs) are employed to probe the nuclear symmetry energy and the properties of neutron stars. To obtain a distribution of EOS parameters that is consistent with the observation, a Bayesian analysis is performed under the constraints of causality and of the observed maximum mass. From this analysis, it is found that the symmetry energy at twice the saturation density of nuclear matter can be constrained at the 90% credible level. Moreover, constraints on the radius and dimensionless tidal deformability of a canonical (1.4 solar-mass) neutron star are also obtained: 10.80 km ≤ R_{1.4} ≤ 13.20 km at the 90% credible level, with most probable values of R_{1.4} = 12.60 km and Λ_{1.4} = 500, respectively. With respect to the prior, the posterior result prefers a softer EOS, corresponding to a lower expected value of the symmetry energy, a smaller radius, and a smaller tidal deformability.
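As a schematic of how this kind of Bayesian EOS inference is typically set up (the notation below is illustrative and is not taken from the paper): the posterior over the EOS parameters θ combines a likelihood built from the GW170817 mass and tidal-deformability distributions with a prior that vetoes acausal EOSs and those that cannot support the observed maximum neutron-star mass.

```latex
% Schematic Bayesian setup for EOS inference; notation is illustrative.
\begin{align}
  P(\theta \mid D) &\propto \mathcal{L}(D \mid \theta)\,\pi(\theta),\\
  \mathcal{L}(D \mid \theta) &= \int P\big(m_1, m_2, \Lambda_1(m_1;\theta), \Lambda_2(m_2;\theta) \mid D\big)\,\mathrm{d}m_1\,\mathrm{d}m_2,\\
  \pi(\theta) &= 0 \quad \text{if the EOS violates causality or } M_{\max}(\theta) < M_{\text{obs}}.
\end{align}
```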
Bridging Data-Driven and Knowledge-Driven Approaches for Safety-Critical Scenario Generation in Automated Vehicle Validation
Automated driving vehicles (ADVs) promise to enhance driving efficiency and
safety, yet they face intricate challenges in safety-critical scenarios. As a
result, validating ADVs within generated safety-critical scenarios is essential
for both development and performance evaluations. This paper investigates the
complexities of employing two major scenario-generation solutions: data-driven
and knowledge-driven methods. Data-driven methods derive scenarios from
recorded datasets, efficiently generating scenarios by altering the existing
behavior or trajectories of traffic participants but often falling short in
considering ADV perception; knowledge-driven methods provide effective coverage
through expert-designed rules, but they may lead to inefficiency in generating
safety-critical scenarios within that coverage. To overcome these challenges,
we introduce BridgeGen, a safety-critical scenario generation framework,
designed to bridge the benefits of both methodologies. Specifically, by
utilizing ontology-based techniques, BridgeGen models the five scenario layers
of the operational design domain (ODD) from the knowledge-driven side, ensuring
broad coverage, and incorporates data-driven strategies to efficiently
generate safety-critical scenarios. An optimized scenario generation toolkit is
developed within BridgeGen. This expedites the crafting of safety-critical
scenarios through a combination of traditional optimization and reinforcement
learning schemes. Extensive experiments conducted with the Carla simulator
demonstrate the effectiveness of BridgeGen in generating diverse
safety-critical scenarios.
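The sketch below is a hypothetical, much-simplified rendering of the split the abstract describes: ODD-layer parameters come from an ontology-like specification (knowledge-driven coverage), and a naive random-search loop, standing in for BridgeGen's optimization and reinforcement learning schemes, pushes sampled scenarios toward criticality. All names and the criticality metric are illustrative, not BridgeGen's actual API.

```python
# Illustrative knowledge-driven + data-driven scenario generation sketch.
import random
from dataclasses import dataclass

# Knowledge-driven side: coarse ODD layers with admissible values.
ODD_LAYERS = {
    "road": ["straight", "curve", "intersection"],
    "traffic": ["sparse", "dense"],
    "weather": ["clear", "rain", "fog"],
    "ego_speed_mps": (5.0, 25.0),
    "cut_in_gap_m": (2.0, 30.0),
}

@dataclass
class Scenario:
    road: str
    traffic: str
    weather: str
    ego_speed_mps: float
    cut_in_gap_m: float

def sample_scenario() -> Scenario:
    """Sample one concrete scenario from the ODD layers (broad coverage)."""
    return Scenario(
        road=random.choice(ODD_LAYERS["road"]),
        traffic=random.choice(ODD_LAYERS["traffic"]),
        weather=random.choice(ODD_LAYERS["weather"]),
        ego_speed_mps=random.uniform(*ODD_LAYERS["ego_speed_mps"]),
        cut_in_gap_m=random.uniform(*ODD_LAYERS["cut_in_gap_m"]),
    )

def criticality(s: Scenario) -> float:
    """Toy surrogate for a simulator-derived risk score (higher = more critical).
    A real pipeline would execute the scenario in CARLA and measure, e.g.,
    time-to-collision for the ADV under test."""
    speed_term = s.ego_speed_mps / 25.0
    gap_term = 1.0 - (s.cut_in_gap_m - 2.0) / 28.0
    weather_term = {"clear": 0.0, "rain": 0.2, "fog": 0.4}[s.weather]
    return speed_term + gap_term + weather_term

def generate_critical_scenarios(n_iter: int = 500, keep: int = 10) -> list:
    """Search over ODD samples and keep the most critical scenarios."""
    candidates = [sample_scenario() for _ in range(n_iter)]
    return sorted(candidates, key=criticality, reverse=True)[:keep]

if __name__ == "__main__":
    for s in generate_critical_scenarios():
        print(round(criticality(s), 2), s)
```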
DiffusionGPT: LLM-Driven Text-to-Image Generation System
Diffusion models have opened up new avenues for the field of image
generation, resulting in the proliferation of high-quality models shared on
open-source platforms. However, a major challenge persists: current
text-to-image systems are often unable to handle diverse inputs or are limited
to single-model results. Current unification attempts often address only one of two
orthogonal aspects: i) parsing diverse prompts at the input stage; ii) activating
an expert model for output. To combine the best of both worlds, we propose
DiffusionGPT, which leverages a Large Language Model (LLM) to offer a unified
generation system capable of seamlessly accommodating various types of prompts
and integrating domain-expert models. DiffusionGPT constructs domain-specific
Trees for various generative models based on prior knowledge. When provided
with an input, the LLM parses the prompt and employs the Trees-of-Thought to
guide the selection of an appropriate model, thereby relaxing input constraints
and ensuring exceptional performance across diverse domains. Moreover, we
introduce Advantage Databases, where the Tree-of-Thought is enriched with human
feedback, aligning the model selection process with human preferences. Through
extensive experiments and comparisons, we demonstrate the effectiveness of
DiffusionGPT, showcasing its potential for pushing the boundaries of image
synthesis in diverse domains.
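The sketch below is a toy rendering of the flow the abstract describes, not DiffusionGPT's real code: a prompt is parsed, a tree of model categories is traversed in a Tree-of-Thought-like fashion, and the selected expert diffusion model is returned for generation. The LLM call, tree contents, and model names are all placeholders.

```python
# Illustrative LLM-driven expert-model selection over a domain tree.
from dataclasses import dataclass, field

@dataclass
class TreeNode:
    name: str
    children: list = field(default_factory=list)
    model_id: str = ""          # leaf nodes point at an expert model

# Toy domain tree built "from prior knowledge".
MODEL_TREE = TreeNode("root", [
    TreeNode("photorealistic", [
        TreeNode("people", model_id="photoreal-people-v1"),
        TreeNode("landscape", model_id="photoreal-landscape-v1"),
    ]),
    TreeNode("anime", model_id="anime-style-v2"),
])

def llm_choose(prompt: str, options: list) -> str:
    """Placeholder for an LLM call that picks the option best matching the prompt.
    Here: naive keyword matching instead of a real LLM."""
    scores = {o: int(o in prompt.lower()) for o in options}
    return max(scores, key=scores.get)

def select_expert_model(prompt: str, node: TreeNode = MODEL_TREE) -> str:
    """Walk the tree, asking the 'LLM' to choose a branch at each level."""
    while node.children:
        choice = llm_choose(prompt, [c.name for c in node.children])
        node = next(c for c in node.children if c.name == choice)
    return node.model_id

if __name__ == "__main__":
    print(select_expert_model("a photorealistic landscape of mountains at dawn"))
```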
Nanostructured Ni2SeS on Porous-Carbon Skeletons as Highly Efficient Electrocatalyst for Hydrogen Evolution in Acidic Medium
Nickel dichalcogenides have received extensive attention as promising noble-metal-free nanocatalysts for the hydrogen evolution reaction (HER). Nonetheless, their catalytic performance is restricted by sluggish reaction kinetics, a limited number of exposed active sites, and poor conductivity. In this work, we report an effective strategy to solve these problems using a newly designed porous-C/Ni2SeS nanocatalyst with Ni2SeS nanostubs anchored on porous-carbon skeletons. On the basis of three advantages, namely the enhanced intrinsic activity of the ternary sulfoselenide, the increased number of exposed active sites due to the 3D hollow substrate, and the improved conductivity provided by the porous-carbon skeletons, the resulting porous-C/Ni2SeS requires an overpotential of only 121 mV at a current density of 10 mA cm−2 with a Tafel slope of 78 mV dec−1 for hydrogen evolution in acidic media, together with good long-term stability. Density functional theory calculations also show that the Gibbs free energy of hydrogen adsorption on Ni2SeS is −0.23 eV, which not only is close to the ideal value (0 eV) and the Pt reference (−0.09 eV) but also is lower than those of NiS2 and NiSe2; in addition, a high density of electronic states exists in the vicinity of the Fermi level, which further improves its electrocatalytic performance. This work provides new insights into the rational design of ternary dichalcogenides and hollow-structure materials for practical applications in HER catalysis and energy fields.
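As a quick illustration of how the reported figures fit together, assuming ideal Tafel behaviour down to the exchange current density, the 121 mV overpotential at 10 mA cm−2 and the 78 mV dec−1 slope imply an exchange current density of roughly 0.28 mA cm−2; this is an estimate for illustration only, not a value reported in the work.

```latex
% Idealized Tafel relation applied to the reported numbers (illustrative estimate).
\begin{align}
  \eta &= b \log_{10}\!\left(\frac{j}{j_0}\right), \qquad b = 78~\mathrm{mV\,dec^{-1}},\\
  121~\mathrm{mV} &= 78~\mathrm{mV\,dec^{-1}} \cdot \log_{10}\!\left(\frac{10~\mathrm{mA\,cm^{-2}}}{j_0}\right)
  \;\Rightarrow\; j_0 \approx 10^{\,1 - 121/78}~\mathrm{mA\,cm^{-2}} \approx 0.28~\mathrm{mA\,cm^{-2}}.
\end{align}
```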
TNF-α
Ankylosing spondylitis (AS) is an autoimmune disease with unknown etiology. Dysregulated mesenchymal stem cell (MSC) apoptosis may contribute to the pathogenesis of autoimmune diseases. However, apoptosis of MSCs from patients with AS (ASMSCs) has not yet been investigated. The present study aims to assess the apoptosis of bone marrow-derived ASMSCs and to investigate the underlying mechanisms of altered ASMSC apoptosis. We successfully induced apoptosis of ASMSCs and MSCs from healthy donors (HDMSCs) using the combination of tumor necrosis factor alpha (TNF-α) and cycloheximide (CHX). We found that ASMSCs treated with TNF-α and CHX showed higher apoptosis levels than HDMSCs. During apoptosis, ASMSCs expressed significantly more TRAIL-R2, which activated both the death receptor pathway and the mitochondrial pathway by increasing the expression of FADD, cleaved caspase-8, cytosolic cytochrome C, and cleaved caspase-3. Inhibiting TRAIL-R2 expression using shRNA eliminated the apoptosis differences between HDMSCs and ASMSCs by partially reducing ASMSC apoptosis while minimally affecting that of HDMSCs. Furthermore, the expression levels of FADD, cleaved caspase-8, cytosolic cytochrome C, and cleaved caspase-3 were comparable between HDMSCs and ASMSCs after TRAIL-R2 inhibition. These results indicate that increased TRAIL-R2 expression results in enhanced ASMSC apoptosis and may contribute to AS pathogenesis.
Software-Hardware Co-design for Fast and Scalable Training of Deep Learning Recommendation Models
Deep learning recommendation models (DLRMs) are used across many
business-critical services at Facebook and are the single largest AI
application in terms of infrastructure demand in its data centers. In this
paper we discuss the SW/HW co-designed solution for high-performance
distributed training of large-scale DLRMs. We introduce a high-performance
scalable software stack based on PyTorch and pair it with the new evolution of
Zion platform, namely ZionEX. We demonstrate the capability to train very large
DLRMs with up to 12 trillion parameters and show that we can attain a 40x speedup
in time to solution over previous systems. We achieve this by: (i)
designing the ZionEX platform with a dedicated scale-out network, provisioned
with high bandwidth, optimal topology, and efficient transport; (ii) implementing
an optimized PyTorch-based training stack supporting both model and data
parallelism; (iii) developing sharding algorithms capable of hierarchically
partitioning the embedding tables along row and column dimensions and
load-balancing them across multiple workers; (iv) adding high-performance core
operators while retaining the flexibility to support optimizers with fully
deterministic updates; (v) leveraging reduced-precision communications, a
multi-level memory hierarchy (HBM+DDR+SSD), and pipelining. Furthermore, we
develop and briefly comment on the distributed data ingestion and other supporting
services that are required for robust and efficient end-to-end training in
production environments.
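The sketch below illustrates the flavor of the embedding-table sharding mentioned in item (iii): table shards (here optionally row-split) are assigned greedily to the least-loaded worker under a toy cost model. It is an illustration under assumed names and cost heuristics, not the paper's actual sharding algorithm or API.

```python
# Greedy longest-processing-time style placement of embedding-table shards.
from dataclasses import dataclass
import heapq

@dataclass
class TableShard:
    name: str
    rows: int
    dim: int
    lookups_per_batch: int    # rough proxy for compute/communication cost

    @property
    def cost(self) -> float:
        # Toy cost model: lookup traffic dominates, memory footprint as tiebreaker.
        return self.lookups_per_batch * self.dim + 1e-6 * self.rows * self.dim

def row_split(shard: TableShard, parts: int) -> list:
    """Split a large table along the row dimension into `parts` shards."""
    rows = shard.rows // parts
    return [
        TableShard(f"{shard.name}/rows{i}", rows, shard.dim,
                   shard.lookups_per_batch // parts)
        for i in range(parts)
    ]

def shard_tables(shards: list, num_workers: int) -> list:
    """Always place the next-largest shard on the currently least-loaded worker."""
    heap = [(0.0, w) for w in range(num_workers)]   # (load, worker_id)
    heapq.heapify(heap)
    placement = [[] for _ in range(num_workers)]
    for shard in sorted(shards, key=lambda s: s.cost, reverse=True):
        load, w = heapq.heappop(heap)
        placement[w].append(shard)
        heapq.heappush(heap, (load + shard.cost, w))
    return placement

if __name__ == "__main__":
    tables = [TableShard("user_id", 10**8, 128, 4096),
              TableShard("item_id", 10**7, 128, 8192),
              TableShard("category", 10**4, 64, 1024)]
    shards = row_split(tables[0], 4) + tables[1:]
    for w, assigned in enumerate(shard_tables(shards, num_workers=4)):
        print(f"worker {w}: {[s.name for s in assigned]}")
```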