Salivary testosterone and cortisol response in acute stress modulated by seven sessions of mindfulness meditation in young males
Stress is an established risk factor for negative health outcomes, and salivary cortisol and testosterone concentrations increase in response to acute psychosocial stress. Reducing stress through evidence-based interventions is therefore crucial for health and well-being. Body-mind interventions such as meditation and Tai Chi have been shown to reduce cortisol levels, but findings on testosterone concentrations after stress remain mixed. To address this research gap, we conducted a pilot randomized controlled trial to examine the modulating effects of short-term (seven 20-minute sessions) mindfulness meditation on testosterone and cortisol responses to acute stress. Using one form of mindfulness meditation, Integrative Body-Mind Training (IBMT), and an active control, relaxation training (RT), we assessed salivary cortisol and testosterone concentrations at three stages of the stress intervention: rest, stress, and an additional 20-minute IBMT or RT practice. We found increased cortisol and testosterone concentrations after acute stress in both groups, but the testosterone rise was not associated with the cortisol rise. Moreover, the additional practice immediately after stress produced higher testosterone concentrations in the IBMT group than in the RT group, whereas cortisol concentrations rose more in the RT group than in the IBMT group at the same time point. These findings indicate that a brief mindfulness intervention modulates a dual-hormone profile of testosterone and cortisol in response to acute stress, presumably via co-regulation of the hypothalamic-pituitary-adrenal and hypothalamic-pituitary-testicular axes.
Biologically inspired structure learning with reverse knowledge distillation for spiking neural networks
Spiking neural networks (SNNs) have attractive characteristics for sensory
information recognition tasks due to their biological plausibility. However,
the performance of many current spiking-based models is limited by their
structures: both fully connected and overly deep architectures introduce
substantial redundancy. This redundancy, in both connections and neurons, is
one of the key factors hindering the practical application of SNNs. Although
some pruning methods have been proposed to tackle this problem, they typically
ignore the fact that neural topology in the human brain can be adjusted
dynamically. Inspired by this, this paper proposes an evolutionary structure
construction method for building more reasonable SNNs. By integrating knowledge
distillation with connection pruning, the synaptic connections in SNNs can be
optimized dynamically toward an optimal state. As a result, the SNN structure
can not only absorb knowledge from the teacher model but also search for a deep
yet sparse network topology. Experimental results on CIFAR100 and DVS-Gesture
show that the proposed structure learning method achieves strong performance
while reducing connection redundancy. The proposed method explores a novel,
dynamic way of learning structure from scratch in SNNs, which could help close
the gap between deep learning and bio-inspired neural dynamics.
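To make the core idea concrete, below is a minimal PyTorch sketch of how knowledge distillation from a teacher could be combined with magnitude-based connection pruning on a masked layer. The MaskedLinear layer, the pruning fraction, the distillation temperature, and the toy data are illustrative assumptions; the spiking dynamics (surrogate gradients, temporal coding) and the evolutionary search of the actual method are omitted, so this is not the paper's exact training procedure.

```python
# Hedged sketch: distillation loss + magnitude-based connection pruning.
# All layer sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Linear layer whose connections can be pruned via a binary mask."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.register_buffer("mask", torch.ones_like(self.linear.weight))

    def forward(self, x):
        return F.linear(x, self.linear.weight * self.mask, self.linear.bias)

    def prune_smallest(self, fraction):
        """Drop a fraction of the smallest-magnitude surviving connections."""
        w = (self.linear.weight * self.mask).abs().flatten()
        k = int(fraction * self.mask.sum().item())
        if k > 0:
            threshold = torch.kthvalue(w[w > 0], k).values
            self.mask[:] = (self.linear.weight.abs() > threshold).float() * self.mask

def distillation_step(student, teacher, x, y, optimizer, T=4.0, alpha=0.5):
    """One step mixing the task loss with KL distillation from the teacher."""
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    task_loss = F.cross_entropy(s_logits, y)
    kd_loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=-1),
        F.softmax(t_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T
    loss = alpha * kd_loss + (1 - alpha) * task_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy demo on random data standing in for real image/event features.
    teacher = nn.Sequential(nn.Flatten(), nn.Linear(784, 100)).eval()
    student = nn.Sequential(nn.Flatten(), MaskedLinear(784, 100))
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 100, (32,))
    for _ in range(5):
        distillation_step(student, teacher, x, y, opt)
        student[1].prune_smallest(0.05)   # gradually sparsify connections
    print("surviving connections:", int(student[1].mask.sum()))
```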
Case report: Thoughts on two cases of total anomalous pulmonary venous connection complicated with pulmonary artery hypertension
The two primary pathological alterations of total anomalous pulmonary venous connection (TAPVC), a rare cyanotic congenital heart disease (CHD), are right heart failure and pulmonary artery hypertension (PAH). The timing and prognosis of surgery depend on the severity of pulmonary hypertension, and surgery is no longer an option once Eisenmenger syndrome develops. In light of this, it is crucial to assess patients' PAH. To aid in the subsequent treatment of related diseases, this article examines and compares the echocardiographic features and disease course of one adult and one pediatric TAPVC patient complicated with PAH.
Exploring Memorization in Fine-tuned Language Models
Large language models (LLMs) have shown great capabilities in various tasks
but also exhibit memorization of training data, raising serious privacy and
copyright concerns. While prior work has studied memorization during
pre-training, the exploration of memorization during fine-tuning is rather
limited. Compared with pre-training, fine-tuning typically involves sensitive
data and diverse objectives, and thus may bring unique memorization behaviors
and distinct privacy risks. In this work, we conduct the first comprehensive
analysis of LMs' memorization during fine-tuning across tasks. Our studies
with open-source and our own fine-tuned LMs across various tasks indicate
that memorization after fine-tuning varies strongly across tasks. We provide
an understanding of this task disparity via sparse coding theory and unveil a
strong correlation between memorization and attention score distribution.
Finally, by examining its memorization behavior, we find that multi-task
fine-tuning offers a potential strategy for mitigating fine-tuned memorization.
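As a rough illustration of how memorization in a fine-tuned model can be probed, the sketch below prompts a causal LM with a prefix drawn from its fine-tuning data and measures how much of the true continuation it reproduces verbatim under greedy decoding. The checkpoint path, the example record, and the overlap metric are placeholders and assumptions, not the paper's measurement protocol.

```python
# Hedged memorization probe: feed the fine-tuned model a training prefix and
# check how much of the reference continuation it emits verbatim.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def verbatim_overlap(model, tokenizer, prefix, reference, max_new_tokens=50):
    """Fraction of reference tokens reproduced verbatim after the prefix."""
    inputs = tokenizer(prefix, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,  # greedy decoding makes extraction deterministic
        )
    generated = out[0, inputs["input_ids"].shape[1]:]
    ref_ids = tokenizer(reference, return_tensors="pt")["input_ids"][0]
    n = min(len(generated), len(ref_ids))
    if n == 0:
        return 0.0
    return (generated[:n] == ref_ids[:n]).float().mean().item()

if __name__ == "__main__":
    name = "path/to/fine-tuned-model"  # placeholder checkpoint
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name).eval()
    score = verbatim_overlap(
        model, tokenizer,
        prefix="Example training record 1042:",          # synthetic example
        reference=" the quick brown fox jumps over the lazy dog.",
    )
    print(f"verbatim overlap: {score:.2f}")
```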
Visual Robotic Manipulation with Depth-Aware Pretraining
Recent work has shown visual representation learning to be effective for
robotic manipulation tasks. However, most existing works pretrain the visual
backbone solely on 2D images or egocentric videos, ignoring the fact that
robots learn to act in 3D space, which is hard to capture from 2D observations.
In this paper, we examine the effectiveness of pretraining the vision backbone
on publicly available large-scale 3D data to improve manipulation policy
learning. Our method, Depth-aware Pretraining for Robotics (DPR), enables an
RGB-only backbone to learn 3D scene representations through self-supervised
contrastive learning, where depth information serves as auxiliary knowledge.
No 3D information is needed during manipulation policy learning or inference,
so the model is both efficient and effective for manipulation in 3D space.
Furthermore, we introduce a new way to inject the robot's proprioception into
the policy network, making the manipulation model more robust and
generalizable. Our experiments demonstrate that the proposed framework improves
performance on unseen objects and visual environments across various robotics
tasks on both simulated and real robots.
Comment: submitted to ICRA202