Gender Differences in Depressive Traits among Rural and Urban Chinese Adolescent Students: Secondary Data Analysis of Nationwide Survey CFPS
Many previous studies have indicated that urban adolescents in China report better mental health than rural adolescents. Specifically, girls in rural areas represented a high-risk group prior to the 21st century, demonstrating more suicidal behaviour and ideation than their urban counterparts because of the severe gender inequality in rural China. However, owing to urbanisation and centralised policies to eliminate gender inequality in recent decades, these regional and gender differences in mental health may have decreased. This research aimed to probe the gender and regional differences in depressive traits among adolescent students in contemporary China. We adopted the 2018 wave of the national survey dataset Chinese Family Panel Studies (CFPS). Accordingly, 2173 observations from 10–15-year-old subjects were included. CFPS utilised an eight-item questionnaire to screen individuals’ depressive traits. Two dimensions of depressive traits were confirmed by confirmatory factor analysis (CFA), namely depressed affect and anhedonia. Measurement invariance tests suggested that the two-factor model was applicable to both males and females and to rural and urban students. Based on factor scores extracted from the CFA model, MANOVA results revealed that girls experienced more depressed affect than boys. Moreover, rural students demonstrated more anhedonia symptoms. There was no interaction between gender and region. The results suggest that, even though the gender and regional differences are small, being female and coming from a rural area are still potential risk factors for developing depressive traits among adolescent students in China.
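The two-dimension scoring described above can be illustrated with a minimal sum-score sketch. The item-to-factor assignment and rating range below are hypothetical (the paper derives the two factors via CFA, not fixed sums):

```python
# Hypothetical item split: the study confirmed two CFA factors
# (depressed affect, anhedonia) from the 8-item CFPS screener.
# These index assignments are illustrative, not the paper's loadings.
DEPRESSED_AFFECT = [0, 1, 2, 3, 4, 5]   # e.g., six mood items
ANHEDONIA = [6, 7]                      # e.g., two positive-affect items

def subscale_scores(responses):
    """Return (depressed_affect, anhedonia) sum scores for one respondent.

    `responses` is a list of 8 item ratings (0-3, higher = more frequent symptom).
    """
    affect = sum(responses[i] for i in DEPRESSED_AFFECT)
    anhedonia = sum(responses[i] for i in ANHEDONIA)
    return affect, anhedonia

print(subscale_scores([1, 2, 0, 1, 3, 2, 1, 0]))  # -> (9, 1)
```

In the study itself, subscale values are factor scores extracted from the fitted CFA model rather than raw sums; the sketch only shows how the two dimensions partition the eight items.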
Deep Reinforcement Learning-Assisted Federated Learning for Robust Short-term Utility Demand Forecasting in Electricity Wholesale Markets
Short-term load forecasting (STLF) plays a significant role in the operation
of electricity trading markets. Given growing concerns over data privacy,
federated learning (FL) has increasingly been adopted in recent research to
train STLF models for utility companies (UCs). In wholesale markets, since it
is not realistic for power plants (PPs) to access UCs' data directly, FL is a
feasible way for PPs to obtain an accurate STLF model. However, due to FL's
distributed nature and the intense competition among UCs, defects increasingly
occur and degrade the STLF model's performance, indicating that simply
adopting FL is not enough. In this paper, we propose a DRL-assisted FL
approach, DEfect-AwaRe federated soft actor-critic (DearFSAC), to robustly
train an accurate STLF model that lets PPs forecast short-term utility
electricity demand precisely. First, we design an STLF model based on long
short-term memory (LSTM) using only historical load data and time data. Then,
considering the uncertainty of defect occurrence, a deep reinforcement
learning (DRL) algorithm is adopted to assist FL by alleviating the model
degradation caused by defects. In addition, for faster convergence of FL
training, an auto-encoder is designed for both dimension reduction and quality
evaluation of uploaded models. In simulations, we validate our approach on
real 2019 data from Helsinki's UCs. The results show that DearFSAC outperforms
all the other approaches whether or not defects occur.
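The defect-mitigation idea — down-weighting low-quality uploads during aggregation — can be sketched as a quality-weighted federated average. This is a simplification: in DearFSAC the weights come from a trained DRL agent and the quality signals from an auto-encoder, whereas here both models and quality scores are supplied by hand.

```python
# Minimal sketch of quality-weighted federated averaging. Each uploaded
# model is a flat list of parameters; each quality score in [0, 1] is
# assumed given (in DearFSAC it would be produced by the DRL agent).

def weighted_fedavg(models, qualities):
    """Aggregate models as a quality-weighted average, parameter by parameter."""
    total = sum(qualities)
    dim = len(models[0])
    return [sum(q * m[i] for m, q in zip(models, qualities)) / total
            for i in range(dim)]

# Two clean updates and one defective (corrupted) update that is down-weighted:
models = [[1.0, 2.0], [1.2, 1.8], [9.0, -5.0]]
qualities = [1.0, 1.0, 0.05]
print(weighted_fedavg(models, qualities))
```

With the defective third update weighted at 0.05, the aggregate stays close to the average of the two clean updates instead of being dragged toward the corrupted parameters.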
BAGEL: Backdoor Attacks against Federated Contrastive Learning
Federated Contrastive Learning (FCL) is an emerging privacy-preserving
paradigm for distributed learning on unlabeled data. In FCL, distributed
parties collaboratively learn a global encoder from unlabeled data, and this
global encoder can be widely used as a feature extractor to build models for
many downstream tasks. However, owing to its distributed nature, FCL is also
vulnerable to many security threats (e.g., backdoor attacks), which are seldom
investigated in existing work. In this paper, we present a pioneering study of
backdoor attacks against FCL, illustrating how backdoor attacks on distributed
local clients act on downstream tasks. Specifically, in our system, malicious
clients can successfully inject a backdoor into the global encoder by
uploading poisoned local updates, so downstream models built with this global
encoder will also inherit the backdoor. We also investigate how to inject
backdoors into multiple downstream models via two different backdoor attacks,
namely the centralized attack and the decentralized attack. Experimental
results show that both the centralized and the decentralized attack can inject
backdoors into downstream models effectively, with high attack success rates.
Finally, we evaluate two defense methods against our proposed backdoor attacks
in FCL; the results indicate that the decentralized backdoor attack is more
stealthy and harder to defend against.
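The attack-success-rate metric reported in such experiments can be sketched as below; the function name and inputs are illustrative, not the paper's API.

```python
def attack_success_rate(preds_on_triggered, target_label):
    """Fraction of trigger-stamped inputs that a downstream model
    classifies as the attacker's chosen target label."""
    hits = sum(1 for p in preds_on_triggered if p == target_label)
    return hits / len(preds_on_triggered)

# Four triggered inputs, three predicted as the target class 7:
print(attack_success_rate([7, 7, 3, 7], target_label=7))  # -> 0.75
```

A successful backdoor drives this rate toward 1.0 on triggered inputs while leaving accuracy on clean inputs essentially unchanged, which is why clean-accuracy checks alone do not reveal the attack.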
A Highly Sensitive Intensity-Modulated Optical Fiber Magnetic Field Sensor Based on the Magnetic Fluid and Multimode Interference
Fiber-optic magnetic field sensing is an important method of magnetic field monitoring, which is essential for the safety of civil infrastructures, especially power plants. We theoretically and experimentally demonstrate an optical fiber magnetic field sensor based on a single-mode-multimode-single-mode (SMS) structure immersed in magnetic fluid (MF). The length of the multimode fiber section is determined via simulation based on the self-imaging effect. Because the refractive index and absorption coefficient of the MF vary under different magnetic fields, an effective way to improve the sensitivity of the SMS fiber structure is realized through intensity modulation. Using a no-core fiber (NCF) with a diameter of 125 μm and a length of 59.8 mm as the multimode section, the sensor shows a high sensitivity of up to 0.097 dB/Oe and a modulation depth of up to 78% over a relatively linear range. This optical fiber sensor offers low cost, ease of fabrication, high sensitivity, a simple structure, and compact size, with great potential for magnetic field measurement applications.
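The two figures of merit above can be computed from calibration data as a slope and a relative intensity swing. A minimal sketch, assuming output power in dB recorded at several field strengths; the numbers are synthetic, chosen to reproduce the reported 0.097 dB/Oe, not measured data:

```python
def linear_slope(x, y):
    """Least-squares slope of y vs. x; here, sensitivity in dB/Oe."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def modulation_depth(p_max, p_min):
    """Relative swing of transmitted intensity (linear scale)."""
    return (p_max - p_min) / p_max

# Synthetic calibration: field (Oe) vs. power change (dB) in the linear range.
fields = [0.0, 10.0, 20.0]
powers = [0.00, 0.97, 1.94]
print(linear_slope(fields, powers))   # sensitivity, dB/Oe
print(modulation_depth(1.0, 0.22))    # modulation depth, fraction
```

In practice the slope would be fitted only over the sensor's linear operating range, since the MF response saturates at high fields.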
Large Language Model Alignment: A Survey
Recent years have witnessed remarkable progress made in large language models
(LLMs). Such advancements, while garnering significant attention, have
concurrently elicited various concerns. The potential of these models is
undeniably vast; however, they may yield texts that are imprecise, misleading,
or even detrimental. Consequently, it is paramount to employ alignment
techniques to ensure that these models exhibit behavior consistent with human
values.
This survey endeavors to furnish an extensive exploration of alignment
methodologies designed for LLMs, in conjunction with the extant capability
research in this domain. Adopting the lens of AI alignment, we categorize the
prevailing methods and emergent proposals for the alignment of LLMs into outer
and inner alignment. We also probe into salient issues including the models'
interpretability, and potential vulnerabilities to adversarial attacks. To
assess LLM alignment, we present a wide variety of benchmarks and evaluation
methodologies. After discussing the state of alignment research for LLMs, we
finally cast a vision toward the future, contemplating the promising avenues of
research that lie ahead.
Our aspiration for this survey extends beyond merely spurring research
interest in this realm. We also envision bridging the gap between the AI
alignment research community and researchers focused on exploring LLM
capabilities, in the service of LLMs that are both capable and safe.
Evolution of Publications, Subjects, and Co-authorships in Network-On-Chip Research From a Complex Network Perspective
Academia and industry have been pursuing network-on-chip (NoC) research for two decades, since there was an urgent need to respond to the scaling and technological challenges imposed on intra-chip communication in SoC designs. Like any other research topic, NoC inevitably goes through its life cycle: A. it started up (2000-2007) and quickly gained traction in its own right; B. it then entered a phase of growth and shakeout (2008-2013), with research output peaking in 2010 and remaining high for another four to five years; C. NoC research became mature and stable (2014-2020), with signs of a steady slowdown. Although excellent survey articles on different subjects and aspects of NoC have appeared in the open literature from time to time, there is no general consensus on where we are in this NoC roadmap and where we are heading, largely owing to the lack of an overarching methodology and tool for assessing and quantifying research outcomes and their evolution. In this paper, we address this issue from the perspective of three specific complex networks, namely the citation network, the subject citation network, and the co-authorship network. The structural parameters (e.g., modularity, diameter) and graph dynamics of the three networks are extracted and analyzed, which helps reveal and explain the reasons and driving forces behind the changes observed in NoC research over 20 years. Additional analyses are performed in this study to connect interesting phenomena surrounding the NoC area.
They include: (1) relationships between communities in citation networks and NoC subjects, (2) measurement and visualization of a subject's influence score and its evolution, (3) knowledge flow among the six most popular NoC subjects and their relationships, (4) evolution of various subjects in terms of number of publications, (5) collaboration patterns and cross-community collaboration among the authors in NoC research, (6) interesting observations of career lifetime and productivity among NoC researchers, and finally (7) an investigation of whether or not new authors chase hot subjects in NoC. All these analyses lead to a prediction of publications, subjects, and co-authorship in NoC research in the near future, which is also presented in the paper.
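As one concrete instance of the network structure parameters mentioned above, the diameter of a citation network (treated as undirected) can be computed by breadth-first search from every node. The toy graph below is illustrative, not data from the study:

```python
from collections import deque

def diameter(adj):
    """Longest shortest path in a connected undirected graph,
    given as an adjacency dict {node: [neighbours]}."""
    def eccentricity(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return max(eccentricity(v) for v in adj)

# Toy 4-paper citation chain A - B - C - D:
toy = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(diameter(toy))  # -> 3
```

On the real citation network the same all-pairs BFS idea applies per connected component; modularity additionally requires a community partition, which the study obtains before scoring it.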
Metformin Treatment is Associated with Mortality in Patients with Type 2 Diabetes and Chronic Heart Failure in the Intensive Care Unit: A Retrospective Cohort Study
Objective: Patients receiving intensive care often have diabetes mellitus (DM) together with chronic heart failure (CHF). In these patients, the use of metformin in intensive care is controversial. This study aimed to assess the mortality rates of patients with DM and CHF treated with metformin. Methods: The Medical Information Mart for Intensive Care database was used to identify patients with type 2 diabetes mellitus (T2DM) and CHF. A 90-day mortality comparison was conducted between patients who were and were not administered metformin. Propensity score matching analysis and multivariable Cox proportional hazard regression were used to ensure the robustness of our results. Results: A total of 2153 patients (180 receiving metformin and 1973 not receiving metformin) with T2DM and CHF were included in the study. The 90-day mortality rates were 30.5% (601/1971) and 5.5% (10/182) in the non-metformin and metformin groups, respectively. In the propensity score matching analyses, metformin use was associated with a 71% lower 90-day mortality (hazard ratio, 0.29; 95% confidence interval, 0.14–0.59; P < 0.001). The results remained consistent in sensitivity analyses. Conclusion: Metformin treatment may decrease the mortality risk in critically ill patients with T2DM and CHF in the intensive care unit.
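The matching step can be sketched as 1:1 greedy nearest-neighbour matching on precomputed propensity scores. This is a simplified stand-in for the study's PSM pipeline, and the caliper value is an assumption:

```python
def greedy_match(treated_ps, control_ps, caliper=0.05):
    """1:1 greedy nearest-neighbour matching on propensity scores.

    Returns (treated_index, control_index) pairs; each control is used
    at most once, and matches beyond the caliper are discarded.
    """
    pairs = []
    available = dict(enumerate(control_ps))
    for ti, tp in enumerate(treated_ps):
        if not available:
            break
        ci, cp = min(available.items(), key=lambda kv: abs(kv[1] - tp))
        if abs(cp - tp) <= caliper:
            pairs.append((ti, ci))
            del available[ci]
    return pairs

# Two treated patients matched against three controls (scores are made up):
print(greedy_match([0.30, 0.70], [0.31, 0.68, 0.10]))  # -> [(0, 0), (1, 1)]
```

After matching, the hazard ratio would then be estimated on the matched pairs, e.g. with a Cox model as the study does; greedy matching is only one of several PSM variants (optimal matching and IPTW are common alternatives).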
Spatiotemporal heterogeneity and impact factors of hepatitis B and C in China from 2010 to 2018: Bayesian space–time hierarchy model
Introduction: Viral hepatitis is a global public health problem, and China still faces great challenges in achieving the WHO goal of eliminating hepatitis. Methods: This study focused on hepatitis B and C, aiming to explore the long-term spatiotemporal heterogeneity of hepatitis B and C incidence in China from 2010 to 2018 and to quantify the impact of socioeconomic factors on their risk through a Bayesian spatiotemporal hierarchical model. Results: The results showed that the risks of hepatitis B and C had significant spatial and temporal heterogeneity. The risk of hepatitis B showed a slow downward trend, with high-risk provinces mainly distributed in the southeast and northwest regions, while the risk of hepatitis C showed a clear upward trend, with high-risk provinces mainly distributed in the northern region. In addition, for hepatitis B, the illiteracy rate and hepatitis C prevalence were the main contributing factors, while GDP per capita, the illiteracy rate, and hepatitis B prevalence were the main contributing factors for hepatitis C. Discussion: This study analyzed the spatial and temporal heterogeneity of hepatitis B and C and their contributing factors, which can serve as a basis for monitoring efforts. Meanwhile, the data provided by this study will contribute to the effective allocation of resources to eliminate viral hepatitis and to the design of interventions at the provincial level.
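A crude building block underlying such spatial risk maps is the ratio of observed to expected cases per province; the full study instead fits a Bayesian space-time hierarchical model with spatially structured random effects. A minimal sketch with made-up numbers:

```python
def relative_risk(observed, population, reference_rate):
    """Observed cases vs. cases expected at a reference incidence rate.

    Values above 1.0 flag a higher-than-reference provincial risk; this
    crude ratio ignores the spatial/temporal smoothing a Bayesian
    hierarchical model would add.
    """
    expected = population * reference_rate
    return observed / expected

# Hypothetical province: 150 cases, 1M people, national rate 10 per 100,000.
print(relative_risk(150, 1_000_000, 0.0001))  # -> 1.5
```

In the hierarchical model, each province-year ratio like this is shrunk toward its spatial and temporal neighbours, which stabilises estimates for provinces with small case counts.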