CROSS-RACE FRIENDSHIPS AND ADJUSTMENT: LONGITUDINAL STUDIES OF ASIAN AMERICAN ADOLESCENTS
Asian American adolescents' cross-race friendships are poorly understood. Using data from the National Longitudinal Study of Adolescent to Adult Health, two longitudinal studies (Ns = 915 and 1,154) investigated the associations between cross-race friendships and psychosocial and academic adjustment among Asian American adolescents. Study 1 examined the influence of cross-race friendships (derived from quantity and quality measures) on trajectories of perceived peer prejudice at school. Results showed that cross-race friendships were associated with weaker perceptions of peer prejudice. Cross-race friendships measured as quantity had an immediate but short-lived effect, while cross-race friendships measured as quality exerted a delayed but long-term influence on how Asian American adolescents perceive peer prejudice at school. Similar findings were observed for friendships with other non-White groups (but not with the White group, and not for cross-ethnic friendships). Study 2 explored the directionality of associations between cross-race best friendships (i.e., the proportion of cross-race friends in one's best female and male friend networks) and psychological well-being and academic adjustment (school attachment and GPA). Results identified an overall linear decline in cross-race best friendships with age among Asian American adolescents. Cross-race best friendships positively influenced later self-esteem, but not the other way around. Higher levels of school attachment predicted a greater decrease in cross-race best friendships, and declines in cross-race best friendships were accompanied by decreases in GPA for Asian American adolescents.
Thesis (Ph.D.)--Michigan State University. Human Development and Family Studies - Doctor of Philosophy, 2021. Includes bibliographical references.
ControlLM: Crafting Diverse Personalities for Language Models
As language models continue to scale in size and capability, they display an
array of emerging behaviors, both beneficial and concerning. This heightens the
need to control model behavior. We aim to control the personality traits of
language models at inference time so that they can exhibit varied character
profiles, on top of which the requirements of different types of tasks can be
met. Personality is a higher-level, more abstract behavioral representation
for language models. We introduce ControlLM, which leverages differential
activation patterns, derived from contrasting behavioral prompts in the model's
latent space, to influence the model's personality traits at inference. This
approach allows for the precise, real-time adjustment of model behavior. First,
we demonstrate ControlLM's capacity to elicit diverse persona behaviors without
any training, while precision control allows personality traits to closely
match average human values. Subsequently, we showcase improved reasoning and
question answering through selective amplification of beneficial attributes
like conscientiousness and friendliness. We hope that this work will inspire
research on controlling human-like behaviors of language models and provide
insights for future research. Our code is publicly available at:
https://github.com/wengsyx/ControlLM.
Comment: 17 pages
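The core idea sketched in the abstract, deriving a steering direction from contrasting behavioral prompts and adding it to hidden states at inference, can be illustrated with a toy example. Everything below is an illustrative assumption, not the authors' implementation: the hash-seeded embedding, the mean-pooled "hidden state", and the `alpha` scale are all stand-ins for a real model's internals.

```python
import zlib

import numpy as np

DIM = 8  # toy hidden-state width; a real model has thousands of dimensions


def embed(token: str) -> np.ndarray:
    """Deterministic stand-in for a model's per-token activations."""
    rng = np.random.default_rng(zlib.crc32(token.encode()))
    return rng.standard_normal(DIM)


def hidden(prompt: str) -> np.ndarray:
    """Mean token activation, standing in for one layer's hidden state."""
    return np.mean([embed(t) for t in prompt.split()], axis=0)


# Contrast two behavioral prompts to obtain a "personality" direction
# in the toy latent space.
steer = hidden("you are warm friendly and agreeable") - hidden(
    "you are cold hostile and disagreeable"
)


def steered(h: np.ndarray, alpha: float) -> np.ndarray:
    """Add the scaled steering vector to a hidden state at inference time."""
    return h + alpha * steer
```

With `alpha = 0` the model is untouched; increasing `alpha` pushes activations along the trait direction, which is what makes the control precise and training-free in spirit.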
Effect of temozolomide combined with radiotherapy on survival and MGMT protein expression in recurrent malignant glioma patients
Purpose: To investigate the effect of temozolomide (TMZ) combined with radiotherapy (RT) on O-6- methylguanine-DNA methyltransferase (MGMT) protein and survival of recurrent malignant glioma patients.
Methods: Ninety-two patients with malignant glioma treated in our hospital from January 2014 to January 2015 were assigned to study and control groups using the random number table method. Subjects in the control group received radiotherapy (total dose in the range of 60 – 75 Gy), while those in the study group were given oral TMZ (75 mg/m²) daily in addition to radiotherapy, as well as TMZ at 150 – 200 mg/m². After treatment, clinical effectiveness was compared between the two groups, and changes in MGMT gene methylation were determined for both. The patients were followed up for 3 years, and survival and recurrence were recorded.
Results: Overall clinical effectiveness was markedly higher in the study group (76.09 %) than in the control group (45.65 %; p < 0.05). One month after radiotherapy, a significant decrease in MGMT gene methylation was seen in study-group patients relative to controls (p < 0.05). Study-group patients had a lower recurrence rate but higher survival rates in the 2nd and 3rd years, relative to controls (p < 0.05).
Conclusion: The combination of temozolomide and radiotherapy is more effective than radiotherapy alone in the treatment of recurrent malignant glioma. The combined treatment significantly reduces tumor recurrence and improves patients' prognosis and quality of life.
S3Eval: A Synthetic, Scalable, Systematic Evaluation Suite for Large Language Models
The rapid development of Large Language Models (LLMs) has led to great
strides in model capabilities like reasoning and long-context understanding.
However, as LLMs are able to process longer contexts, it becomes more
challenging to evaluate whether they have acquired certain capabilities, since
the length of text (e.g., 100K tokens) they can process far exceeds what humans
can reliably assess in a reasonable duration. In this paper, we propose using
complex synthetic tasks as a proxy evaluation method, and present S3Eval, a
Synthetic, Scalable, Systematic evaluation suite for LLMs. As a
synthetic benchmark, S3Eval enables the creation of any number of evaluation
examples that are, by construction, unseen by LLMs, mitigating the test-set
contamination issue. The synthetic nature of S3Eval provides users full control
over the dataset, allowing them to systematically probe LLM capabilities by
scaling text length and varying task difficulty across diverse scenarios. The
strong correlation between S3Eval performance and scores of real-world
benchmarks like Big-Bench Hard (BBH) demonstrates the soundness of using S3Eval
for LLM evaluation. In-depth analysis also uncovers additional insights,
including a performance drop when answers are sparsely distributed or located
in the middle of the context, as well as some counter-intuitive trends in model
performance.
Comment: Work in progress
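The generate-examples-with-a-program recipe described above can be sketched minimally: a seeded random table, a templated question, and a gold answer computed by the generator itself. This is a hypothetical illustration of the general idea, not the actual S3Eval task suite, and all names here are made up for the sketch.

```python
import random

COLORS = ["red", "green", "blue"]


def make_example(num_rows: int, seed: int) -> dict:
    """Build one synthetic table-QA example.

    The context length scales with num_rows, and the gold answer is computed
    programmatically, so examples can be generated in unlimited quantity and
    cannot have leaked into any training corpus.
    """
    rng = random.Random(seed)
    rows = [(i, rng.choice(COLORS)) for i in range(num_rows)]
    target = rng.choice(COLORS)
    return {
        "table": "\n".join(f"{i}\t{c}" for i, c in rows),
        "question": f"How many rows have the color '{target}'?",
        "answer": sum(1 for _, c in rows if c == target),
    }
```

Scaling `num_rows` lengthens the context systematically, which is how a synthetic suite can probe long-context behavior at sizes no human could hand-check.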
ExpNote: Black-box Large Language Models are Better Task Solvers with Experience Notebook
Black-box Large Language Models (LLMs) have shown great power in solving
various tasks and are considered general problem solvers. However, LLMs still
fail at many specific tasks even when they understand the task instructions. In this
paper, we focus on the problem of boosting the ability of black-box LLMs to
solve downstream tasks. We propose ExpNote, an automated framework to help LLMs
better adapt to unfamiliar tasks by reflecting on and noting down experiences
from training data, and retrieving them from external memory during testing. We
evaluate ExpNote on multiple tasks and the experimental results demonstrate
that the proposed method significantly improves the performance of black-box
LLMs. The data and code are available at
https://github.com/forangel2014/ExpNote
Comment: EMNLP 2023 Findings
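The reflect-note-retrieve loop described above can be sketched with a simple keyword-overlap memory. The class name and the retrieval heuristic are illustrative assumptions for this sketch only; the paper's actual framework would have a black-box LLM write the notes and consume the retrieved ones in its prompt.

```python
class ExperienceNotebook:
    """Store task experiences at training time; retrieve the most relevant
    ones at test time to prepend to a black-box LLM's prompt (illustrative
    sketch, not the paper's implementation)."""

    def __init__(self) -> None:
        # Each note pairs the keywords of a training input with a lesson.
        self.notes: list[tuple[set[str], str]] = []

    def note(self, task_input: str, lesson: str) -> None:
        """Record a lesson learned from a training example, keyed by its words."""
        self.notes.append((set(task_input.lower().split()), lesson))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        """Return the k lessons whose keys share the most words with the query."""
        words = set(query.lower().split())
        ranked = sorted(self.notes, key=lambda n: len(n[0] & words), reverse=True)
        return [lesson for _, lesson in ranked[:k]]
```

A real system would replace the word-overlap ranking with embedding similarity, but the external-memory shape, write during training, read during testing, is the same.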
