2 research outputs found
Algılanan Stres Testinin Makine Öğrenmesi ile Analiz Edilmesi (Analysis of the Perceived Stress Test with Machine Learning)
The aim of this study is to reanalyze the perceived stress test using machine
learning to determine the perceived stress levels of 150 individuals and
measure the impact of the test questions. The test consists of 14 questions,
each scored on a scale of 0 to 4, resulting in a total score range of 0-56. Out
of these questions, 7 are formulated in a negative context and scored
accordingly, while the remaining 7 are formulated in a positive context and
scored in reverse. The test is also designed to identify two sub-factors:
perceived self-efficacy and perceived stress/discomfort. The main objectives
of this research are to demonstrate, using artificial intelligence techniques,
that the test questions may not carry equal importance; to reveal, through
machine learning, which questions vary most across the population; and
ultimately to show that psychologically distinct patterns exist. This
study provides a perspective distinct from the existing psychology literature
by reanalyzing the test through machine learning. Additionally, it questions the
accuracy of the scale used to interpret the results of the perceived stress
test and emphasizes the importance of considering differences in the
prioritization of test questions. The findings of this study offer new insights
into coping strategies and therapeutic approaches in dealing with stress.
Source code: https://github.com/toygarr/ppl-r-stressed
Comment: in Turkish language
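The scoring scheme described in the abstract (14 items, each 0-4, seven items reverse-scored, total 0-56) can be sketched as follows. This is an illustrative sketch, not the authors' code; the abstract does not say which seven items are reversed, so the item numbers below follow the conventional PSS-14 numbering and should be treated as an assumption.

```python
def score_pss14(answers, reverse_items=(4, 5, 6, 7, 9, 10, 13)):
    """Score a 14-item perceived stress test.

    answers: list of 14 integers in 0..4 (answers[0] is question 1).
    Items listed in reverse_items are positively worded and reverse-scored
    (0 -> 4, 1 -> 3, ...), so the total ranges from 0 to 56.
    The default reverse_items follows the usual PSS-14 convention and is
    an assumption, not taken from the abstract.
    """
    if len(answers) != 14 or any(a not in range(5) for a in answers):
        raise ValueError("expected 14 answers, each in 0..4")
    total = 0
    for i, a in enumerate(answers, start=1):
        total += (4 - a) if i in reverse_items else a
    return total
```

Note that with seven direct and seven reversed items, answering every item identically always yields 28; the extremes 0 and 56 require opposite answers on the two item groups.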
Beyond Known Reality: Exploiting Counterfactual Explanations for Medical Research
This study employs counterfactual explanations to explore "what if?"
scenarios in medical research, with the aim of expanding our understanding
beyond existing boundaries. Specifically, we focus on utilizing MRI features
for diagnosing pediatric posterior fossa brain tumors as a case study. The
field of artificial intelligence and explainability has witnessed a growing
number of studies and increasing scholarly interest. However, the lack of
human-friendly interpretations of machine learning outcomes has
significantly hindered clinicians' acceptance of these methods in their
clinical practice. To address this, our approach
incorporates counterfactual explanations, providing a novel way to examine
alternative decision-making scenarios. These explanations offer personalized
and context-specific insights, enabling the validation of predictions and
clarification of variations under diverse circumstances. Importantly, our
approach maintains both statistical and clinical fidelity, allowing for the
examination of distinct tumor features through alternative realities.
Additionally, we explore the potential use of counterfactuals for data
augmentation and evaluate their feasibility as an alternative approach in
medical research. The results demonstrate the promising potential of
counterfactual explanations to enhance trust and acceptance of AI-driven
methods in clinical settings.
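The core idea of a counterfactual explanation, finding a minimal change to an instance's features that flips the model's prediction, can be sketched with a toy model. This is a minimal illustration, not the authors' method: the two features, the linear classifier weights, and the greedy single-feature search are all stand-ins for a trained tumor classifier over MRI-derived features.

```python
import numpy as np

# Toy logistic model standing in for a trained classifier over two
# hypothetical MRI-derived features; weights are arbitrary for illustration.
W, B = np.array([1.5, -2.0]), 0.25

def predict_proba(x):
    """Probability of the positive class under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ W + B)))

def counterfactual(x, step=0.05, max_iter=1000):
    """Greedy counterfactual search: repeatedly apply the single-feature
    nudge that moves the predicted probability most toward the opposite
    class, stopping as soon as the decision flips."""
    start = predict_proba(x) > 0.5
    cf = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        if (predict_proba(cf) > 0.5) != start:
            return cf  # prediction flipped: cf is the counterfactual
        # candidate moves: +/- step on each individual feature
        nudges = [cf + d * np.eye(len(cf))[i]
                  for i in range(len(cf)) for d in (step, -step)]
        # push the probability down if we started positive, up otherwise
        key = (lambda c: -predict_proba(c)) if start else predict_proba
        cf = max(nudges, key=key)
    return None  # no counterfactual found within the step budget
```

The per-feature nudges keep the counterfactual close to the original instance, which is what makes the resulting "what if?" scenario readable: the change is expressed as a small shift in one or two named features rather than an arbitrary new point.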