Grounded in Teachers’ Reality: A Collective Case Study on Middle School Teachers’ Self-Efficacy for Equity-Centered Trauma-Informed Educational Practices
The current body of literature clearly demonstrates the high prevalence of student trauma and the significant impact of trauma on adolescents’ well-being and academic outcomes. Middle school teachers are uniquely positioned to support adolescents experiencing trauma using trauma-informed educational practices (TIEP); however, more work is needed to understand their self-efficacy beliefs for using TIEP. Thus, the purpose of this qualitative collective case study is to explore middle school teachers’ self-efficacy (TSE) beliefs for trauma-informed educational practices, particularly as they relate to centering equity, and the factors that impact those beliefs. Specifically, my study asks the following research questions: (1) How do middle school teachers describe their self-efficacy beliefs towards trauma-informed educational practices? (2) How do middle school teachers describe their self-efficacy beliefs as they relate to centering equity in trauma-informed educational practices? (3) How do middle school teachers describe factors that impact their self-efficacy beliefs towards trauma-informed educational practices? (3a) How do middle school teachers describe COVID-19 and its consequences impacting their self-efficacy towards trauma-informed educational practices? And (3b) How do middle school teachers describe the continued displays of racial and social injustice, and responses to them, as impacting their self-efficacy as it relates to centering equity in trauma-informed educational practices? Four middle school teachers from an urban faith-based independent school were included as cases in this study. Multiple sources of evidence (demographic questionnaire, semi-structured individual interview, follow-up in-depth member-checking interview, teacher beliefs questionnaire) were collected to provide a comprehensive understanding of each case. Within- and cross-case analyses were conducted to identify similarities and differences across cases.
Findings indicate that these teachers hold high self-efficacy beliefs for TIEP, specifically in terms of empowering and connecting with students. Participants also reported high levels of TSE for equity-centered TIEP, particularly on an individual level. Further, teachers were mixed in their level of self-efficacy towards preventing trauma. Teachers largely pointed to their prior knowledge and experience, as well as the broader school, community, and state-level context, as impacting their self-efficacy beliefs. Teachers shared the impact of COVID-19 and public displays of continued social and racial injustices on their students and their self-efficacy, including increased perceived importance of TIEP and higher TSE for facilitating conversations around race and equity. Implications for theory include using the proposed conceptual framework to further examine TSE for TIEP, particularly as those beliefs relate to equity in TIEP. Pre-service and in-service training for educators should work to better prepare teachers to respond to and prevent trauma as aspects of TIEP. Further, teachers should be supported in their understanding of trauma as situated within broader systems that perpetuate trauma (Goldin et al., 2023) and promote responding to student behavior with TIEP rather than discipline. School leadership should promote collaborative school environments and implement policies and practices that support consistent teacher implementation of TIEP.
Certificates for decision problems in temporal logic using context-based tableaux and sequent calculi.
115 p. This thesis addresses Satisfiability and Model Checking problems while providing certificates of the result. It works with three temporal logics: Propositional Linear Temporal Logic (PLTL), Computation Tree Logic (CTL), and Extended Computation Tree Logic (ECTL). First, the work on Certified Satisfiability is presented: an adaptation of the existing dual method of context-based tableaux and sequent calculi for satisfiability of PLTL formulas in Negation Normal Form. Certificate generation is developed for the case in which the formulas are unsatisfiable, and a soundness proof of the method is provided. Second, the Certified Satisfiability method is optimized with SAT solvers for the setting of Certified Model Checking, and several examples of systems and properties are provided. Third, a new dual method of context-based tableaux and sequent calculi is introduced to perform Certified Satisfiability for CTL and ECTL formulas. The method is presented together with an algorithm that generates a model when the formulas are satisfiable and a proof when they are not. Finally, an implementation of the method for CTL is presented, along with experiments comparing the proposed method against another method with similar characteristics.
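The tableau and sequent method above operates on PLTL formulas in Negation Normal Form. As an illustrative sketch (not the thesis's implementation), the standard NNF transformation pushes negations inward using De Morgan's laws and the temporal dualities ¬X f ≡ X ¬f, ¬(f U g) ≡ (¬f) R (¬g), and ¬(f R g) ≡ (¬f) U (¬g); the tuple-based formula encoding here is an assumption for the example:

```python
# Illustrative sketch: converting a PLTL formula to Negation Normal Form
# (NNF), the input form required by the context-based tableau/sequent
# method described above. Formulas are nested tuples:
#   ('atom', p), ('not', f), ('and', f, g), ('or', f, g),
#   ('X', f) "next", ('U', f, g) "until", ('R', f, g) "release".

def nnf(f):
    op = f[0]
    if op == 'atom':
        return f
    if op == 'not':
        g = f[1]
        if g[0] == 'atom':
            return f                      # negated literal: already in NNF
        if g[0] == 'not':
            return nnf(g[1])              # eliminate double negation
        if g[0] == 'X':
            return ('X', nnf(('not', g[1])))     # ¬X f  ≡  X ¬f
        # De Morgan laws plus the until/release duality
        duals = {'and': 'or', 'or': 'and', 'U': 'R', 'R': 'U'}
        return (duals[g[0]],) + tuple(nnf(('not', h)) for h in g[1:])
    # positive connective: recurse on subformulas
    return (op,) + tuple(nnf(h) for h in f[1:])
```

For example, `nnf(('not', ('U', ('atom', 'p'), ('atom', 'q'))))` yields the release formula `('R', ('not', ('atom', 'p')), ('not', ('atom', 'q')))`, with negations applied only to atoms.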
The value-free ideal in codes of conduct for research integrity
While the debate on values in science focuses on normative questions on the level of the individual (e.g. should researchers try to make their work as value free as possible?), comparatively little attention has been paid to the institutional and professional norms that researchers are expected to follow. To address this knowledge gap, we conduct a content analysis of leading national codes of conduct for research integrity of European countries, and structure our analysis around the question: do these documents allow for researchers to be influenced by “non-epistemic” (moral, cultural, commercial, political, etc.) values or do they prohibit such influence in compliance with the value-free ideal (VFI) of science?
Our results paint a complex picture. On the one hand, codes of conduct consider many non-epistemic values to be a legitimate influence on the decision-making of researchers. On the other, most of these documents include what we call VFI-like positions: passages claiming that researchers should be free and independent from any external influence. This shows that while many research integrity documents do not fully endorse the VFI, they do not reject it and continue to be implicitly influenced by it. This results in internal tensions and underdetermined guidance on non-epistemic values that may limit some of the uses of research integrity codes, especially for purposes of ethical self-regulation. While codes of conduct cannot be expected to decide how researchers should act in every instance, we do suggest that they acknowledge the challenges of integrating non-epistemic values in research in a more explicit fashion.
When Personalization Harms: Reconsidering the Use of Group Attributes in Prediction
Machine learning models are often personalized with categorical attributes that are protected, sensitive, self-reported, or costly to acquire. In this work, we show that models personalized with group attributes can reduce performance at a group level. We propose formal conditions to ensure the "fair use" of group attributes in prediction tasks -- i.e., collective preference guarantees, checkable by training one additional model, that each group who provides personal data will receive a tailored gain in performance in return. We present sufficient conditions to ensure fair use in empirical risk minimization and characterize failure modes that lead to fair use violations due to standard practices in model development and deployment. We present a comprehensive empirical study of fair use in clinical prediction tasks. Our results demonstrate the prevalence of fair use violations in practice and illustrate simple interventions to mitigate their harm.
Comment: ICML 2023 Ora
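The "one additional model" check described above can be sketched as follows. This is a hedged illustration, not the paper's experimental setup: the synthetic data, the ordinary-least-squares model, and the MSE metric are all assumptions chosen to keep the example self-contained.

```python
import numpy as np

# Illustrative sketch of a fair-use check: fit one model WITH the group
# attribute and one WITHOUT, then compare performance separately per group.
# Personalization is "fair" (in the paper's sense) only if every group that
# reports its attribute gains performance relative to the generic model.

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                  # binary group attribute
x = rng.normal(size=n)
y = x + 0.5 * group + rng.normal(scale=0.1, size=n)

def fit_predict(features):
    X = np.column_stack([np.ones(n)] + features)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
    return X @ w

pred_generic = fit_predict([x])                # ignores the group attribute
pred_personal = fit_predict([x, group])        # personalized model

for g in (0, 1):
    m = group == g
    mse_gen = np.mean((y[m] - pred_generic[m]) ** 2)
    mse_per = np.mean((y[m] - pred_personal[m]) ** 2)
    print(f"group {g}: generic MSE {mse_gen:.3f}, personalized MSE {mse_per:.3f}")
```

In this toy data the group attribute genuinely shifts the outcome, so personalization lowers the error for both groups; a fair-use violation would show up as a group whose personalized error is worse than its generic error.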
The FormAI Dataset: Generative AI in Software Security Through the Lens of Formal Verification
This paper presents the FormAI dataset, a large collection of 112,000 AI-generated compilable and independent C programs with vulnerability classification. We introduce a dynamic zero-shot prompting technique constructed to spawn diverse programs utilizing Large Language Models (LLMs). The dataset is generated by GPT-3.5-turbo and comprises programs with varying levels of complexity. Some programs handle complicated tasks like network management, table games, or encryption, while others deal with simpler tasks like string manipulation. Every program is labeled with the vulnerabilities found within the source code, indicating the type, line number, and vulnerable function name. This is accomplished by employing a formal verification method using the Efficient SMT-based Bounded Model Checker (ESBMC), which uses model checking, abstract interpretation, constraint programming, and satisfiability modulo theories to reason over safety/security properties in programs. This approach definitively detects vulnerabilities and offers a formal model known as a counterexample, thus eliminating the possibility of generating false positive reports. We have associated the identified vulnerabilities with Common Weakness Enumeration (CWE) numbers. We make the source code available for the 112,000 programs, accompanied by a separate file containing the vulnerabilities detected in each program, making the dataset ideal for training LLMs and machine learning algorithms. Our study unveiled that according to ESBMC, 51.24% of the programs generated by GPT-3.5 contained vulnerabilities, thereby presenting considerable risks to software safety and security.
Comment: https://github.com/FormAI-Datase
Ditransitives in Germanic languages. Synchronic and diachronic aspects
This volume brings together twelve empirical studies on ditransitive constructions in Germanic languages and their varieties, past and present. Specifically, the volume includes contributions on a wide variety of Germanic languages, including English, Dutch, and German, but also Danish, Swedish, and Norwegian, as well as lesser-studied ones such as Faroese. While the first part of the volume focuses on diachronic aspects, the second part showcases a variety of synchronic aspects relating to ditransitive patterns. Methodologically, the volume covers both experimental and corpus-based studies. The papers address questions such as the cross-linguistic pervasiveness and cognitive reality of factors involved in the choice between different ditransitive constructions, and differences and similarities in the diachronic development of ditransitives. The volume's broad scope and comparative perspective offer comprehensive insights into well-known phenomena and further our understanding of variation across languages of the same family.
The BIRD Study: How should best interests decisions concerning end-stage kidney disease care for adults be made?
This thesis investigates “best interests” decisions concerning the care of adults with or approaching end-stage kidney failure. I focus on the ethico-legal dimensions of questions of dialysis provision versus conservative kidney management. Through an empirical bioethics approach, I complement my normative inquiry with qualitative exploration of the views and experiences of three stakeholder groups: nephrologists, renal nurses, and “consultees” (family members). Limited existing literature lacks consensus on how these decisions should be made, but overwhelmingly recognises difficulties in involving various stakeholders and manoeuvring towards an appropriate decision without conflict. There is acknowledgement of the complexity of balancing medical and non-medical factors, with particular reference to what the patient might value. Participants in my own empirical research similarly highlighted areas of conflict in their own experiences. Whilst wanting to respect the patient’s own care preferences, healthcare professionals and consultees alike spoke of a difficulty in accurately identifying such preferences. For professionals, resulting disagreements had the potential to lead them down the “path of least resistance” in trying to maintain relationships with those close to the patient. Employing a process of reflective equilibrium, I combine my own intuitions with the perspectives identified in the literature and my empirical data to reach a set of coherent positions on how these best interests decisions should be made. I argue that active discussions should begin in advance of any significant care decision arising. These should focus on exploring not only what care options the patient might want, but also how the patient might want any future best interests decision to be approached.
Further, these discussions should include the clarification of stakeholder roles in best interests decisions and sensitively set expectations, following which strong communication should remain consistent. In addition, I highlight where research is needed to supplement my recommendations.
Impossibility Theorems for Feature Attribution
Despite a sea of interpretability methods that can produce plausible explanations, the field has also empirically seen many failure cases of such methods. In light of these results, it remains unclear for practitioners how to use these methods and choose between them in a principled way. In this paper, we show that for moderately rich model classes (easily satisfied by neural networks), any feature attribution method that is complete and linear -- for example, Integrated Gradients and SHAP -- can provably fail to improve on random guessing for inferring model behaviour. Our results apply to common end-tasks such as characterizing local model behaviour, identifying spurious features, and algorithmic recourse. One takeaway from our work is the importance of concretely defining end-tasks: once such an end-task is defined, a simple and direct approach of repeated model evaluations can outperform many other complex feature attribution methods.
Comment: 36 pages, 4 figures. Significantly expanded experiment
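The completeness property the abstract mentions (attributions sum to the change in model output from a baseline) can be checked numerically. The sketch below is illustrative, not from the paper: the toy function, its analytic gradient, and the midpoint-rule step count are assumptions chosen so the example runs standalone.

```python
import numpy as np

# Hedged sketch: numerically verifying the "completeness" axiom of
# Integrated Gradients on a toy differentiable function --
# sum of attributions ~= f(x) - f(baseline).

def f(x):
    return x[0] * x[1] + np.sin(x[2])

def grad_f(x):
    # analytic gradient of the toy f above
    return np.array([x[1], x[0], np.cos(x[2])])

def integrated_gradients(x, baseline, steps=2000):
    """Approximate the path integral of the gradient from baseline to x
    with a midpoint Riemann sum, then scale by (x - baseline)."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

x = np.array([1.0, 2.0, 0.5])
b = np.zeros(3)
attr = integrated_gradients(x, b)
gap = f(x) - f(b)
print(attr.sum(), gap)   # completeness: the two values should nearly coincide
```

Completeness constrains only the *sum* of the attributions, not how credit is divided among features, which is one reason (per the abstract) a complete and linear method can still be uninformative for a concrete end-task.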
Context and uncertainty in decisions from experience
From the moment we wake up each morning, we are faced with countless choices. Should we press snooze on our alarm? Have toast or cereal for breakfast? Bring an umbrella? Agree to work on that new project? Go to the gym or eat a whole pizza while watching Netflix? The challenge when studying decision-making is to collapse these diverse scenarios into feasible experimental methods. The standard theoretical approach is to represent options using outcomes and probabilities, and this has provided a rationale for studying decisions using gambling tasks. These tasks typically involve repeated choices between a single pair of options and outcomes that are determined probabilistically. Thus, the two sections in this thesis ask a simple question: are we missing something by using pairs of options that are divorced from the context in which we make choices outside the psychology laboratory?
The first section focuses on the impact of extreme outcomes within a decision context. Chapter 2 addresses whether there is a rational explanation for why these outcomes appear in decisions from experience and numerous other cognitive domains. Chapters 3-5 describe six experiments that distinguish between plausible theories based on whether they measure extremity as categorical, ordinal, or continuous; whether extremity refers to the centre, the edges, or neighbouring outcomes; whether outcomes are represented as types or tokens; and whether extreme outcomes are defined using temporal or distributional characteristics. In the second section, we shift our focus to how people perceive uncertainty. We examine a distinction between uncertainty that is attributed to inadequate knowledge and uncertainty that is attributed to an inherently random process. Chapter 6 describes three experiments that examine whether allowing participants to map their uncertainty onto observable variability leads them to perceive it as potentially resolvable rather than purely stochastic. We then examine how this influences whether they seek additional information. In summary, the experiments described in these two sections demonstrate the importance of context and uncertainty in understanding how we make decisions