
    An examination of the verbal behaviour of intergroup discrimination

    This thesis examined relationships between psychological flexibility, psychological inflexibility, prejudicial attitudes, and dehumanization across three cross-sectional studies, with an additional proposed experimental study. Psychological flexibility refers to mindful attention to the present moment, willing acceptance of private experiences, and engaging in behaviours congruent with one’s freely chosen values. Inflexibility, on the other hand, indicates a tendency to suppress unwanted thoughts and emotions, entanglement with one’s thoughts, and rigid behavioural patterns. Study 1 found limited correlations between inflexibility and sexism, racism, homonegativity, and dehumanization. Study 2 demonstrated more consistent positive associations between inflexibility and prejudice. Study 3 controlled for right-wing authoritarianism and social dominance orientation, finding that inflexibility predicted hostile sexism and racism beyond these factors. While showing some relationships, particularly with sexism and racism, psychological inflexibility did not correlate consistently with the varied prejudices across studies. The proposed randomized controlled trial aims to evaluate an Acceptance and Commitment Therapy intervention to reduce sexism through enhanced psychological flexibility. Overall, the findings provide mixed support for the utility of flexibility-based skills in addressing complex societal prejudices. Research should continue to examine flexibility integrated with socio-cultural approaches to promote equity.
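
    Study 3's "beyond these factors" claim is an incremental-validity test, commonly run as a hierarchical regression. A minimal sketch of that style of analysis in Python, with simulated data and hypothetical variable names, not the thesis's actual materials:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated data for illustration only; column names are hypothetical.
    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({
        "rwa": rng.normal(size=n),            # right-wing authoritarianism
        "sdo": rng.normal(size=n),            # social dominance orientation
        "inflexibility": rng.normal(size=n),  # psychological inflexibility
    })
    df["hostile_sexism"] = (0.4 * df["rwa"] + 0.3 * df["sdo"]
                            + 0.2 * df["inflexibility"] + rng.normal(size=n))

    # Step 1: covariates only; Step 2: add the focal predictor.
    step1 = smf.ols("hostile_sexism ~ rwa + sdo", data=df).fit()
    step2 = smf.ols("hostile_sexism ~ rwa + sdo + inflexibility", data=df).fit()
    print(f"Delta R^2 from adding inflexibility: {step2.rsquared - step1.rsquared:.3f}")
    ```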

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    Weak-Instrument and Pleiotropy-Robust Methods for Mendelian randomisation, with Applications to Mental Health

    This PhD dissertation focused on developing and applying new methods for Mendelian Randomisation (MR), a technique that uses genetic variants as instrumental variables to assess causal effects of exposures on health outcomes. The applied work centres on psychiatric research and mental health, with a range of analyses that address causal risk factors for depression using these genetics-informed methods. The first contribution of the dissertation is a set of new methods for pleiotropy-robust MR that leverage the sex specificity of phenotypes. These methods allow more accurate and robust estimation of causal effects by cancelling out potential pleiotropic effects of genetic instruments. The second contribution is a new method for appraising high-dimensional correlated variables in multivariable MR. This method allows multiple correlated variables to be included as exposures in MR analyses, through a transformation to groups of exposures with attractive statistical properties and biological meaning. Finally, the dissertation provides an applied analysis of how inflammation and BMI affect a range of depression phenotypes using cutting-edge methods. This analysis replicates previous results on the harmful effects of overweight on mood and challenges the independent effect of inflammation as proxied by CRP. The introduction of the dissertation is divided into two parts. The first part provides a walkthrough of the epidemiological concepts of bias, randomisation, and causal inference with observational data. The second part is a specific introduction to MR, including its underlying assumptions and limitations, as well as a detailed discussion of developments that make it more robust. Overall, this dissertation contributes new methods and applied analyses to the field of MR, with potential implications for researchers and practitioners.
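
    For orientation, the baseline estimator that pleiotropy-robust methods extend is the inverse-variance weighted (IVW) combination of per-variant Wald ratios from summary statistics. A minimal sketch with toy numbers, not the dissertation's own method:

    ```python
    import numpy as np

    def ivw_mr(beta_x, beta_y, se_y):
        """Fixed-effect IVW causal estimate: a weighted regression of the
        SNP-outcome associations (beta_y) on the SNP-exposure associations
        (beta_x) through the origin, weighted by 1/se_y^2."""
        beta_x, beta_y, se_y = map(np.asarray, (beta_x, beta_y, se_y))
        w = 1.0 / se_y**2                                   # inverse-variance weights
        est = np.sum(w * beta_x * beta_y) / np.sum(w * beta_x**2)
        se = np.sqrt(1.0 / np.sum(w * beta_x**2))
        return est, se

    # Toy per-SNP summary statistics for illustration only
    est, se = ivw_mr(beta_x=[0.10, 0.08, 0.12],
                     beta_y=[0.05, 0.03, 0.07],
                     se_y=[0.010, 0.012, 0.009])
    print(f"IVW causal estimate = {est:.3f} (SE {se:.3f})")
    ```

    Pleiotropy-robust approaches, including the sex-specificity methods developed here, modify or replace this estimator when some instruments affect the outcome through paths other than the exposure.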

    Effects of municipal smoke-free ordinances on secondhand smoke exposure in the Republic of Korea

    Objective: To reduce premature deaths due to secondhand smoke (SHS) exposure among non-smokers, the Republic of Korea (ROK) adopted changes to the National Health Promotion Act, which allowed local governments to enact municipal ordinances to strengthen their authority to designate smoke-free areas and levy penalty fines. In this study, we examined national trends in SHS exposure after the introduction of these municipal ordinances at the city level in 2010. Methods: We used interrupted time series analysis to assess whether trends in SHS exposure in the workplace and at home, and the primary cigarette smoking rate, changed following the policy adjustment in the national legislation in the ROK. Population-standardized data for selected variables were retrieved from a nationally representative survey dataset and used to study the policy action’s effectiveness. Results: Following the change in the legislation, SHS exposure in the workplace reversed course from an increasing (18% per year) trend prior to the introduction of these smoke-free ordinances to a decreasing (−10% per year) trend after adoption and enforcement of these laws (β2 = 0.18, p-value = 0.07; β3 = −0.10, p-value = 0.02). SHS exposure at home (β2 = 0.10, p-value = 0.09; β3 = −0.03, p-value = 0.14) and the primary cigarette smoking rate (β2 = 0.03, p-value = 0.10; β3 = 0.008, p-value = 0.15) showed no significant changes over the sampled period. Although analyses stratified by sex showed that the allowance of municipal ordinances reduced SHS exposure in the workplace for both males and females, they did not affect the primary cigarette smoking rate as much, especially among females. Conclusion: Strengthening the role of local governments by giving them the authority to enact and enforce penalties for SHS exposure violations helped the ROK reduce SHS exposure in the workplace. However, smoking behaviors and related activities appear to have shifted to less restrictive areas such as streets and apartment hallways, negating some of the effects of these ordinances. Future studies should investigate how smoke-free policies beyond public places can further reduce SHS exposure in the ROK.
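
    The β2/β3 coefficients come from a segmented regression of the interrupted time series. A minimal sketch of one common parameterisation (level, underlying trend, step change, and slope change at the intervention), using synthetic data; the study's exact coding of the terms may differ:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    years = np.arange(2005, 2017)
    # Synthetic workplace SHS exposure: rising before 2010, falling after
    rate = np.where(years < 2010,
                    0.30 + 0.020 * (years - 2005),
                    0.40 - 0.015 * (years - 2010))
    df = pd.DataFrame({
        "shs_workplace": rate + rng.normal(0, 0.005, len(years)),
        "time": years - years.min(),                # underlying trend
        "post": (years >= 2010).astype(int),        # level change at ordinance
        "time_after": np.maximum(0, years - 2010),  # slope change after 2010
    })

    fit = smf.ols("shs_workplace ~ time + post + time_after", data=df).fit()
    print(fit.params)  # 'time_after' captures the post-2010 change in slope
    ```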

    Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation

    Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars that should be met throughout the system's entire life cycle: it should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective. However, attaining truly trustworthy AI concerns a wider vision that comprises the trustworthiness of all processes and actors that are part of the system's life cycle, and considers the previous aspects from different lenses. A more holistic vision contemplates four essential axes: the global principles for the ethical use and development of AI-based systems, a philosophical take on AI ethics, a risk-based approach to AI regulation, and the aforementioned pillars and requirements. The seven requirements (human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) are analyzed from a triple perspective: what each requirement for trustworthy AI is, why it is needed, and how it can be implemented in practice. A practical approach to implementing trustworthy AI systems, in turn, allows the responsibility of AI-based systems before the law to be defined through a given auditing process. The responsible AI system is therefore the resulting notion we introduce in this work, a concept of utmost necessity that can be realized through auditing processes, subject to the challenges posed by the use of regulatory sandboxes. Our multidisciplinary vision of trustworthy AI culminates in a debate on the recently published diverging views about the future of AI. Our reflections conclude that regulation is key to reaching a consensus among these views, and that trustworthy and responsible AI systems will be crucial for the present and future of our society.

    Guided rewriting and constraint satisfaction for parallel GPU code generation

    Graphics Processing Units (GPUs) are notoriously hard to optimise manually due to their scheduling and memory hierarchies. What is needed are good automatic code generators and optimisers for such parallel hardware. Functional approaches such as Accelerate, Futhark and LIFT leverage a high-level algorithmic Intermediate Representation (IR) to expose parallelism and abstract the implementation details away from the user. However, producing efficient code for a given accelerator remains challenging. Existing code generators depend either on user input to choose among a subset of hard-coded optimisations or on automated exploration of the implementation search space. The former suffers from a lack of extensibility, while the latter is too costly due to the size of the search space. A hybrid approach is needed, in which a space of valid implementations is built automatically and explored with the aid of human expertise. This thesis presents a solution combining user-guided rewriting and automatically generated constraints to produce high-performance code. The first contribution is an automatic tuning technique that finds a balance between performance and memory consumption. Leveraging its functional patterns, the LIFT compiler is empowered to infer tuning constraints and limit the search to valid tuning combinations only. Next, the thesis reframes parallelisation as a constraint satisfaction problem. Parallelisation constraints are extracted automatically from the input expression, and a solver is used to identify valid rewrites. The constraints truncate the search space to valid parallel mappings only, by capturing the scheduling restrictions of the GPU in the context of a given program. A synchronisation barrier insertion technique is proposed to prevent data races and improve the efficiency of the generated parallel mappings. The final contribution of this thesis is the guided rewriting method, in which the user encodes a design space of structural transformations using high-level IR nodes called rewrite points. These strongly typed pragmas express macro rewrites and expose design choices as explorable parameters. The thesis proposes a small set of reusable rewrite points to achieve tiling, cache locality, data reuse and memory optimisation. A comparison with the handwritten kernels of the vendor-provided ARM Compute Library and with the TVM code generator demonstrates the effectiveness of this thesis' contributions. With convolution as a use case, LIFT-generated direct and GEMM-based convolution implementations are shown to perform on par with state-of-the-art solutions on a mobile GPU. Overall, this thesis demonstrates that a functional IR lends itself well to user-guided and automatic rewriting for high-performance code generation.
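
    To make the constraint-satisfaction framing concrete, the sketch below enumerates valid (grid, block) mappings for a 2D parallel loop nest under generic CUDA-style limits. This is a toy illustration of the idea of truncating the search space to valid parallel mappings, not LIFT's actual constraint model:

    ```python
    from itertools import product

    # Loop extents of a 2D parallel computation to be mapped onto the GPU
    N, M = 4096, 4096
    MAX_THREADS_PER_BLOCK = 1024          # generic hardware limit
    candidates = [32, 64, 128, 256, 512, 1024]

    valid = []
    for bx, by in product(candidates, repeat=2):
        if bx * by > MAX_THREADS_PER_BLOCK:
            continue                      # violates the per-block thread limit
        if N % bx or M % by:
            continue                      # require an exact tiling of the loop nest
        valid.append(((N // bx, M // by), (bx, by)))  # (grid dims, block dims)

    # Only the mappings satisfying every constraint remain to be explored
    for grid, block in valid[:5]:
        print(f"grid={grid}, block={block}")
    ```

    A real solver would also encode program-specific constraints (data dependencies, shared-memory capacity, barrier placement), which is what shrinks the space enough for guided exploration to be practical.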

    Talking about personal recovery in bipolar disorder: Integrating health research, natural language processing, and corpus linguistics to analyse peer online support forum posts

    Background: Personal recovery, ‘living a satisfying, hopeful and contributing life even with the limitations caused by the illness’ (Anthony, 1993), is of particular value in bipolar disorder, where symptoms often persist despite treatment. So far, personal recovery has only been studied in researcher-constructed environments (interviews, focus groups). Support forum posts can serve as a complementary naturalistic data source. Objective: The overarching aim of this thesis was to study personal recovery experiences that people living with bipolar disorder have shared in online support forums, by integrating health research, natural language processing (NLP), and corpus linguistics in a mixed-methods approach within a pragmatic research paradigm, while considering ethical issues and involving people with lived experience. Methods: This mixed-methods study analysed: 1) previous qualitative evidence on personal recovery in bipolar disorder from interviews and focus groups; 2) who self-reports a bipolar disorder diagnosis on the online discussion platform Reddit; 3) the relationship of mood and posting in mental health-specific Reddit forums (subreddits); 4) discussions of personal recovery in bipolar disorder subreddits. Results: A systematic review of qualitative evidence resulted in the first framework for personal recovery in bipolar disorder, POETIC (Purpose & meaning, Optimism & hope, Empowerment, Tensions, Identity, Connectedness). Mainly young or middle-aged US-based adults self-report a bipolar disorder diagnosis on Reddit. Of these, those experiencing more intense emotions appear more likely to post in mental health support subreddits. Their personal recovery-related discussions in bipolar disorder subreddits primarily focussed on three domains: Purpose & meaning (particularly reproductive decisions and work), Connectedness (romantic relationships, social support), and Empowerment (self-management, personal responsibility). Support forum data highlighted personal recovery issues that came up exclusively or more frequently online compared with previous evidence from interviews and focus groups. Conclusion: This project is the first to analyse non-reactive data on personal recovery in bipolar disorder. By indicating the key areas that people focus on in personal recovery when posting freely, and the language they use, it provides a helpful starting point for formal and informal carers to understand the concerns of people diagnosed with bipolar disorder and to consider how best to offer support.
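
    Identifying who self-reports a diagnosis on Reddit is typically done with high-precision pattern matching over post text. A minimal sketch of that style of filter; the pattern is illustrative, not the thesis's actual inclusion criterion:

    ```python
    import re

    # Match first-person diagnosis statements while excluding mentions of
    # other people ("my sister was diagnosed...").
    SELF_REPORT = re.compile(
        r"\bI\s+(?:was|am|have\s+been|'ve\s+been|got)\s+"
        r"(?:recently\s+|just\s+)?diagnosed\s+with\s+bipolar\b",
        re.IGNORECASE,
    )

    posts = [
        "I was diagnosed with bipolar 2 last spring.",
        "My sister was diagnosed with bipolar disorder.",  # not a self-report
    ]
    for post in posts:
        print(bool(SELF_REPORT.search(post)), "-", post)
    ```

    In practice such patterns trade recall for precision, which is why studies of this kind usually validate a sample of matches by hand before downstream analysis.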

    Artificial Intelligence and International Conflict in Cyberspace

    This edited volume explores how artificial intelligence (AI) is transforming international conflict in cyberspace. Over the past three decades, cyberspace has developed into a crucial frontier and issue of international conflict. However, scholarly work on the relationship between AI and conflict in cyberspace has been produced along somewhat rigid disciplinary boundaries and an even more rigid sociotechnical divide, wherein technical and social scholarship are seldom brought into conversation. This is the first volume to address these themes through a comprehensive and cross-disciplinary approach. With the intent of exploring the question ‘what is at stake with the use of automation in international conflict in cyberspace through AI?’, the chapters in the volume focus on three broad themes: (1) technical and operational, (2) strategic and geopolitical, and (3) normative and legal. These also constitute the three parts in which the chapters of this volume are organised, although these thematic sections should not be considered an analytical or disciplinary demarcation.

    Optimizing T Cell Manufacturing and Quality Using Functionalized Degradable Microscaffolds

    Adoptive cell therapy using chimeric antigen receptor (CAR) T cells has shown promise in treating cancer, but manufacturing large numbers of high-quality cells remains challenging. Currently approved T cell expansion technologies involve anti-CD3 and anti-CD28 antibodies, usually mounted on magnetic beads. This method fails to recapitulate many key signals found in vivo and is also heavily licensed by a few companies, limiting its long-term usefulness to manufacturers and clinicians. Furthermore, highly potent, anti-tumor T cells are generally less-differentiated subtypes such as central memory and stem memory T cells. Despite this understanding, little has been done to optimize T cell expansion for generating these subtypes, including the measurement and feedback control strategies that are necessary for any modern manufacturing process. The goal of this dissertation was to develop a microcarrier-based degradable microscaffold (DMS) T cell expansion system and determine biologically meaningful critical quality attributes and critical process parameters that could be used to optimize for highly potent T cells. We developed and characterized the DMS system, including quality control steps, and demonstrated the feasibility of expanding high-quality T cells. We used Design of Experiments methodology to optimize the DMS platform, and we developed a computational pipeline to identify and model the effects of measurable critical quality attributes and critical process parameters on the final product. Finally, we demonstrated the effectiveness of the DMS platform in vivo. This thesis lays the groundwork for a novel T cell expansion method that can be utilized at scale for clinical trials and beyond.
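
    The Design of Experiments step can be pictured as a coded factorial design with a linear model linking process parameters to a measured quality attribute. A toy sketch with hypothetical factors and a simulated response, not the dissertation's actual DMS parameters:

    ```python
    import itertools
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # 2^3 full factorial over three hypothetical process parameters,
    # coded -1 (low) / +1 (high)
    levels = [-1, 1]
    design = pd.DataFrame(list(itertools.product(levels, repeat=3)),
                          columns=["il2_dose", "bead_ratio", "seed_density"])

    # Simulated quality attribute: fraction of central-memory (Tcm) cells
    rng = np.random.default_rng(1)
    design["tcm_frac"] = (0.40 + 0.05 * design["il2_dose"]
                          - 0.03 * design["bead_ratio"]
                          + rng.normal(0, 0.01, len(design)))

    # Main-effects model: which parameters move the quality attribute?
    fit = smf.ols("tcm_frac ~ il2_dose + bead_ratio + seed_density",
                  data=design).fit()
    print(fit.params)
    ```

    A real optimization campaign would extend this with interaction and curvature terms and use the fitted model to pick the next set of runs.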