    FUSE Measurements of Interstellar Fluorine

    The source of fluorine is not well understood, although core-collapse supernovae, Wolf-Rayet stars, and asymptotic giant branch stars have been suggested as production sites. A search for evidence of the ν-process during Type II supernovae is presented. Absorption from interstellar F I is seen in spectra of HD 208440 and HD 209339A acquired with the Far Ultraviolet Spectroscopic Explorer. In order to extract the column density for F I from the line at 954 Å, absorption from H2 has to be modeled and then removed. Our analysis indicates that for H2 column densities less than about 3 × 10^20 cm^-2, the amount of F I can be determined from λ954. For these two sight lines, there is no clear indication of enhanced F abundances resulting from the ν-process in a region shaped by past supernovae.
    Comment: 17 pages, 4 figures, accepted for publication in ApJ
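
    The H2-removal step can be illustrated with a short sketch. The Python snippet below is a minimal illustration, not the authors' pipeline: it divides a toy normalized spectrum by an assumed H2 absorption model and converts the residual F I absorption into a column density with the apparent optical depth relation N = 3.768e14 / (f * lambda[Å]) * integral of tau_a(v) dv (Savage & Sembach 1991). The profile shapes and the f-value are illustrative placeholders, not values from the paper.

    import numpy as np

    def aod_column_density(flux_norm, v_kms, f_osc, wavelength_A):
        # Apparent optical depth method (Savage & Sembach 1991):
        # N [cm^-2] = 3.768e14 / (f * lambda[A]) * integral tau_a(v) dv[km/s]
        tau = -np.log(np.clip(flux_norm, 1e-6, None))   # apparent optical depth
        dv = v_kms[1] - v_kms[0]
        return 3.768e14 / (f_osc * wavelength_A) * np.sum(tau) * dv

    # Toy data: the observed profile is an intrinsic F I line multiplied by
    # an assumed H2 absorption model (both shapes are illustrative).
    v = np.linspace(-50.0, 50.0, 201)                   # velocity grid, km/s
    h2_model = 1.0 - 0.3 * np.exp(-(v / 25.0) ** 2)     # modeled H2 absorption
    f_i_line = 1.0 - 0.2 * np.exp(-(v / 8.0) ** 2)      # intrinsic F I profile
    observed = h2_model * f_i_line

    cleaned = observed / h2_model                       # remove modeled H2
    # f_osc is a placeholder; use the adopted F I oscillator strength.
    n_fi = aod_column_density(cleaned, v, f_osc=0.075, wavelength_A=954.83)
    print(f"N(F I) ~ {n_fi:.2e} cm^-2")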

    Application of the NIST AI Risk Management Framework to Surveillance Technology

    This study offers an in-depth analysis of the application and implications of the National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF) within the domain of surveillance technologies, particularly facial recognition technology. Given the inherently high-risk and consequential nature of facial recognition systems, our research emphasizes the critical need for a structured approach to risk management in this sector. The paper presents a detailed case study demonstrating the utility of the NIST AI RMF in identifying and mitigating risks that might otherwise remain unnoticed in these technologies. Our primary objective is to develop a comprehensive risk management strategy that advances the practice of responsible AI utilization in feasible, scalable ways. We propose a six-step process tailored to the specific challenges of surveillance technology that aims to produce a more systematic and effective risk management practice. This process emphasizes continual assessment and improvement, helping companies manage AI-related risks more robustly and deploy AI systems ethically and responsibly. Additionally, our analysis uncovers and discusses critical gaps in the current NIST AI RMF, particularly concerning its application to surveillance technologies. These insights contribute to the evolving discourse on AI governance and risk management, highlighting areas for future refinement and development in frameworks like the NIST AI RMF.
    Comment: 14 pages, 2 figures
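
    The six steps themselves are not spelled out in the abstract, so the sketch below is not the paper's process. It is a generic Python illustration of a risk register organized around the NIST AI RMF's published core functions (Govern, Map, Measure, Manage), with hypothetical risks and scores for a facial recognition deployment and a re-ranking loop standing in for continual assessment.

    from dataclasses import dataclass, field

    @dataclass
    class Risk:
        description: str
        rmf_function: str      # "Govern" | "Map" | "Measure" | "Manage"
        likelihood: int        # 1 (rare) .. 5 (near-certain)
        impact: int            # 1 (minor) .. 5 (severe)
        mitigations: list = field(default_factory=list)

        @property
        def score(self) -> int:
            # Simple likelihood-times-impact ranking; real programs may use
            # qualitative scales or scenario analysis instead.
            return self.likelihood * self.impact

    register = [
        Risk("Higher false-match rate for some demographic groups",
             "Measure", likelihood=4, impact=5,
             mitigations=["disaggregated accuracy testing", "human review"]),
        Risk("Function creep beyond the stated surveillance purpose",
             "Govern", likelihood=3, impact=4,
             mitigations=["use-policy audits", "access logging"]),
    ]

    # Continual assessment: re-rank on every review cycle and address the
    # highest-scoring risks first.
    for risk in sorted(register, key=lambda r: r.score, reverse=True):
        print(f"[{risk.rmf_function}] score={risk.score}: {risk.description}")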

    Beyond Behaviorist Representational Harms: A Plan for Measurement and Mitigation

    Algorithmic harms are commonly categorized as either allocative or representational. This study addresses the latter, examining current definitions of representational harms to discern what is included and what is not. This analysis motivates our expansion beyond behavioral definitions to encompass harms to cognitive and affective states. The paper outlines high-level requirements for measurement, identifies the expertise needed to implement this approach, and illustrates it through a case study. Our work highlights the unique vulnerabilities of large language models to perpetrating representational harms, particularly when these harms go unmeasured and unmitigated. The work concludes by presenting proposed mitigations and delineating when to employ them. The overarching aim of this research is to establish a framework for broadening the definition of representational harms and to translate insights from fairness research into practical measurement and mitigation praxis.
    Comment: 23 pages, 7 figures
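
    As a concrete instance of the purely behavioral measurement the paper argues is incomplete, the sketch below compares how often model completions attach stereotyped descriptors to different demographic groups. The group labels, prompt template, term lexicon, and the stubbed generate() call are hypothetical stand-ins; a real harness would query an actual LLM and use a validated classifier rather than keyword matching.

    import random

    GROUPS = ["group_a", "group_b"]          # placeholder demographic labels
    TEMPLATE = "Describe a typical person from {group}."
    STEREOTYPE_TERMS = {"lazy", "criminal", "submissive"}  # illustrative lexicon

    def generate(prompt: str) -> str:
        # Stub standing in for an LLM call.
        return random.choice(["a hardworking neighbor", "a lazy worker"])

    def stereotype_rate(group: str, n_samples: int = 200) -> float:
        # Fraction of completions containing any lexicon term.
        hits = 0
        for _ in range(n_samples):
            completion = generate(TEMPLATE.format(group=group)).lower()
            hits += any(term in completion for term in STEREOTYPE_TERMS)
        return hits / n_samples

    for group in GROUPS:
        print(f"{group}: stereotyped-completion rate = {stereotype_rate(group):.2%}")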

    Commercial AI, Conflict, and Moral Responsibility: A theoretical analysis and practical approach to the moral responsibilities associated with dual-use AI technology

    This paper presents a theoretical analysis and practical approach to the moral responsibilities that arise when developing AI systems for non-military applications that may nonetheless be used in conflict. We argue that AI represents a form of crossover technology that differs from previous historical examples of dual- or multi-use technology because it has a multiplicative effect across other technologies. As a result, existing analyses of ethical responsibilities around dual-use technologies do not necessarily work for AI systems. We instead argue that stakeholders involved in the AI system lifecycle are morally responsible for uses of their systems that are reasonably foreseeable. The core idea is that an agent's moral responsibility for some action is not determined by their intentions alone; we must also consider what the agent could reasonably have foreseen to be potential outcomes of their action, such as the potential use of a system in conflict even when it is not designed for that. In particular, we contend that it is reasonably foreseeable that: (1) civilian AI systems will be applied to active conflict, including conflict support activities; (2) the use of civilian AI systems in conflict will impact applications of the law of armed conflict; and (3) crossover AI technology will be applied to conflicts that fall short of armed conflict. Given these reasonably foreseeable outcomes, we present three technically feasible actions that developers of civilian AI systems can take to potentially mitigate their moral responsibility: (a) establishing systematic approaches to multi-perspective capability testing, (b) integrating digital watermarking in model weight matrices, and (c) utilizing monitoring and reporting mechanisms for conflict-related AI applications.
    Comment: 9 pages
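
    Mitigation (b) names watermarking of weight matrices without specifying a scheme, so the following is only a generic white-box sketch in the spirit of Uchida et al. (2017): a keyed random projection embeds a provenance bit string into a weight matrix via a least-norm perturbation, and the same key reads it back. The layer size, key seed, margin, and bit length are illustrative assumptions.

    import numpy as np

    def embed_watermark(W, bits, key_seed, margin=0.1):
        # Perturb W minimally so that sign(P @ w) encodes the bit string,
        # where P is a secret projection derived from key_seed.
        w = W.ravel()
        P = np.random.default_rng(key_seed).standard_normal((len(bits), w.size))
        target = margin * (2 * np.asarray(bits) - 1)   # bits -> +/- margin
        delta = np.linalg.pinv(P) @ (target - P @ w)   # least-norm correction
        return (w + delta).reshape(W.shape)

    def read_watermark(W, n_bits, key_seed):
        # Regenerate the keyed projection and threshold at zero.
        P = np.random.default_rng(key_seed).standard_normal((n_bits, W.size))
        return (P @ W.ravel() > 0).astype(int)

    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 64)) * 0.02           # stand-in model layer
    bits = rng.integers(0, 2, size=32)                 # provenance signature
    W_marked = embed_watermark(W, bits, key_seed=42)
    assert np.array_equal(read_watermark(W_marked, 32, key_seed=42), bits)
    print("max weight change:", np.abs(W_marked - W).max())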

    Causal Pluralism in Philosophy: Empirical Challenges and Alternative Proposals

    An increasing number of arguments for causal pluralism invoke empirical psychological data. Different aspects of causal cognition, specifically causal perception and causal inference, are thought to involve distinct cognitive processes and representations, and thereby to distinctively support transference and dependency theories of causation, respectively. We argue that this dualistic picture of causal concepts arises from methodological differences rather than from an actual plurality of concepts. Hence, philosophical causal pluralism is not particularly supported by the empirical data. Serious engagement with cognitive science reveals that the connection between psychological concepts of causation and philosophical notions is substantially more complicated than is traditionally presumed.

    GRACE-C: Generalized Rate Agnostic Causal Estimation via Constraints

    Graphical structures estimated by causal learning algorithms from time series data can provide misleading causal information if the causal timescale of the generating process fails to match the measurement timescale of the data. Existing algorithms provide limited resources to respond to this challenge, so researchers must either use models that they know are likely misleading, or else forego causal learning entirely. Existing methods face up to four distinct shortfalls: they may 1) require that the difference between causal and measurement timescales is known; 2) handle only a very small number of random variables when the timescale difference is unknown; 3) apply only to pairs of variables; or 4) be unable to find a solution given statistical noise in the data. This research addresses these challenges. Our approach combines constraint programming with both theoretical insights into the problem structure and prior information about admissible causal interactions to achieve speed-ups of multiple orders of magnitude. The resulting system maintains theoretical guarantees while scaling to significantly larger sets of random variables (>100) without knowledge of timescale differences. This method is also robust to edge misidentification and can use parametric connection strengths, while optionally finding the optimal solution among many possible ones.
    Comment: published in International Conference on Learning Representations (Spotlight)
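
    The timescale mismatch can be made concrete with a small sketch. This illustrates the undersampling phenomenon the paper addresses, not the GRACE-C solver itself: if A is the adjacency matrix at the true causal timescale (A[i, j] = 1 iff i causes j in one tick), then sampling every u ticks produces a directed edge wherever a length-u directed path exists, and a bidirected edge wherever a common ancestor reaches two nodes at equal lag below u.

    import numpy as np

    def undersample(A, u):
        # Observed graph when a process with one-tick adjacency A is
        # measured every u ticks.
        A = (np.asarray(A) > 0).astype(int)
        directed = np.linalg.matrix_power(A, u) > 0     # length-u paths
        bidirected = np.zeros(A.shape, dtype=bool)
        reach = A.copy()                                # length-k paths
        for _ in range(u - 1):                          # k = 1 .. u-1
            bidirected |= (reach.T @ reach) > 0         # common ancestor at lag k
            reach = ((reach @ A) > 0).astype(int)
        np.fill_diagonal(bidirected, False)
        return directed, bidirected

    # Toy system: node 0 drives itself and both 1 and 2.
    A = np.array([[1, 1, 1],
                  [0, 0, 0],
                  [0, 0, 0]])
    d2, b2 = undersample(A, u=2)
    print("directed at u=2:\n", d2.astype(int))
    print("bidirected at u=2:\n", b2.astype(int))   # includes spurious 1 <-> 2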