3 research outputs found

    The Grand Illusion: The Myth of Software Portability and Implications for ML Progress

    Pushing the boundaries of machine learning often requires exploring different hardware and software combinations. However, the freedom to experiment across tooling stacks can be at odds with the drive for efficiency, which has produced increasingly specialized AI hardware and incentivized consolidation around a narrow set of ML frameworks. Exploratory research can be restricted if software and hardware are co-evolving, making it even harder to stray from mainstream ideas that work well with popular tooling stacks. While this friction increasingly impacts the rate of innovation in machine learning, to our knowledge the lack of portability in tooling has not been quantified. In this work, we ask: How portable are popular ML software frameworks? We conduct a large-scale study of the portability of mainstream ML frameworks across different hardware types. Our findings paint an uncomfortable picture -- frameworks can lose more than 40% of their key functions when ported to other hardware. Worse, even when functions are portable, the slowdown can be extreme and render their performance untenable. Collectively, our results reveal how costly straying from a narrow set of hardware-software combinations can be -- and suggest that specialization of hardware impedes innovation in machine learning research.
    Comment: 28 pages, 13 figures; the associated repo can be found at https://github.com/for-ai/portabilit
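    The portability measurement the abstract describes -- checking which framework functions survive a move to different hardware -- can be illustrated with a toy sketch. This is not the paper's actual benchmark harness; the function names, the toy "device" kernel set, and `measure_portability` are all illustrative assumptions.

    ```python
    # Hypothetical sketch of a function-portability measurement: given a list
    # of framework functions and a backend's support predicate, compute the
    # fraction that still runs. All names here are illustrative, not the
    # paper's real tooling.

    def measure_portability(function_names, backend_supports):
        """Return the fraction of framework functions the backend can run."""
        supported = [n for n in function_names if backend_supports(n)]
        return len(supported) / len(function_names)

    # Toy "device" that supports only a narrow set of kernels, mimicking the
    # kind of function loss the study reports for some hardware ports.
    DEVICE_KERNELS = {"matmul", "relu", "softmax"}
    names = ["matmul", "relu", "softmax", "fft", "topk"]

    rate = measure_portability(names, DEVICE_KERNELS.__contains__)
    print(rate)  # 0.6 -> 40% of these toy functions are lost on this device
    ```

    A real study would also time each surviving function on both devices, since the abstract notes that even portable functions can suffer extreme slowdowns.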

    Investigating Factored Cognition in Large Language Models For Answering Ethically Nuanced Questions

    Large language models (LLMs) are becoming ubiquitous and are often used to answer difficult questions that have important ethical and moral dimensions. However, most LLMs are trained with a unidimensional ethical framework imparted by their designers. To begin to remedy this problem, we employ factored cognition to augment the interpretability of the model's ethical and moral reasoning. In this paper, we demonstrate our API, which takes in a question and breaks it into subquestions that prompt the model for expansions on the problem that explore a wider moral space. The answers to the subquestions are then collected and compiled into a more interpretable response that better illustrates the process by which the model arrived at an answer. We benchmark our approach on moral benchmarks and establish that model performance is slightly decreased but mostly left intact compared to the standalone model.
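    The decompose-answer-compile pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual API: `ask_model` is a stand-in for an LLM call, and the prompt wording is an assumption.

    ```python
    # Hypothetical sketch of factored cognition: break a question into
    # subquestions, answer each independently, then compile a final answer.
    # `ask_model` is a placeholder for any text-in/text-out LLM call.

    def decompose(question, ask_model):
        """Ask the model for subquestions exploring the moral space."""
        prompt = f"List subquestions that explore the moral space of: {question}"
        return ask_model(prompt).splitlines()

    def factored_answer(question, ask_model):
        """Answer each subquestion, then compile an interpretable response."""
        subs = decompose(question, ask_model)
        answers = {s: ask_model(s) for s in subs}
        compile_prompt = "Compile a final answer from:\n" + "\n".join(
            f"Q: {s}\nA: {a}" for s, a in answers.items())
        return ask_model(compile_prompt)

    # Stub model used only so the sketch runs end to end.
    def stub_model(prompt):
        if prompt.startswith("List subquestions"):
            return "Who is affected?\nWhat values conflict?"
        return f"[answer to: {prompt[:30]}...]"

    print(factored_answer("Should I return a lost wallet?", stub_model))
    ```

    The intermediate `answers` dict is what makes the reasoning inspectable: each subquestion's answer can be surfaced alongside the compiled response.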

    Case Studies of AI Policy Development in Africa

    Artificial Intelligence (AI) requires new ways of evaluating national technology use and strategy for African nations. We conduct a survey of existing 'readiness' assessments, both for general digital adoption and for AI policy in particular. We conclude that existing global readiness assessments do not fully capture African states' progress in AI readiness, and we lay the groundwork for how assessments can be better used for the African context. We consider the extent to which these indicators map to the African context and what they miss in capturing African states' on-the-ground work toward AI capability. Through case studies of four African nations of diverse geographic and economic dimensions, we identify nuances missed by global assessments and offer high-level policy considerations for how states can best improve their AI readiness standards and prepare their societies to capture the benefits of AI.