
    Model Cards for Model Reporting

    Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: one trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related AI technology, increasing transparency into how well AI technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.
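
    To make the framework concrete, a model card can be pictured as a small structured record attached to a released model. The sketch below is a minimal illustration only: the `ModelCard` class, its field names, and the example numbers are hypothetical paraphrases of the sections the abstract describes (intended use, evaluation procedure, and metrics disaggregated by demographic and intersectional groups), not the authors' published schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ModelCard:
    """Illustrative model card record; all field names are hypothetical."""
    model_name: str
    intended_use: str                 # contexts the model is suited for
    out_of_scope_uses: List[str]      # contexts it is explicitly not suited for
    evaluation_procedure: str         # how the metrics below were produced
    # Benchmarked metrics disaggregated by (possibly intersectional) group,
    # e.g. {"sex=female,age=18-30": {"accuracy": 0.91}}
    disaggregated_metrics: Dict[str, Dict[str, float]] = field(default_factory=dict)


# Hypothetical card for a smiling-face detector like the one the paper documents;
# the numbers are made up for illustration.
card = ModelCard(
    model_name="smile-detector-v1",
    intended_use="Detecting smiling faces in consumer photo collections.",
    out_of_scope_uses=["Inferring emotional state in hiring or law enforcement."],
    evaluation_procedure="Accuracy on a held-out test set, reported per group.",
    disaggregated_metrics={
        "sex=female": {"accuracy": 0.91},
        "sex=male": {"accuracy": 0.89},
    },
)

# A reader of the card can check group-level gaps before deploying the model.
for group, metrics in card.disaggregated_metrics.items():
    print(group, metrics["accuracy"])
```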

    Recommender systems and their ethical challenges

    This article presents the first systematic analysis of the ethical challenges posed by recommender systems, conducted through a literature review. The article identifies six areas of concern and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: current user-centred approaches do not consider the interests of a variety of other stakeholders (as opposed to just the receivers of a recommendation) in assessing the ethical impacts of a recommender system.

    Vegetarians versus Vegans

    There is a gap in the literature regarding studies that explicitly compare vegetarians to vegans: existing studies typically group these diets together and compare them with carnivorous and omnivorous diets in order to identify similarities and differences. The purpose of this study is to examine vegetarian and vegan diets to see whether differences exist in attitudes towards animals, perceptions of animal usage, consumption, and morality. Participants included vegetarian and vegan Liberty University students (graduate and undergraduate, online and residential) who were at least 18 years old. Overall, one hundred students took part in the study: 50 were vegetarian and 50 were vegan. They completed an anonymous online survey measuring demographics, attitudes towards animals, perceptions of animal usage, and morality. Overall, there was a statistically significant difference between vegans and vegetarians in attitudes toward animals as well as in their perception of human and animal similarity. There was also a significant difference in one of the five moral decision-making domains, the fairness/reciprocity domain. However, in the other four domains (harm/care, in-group/loyalty, authority/respect, and purity/sanctity), there was no significant difference between the vegan and vegetarian groups. Since past studies have grouped vegans and vegetarians together, these results seem to support the importance of maintaining a separation between these diets and subgroups in future studies, since differences may exist between the groups.

    Jockey Club Age-Friendly City Project: Action plan: Tuen Mun

    In response to the global ageing population, the World Health Organization (the “WHO”) devised the concept of “Global Age-friendly Cities” in 2005 to encourage cities all around the world to develop a healthy and comfortable living environment with age-friendly facilities and to provide sufficient community support and health care services which benefit older people, families and society. In order to proactively tackle the challenges of an ageing population and promote the concept of an age-friendly city, the Hong Kong Jockey Club Charities Trust launched the Jockey Club Age-friendly City Project (“Project”) in 2015 in partnership with four gerontology research institutes of local universities: CUHK Jockey Club Institute of Ageing, Sau Po Centre on Ageing of the University of Hong Kong, Asia-Pacific Institute of Ageing Studies of Lingnan University (“LU APIAS”), and Institute of Active Ageing of the Hong Kong Polytechnic University. The four institutes have formed professional teams under this Project to support the eighteen districts in Hong Kong in adopting a bottom-up, district-based approach to developing age-friendly communities. Under the Project, LU APIAS conducted a baseline assessment, comprising questionnaire surveys, focus group interviews and field observation, from May to September 2017 in order to provide relevant information to the Tuen Mun District Council and other district stakeholders on the existing age-friendliness of Tuen Mun District, Hong Kong (“District”). Senior residents in the District have also been recruited as ambassadors to spread the message of an age-friendly city. Training workshops and seminars have been arranged to equip them with the necessary skills and knowledge to perform qualitative research by making assessments in the District with reference to the eight domains of the “Age-friendly City”. Meanwhile, residents have been encouraged to express their views regarding age-friendly facilities and measures in the community. LU APIAS has compiled the results of the baseline assessment, including the questionnaire surveys, focus groups and observations by the ambassadors, into a baseline assessment report. The report, together with this action plan for enhancing the age-friendliness of the District, will be submitted to the WHO to join its Global Network of Age-friendly Cities and Communities.

    AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing

    Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for “Good”. This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or: contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, I will illustrate challenges and pitfalls when determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The questions are: What is the problem / What is a problem?, Who defines the problem?, What is the role of knowledge?, and What are important side effects and dynamics? The illustration will use an example from the domain of “AI for Social Good”, more specifically “Data Science for Social Good”. Even if the importance of these questions is known at an abstract level, they do not get asked sufficiently in practice, as shown by an exploratory study of 99 contributions to recent conferences in the field. Turning these challenges and pitfalls into a positive recommendation, as a conclusion I will draw on another characteristic of computer-science thinking and practice to make these impediments visible and attenuate them: “attacks” as a method for improving design. This results in the proposal of ethics pen-testing as a method for helping AI designs to better contribute to the Common Good.
    Comment: to appear in Paladyn, Journal of Behavioral Robotics; accepted on 27-10-201