    Next-Generation Data Governance

    The proliferation of sensors, electronic payments, click-stream data, location tracking, biometric feeds, and smart home devices creates an incredibly profitable market for both personal and non-personal data. It also amplifies the harm to those from or about whom the data is collected. Because federal law provides inadequate protection for data subjects, there are growing calls for organizations to implement data governance solutions. Unfortunately, in the U.S., the concept of data governance has not progressed beyond the management and monetization of data. Encouraged by third-party service providers hawking “check-the-box” data governance systems, many organizations operate under an outdated paradigm that fails to consider the impact of data use on data subjects. As a result, American companies suffer from a lack of trust and are hindered in their international operations by the higher data protection requirements of foreign regulators. After discussing the pitfalls of the traditional view of data governance and the limitations of suggested models, we propose a set of ten principles based on the Medical Code of Ethics. This framework, first embodied in the Hippocratic Oath, has been evolving for over one thousand years, advancing to a code of conduct based on stewardship. Just as medical ethics had to evolve as society changed and technology advanced, so too must data governance. We propose that a new iteration of data governance (Next-Gen Data Governance) can mitigate the harms resulting from the lack of data protection law in the U.S. and rebuild trust in American organizations.

    Six Human-Centered Artificial Intelligence Grand Challenges

    Widespread adoption of artificial intelligence (AI) technologies is substantially affecting the human condition in ways that are not yet well understood. Negative unintended consequences abound, including the perpetuation and exacerbation of societal inequalities and divisions via algorithmic decision making. We present six grand challenges for the scientific community to create AI technologies that are human-centered, that is, technologies that are ethical and fair and that enhance the human condition. These grand challenges are the result of an international collaboration across academia, industry, and government and represent the consensus views of a group of 26 experts in the field of human-centered artificial intelligence (HCAI). In essence, these challenges advocate for a human-centered approach to AI that (1) is centered on human well-being, (2) is designed responsibly, (3) respects privacy, (4) follows human-centered design principles, (5) is subject to appropriate governance and oversight, and (6) interacts with individuals while respecting humans’ cognitive capacities. We hope that these challenges and their associated research directions serve as a call to action for research and development in AI that acts as a force multiplier towards more fair, equitable, and sustainable societies.

    Fairness in Recommendation: Foundations, Methods and Applications

    As one of the most pervasive applications of machine learning, recommender systems play an important role in assisting human decision making. The satisfaction of users and the interests of platforms are closely related to the quality of the generated recommendation results. However, as highly data-driven systems, recommender systems can be affected by data or algorithmic bias and thus generate unfair results, which can undermine users’ reliance on them. It is therefore crucial to address potential unfairness problems in recommendation settings. Recently, fairness considerations in recommender systems have received growing attention, with more and more literature on approaches to promote fairness in recommendation. However, the studies are rather fragmented and lack a systematic organization, making the domain difficult for new researchers to penetrate. This motivates us to provide a systematic survey of existing work on fairness in recommendation. The survey focuses on the foundations of the fairness-in-recommendation literature. It first presents a brief introduction to fairness in basic machine learning tasks such as classification and ranking, both to give a general overview of fairness research and to introduce the more complex situations and challenges that must be considered when studying fairness in recommender systems. It then introduces fairness in recommendation with a focus on taxonomies of current fairness definitions, typical techniques for improving fairness, and datasets for fairness studies in recommendation. The survey also discusses challenges and opportunities in fairness research, with the hope of promoting the fair recommendation research area and beyond. (Accepted by ACM Transactions on Intelligent Systems and Technology (TIST).)
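
    Many of the fairness definitions such a survey catalogues reduce to simple statistics over recommendation lists. As a toy illustration of the general idea (our sketch, not a method from the survey; the group labels and log-discounted position weighting are assumptions), the following computes an exposure gap between two item groups across users' top-k lists:

```python
import math

def exposure_gap(rec_lists, item_group, k=10):
    """Size-normalized exposure difference between item groups A and B.

    rec_lists:  list of ranked item-id lists, one per user.
    item_group: dict mapping item id -> "A" or "B".
    Exposure is discounted by rank, so items shown higher count more.
    """
    exposure = {"A": 0.0, "B": 0.0}
    for ranked in rec_lists:
        for rank, item in enumerate(ranked[:k], start=1):
            exposure[item_group[item]] += 1.0 / math.log2(rank + 1)
    group_size = {g: sum(1 for v in item_group.values() if v == g)
                  for g in ("A", "B")}
    return exposure["A"] / group_size["A"] - exposure["B"] / group_size["B"]

# Toy usage: four items, two per group; a gap near 0 means similar exposure.
groups = {1: "A", 2: "A", 3: "B", 4: "B"}
print(exposure_gap([[1, 3, 2, 4], [2, 1, 4, 3]], groups, k=4))
```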

    Why we need biased AI -- How including cognitive and ethical machine biases can enhance AI systems

    This paper stresses the importance of biases in the field of artificial intelligence (AI) in two regards. First, in order to foster efficient algorithmic decision making in complex, unstable, and uncertain real-world environments, we argue for the structure-wise implementation of human cognitive biases in learning algorithms. Second, we argue that in order to achieve ethical machine behavior, filter mechanisms have to be applied to select biased training stimuli that represent ethically desirable social or behavioral traits. We use insights from cognitive science as well as ethics and apply them to the AI field, combining theoretical considerations with seven case studies depicting tangible bias implementation scenarios. Ultimately, this paper is a first tentative step toward explicitly re-evaluating the ethical significance of machine biases and toward putting forth the idea of implementing cognitive biases in machines.
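
    The proposed filter mechanism can be pictured as a data-selection step that sits in front of training. The sketch below is a minimal illustration under our own assumptions (the desirability scores, threshold, and function names are hypothetical, not from the paper):

```python
def filter_training_stimuli(samples, desirability, threshold=0.8):
    """Keep only samples whose behavioral trait is scored ethically desirable.

    samples:      iterable of (stimulus, trait_label) pairs.
    desirability: dict mapping trait_label -> score in [0, 1], e.g. assigned
                  by human annotators or an upstream classifier.
    Unknown traits default to 0.0 and are filtered out.
    """
    return [(stimulus, trait) for stimulus, trait in samples
            if desirability.get(trait, 0.0) >= threshold]

# Toy usage: bias the training set toward cooperative behavior.
corpus = [("offers help unprompted", "cooperative"),
          ("mocks a colleague", "hostile")]
scores = {"cooperative": 0.95, "hostile": 0.10}
print(filter_training_stimuli(corpus, scores))  # keeps only the first pair
```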

    Articulating tomorrow: Large language models in the service of professional training. A contribution by the Digitalbegleitung (technological monitoring and research) within the framework of the German funding program "Innovationswettbewerb INVITE"

    The present paper offers a comprehensive introduction to large language models and their transformative impact on professional training. Language models, especially GPT models, are on the verge of revolutionizing teaching methods and the culture of learning itself. The paper explores the diverse applications, opportunities, and challenges of language models in professional education and training. It explains how language models work and presents real-world use cases in professional education. The use cases range from filtering and capturing metadata from course descriptions for better findability and interoperability, to improving training in production, supporting role-play-based learning units, and virtual coaching for future leaders. Each case study highlights the specific use of language models, the benefits they bring to educational content, and the insights gained from integrating these technologies into learning systems. This publication is part of the BMBF-funded innovation competition INVITE, which focuses on connecting and advancing education and training platforms with modern methods such as AI. It underscores the necessity of ongoing research, development, and collaboration to responsibly harness the full potential of large language models in education.
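
    The first use case, extracting metadata from course descriptions, maps naturally onto a structured-output prompt. A minimal sketch follows, assuming a hypothetical complete() callable that sends a prompt to any chat-capable language model and returns its text reply; the metadata fields are illustrative, not taken from the INVITE projects:

```python
import json

PROMPT_TEMPLATE = """Extract the following metadata from the course \
description below. Answer with JSON only, using exactly these keys: \
title, target_group, duration, learning_objectives (list of strings).

Course description:
{description}
"""

def extract_course_metadata(description: str, complete) -> dict:
    """Ask a language model to emit course metadata as machine-readable JSON.

    complete: callable taking a prompt string and returning the model's
    text response (hypothetical; wire it up to any chat completion API).
    """
    raw = complete(PROMPT_TEMPLATE.format(description=description))
    return json.loads(raw)  # fails loudly if the model strays from pure JSON

# Usage: extract_course_metadata(course_text, complete=my_llm_call)
```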

    Engage D3.10 Research and innovation insights

    Engage is the SESAR 2020 Knowledge Transfer Network (KTN). It is managed by a consortium of academia and industry, with the support of the SESAR Joint Undertaking. This report highlights future research opportunities for air traffic management (ATM). The basic framework is structured around three research pillars, each of which has a dedicated section in this report. SESAR’s Strategic Research and Innovation Agenda, the Digital European Sky, is a focal point of comparison. Much of the work is underpinned by the building and successful launch of the Engage wiki, which comprises an interactive research map, an ATM concepts roadmap, and a research repository. Extensive lessons learned are presented, and detailed proposals for future research, plus research enablers and platforms, are suggested for SESAR 3.

    Automated editorial control: Responsibility for news personalisation under European media law

    News personalisation allows social and traditional media to show each individual different information that is ‘relevant’ to them. The technology plays an important role in the digital media environment, as it navigates individuals through the vast amounts of content available online. However, determining what news an individual should see involves nuanced editorial judgment. The public and legal debate has highlighted the dangers, ranging from filter bubbles to polarisation, that could result from ignoring the need for such editorial judgment. This dissertation analyses how editorial responsibility should be safeguarded in the context of news personalisation. It argues that a key challenge to the responsible implementation of news personalisation lies in the way it changes the exercise of editorial control. Rather than an editor deciding what news appears on the front page, personalisation algorithms’ recommendations are influenced by software engineers, news recipients, business departments, product managers, and/or editors and journalists. The dissertation uses legal and empirical research to analyse the roles and responsibilities of three central actors: traditional media, platforms, and news users. It concludes that law can play an important role by enabling stakeholders to control personalisation in line with editorial values. It can do so, for example, by ensuring the availability of metrics that allow editors to evaluate personalisation algorithms, or by enabling individuals to understand and influence how personalisation shapes their news diet. At the same time, law must ensure an appropriate allocation of responsibility in the face of fragmenting editorial control, including by moving towards cooperative responsibility for platforms and by ensuring editors can control the design of personalisation algorithms.
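
    One concrete form such editorial metrics could take (our illustration, not the dissertation’s) is a source-diversity score over a user’s personalised feed; low scores flag feeds dominated by a single outlet:

```python
import math
from collections import Counter

def source_entropy(feed_sources):
    """Shannon entropy of the outlet distribution in a personalised feed.

    feed_sources: list of outlet names, one per recommended article.
    Higher entropy means the feed draws on a more diverse set of sources.
    """
    counts = Counter(feed_sources)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy usage: a feed dominated by one outlet scores low (max here is log2(3)).
print(source_entropy(["OutletA"] * 8 + ["OutletB", "OutletC"]))
```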
    • 

    corecore