    Long Text Generation via Adversarial Training with Leaked Information

    Automatically generating coherent and semantically meaningful text has many applications in machine translation, dialogue systems, image captioning, etc. Recently, by combining with policy gradient, Generative Adversarial Nets (GANs) that use a discriminative model to guide the training of the generative model as a reinforcement learning policy have shown promising results in text generation. However, the scalar guiding signal is only available after the entire text has been generated and lacks intermediate information about text structure during the generative process. This limits the approach's success when the generated text samples are long (more than 20 words). In this paper, we propose a new framework, called LeakGAN, to address the problem of long text generation. We allow the discriminative net to leak its own high-level extracted features to the generative net to further help the guidance. The generator incorporates such informative signals into all generation steps through an additional Manager module, which takes the extracted features of the currently generated words and outputs a latent vector to guide the Worker module for next-word generation. Our extensive experiments on synthetic data and various real-world tasks with a Turing test demonstrate that LeakGAN is highly effective in long text generation and also improves performance in short text generation scenarios. More importantly, without any supervision, LeakGAN is able to implicitly learn sentence structures purely through the interaction between Manager and Worker. Comment: 14 pages, AAAI 2018
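    As a minimal sketch of the Manager-Worker interaction the abstract describes (not the authors' released code): the Manager consumes features leaked by the discriminator and emits a latent goal vector, which the Worker combines with its own recurrent state to score the next word. The module names, toy dimensions, and the simplified goal-conditioned scoring below are illustrative assumptions; the policy-gradient training loop is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Manager(nn.Module):
    """Turns features leaked by the discriminator into a goal vector."""
    def __init__(self, feat_dim, goal_dim):
        super().__init__()
        self.rnn = nn.LSTMCell(feat_dim, goal_dim)
        self.proj = nn.Linear(goal_dim, goal_dim)

    def forward(self, leaked_feat, state):
        h, c = self.rnn(leaked_feat, state)
        goal = F.normalize(self.proj(h), dim=-1)  # unit-norm latent goal
        return goal, (h, c)

class Worker(nn.Module):
    """Produces next-word logits conditioned on the Manager's goal."""
    def __init__(self, vocab_size, emb_dim, hid_dim, goal_dim):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTMCell(emb_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size * goal_dim)
        self.vocab_size, self.goal_dim = vocab_size, goal_dim

    def forward(self, prev_word, goal, state):
        h, c = self.rnn(self.emb(prev_word), state)
        # Score each candidate word against the Manager's goal direction.
        scores = self.out(h).view(-1, self.vocab_size, self.goal_dim)
        logits = torch.einsum('bvg,bg->bv', scores, goal)
        return logits, (h, c)

# One generation step with toy sizes; random features stand in for the
# discriminator's leaked representation of the text generated so far.
B, V, FEAT, G, E, H = 2, 1000, 64, 16, 32, 32
manager, worker = Manager(FEAT, G), Worker(V, E, H, G)
leaked = torch.randn(B, FEAT)
m_state = (torch.zeros(B, G), torch.zeros(B, G))
w_state = (torch.zeros(B, H), torch.zeros(B, H))
goal, m_state = manager(leaked, m_state)
logits, w_state = worker(torch.zeros(B, dtype=torch.long), goal, w_state)
next_word = torch.distributions.Categorical(logits=logits).sample()
```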

    Computational ethics

    Technological advances are enabling roles for machines that present novel ethical challenges. The study of 'AI ethics' has emerged to confront these challenges, and connects perspectives from philosophy, computer science, law, and economics. Less represented in these interdisciplinary efforts is the perspective of cognitive science. We propose a framework – computational ethics – that specifies how the ethical challenges of AI can be partially addressed by incorporating the study of human moral decision-making. The driver of this framework is a computational version of reflective equilibrium (RE), an approach that seeks coherence between considered judgments and governing principles. The framework has two goals: (i) to inform the engineering of ethical AI systems, and (ii) to characterize human moral judgment and decision-making in computational terms. Working jointly towards these two goals will create the opportunity to integrate diverse research questions, bring together multiple academic communities, uncover new interdisciplinary research topics, and shed light on centuries-old philosophical questions.
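    The abstract specifies no algorithm, but the reflective-equilibrium loop it invokes (iterate between governing principles and considered judgments until they cohere) can be given a purely hypothetical toy rendering. In the sketch below, which is not the authors' framework, "principles" are a logistic rule fit to judged cases, and the judgments that conflict most with the fitted principles are revised until the two stabilize.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def reflective_equilibrium(X, judgments, max_rounds=20, revise_frac=0.05):
    """Toy reflective-equilibrium loop (illustrative only).

    X         : case features, shape (n_cases, n_features)
    judgments : initial considered judgments in {0, 1} (int array;
                assumes both classes are present)
    Alternates (1) fitting principles to judgments and (2) flipping the
    small fraction of judgments the principles most confidently reject,
    stopping once judgments and principles agree (coherence reached).
    """
    judgments = judgments.copy()
    for _ in range(max_rounds):
        principles = LogisticRegression().fit(X, judgments)   # step 1
        p = principles.predict_proba(X)[:, 1]
        conflict = np.abs(p - judgments)       # confident disagreements
        if conflict.max() < 0.5:               # coherent: nothing to revise
            break
        k = max(1, int(revise_frac * len(judgments)))
        worst = np.argsort(-conflict)[:k]      # step 2: revise judgments
        worst = worst[conflict[worst] >= 0.5]
        judgments[worst] = 1 - judgments[worst]
    return principles, judgments
```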

    Artificial Intelligence in Criminal Justice Settings: Where should the limits of Artificial Intelligence in legal decision-making be? Should an AI device make a decision about human justice?

    The application of Artificial Intelligence (AI) systems for high-stakes decision making is currently under debate. In the Criminal Justice System, it can provide great benefits as well as aggravate systematic biases and introduce unprecedented ones. Hence, should artificial devices be involved in the decision-making process? And if the answer is affirmative, where should the limits of that involvement lie? To answer these questions, this dissertation examines two popular risk assessment tools currently in use in the United States, LS and COMPAS, to discuss the differences between a traditional instrument and an actuarial instrument that relies on computerized algorithms. Further analysis of the latter is done in relation to the Fairness, Accountability, Transparency and Ethics (FATE) perspective to be implemented in any technology involving AI. Although the future of AI is uncertain, the ignorance with respect to so many aspects of these innovative methods demands further research on how to make the best use of the several opportunities they bring.
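    The FATE discussion can be made concrete with a standard group-fairness check of the kind applied to actuarial tools such as COMPAS. The sketch below is not from the dissertation; the function, column names, and threshold are hypothetical, and it computes false-positive-rate disparity between groups, one of the metrics at the center of that debate.

```python
import pandas as pd

def fpr_by_group(df, group_col, label_col, score_col, threshold=0.5):
    """False-positive rate per group: P(score >= t | label == 0, group).

    Large FPR gaps between demographic groups were a central criticism
    of COMPAS-style risk scores; this is one check, not a full audit.
    """
    negatives = df[df[label_col] == 0]          # people who did not reoffend
    flagged = negatives[score_col] >= threshold # but were scored high-risk
    return flagged.groupby(negatives[group_col]).mean()

# Hypothetical usage with illustrative column names:
# df = pd.read_csv("risk_scores.csv")  # columns: race, reoffended, score
# print(fpr_by_group(df, "race", "reoffended", "score"))
```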

    A chemometric approach to investigating South African wine behaviour using chemical and sensory markers

    Thesis (PhDAgric), Stellenbosch University, 2021. The aim of this dissertation was to demonstrate the value of comprehensive narratives and elucidate critical steps in data handling in Oenology, while highlighting some common misconceptions and misinterpretations related to the process. This compilation was a journey through different stages of dealing with oenological data, with increasing complexity in both the strategies and the techniques used (sensory, chemistry, and statistics). To achieve this aim, different strategies and multivariate tools were used under two prime objectives. Firstly, several multivariate descriptive approaches were used to investigate two oenological problems and lay out the contextual foundations for the statistics-focused work (Chapters 3 and 5). Secondly, in increasing levels of complexity, statistical strategies for constructing comprehensive data fusion as well as pattern recognition models were investigated (Chapters 4 and 6). A comprehensive literature review (Chapter 2) examined and addressed common misconceptions in the different stages of data handling in Oenology. The first oenological problem, described in Chapter 3, investigated the evolution of the sensory perception of aroma, as well as the antioxidant-related parameters and volatile compound composition, of Sauvignon Blanc and Chenin Blanc wines stored under different conditions and for different durations. The study applied a sensory method appropriate to this research question, namely Pivot© Profiling. The study was able to show the evolution of Sauvignon Blanc from 'fruity' and 'herbaceous', and of Chenin Blanc from 'fruity' and 'tropical', both towards 'toasted', 'oak', and 'honey' attributes. Chemically, the volatile composition did not show any trends. However, wines stored at higher temperatures for longer periods had relatively higher UV-Vis absorbance and colour density, as well as higher b* (yellow) values and lower clarity in terms of the L* index, compared to the control. The second oenological problem, described in Chapter 5, investigated the typicality of South African old vine Chenin Blanc perceptually and conceptually using a typicality rating and a flexible sorting task. The sensory methodology followed published strategies for investigating typicality. This study did not find a unique sensory space of the old vine Chenin Blanc due to a lack of perceptual consensus among the industry professionals for the wines included in the study. However, it did find that the industry professionals had unified ideas about the attributes of an ideal old vine Chenin Blanc wine. The first of the statistics-focused studies, described in Chapter 4, explored data fusion at low and mid-level using principal component analysis (PCA; low and mid-level) and multiple factor analysis (MFA; mid-level). The study looked at data pre-processing and matrix compatibility, which are important data handling stages for data fusion. Like the contextual chapters (Chapters 3 and 5), and in keeping with the aim of this compilation, this chapter gave a detailed descriptive narrative of the data handling. Through detailed examination of the process, the study found that MFA was the most appropriate data fusion strategy. The second statistics-focused study, described in Chapter 6, continued to exploit the multiple advantages of the multiblock approach of MFA.
    Additionally, this chapter showed the reliability of fuzzy k-means clustering compared to agglomerative hierarchical clustering (AHC).
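    As a sketch of the mid-level fusion strategy the thesis compares: in multiple factor analysis, each block (e.g., a sensory table and a chemistry table measured on the same wines) is scaled by its first singular value before a global PCA, so that no single block dominates the fused solution. The snippet below is a minimal numpy illustration under that standard definition of MFA, not code from the thesis.

```python
import numpy as np

def mfa_fuse(blocks, n_components=2):
    """Mid-level data fusion via multiple factor analysis (MFA).

    blocks: list of (n_samples, n_vars_k) arrays on the same samples.
    Each block is centred, divided by its first singular value to
    balance block weights, concatenated, then reduced by a global PCA.
    """
    scaled = []
    for X in blocks:
        Xc = X - X.mean(axis=0)
        s1 = np.linalg.svd(Xc, compute_uv=False)[0]  # first singular value
        scaled.append(Xc / s1)                       # balance this block
    Z = np.hstack(scaled)                            # fused data table
    U, S, _ = np.linalg.svd(Z, full_matrices=False)  # global PCA via SVD
    return U[:, :n_components] * S[:n_components]    # sample scores

# e.g. scores = mfa_fuse([sensory_matrix, chemistry_matrix])
```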

    Operationalizing fairness for responsible machine learning

    As machine learning (ML) is increasingly used for decision making in scenarios that impact humans, there is a growing awareness of its potential for unfairness. A large body of recent work has focused on proposing formal notions of fairness in ML, as well as approaches to mitigate unfairness. However, there is a growing disconnect between the ML fairness literature and the need to operationalize fairness in practice. This thesis addresses the need for responsible ML by developing new models and methods to address challenges in operationalizing fairness in practice. Specifically, it makes the following contributions. First, we tackle a key assumption in the group fairness literature that sensitive demographic attributes such as race and gender are known upfront, and can be readily used in model training to mitigate unfairness. In practice, factors like privacy and regulation often prohibit ML models from collecting or using protected attributes in decision making. To address this challenge we introduce the novel notion of computationally-identifiable errors and propose Adversarially Reweighted Learning (ARL), an optimization method that seeks to improve the worst-case performance over unobserved groups, without requiring access to the protected attributes in the dataset. Second, we argue that while group fairness notions are a desirable fairness criterion, they are fundamentally limited as they reduce fairness to an average statistic over pre-identified protected groups. In practice, automated decisions are made at an individual level, and can adversely impact individual people irrespective of the group statistic. We advance the paradigm of individual fairness by proposing iFair (individually fair representations), an optimization approach for learning a low-dimensional latent representation of the data with two goals: to encode the data as well as possible, while removing any information about protected attributes in the transformed representation. Third, we advance the individual fairness paradigm, which requires that similar individuals receive similar outcomes. However, similarity metrics computed over the observed feature space can be brittle, and inherently limited in their ability to accurately capture similarity between individuals. To address this, we introduce a novel notion of fairness graphs, wherein pairs of individuals deemed similar with respect to the ML objective can be identified. We cast the problem of individual fairness into graph embedding, and propose PFR (pairwise fair representations), a method to learn a unified pairwise fair representation of the data. Fourth, we tackle the challenge that production data after model deployment is constantly evolving. As a consequence, in spite of the best efforts in training a fair model, ML systems can be prone to failure risks due to a variety of unforeseen reasons. To ensure responsible model deployment, potential failure risks need to be predicted, and mitigation actions need to be devised, for example, deferring to a human expert when uncertain or collecting additional data to address the model's blind spots. We propose Risk Advisor, a model-agnostic meta-learner to predict potential failure risks and to give guidance on the sources of uncertainty inducing the risks, by leveraging information-theoretic notions of aleatoric and epistemic uncertainty. This dissertation brings ML fairness closer to real-world applications by developing methods that address key practical challenges.
    Extensive experiments on a variety of real-world and synthetic datasets show that our proposed methods are viable in practice.
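    A minimal sketch of the ARL objective described above, assuming a PyTorch setup with toy dimensions: the adversary up-weights examples it can identify as poorly served (from features and labels, never protected attributes), and the learner minimizes the reweighted loss. The function name, network sizes, and the 1 + n·softmax weighting form are illustrative assumptions based on the stated idea, not the thesis implementation.

```python
import torch
import torch.nn as nn

learner = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(11, 32), nn.ReLU(), nn.Linear(32, 1))
opt_l = torch.optim.Adam(learner.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss(reduction='none')

def arl_step(x, y):
    """One ARL round on a batch: x is (B, 10), y is (B, 1) in {0., 1.}."""
    inp = torch.cat([x, y], dim=1)  # adversary sees features + label only
    # Adversary step: shift weight mass toward high-loss examples by
    # maximizing the reweighted loss (learner's loss is detached).
    w = 1 + x.size(0) * torch.softmax(adversary(inp).squeeze(1), dim=0)
    loss_vec = bce(learner(x).squeeze(1), y.squeeze(1))
    opt_a.zero_grad()
    (-(w * loss_vec.detach()).mean()).backward()
    opt_a.step()
    # Learner step: minimize the loss under the frozen adversary's weights.
    with torch.no_grad():
        w = 1 + x.size(0) * torch.softmax(adversary(inp).squeeze(1), dim=0)
    loss = (w * bce(learner(x).squeeze(1), y.squeeze(1))).mean()
    opt_l.zero_grad(); loss.backward(); opt_l.step()
    return loss.item()
```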

    Adversarial content manipulation for analyzing and improving model robustness

    The recent rapid progress in machine learning systems has opened up many real-world applications --- from recommendation engines on web platforms to safety-critical systems like autonomous vehicles. A model deployed in the real world will often encounter inputs far from its training distribution. For example, a self-driving car might come across a black stop sign in the wild. To ensure safe operation, it is vital to quantify the robustness of machine learning models to such out-of-distribution data before releasing them into the real world. However, the standard paradigm of benchmarking machine learning models with fixed-size test sets drawn from the same distribution as the training data is insufficient to identify these corner cases efficiently. In principle, if we could generate all valid variations of an input and measure the model response, we could quantify and guarantee model robustness locally. Yet, doing this with real-world data is not scalable. In this thesis, we propose an alternative, using generative models to create synthetic data variations at scale and test the robustness of target models to these variations. We explore methods to generate semantic data variations in a controlled fashion across visual and text modalities. We build generative models capable of performing controlled manipulation of data like changing visual context, editing the appearance of an object in images, or changing the writing style of text. Leveraging these generative models we propose tools to study the robustness of computer vision systems to input variations and systematically identify failure modes. In the text domain, we deploy these generative models to improve the diversity of image captioning systems and perform writing-style manipulation to obfuscate private attributes of the user. Our studies quantifying model robustness explore two kinds of input manipulations, model-agnostic and model-targeted. The model-agnostic manipulations leverage human knowledge to choose the kinds of changes without considering the target model being tested. This includes automatically editing images to remove objects not directly relevant to the task and to create variations in visual context. Alternatively, in the model-targeted approach the input variations performed are directly adversarially guided by the target model. For example, we adversarially manipulate the appearance of an object in the image to fool an object detector, guided by the gradients of the detector. Using these methods, we measure and improve the robustness of various computer vision systems -- specifically image classification, segmentation, object detection and visual question answering systems -- to semantic input variations.
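    A minimal sketch of the model-targeted manipulation: a PGD-style loop (a standard gradient-based attack, named plainly here) that perturbs only a masked object region, following the gradients of a target model. The function and its arguments are hypothetical, and a classifier stands in for the object detector used in the thesis.

```python
import torch

def targeted_region_attack(model, image, mask, label, steps=40,
                           step_size=0.01, eps=0.05):
    """Adversarially edit the appearance of one object region.

    image: (1, 3, H, W) in [0, 1]; mask: (1, 1, H, W) selecting the object.
    Gradient ascent on the model's loss, confined to the masked pixels
    and to an L-infinity ball of radius eps (a PGD variant).
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv = (image + delta * mask).clamp(0, 1)
        loss = loss_fn(model(adv), label)            # fool the target model
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()   # follow the gradient
            delta.clamp_(-eps, eps)                  # stay near the original
        delta.grad.zero_()
    return (image + delta.detach() * mask).clamp(0, 1)
```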

    ASA 2021 Statistics and Information Systems for Policy Evaluation

    This book includes 40 peer-reviewed short papers submitted to the scientific conference "Statistics and Information Systems for Policy Evaluation", aimed at promoting new statistical methods and applications for the evaluation of policies. The conference was organized by the Association for Applied Statistics (ASA) and the Dept. of Statistics, Computer Science, Applications DiSIA “G. Parenti” of the University of Florence, jointly with the partners AICQ (Italian Association for Quality Culture), AICQ-CN (Italian Association for Quality Culture North and Centre of Italy), AISS (Italian Academy for Six Sigma), ASSIRM (Italian Association for Marketing, Social and Opinion Research), Comune di Firenze, the SIS – Italian Statistical Society, Regione Toscana, and Valmon – Evaluation & Monitoring.

    Analyses and Creation of Author Stylized Text

    Written text is one of the major ways that humans communicate their thoughts. A single thought can be expressed through many different combinations of words, and the writer must choose which they will use. We call the idea which is communicated the content of the message, and the particular words chosen to express the content, the style. The same content expressed in a different style may reveal something useful about the author of the text (e.g., the author's identity), may be easier to understand for different audiences, or may evoke different emotions in the reader. In this work we explore ways that the style of writing can be used to make inferences about the author and demonstrate applications where these techniques uncover interesting results. We supplement the analytic approach with a synthetic approach and consider the problem of generating text which matches the style of a target author. To this end we find and curate suitable parallel datasets of the same content written in different styles. These are -- to the extent possible -- made publicly available. Next, we demonstrate the performance of machine translation systems on this data. Finally, we show settings in which modifications to existing machine translation architectures can improve results and even perform style transfer in an unsupervised setting.
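    The thesis frames author-style generation as machine translation over parallel style data. As a minimal sketch of that setup, assuming the Hugging Face transformers library, a tokenized parallel corpus, and an illustrative model choice (t5-small) and example pair that are not the thesis configuration:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical parallel corpus: the same content written in a neutral
# style (source) and in the target author's style (target).
pairs = [("It is raining heavily today.",
          "The heavens, I declare, do empty themselves upon us.")]

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
opt = torch.optim.AdamW(model.parameters(), lr=3e-5)

for src, tgt in pairs:                 # one pass over the toy corpus
    batch = tok(src, text_target=tgt, return_tensors="pt")
    loss = model(**batch).loss         # standard seq2seq cross-entropy
    opt.zero_grad(); loss.backward(); opt.step()

# Inference: "translate" new content into the author's style.
out = model.generate(**tok("The sun is out.", return_tensors="pt"),
                     max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```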

    Implementation of Artificial Intelligence in Food Science, Food Quality, and Consumer Preference Assessment

    In recent years, new and emerging digital technologies applied to food science have been gaining attention and increased interest from researchers and the food and beverage industries, particularly those digital technologies that can be used throughout the food value chain and that are accurate, easy to implement, affordable, and user-friendly. Hence, this Special Issue (SI) is dedicated to novel sensor technologies and machine/deep learning modeling strategies for implementing artificial intelligence (AI) in food and beverage production and in consumer assessment. This SI published quality papers from researchers in Australia, New Zealand, the United States, Spain, and Mexico, covering food and beverage products such as grapes and wine, chocolate, honey, whiskey, avocado pulp, and a variety of other food products.
    • 

    corecore