Analysis of Trustworthiness in Machine Learning and Deep Learning

Abstract

Trustworthy Machine Learning (TML) refers to a set of mechanisms and explainable layers that enrich a learning model so that it is clear and understandable, and therefore trusted by its users. This paper presents a literature review that provides a comprehensive analysis of how TML is perceived. A quantitative study, accompanied by qualitative observations, is discussed by categorising machine learning algorithms with an emphasis on deep learning models; the latter have achieved very high performance as real-world function approximators (e.g., in natural language and signal processing, robotics, etc.). However, for these models to be fully adopted by humans, a level of transparency must be guaranteed, a task made harder by recent techniques (e.g., fully connected layers in neural networks, dynamic bias, parallelism, etc.). The paper covers the work of both academics and practitioners and highlights some promising results; the goal is a strong transparency/accuracy trade-off towards a reliable learning approach.
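The transparency/accuracy trade-off mentioned above can be made concrete with a minimal sketch, not taken from the paper: it compares a transparent model (a shallow decision tree whose rules can be printed and audited) with a less interpretable one (a small neural network) on the same task. The use of scikit-learn, the breast cancer dataset, and the specific model settings are illustrative assumptions, not the authors' methodology.

    # Minimal sketch (assumption: scikit-learn is available; dataset and models
    # are illustrative choices, not those used in the reviewed works).
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    # Transparent model: a depth-limited tree whose decision rules are readable.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

    # Opaque model: a small multi-layer perceptron acting as a black-box
    # function approximator.
    mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                        random_state=0).fit(X_train, y_train)

    print("tree accuracy:", accuracy_score(y_test, tree.predict(X_test)))
    print("mlp  accuracy:", accuracy_score(y_test, mlp.predict(X_test)))

    # The tree's rules can be inspected directly -- one face of "transparency".
    print(export_text(tree, feature_names=list(data.feature_names)))

Comparing the two accuracy figures against the readability of the printed rules gives one simple, measurable view of the trade-off the paper discusses.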

This paper was published in the University of Huddersfield Repository.
