5 research outputs found
“Are Machines Better Than Humans in Image Tagging?” - A User Study Adds to the Puzzle
“Do machines perform better than humans in visual recognition tasks?” Not long ago, this question would have been considered somewhat provocative, and the answer would have been a clear “No”. In this paper, we present a comparison of human and machine performance on annotation for multimedia retrieval tasks. Going beyond recent crowdsourcing studies, we also report the results of two extensive user studies: in total, 23 participants annotated more than 1,000 images of a benchmark dataset, making this the most comprehensive study in the field so far. Krippendorff’s α is used to measure inter-coder agreement among the coders, and the results are compared with the best machine results. The study is preceded by a summary of prior work comparing human and machine performance in various visual and auditory recognition tasks. We discuss the results and derive a methodology for comparing machine performance in multimedia annotation tasks against human-level performance. This allows us to formally answer the question of whether a recognition problem can be considered solved. Finally, we answer the initial question.
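The abstract’s agreement measure, Krippendorff’s α, generalizes simple percent agreement to any number of coders and tolerates missing annotations. As a rough illustration of how such an agreement score can be computed (this is not the paper’s code; the function and the toy ratings matrix are assumptions for the example), here is a minimal sketch of nominal-level α for a coders × units matrix:

```python
import numpy as np

def krippendorff_alpha_nominal(reliability_data):
    """Nominal-level Krippendorff's alpha.

    reliability_data: 2-D array of shape (coders, units);
    np.nan marks a missing annotation.
    """
    data = np.asarray(reliability_data, dtype=float)
    values = np.unique(data[~np.isnan(data)])
    idx = {v: i for i, v in enumerate(values)}
    coincidence = np.zeros((len(values), len(values)))

    # Coincidence matrix: every ordered pair of values assigned to the
    # same unit by two different coders contributes 1/(m - 1), where m
    # is the number of coders who annotated that unit.
    for unit in data.T:
        vals = unit[~np.isnan(unit)]
        m = len(vals)
        if m < 2:
            continue  # units seen by fewer than two coders are unpairable
        for i in range(m):
            for j in range(m):
                if i != j:
                    coincidence[idx[vals[i]], idx[vals[j]]] += 1.0 / (m - 1)

    n_c = coincidence.sum(axis=1)  # marginal frequency of each value
    n = n_c.sum()                  # total number of pairable values
    observed = coincidence.sum() - np.trace(coincidence)  # disagreement mass
    expected = n * n - (n_c ** 2).sum()  # disagreement expected by chance
    return 1.0 - (n - 1) * observed / expected

# Toy data: 3 coders tag 5 images with a binary label; one annotation missing.
ratings = np.array([
    [1, 0, 1, 1, np.nan],
    [1, 0, 1, 0, 1],
    [1, 0, 1, 1, 1],
])
print(round(krippendorff_alpha_nominal(ratings), 3))  # 0.675
```

α ranges from 1 (perfect agreement) through 0 (chance level) to negative values (systematic disagreement); a commonly cited convention treats α ≥ 0.8 as reliable. The same computation, including interval, ordinal, and ratio variants, is also available in the third-party krippendorff package on PyPI.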
An AI toolkit for libraries
“An AI toolkit for libraries” gives a quick overview of some areas in which AI tools are being used in libraries, and then provides an evaluation checklist.