TIB-arXiv: An Alternative Search Portal for the arXiv Preprint Server
It is crucial for scientists to keep track of the current state of research in their field. Yet this task is becoming increasingly difficult: on the one hand, more new articles are published every day; on the other, the number of possible publication venues and formats keeps growing, making the publication system ever more heterogeneous. Scientists therefore have to spend more and more time finding the articles relevant to their research. A good indicator of this trend is the popular preprint server arXiv. The number of articles in the repository has grown linearly over the past 25 years, reaching more than 10,000 articles per month in 2017. The arXiv platform focuses on simple publishing services and therefore offers neither advanced search functionality nor helpful visualizations. However, arXiv does provide programming interfaces that allow external developers to add a wide range of supplementary services. Most of the existing tools focus on a specific community and expose only a fraction of the arXiv data; other applications provide advanced visualizations for exploring the arXiv library. This paper presents TIB-arXiv, a web-based tool for searching and exploring publications on arXiv. The platform provides access to the entire arXiv library and simplifies access through an intuitive user interface. An integrated PDF reader lets users read articles directly on the website while exploring the collection. By linking publications to external resources and social media, more sophisticated retrieval and ranking methods can be offered. Social and collaborative features thus enable scientists to assess current developments in their research field more quickly for timeliness and relevance.
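The programming interfaces mentioned above are exposed through arXiv's public Atom API. As a minimal sketch of how an external tool can assemble a query against it using only the standard library (the helper name `build_arxiv_query` is illustrative, not part of TIB-arXiv):

```python
from urllib.parse import urlencode
# from urllib.request import urlopen  # uncomment to actually fetch the feed

ARXIV_API = "http://export.arxiv.org/api/query"

def build_arxiv_query(terms, start=0, max_results=10):
    """Build a query URL for the public arXiv Atom API."""
    params = {
        "search_query": f"all:{terms}",  # search across all metadata fields
        "start": start,                  # offset for paging through results
        "max_results": max_results,
    }
    return f"{ARXIV_API}?{urlencode(params)}"

url = build_arxiv_query("visual concept classification")
# feed = urlopen(url).read()  # would return an Atom XML feed of matching papers
```

The response is an Atom feed, so any standard XML parser suffices to extract titles, abstracts, and PDF links for display in a portal like the one described.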
“Are Machines Better Than Humans in Image Tagging?” - A User Study Adds to the Puzzle
“Do machines perform better than humans in visual recognition tasks?” Not so long ago, this question would have been considered even somewhat provoking and the answer would have been clear: “No”. In this paper, we present a comparison of human and machine performance with respect to annotation for multimedia retrieval tasks. Going beyond recent crowdsourcing studies in this respect, we also report results of two extensive user studies. In total, 23 participants were asked to annotate more than 1,000 images of a benchmark dataset, making this the most comprehensive study in the field so far. Krippendorff’s α is used to measure inter-coder agreement among several coders, and the results are compared with the best machine results. The study is preceded by a summary of prior work comparing human and machine performance in different visual and auditory recognition tasks. We discuss the results and derive a methodology for comparing machine performance in multimedia annotation tasks at the human level. This allows us to formally answer the question of whether a recognition problem can be considered solved. Finally, we answer the initial question.
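Krippendorff’s α, used above to quantify inter-coder agreement, is defined as α = 1 − D_o/D_e, the ratio of observed to expected disagreement. A minimal sketch for nominal data (the function name and input layout are illustrative, not taken from the study’s code):

```python
from collections import Counter

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal data.

    ratings: list of dicts, one per unit (e.g. image), mapping coder -> category.
    Units rated by fewer than two coders are not pairable and are skipped.
    """
    units = [list(u.values()) for u in ratings if len(u) >= 2]
    n = sum(len(vals) for vals in units)  # total number of pairable values
    if n == 0:
        raise ValueError("no pairable units")

    totals = Counter()  # pooled category counts n_c across all units
    d_o = 0.0           # observed disagreement
    for vals in units:
        m = len(vals)
        counts = Counter(vals)
        totals.update(counts)
        # ordered pairs of unequal values within the unit:
        # all pairs m(m-1) minus the equal pairs sum_c n_c(n_c - 1)
        unequal_pairs = m * (m - 1) - sum(c * (c - 1) for c in counts.values())
        d_o += unequal_pairs / (m - 1)
    d_o /= n

    # expected disagreement from the pooled category margins
    d_e = (n * (n - 1) - sum(c * (c - 1) for c in totals.values())) / (n * (n - 1))
    if d_e == 0:
        return 1.0
    return 1.0 - d_o / d_e
```

Perfect agreement yields α = 1, chance-level agreement yields α ≈ 0, and systematic disagreement yields negative values, which is what makes the coefficient comparable across studies with different numbers of coders.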
On the effects of spam filtering and incremental learning for web-supervised visual concept classification
Deep neural networks have been successfully applied to the task of visual concept classification. However, they require a large number of training examples for learning. Although pre-trained deep neural networks are available for some domains, they usually have to be fine-tuned for an envisaged target domain. Recently, some approaches have been suggested that aim at incrementally (or even endlessly) learning visual concepts based on Web data. Since tags of Web images are often noisy, filtering mechanisms are normally employed to remove “spam” images that are not appropriate for training. In this paper, we investigate several aspects of a web-supervised system that has to be adapted to another target domain: (1) the effect of incremental learning, (2) the effect of spam filtering, and (3) the behavior of particular concept classes with respect to (1) and (2). The experimental results provide some insights into the conditions under which incremental learning and spam filtering are useful.
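The interplay of incremental learning and spam filtering can be illustrated with a toy online linear classifier: the model is updated example by example, and a filtering step drops web examples whose noisy tag contradicts the current model. This is only a hedged sketch of the general control flow, not the system evaluated in the paper, and all names are illustrative:

```python
class OnlinePerceptron:
    """Toy incremental linear classifier with labels +1 / -1."""

    def __init__(self, dim):
        self.w = [0.0] * dim

    def score(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def partial_fit(self, x, y):
        # classic perceptron rule: update only on a misclassified example
        if y * self.score(x) <= 0:
            self.w = [wi + y * xi for wi, xi in zip(self.w, x)]

def filter_spam(model, batch):
    """Drop web examples whose noisy tag contradicts the current model
    (a crude stand-in for the paper's spam-filtering step)."""
    return [(x, y) for x, y in batch if y * model.score(x) >= 0]

def incremental_round(model, batch):
    """One web-supervised round: filter first, then learn incrementally."""
    for x, y in filter_spam(model, batch):
        model.partial_fit(x, y)
```

In the actual setting a deep network is fine-tuned rather than a perceptron, but the loop structure is the same: filter the crawled batch, then update the model only on the examples that survive.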