11 research outputs found

    Overview of the ImageCLEFphoto 2008 photographic retrieval task

    ImageCLEFphoto 2008 is an ad-hoc photo retrieval task and part of the ImageCLEF evaluation campaign. This task provides both the resources and the framework necessary to perform comparative laboratory-style evaluation of visual information retrieval systems. In 2008, the evaluation task concentrated on promoting diversity within the top 20 results from a multilingual image collection. This new challenge attracted a record number of submissions: 24 participating groups submitted a total of 1,042 system runs. Among the findings are that the choice of annotation language has an almost negligible effect and that the best runs combine concept- and content-based retrieval methods.
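Promoting diversity in the top 20 usually means re-ranking an initial relevance-ordered list. As a hedged illustration (not the method of any particular participant), a Maximal-Marginal-Relevance-style re-ranker might look like this; `similarity` is a hypothetical pairwise document-similarity function supplied by the caller:

```python
def mmr_rerank(candidates, similarity, lam=0.7, k=20):
    """Re-rank (doc_id, relevance) pairs to balance relevance and novelty.

    candidates: list of (doc_id, relevance_score), highest relevance first.
    similarity: function (doc_a, doc_b) -> [0, 1] textual/visual similarity.
    lam: trade-off between relevance (lam=1.0) and diversity (lam=0.0).
    """
    remaining = list(candidates)
    selected = []
    while remaining and len(selected) < k:
        def mmr_score(item):
            doc, rel = item
            # Penalise documents similar to anything already selected.
            max_sim = max((similarity(doc, s) for s, _ in selected), default=0.0)
            return lam * rel - (1 - lam) * max_sim
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With `lam` below 1.0, a near-duplicate of an already selected image drops below a less relevant but novel one, which is the effect the diversity-oriented task rewards.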

    The Wikipedia Image Retrieval Task

    The Wikipedia image retrieval task at ImageCLEF provides a testbed for the system-oriented evaluation of visual information retrieval from a collection of Wikipedia images. The aim is to investigate the effectiveness of retrieval approaches that exploit textual and visual evidence in the context of a large and heterogeneous collection of images that are searched for by users with diverse information needs. This chapter presents an overview of the available test collections, summarises the retrieval approaches employed by the groups that participated in the task during the 2008 and 2009 ImageCLEF campaigns, provides an analysis of the main evaluation results, identifies best practices for effective retrieval, and discusses open issues.

    Comparing Fusion Techniques for the ImageCLEF 2013 Medical Case Retrieval Task

    Retrieval systems can supply similar cases with a proven diagnosis for a new case under observation, helping clinicians during their work. The ImageCLEFmed evaluation campaign provides a framework in which research groups can compare case-based retrieval approaches. This paper focuses on the case-based task and adds results of the compound figure separation and modality classification tasks. Several fusion approaches are compared to identify those best adapted to the heterogeneous data of the task. Fusion of visual and textual features is analyzed, demonstrating that the choice of fusion strategy can improve the best performance on the case-based retrieval task.
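Late fusion of visual and textual retrieval scores is often done with normalised linear combinations such as weighted CombSUM. The sketch below is a generic illustration of that family of techniques, not any specific strategy compared in the paper:

```python
def minmax(scores):
    """Min-max normalise a {doc_id: score} dict into [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # guard against a constant score list
    return {d: (s - lo) / span for d, s in scores.items()}

def combsum(visual, textual, w_visual=0.5):
    """Weighted CombSUM: fuse two normalised score dicts into one ranking.

    Documents missing from one modality contribute 0 for that modality.
    Returns (fused_score, doc_id) pairs, best first.
    """
    v, t = minmax(visual), minmax(textual)
    docs = set(v) | set(t)
    return sorted(((w_visual * v.get(d, 0.0) + (1 - w_visual) * t.get(d, 0.0), d)
                   for d in docs), reverse=True)
```

Normalising before combining matters because raw visual-distance scores and textual BM25-style scores live on very different scales; the weight `w_visual` is the main tuning knob such comparisons explore.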

    Smartphone picture organization: a hierarchical approach

    We live in a society where the large majority of the population has a camera-equipped smartphone. In addition, hard drives and cloud storage are getting cheaper and cheaper, leading to tremendous growth in stored personal photos. Unlike photo collections captured with a digital camera, which are typically pre-processed by the user, who organizes them into event-related folders, smartphone pictures are automatically stored in the cloud. As a consequence, photo collections captured with a smartphone are highly unstructured, and because smartphones are ubiquitous, they present larger variability than pictures captured with a digital camera. To address the need to organize large smartphone photo collections automatically, we propose a new methodology for hierarchical photo organization into topics and topic-related categories. Our approach estimates latent topics in the pictures by applying probabilistic Latent Semantic Analysis and automatically assigns a name to each topic by relying on a lexical database. Topic-related categories are then estimated using a set of topic-specific Convolutional Neural Networks. To validate our approach, we assemble and make public a large dataset of more than 8,000 smartphone pictures from 40 persons. Experimental results demonstrate higher user satisfaction, in terms of organization, with respect to state-of-the-art solutions.
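For illustration, probabilistic Latent Semantic Analysis can be fitted with a small EM loop over a document-term count matrix. This toy sketch is not the authors' implementation; it only shows how the per-document topic mixtures used for organization can be estimated:

```python
import random

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Tiny EM for pLSA on a document-term count matrix (list of lists).

    Returns (p_z_d, p_w_z): topic mixture per document and word
    distribution per topic, each as row-normalised lists.
    """
    rng = random.Random(seed)
    n_docs, n_words = len(counts), len(counts[0])
    normalize = lambda row: [x / (sum(row) or 1.0) for x in row]
    # Random initialisation of P(z|d) and P(w|z).
    p_z_d = [normalize([rng.random() for _ in range(n_topics)])
             for _ in range(n_docs)]
    p_w_z = [normalize([rng.random() for _ in range(n_words)])
             for _ in range(n_topics)]
    for _ in range(n_iter):
        new_z_d = [[0.0] * n_topics for _ in range(n_docs)]
        new_w_z = [[0.0] * n_words for _ in range(n_topics)]
        for d in range(n_docs):
            for w in range(n_words):
                if counts[d][w] == 0:
                    continue
                # E-step: responsibilities P(z|d,w) ∝ P(z|d) P(w|z).
                post = [p_z_d[d][z] * p_w_z[z][w] for z in range(n_topics)]
                total = sum(post) or 1.0
                for z in range(n_topics):
                    r = counts[d][w] * post[z] / total
                    new_z_d[d][z] += r
                    new_w_z[z][w] += r
        # M-step: renormalise expected counts into probabilities.
        p_z_d = [normalize(row) for row in new_z_d]
        p_w_z = [normalize(row) for row in new_w_z]
    return p_z_d, p_w_z
```

In the paper's setting the "words" would be visual or tag features per photo; the dominant entry of each photo's topic mixture then decides its place in the hierarchy.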

    Video genre categorization and representation using audio-visual information

    We propose an audio-visual approach to video genre classification using content descriptors that exploit audio, color, temporal, and contour information. Audio information is extracted at block level, which has the advantage of capturing local temporal information. At the temporal structure level, we consider action content in relation to human perception. Color perception is quantified using statistics of color distribution, elementary hues, color properties, and relationships between colors. Further, we compute statistics of contour geometry and relationships. The main contribution of our work lies in harnessing the descriptive power of the combination of these descriptors for genre classification. Validation was carried out on over 91 h of video footage encompassing 7 common video genres, yielding average precision and recall ratios of 87% to 100% and 77% to 100%, respectively, and an overall average correct classification of up to 97%. An experimental comparison as part of the MediaEval 2011 benchmarking campaign also demonstrated the efficiency of the proposed audio-visual descriptors over other existing approaches. Finally, we discuss a 3-D video browsing platform that displays movies using feature-based coordinates and thus regroups them according to genre.
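The per-genre precision and recall ratios reported above can be computed from predicted versus ground-truth labels; this is a generic evaluation sketch, with `genre_precision_recall` a hypothetical helper name rather than anything from the paper:

```python
def genre_precision_recall(predicted, truth, genre):
    """Per-genre precision and recall over parallel label lists.

    predicted, truth: genre label per video, in the same order.
    """
    tp = sum(p == genre and t == genre for p, t in zip(predicted, truth))
    fp = sum(p == genre and t != genre for p, t in zip(predicted, truth))
    fn = sum(p != genre and t == genre for p, t in zip(predicted, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```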

    The Liver Tumor Segmentation Benchmark (LiTS)

    In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2016 and the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017. Twenty-four valid state-of-the-art liver and liver tumor segmentation algorithms were applied to a set of 131 computed tomography (CT) volumes with different types of tumor contrast levels (hyper-/hypo-intense), tissue abnormalities (e.g., after metastasectomy), sizes, and varying numbers of lesions. The submitted algorithms were tested on 70 undisclosed volumes. The dataset was created in collaboration with seven hospitals and research institutions and manually reviewed by three independent radiologists. We found that no single algorithm performed best for both liver and tumors. The best liver segmentation algorithm achieved a Dice score of 0.96 (MICCAI), whereas for tumor segmentation the best algorithms achieved Dice scores of 0.67 (ISBI) and 0.70 (MICCAI). The LiTS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.

    Access to biomedical information: towards a conceptual indexing and information retrieval approach based on the fusion of termino-ontological resources

    Information Retrieval (IR) is a scientific field aiming at providing solutions to select, from corpora of information, the items deemed relevant to a user who has expressed a query. In the context of biomedical IR, the corpora cover different sources of information in the domain: patient medical records, clinical practice guidelines, the scientific literature of the medical domain, etc. Information needs may come from diverse profiles: medical experts, patients and their families, and novice users. Several challenges are specific to biomedical IR: the "specialised" representation of documents, based on the use of domain terminological resources; the handling of synonyms, acronyms, and abbreviations, which are widely used in the domain; and access to information guided by the context of the need and by user profiles. 
    This thesis falls within the general field of biomedical IR and deals with the challenges of representing biomedical information and of accessing it. Concerning the representation of information, we propose document indexing techniques based on: 1) the recognition of termino-ontological concepts: this recognition amounts to an approximate lookup of candidate concepts relevant to a content viewed as a bag of words; the technique exploits both the structural similarity between the informational content of concepts and documents and the similarity between the subject of the document and that of the concept; 2) the disambiguation of recognised concept entries by exploiting the polyhierarchical structure of a medical thesaurus (MeSH - Medical Subject Headings): the domains of each concept are used to compute the semantic similarity between ambiguous terms in documents, and the most appropriate domain is detected and associated with each term denoting a concept; 3) the exploitation of several termino-ontological resources in order to better cover the semantics of document content. Concerning information access, we propose a document-query matching method based on the combined expansion of queries and documents, guided by the context of the information need on the one hand and by document content on the other. Our analysis focuses on the impact of the expansion parameters on retrieval effectiveness: the distribution of concepts in the ontological resources, the fusion model for concepts extracted from multiple terminologies, the concept weighting model, etc. All of our contributions, in terms of indexing and information-access techniques, were evaluated experimentally on test collections dedicated to medical information retrieval, either from the task perspective, such as the TREC Medical track and the CLEF Image and Medical case retrieval tasks, or on test collections such as TREC Genomics.
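As an illustration of concept-based query expansion (a generic sketch under assumed names, not the thesis's exact model), matched entry terms can be expanded with concept synonyms at a reduced weight; `concept_index` below is a hypothetical stand-in for a termino-ontological resource such as MeSH:

```python
def expand_query(terms, concept_index, weight=0.4):
    """Expand a bag-of-words query with synonyms of matched concepts.

    terms: original query terms, each kept at weight 1.0.
    concept_index: maps an entry term to the synonym list of its concept.
    weight: down-weight applied to expansion terms so they cannot
            dominate the original query.
    Returns a {term: weight} dict usable by a weighted retrieval model.
    """
    expanded = {t: 1.0 for t in terms}
    for t in terms:
        for syn in concept_index.get(t, []):
            if syn not in expanded:
                expanded[syn] = weight
    return expanded
```

The `weight` parameter plays the role of the expansion-weighting models whose impact the thesis analyses: set too high, expansion terms drown the query; set too low, the added semantics has no effect.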

    The Liver Tumor Segmentation Benchmark (LiTS)

    In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances with various lesion-to-background levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that not a single algorithm performed best for both liver and liver tumors in the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
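The Dice score used to rank submissions measures the overlap between a predicted and a reference segmentation; a minimal sketch for binary masks:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks (nested lists of 0/1).

    Dice = 2 * |A ∩ B| / (|A| + |B|); two empty masks score 1.0 by
    convention, since the prediction matches the reference exactly.
    """
    flat_a = [v for row in mask_a for v in row]
    flat_b = [v for row in mask_b for v in row]
    inter = sum(a and b for a, b in zip(flat_a, flat_b))
    total = sum(flat_a) + sum(flat_b)
    return 2.0 * inter / total if total else 1.0
```

A Dice of 0.963 for the liver versus roughly 0.7 for tumors reflects how much harder small, low-contrast lesions are to delineate than a large organ.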
