
    Medical image modality classification using discrete Bayesian Networks

    In this paper we propose a complete pipeline for medical image modality classification focused on the application of discrete Bayesian network classifiers. Modality refers to the categorization of biomedical images from the literature according to a previously defined set of image types, such as X-ray, graph or gene sequence. We describe an extensive pipeline starting with feature extraction from images, followed by data combination, pre-processing and a range of different classification techniques and models. We study the expressive power of several image descriptors along with supervised discretization and feature selection to show the performance of discrete Bayesian networks compared with the deterministic classifiers usually used in image classification. We perform exhaustive experimentation using the ImageCLEFmed 2013 collection. Because this problem presents a high number of classes, we propose several hierarchical approaches. In a first set of experiments we evaluate a wide range of parameters for our pipeline along with several classification models. Finally, we set up the competition environment to compare our selected approaches against the best ones of the original competition. Results show that the Bayesian network classifiers obtain very competitive results. Furthermore, the proposed approach is stable and can be applied to other problems that present inherent hierarchical class structures.
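    To make the core idea concrete, the sketch below implements the simplest discrete Bayesian network classifier, a naive Bayes over discretized descriptors with Laplace smoothing. It is an illustration only: the feature values, cut points and class labels are synthetic stand-ins, not the paper's actual descriptors, discretization method or network structures.

```python
# Minimal sketch (assumption: NOT the paper's exact models): a naive Bayes
# classifier over discretized features -- the simplest discrete Bayesian
# network, where each descriptor depends only on the class variable.
from collections import Counter, defaultdict
import math

def discretize(value, cut_points):
    """Map a continuous descriptor value to a bin index."""
    return sum(value > c for c in cut_points)

def fit(samples, labels):
    """Estimate class priors and per-feature conditional bin counts (CPTs)."""
    priors = Counter(labels)
    cpts = defaultdict(Counter)  # (feature, class) -> Counter of bin counts
    for x, y in zip(samples, labels):
        for f, b in enumerate(x):
            cpts[(f, y)][b] += 1
    return priors, cpts

def predict(x, priors, cpts, n_bins):
    """Pick the class maximizing log P(class) + sum_f log P(bin_f | class)."""
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for y, count in priors.items():
        lp = math.log(count / total)
        for f, b in enumerate(x):
            c = cpts[(f, y)]
            lp += math.log((c[b] + 1) / (sum(c.values()) + n_bins))  # Laplace
        if lp > best_lp:
            best, best_lp = y, lp
    return best

# Tiny synthetic example: feature 0 separates the two "modalities".
cuts = [0.0]
raw = [[-1.2, 0.3], [-0.7, -0.5], [0.9, 0.1], [1.4, -0.2]]
X = [[discretize(v, cuts) for v in row] for row in raw]
y = ["x-ray", "x-ray", "graph", "graph"]
priors, cpts = fit(X, y)
print(predict([discretize(-0.9, cuts), 0], priors, cpts, n_bins=2))  # -> x-ray
```

A richer Bayesian network (e.g. TAN) would add dependencies between features, but the prior-plus-CPT structure above is the common core.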

    Computer-based Blind Diagnostic System for Classification of Healthy and Disordered Voices

    A large population around the world suffers from voice-related complications. Computer-based voice disorder detection systems can play a substantial role in the early detection of voice disorders by providing complementary information to early-career otolaryngologists and general practitioners. However, various studies have concluded that the recording environment of voice samples affects disorder detection. This influence of the recording environment is a major obstacle to developing such systems when a local voice disorder database is not available. In addition, the number of samples is sometimes insufficient for training the system. To overcome these issues, a blind detection system for voice disorders is designed and implemented in this study: without any prior knowledge of voice disorders, the proposed system can still detect them. The developed system relies only on healthy voice samples, which can be recorded locally in the desired environment. The generation of a reference model for healthy subjects and the decision criteria to detect voice disorders are the two major tasks in the proposed system. These tasks are implemented with two different types of speech features, and the unsupervised reference model is created using the DBSCAN and k-means algorithms. The overall performance of the system is 74.9% in terms of the geometric mean of sensitivity and specificity. The results of the proposed system are encouraging and better than the performance of the Multidimensional Voice Program (MDVP) parameters, which are widely used for disorder assessment by otolaryngologists in clinics.
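    The reported score combines the two error directions as G = sqrt(sensitivity × specificity), which penalizes a detector that does well on only one class. A minimal sketch of that metric, with purely illustrative labels and predictions rather than the study's data:

```python
# Sketch of the geometric-mean score G = sqrt(sensitivity * specificity)
# used to summarise a binary disorder detector. All data here is made up.
import math

def geometric_mean_score(y_true, y_pred, positive="disordered"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)   # recall on disordered voices
    specificity = tn / (tn + fp)   # recall on healthy voices
    return math.sqrt(sensitivity * specificity)

y_true = ["disordered", "disordered", "healthy", "healthy", "healthy"]
y_pred = ["disordered", "healthy", "healthy", "healthy", "disordered"]
print(round(geometric_mean_score(y_true, y_pred), 3))  # -> 0.577
```

Unlike plain accuracy, G collapses to 0 if either class is never detected, which is why it suits imbalanced healthy/disordered test sets.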

    Overview of the ImageCLEFcoral 2020 Task: Automated Coral Reef Image Annotation

    This paper presents an overview of the ImageCLEFcoral 2020 task that was organised as part of the Conference and Labs of the Evaluation Forum - CLEF Labs 2020. The task addresses the problem of automatically segmenting and labelling a collection of underwater images that can be used in combination to create 3D models for the monitoring of coral reefs. The data set comprises 440 human-annotated training images, with 12,082 hand-annotated substrates, from a single geographical region. The test set comprises a further 400 test images, with 8,640 annotated substrates, from four geographical regions ranging in geographical similarity and ecological connectedness to the training data (100 images per subset). 15 teams registered, of which 4 submitted 53 runs. The majority of submissions used deep neural networks, generally convolutional ones. Participants' entries showed that some level of automatic annotation of corals and benthic substrates was possible, despite this being a difficult task due to the variation in colour, texture and morphology between and within classification types.

    Overview of The MediaEval 2022 Predicting Video Memorability Task

    This paper describes the 5th edition of the Predicting Video Memorability Task as part of MediaEval 2022. This year we have reorganised and simplified the task in order to facilitate a greater depth of inquiry. Similar to last year, two datasets are provided in order to facilitate generalisation; however, this year we have replaced the TRECVid 2019 Video-to-Text dataset with the VideoMem dataset in order to remedy underlying data quality issues, and to prioritise short-term memorability prediction by elevating the Memento10k dataset as the primary dataset. Additionally, a fully fledged electroencephalography (EEG)-based prediction sub-task is introduced. In this paper, we outline the core facets of the task and its constituent sub-tasks, describing the datasets, evaluation metrics, and requirements for participant submissions. Comment: 6 pages. In: MediaEval Multimedia Benchmark Workshop Working Notes, 202

    Overview of MediaEval 2020 predicting media memorability task: what makes a video memorable?

    This paper describes the MediaEval 2020 Predicting Media Memorability task. After first being proposed at MediaEval 2018, the Predicting Media Memorability task is in its 3rd edition this year, as the prediction of short-term and long-term video memorability (VM) remains a challenging task. In 2020, the format remained the same as in previous editions. This year the videos are a subset of the TRECVid 2019 Video-to-Text dataset, containing more action-rich video content compared with the 2019 task. In this paper a description of some aspects of this task is provided, including its main characteristics, a description of the collection, the ground truth dataset, evaluation metrics and the requirements for participants' run submissions.

    Overview of the ImageCLEFmed 2021 concept & caption prediction task

    The 2021 ImageCLEF concept detection and caption prediction task follows similar challenges that were already run from 2017–2020. The objective is to extract UMLS-concept annotations and/or captions from the image data that are then compared against the original text captions of the images. The images used are clinically relevant radiology images, and the describing captions were created by medical experts. In the caption prediction task, lexical similarity with the original image captions is evaluated with the BLEU score. In the concept detection task, UMLS (Unified Medical Language System) terms are extracted from the original text captions and compared against the predicted concepts in a multi-label way. The F1-score was used to assess the performance. The 2021 task has been conducted in collaboration with the Visual Question Answering task and used the same images. The task attracted a strong participation with 25 registered teams. In the end, 10 teams submitted 75 runs for the two subtasks. Results show that there is a variety of techniques that can lead to good prediction results for the two tasks. In comparison to earlier competitions, more modern deep learning architectures like EfficientNets and Transformer-based architectures for text or images were used.
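    Set-based multi-label F1 over predicted versus gold concept sets can be sketched as follows. The concept IDs below are hypothetical examples, and the official evaluation script may differ in details such as averaging and edge-case handling.

```python
# Sketch of per-image set-based F1 for multi-label concept detection,
# averaged over images. Concept IDs are illustrative placeholders only.
def f1(gold, predicted):
    """F1 between two sets of concept IDs for one image."""
    gold, predicted = set(gold), set(predicted)
    if not gold and not predicted:
        return 1.0          # convention: nothing expected, nothing predicted
    tp = len(gold & predicted)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Two hypothetical images, each with gold and predicted concept sets.
gold = [{"C0040405", "C0817096"}, {"C1306645"}]
pred = [{"C0040405"}, {"C1306645", "C0040405"}]
mean_f1 = sum(f1(g, p) for g, p in zip(gold, pred)) / len(gold)
print(round(mean_f1, 3))  # -> 0.667
```

Averaging per-image F1 (rather than pooling all concepts globally) keeps images with few concepts from being dominated by concept-rich ones.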

    The 2021 ImageCLEF benchmark: multimedia retrieval in medical, nature, internet and social media applications

    This paper presents the ideas for the 2021 ImageCLEF lab that will be organized as part of the Conference and Labs of the Evaluation Forum - CLEF Labs 2021 in Bucharest, Romania. ImageCLEF is an ongoing evaluation initiative (active since 2003) that promotes the evaluation of technologies for annotation, indexing and retrieval of visual data, with the aim of providing information access to large collections of images in various usage scenarios and domains. In 2021, the 19th edition of ImageCLEF will organize four main tasks: (i) a Medical task addressing visual question answering, concept annotation and tuberculosis classification, (ii) a Coral task addressing the annotation and localisation of substrates in coral reef images, (iii) a DrawnUI task addressing the creation of websites from either a drawing or a screenshot by detecting the different elements present in the design, and (iv) a new Aware task addressing the prediction of real-life consequences of online photo sharing. The strong participation in 2020, despite the COVID pandemic, with over 115 research groups registering and 40 submitting over 295 runs, shows an important interest in this benchmarking campaign. We expect the new tasks to attract at least as many researchers in 2021.

    Developing new online service models for the Celia library and the online distribution of structured DAISY audiobooks

    The subject of this study is the online distribution of digital DAISY books implemented by the Celia library, together with the development of new online service models around that distribution. The thesis is a development project that conceives and describes an online distribution system intended to improve the service that print-disabled patrons receive from the library. As a result, the thesis describes new currents in the library world, where libraries are searching for a new identity and role in a changing media landscape. Celia has systematically built a digital library whose most important lending product is the DAISY book, which is based on an international standard. DAISY is also well suited to online distribution, which will be the book's most important distribution channel in the near future. In addition to distribution, the library is also well equipped to produce other kinds of material. These supplementary materials - for example author introductions or book descriptions - add value to the online services, and if their production is planned correctly, their costs will remain reasonable. A structured document template, once created, can be reused and recycled. Metadata and cataloguing are also central concerns when developing new services. The project is the beginning of longer-term development work that is being carried out internationally with the Global Library as its goal. The ideal vision of the future would be a worldwide system in which a book is produced only once and libraries around the globe can exchange cataloguing data and lend the same book. There is still some way to go, but the start is already very promising and will bring continually improving services to Celia's customers.

    The aim of this research is to describe a project during which Celia – Library for the Visually Impaired developed an online distribution system for digital DAISY books. It also discusses the various new ways a modern library can build web services. In the final project I focus on a real-life project which started in late 2008 and will continue until December 2009. Libraries are looking in new directions, and Celia has its own strategy for building a digital library. The most important materials are DAISY books, which are based on an international standard. DAISY books can be distributed online, and that will be the main distribution channel in the near future. Besides books, the library can also produce other kinds of materials for web services. These can include, for example, recommendations or book reviews, which give added value to the services. The production is based on XML, and all distribution copies can be produced from the master files. Metadata and cataloguing are important as the library further develops its systems. This project is the beginning of a larger project, and the aim is to build a global library with other libraries as partners. It will take a while, but nevertheless the start is very promising.