9 research outputs found

    Umfrage zum Ernährungsverhalten in Hamburg (Survey on Dietary Behaviour in Hamburg)

    No full text
    This dataset allows dietary behaviour, in particular meat consumption and meat avoidance, to be studied at the individual level. The analysis focuses on sustainability aspects of consumption (purchasing criteria, meat consumption) as well as corresponding attitudes and motives. In addition, the dataset contains various variables on media consumption, on Schwartz's universal values (PVQ, adapted), and on sociodemographics. The data were collected at the end of 2018 via CATI from a probability sample drawn with the Gabler-Häder method, covering 1311 respondents from Hamburg. A design weight is provided.
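    Since a design weight is provided with the data, a minimal sketch of how such a weight is typically applied when estimating a population share; the column names below are hypothetical and not taken from the dataset's codebook:

```python
import pandas as pd

# Hypothetical column names; the actual codebook of the Hamburg survey may differ.
df = pd.DataFrame({
    "eats_meat": [1, 0, 1, 1, 0],               # 1 = respondent reports eating meat
    "design_weight": [0.8, 1.2, 1.0, 0.9, 1.1],  # design weight per respondent
})

# Unweighted share vs. design-weighted share of meat eaters.
unweighted = df["eats_meat"].mean()
weighted = (df["eats_meat"] * df["design_weight"]).sum() / df["design_weight"].sum()
print(f"unweighted: {unweighted:.2f}, design-weighted: {weighted:.2f}")
```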

    Performance measures for multilabel evaluation

    No full text
    With the steadily increasing amount of multimedia documents on the web and at home, the need for reliable semantic indexing methods that assign multiple keywords to a document grows. The performance of existing approaches is often measured with standard evaluation measures of the information retrieval community. In a case study on image annotation, we show the behaviour of 13 different evaluation measures and point out their strengths and weaknesses. For the analysis, data from 19 research groups that participated in the ImageCLEF Photo Annotation Task are utilized together with several configurations based on random numbers. We also investigated a recently proposed ontology-based measure, which incorporates structure information, relationships from the ontology, and the inter-annotator agreement for a concept, and compared it to a hierarchical variant. The results for the hierarchical measure are not competitive. The ontology-based measure assigns good scores to the systems that also rank well under the other measures, such as the example-based F-measure. For concept-based evaluation, MAP yields stable results with respect to random numbers and the number of annotated labels. The AUC measure shows good evaluation characteristics provided all annotations contain confidence values.
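    As an illustration of two of the measure families compared in the study (example-based and concept-based), a minimal sketch with scikit-learn; the toy annotations below are invented, and the paper's ontology-based measure is not reproduced here:

```python
import numpy as np
from sklearn.metrics import f1_score, average_precision_score, roc_auc_score

# Toy ground truth and confidence scores for 4 images x 3 concepts (invented data).
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
y_score = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.3],
                    [0.6, 0.7, 0.2], [0.3, 0.1, 0.9]])
y_pred = (y_score >= 0.5).astype(int)

# Example-based F-measure: F1 computed per image (row), then averaged.
f_example = f1_score(y_true, y_pred, average="samples")
# Concept-based MAP: average precision per concept (column), then averaged.
map_concept = np.mean([average_precision_score(y_true[:, c], y_score[:, c])
                       for c in range(y_true.shape[1])])
# Concept-based AUC: usable only because the annotations carry confidence values.
auc_concept = np.mean([roc_auc_score(y_true[:, c], y_score[:, c])
                       for c in range(y_true.shape[1])])
print(f_example, map_concept, auc_concept)
```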

    Content-based mood classification for photos and music: A generic multi-modal classification framework and evaluation approach

    No full text
    Mood and emotion information are often used as search terms or navigation properties within multimedia archives, retrieval systems, and multimedia players. Most of these applications engage end-users or experts to tag multimedia objects with mood annotations. Within the scientific community, different approaches to content-based music, photo, or multimodal mood classification can be found, relying on a wide range of mood definitions or models and completely different test suites. The purpose of this paper is to review common mood models in order to assess their flexibility, to present a generic multi-modal mood classification framework which uses various audio-visual features and multiple classifiers, and to present a novel music and photo mood classification reference set for evaluation. The classification framework is the basis for different applications, e.g. automatic media tagging or music slideshow players. The novel reference set can be used for comparison of different algorithms from various research groups. Finally, the results of the introduced framework are presented and discussed, and conclusions for future steps are drawn.
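    A minimal sketch of the generic idea of fusing audio and visual features into a single mood classifier; the feature dimensions, class count, and the choice of an SVM are assumptions for illustration, not the paper's exact configuration:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Invented placeholder features: e.g. a 20-dim audio descriptor (timbre, rhythm)
# and a 12-dim visual descriptor (color, texture) per media object.
audio_feat = rng.normal(size=(100, 20))
visual_feat = rng.normal(size=(100, 12))
mood_label = rng.integers(0, 4, size=100)   # e.g. 4 hypothetical mood classes

# Early fusion: concatenate modality features, then train one classifier.
X = np.hstack([audio_feat, visual_feat])
clf = make_pipeline(StandardScaler(), SVC())
clf.fit(X, mood_label)
print(clf.predict(X[:5]))
```

    Concatenation ("early fusion") is only one way to combine modalities; fusing the outputs of per-modality classifiers ("late fusion") is an equally common alternative when, as here, multiple classifiers are involved.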

    Semantic high-level features for automated cross-modal slideshow generation

    No full text
    This paper describes a technical solution for automated slideshow generation by extracting a set of high-level features from music, such as beat grid, mood, and genre, and intelligently combining this set with high-level image features, such as mood, daytime, and scene classification. An advantage of this high-level concept is that it enables users to incorporate their preferences regarding the semantic aspects of music and images. For example, a user might request the system to automatically create a slideshow which plays soft music and shows pictures with sunsets from the last 10 years of their own photo collection. The high-level feature extraction on both the audio and the visual information is based on the same underlying machine learning core, which processes different low- and mid-level audio and visual features. This paper describes the technical realization and evaluation of the algorithms with suitable test databases.
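    To illustrate the cross-modal matching step, a minimal sketch that selects photos for a song by comparing high-level labels; all labels and values here are invented stand-ins for the outputs of the extractors described above:

```python
# Hypothetical high-level features, as if produced by the music and image
# classifiers described in the paper (mood/genre for music, mood/scene for photos).
song = {"mood": "soft", "genre": "ambient", "bpm": 70}
photos = [
    {"file": "beach.jpg", "mood": "soft",      "scene": "sunset"},
    {"file": "party.jpg", "mood": "energetic", "scene": "indoor"},
    {"file": "lake.jpg",  "mood": "soft",      "scene": "sunset"},
]

# User request from the example: soft music with sunset pictures.
def matches(photo, song, wanted_scene):
    return photo["mood"] == song["mood"] and photo["scene"] == wanted_scene

slideshow = [p["file"] for p in photos if matches(p, song, "sunset")]
print(slideshow)  # ['beach.jpg', 'lake.jpg']
```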

    Music search and recommendation

    No full text
    In the last ten years, our ways of listening to music have drastically changed: in earlier times, we went to record stores or had to use low bit-rate audio coding to get some music and store it on PCs. Nowadays, millions of songs are within reach via online distributors. Some music lovers already have terabytes of music on their hard discs. Users are no longer struggling to get music, but to select and find the music they love. A number of technologies have been developed to address these new requirements. There are techniques to identify music and ways to search for it. Recommendation is a hot topic today, as is organizing music into playlists.
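    As a toy illustration of content-based recommendation over such a large collection, a nearest-neighbour lookup on audio feature vectors; the features and the cosine-similarity choice are assumptions for illustration, not a method taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented audio feature vectors for a collection of 10,000 songs.
library = rng.normal(size=(10_000, 32))
seed_song = library[42]

# Cosine similarity of the seed song to every song in the library.
sims = library @ seed_song / (np.linalg.norm(library, axis=1)
                              * np.linalg.norm(seed_song))
recommended = np.argsort(-sims)[1:6]   # top 5, skipping the seed itself
print(recommended)
```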

    Gadolinium Tissue Distribution in a Large-Animal Model after a Single Dose of Gadolinium-based Contrast Agents

    Full text link
    Background: There is an ongoing scientific debate about the degree and clinical importance of gadolinium deposition in the brain and other organs after administration of gadolinium-based contrast agents (GBCAs). While most published data focus on gadolinium deposition in the brain, other organs are rarely investigated.
    Purpose: To compare gadolinium tissue concentrations in various organs 10 weeks after one injection (comparable to a clinically applied dose) of linear and macrocyclic GBCAs in a large-animal model.
    Materials and Methods: In this prospective animal study conducted from March to May 2018, 36 female Swiss-Alpine sheep (age range, 4-10 years) received one injection (0.1 mmol/kg) of macrocyclic GBCAs (gadobutrol, gadoteridol, and gadoterate meglumine), linear GBCAs (gadodiamide and gadobenate dimeglumine), or saline. Ten weeks after injection, sheep were sacrificed and tissues were harvested. Gadolinium concentrations were quantified with inductively coupled plasma mass spectrometry (ICP-MS). Histologic staining was performed. Data were analyzed with nonparametric tests.
    Results: At 10 weeks after injection, linear GBCAs resulted in the highest mean gadolinium concentrations in the kidney (502 ng/g [95% CI: 270, 734]) and liver (445 ng/g [95% CI: 202, 687]), while low concentrations were found in the deep cerebellar nuclei (DCN) (30 ng/g [95% CI: 20, 41]). Tissue concentrations of linear GBCAs were three to 21 times higher than those of macrocyclic GBCAs. Administered macrocyclic GBCAs resulted in mean gadolinium concentrations of 86 ng/g (95% CI: 31, 141) (P = .08) in the kidney, 21 ng/g (95% CI: 4, 39) (P = .15) in liver tissue, and 10 ng/g (95% CI: 9, 12) (P > .99) in the DCN, which were not significantly elevated compared with concentrations in control animals. No histopathologic alterations were observed within any examined organ, irrespective of tissue concentration.
    Conclusion: Ten weeks after one injection of a clinically relevant dose of gadolinium-based contrast agents, the liver and kidney appeared to be reservoirs of gadolinium; however, despite gadolinium presence, no tissue injury was detected. © RSNA, 2021. Online supplemental material is available for this article. See also the editorial by Clément in this issue.
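    Since the abstract states that group concentrations were compared with nonparametric tests, a minimal sketch of such a comparison with SciPy; the values below are invented and are not the study's per-animal measurements:

```python
from scipy.stats import mannwhitneyu

# Invented kidney gadolinium concentrations (ng/g) for two groups;
# the study's actual per-animal values are not given in the abstract.
linear_gbca = [480, 530, 495, 510, 470, 525]
control = [12, 9, 15, 11, 13, 10]

# Mann-Whitney U: a common nonparametric two-sample comparison.
stat, p = mannwhitneyu(linear_gbca, control, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```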