648 research outputs found

    Between Formalism Research and the History of Ideas: The Reception of Potebnja in Western Literary Scholarship

    In the following, the various assessments of the relationship between Potebnja and the Russian Formalists are discussed. The general view that Potebnja was, in a certain sense, a precursor of the Formalists is not disputed here. Rather, the aim of this overview is to bring the question of in which sense he was one closer to an answer.

    Sugar intake of children in the "Childhood Obesity Project" trial

    Introduction: High sugar intake has been suggested to be involved in the development of overweight and obesity and several associated non-communicable diseases (NCDs) such as diabetes and cardiovascular disease (CVD). The aim of this doctoral thesis is to investigate whether a higher sugar intake in children is associated with two different risk factors for NCDs, i.e. overweight and obesity, and unfavorable blood markers of lipid and glucose metabolism. Methods: Data were drawn from the CHOP trial, a randomized controlled nutritional intervention trial in the first year of life with long-term follow-up. Infants from five European countries (Belgium, Germany, Italy, Poland, Spain) were randomized to feeding with a higher or lower protein content formula, and an additional breastfed reference group was recruited. Nutrition was assessed yearly from 2 to 6 years of age and again at age 8 years using 3-day weighed food protocols. Anthropometric measurements were taken from 2 to 8 years of age, at the same time as the nutrition assessments. A longitudinal analysis was performed to investigate the influence of sugar intake on age- and gender-standardized body mass index (BMI) and fat mass index (FMI) over time. A cross-sectional analysis at 8 years of age examined the association of sugar intake with several blood markers of lipid and glucose metabolism. Results: While increasing total sugar (TS) intake in an ad libitum diet was positively associated with the BMI and FMI z-scores, a negative association was observed on an energy-equivalent basis (zBMI: -0.033; 95% CI: -0.061, -0.005; zFMI: -0.050; 95% CI: -0.089, -0.011 per increase of 100 kcal from TS). Looking at blood markers, an increased consumption of 100 kcal from TS was significantly associated with a decrease in the HDL-C z-score (-0.14; 95% CI: -0.27, -0.01). An increase of TS intake from sugar-sweetened beverages (SSBs) showed the strongest association with a decrease in the HDL-C z-score (-1.67; 95% CI: -2.91, -0.42). For none of the other investigated markers of lipid or glucose metabolism was a significant association with TS increase, or with TS increase from major food groups, observed. Conclusions: The results indicate that increasing TS intake in childhood does not affect overweight or obesity on an energy-equivalent basis. Additionally, on an energy-equivalent basis only HDL-C was unfavorably influenced by increasing TS intake, and this association was very weak. The analysis of the current thesis suggests that increasing TS on an energy-equivalent basis in childhood has little impact on the investigated risk factors for NCDs. Therefore, prevention of NCD risk factors in early childhood should rather focus on the reduction of total energy intake (TEI). Nevertheless, a diet with a high sugar intake is generally not recommended, since products with a high sugar content, especially free sugars, are often accompanied by a low nutrient density and add unnecessary and dispensable energy.
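The outcomes above are reported as z-scores, i.e. measurements standardized against an age- and sex-specific reference mean and standard deviation. A minimal sketch of that standardization step, with made-up reference values rather than the WHO or trial reference data:

```python
def z_score(value, ref_mean, ref_sd):
    """Standardize a measurement against a reference population."""
    return (value - ref_mean) / ref_sd

# Hypothetical BMI reference values for illustration only.
bmi = 17.2
ref_mean, ref_sd = 15.8, 1.4
print(round(z_score(bmi, ref_mean, ref_sd), 2))  # 1.0
```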

    Algorithm Engineering for High-Dimensional Similarity Search Problems (Invited Talk)

    Similarity search problems in high-dimensional data arise in many areas of computer science such as databases, image analysis, machine learning, and natural language processing. One of the most prominent problems is finding the k nearest neighbors of a data point q ∈ ℝ^d in a large set of data points S ⊆ ℝ^d, under some distance measure such as the Euclidean distance. In contrast to lower-dimensional settings, we do not know of worst-case efficient data structures for such search problems in high-dimensional data, i.e., data structures that are faster than a linear scan through the data set. However, there is a rich body of (often heuristic) approaches that solve nearest neighbor search problems much faster than such a scan on many real-world data sets. Out of necessity, the term "solve" here means that these approaches give approximate results that are close to the true k nearest neighbors. In this talk, we survey recent approaches to nearest neighbor search and related problems. The talk consists of three parts: (1) What makes nearest neighbor search difficult? (2) How do current state-of-the-art algorithms work? (3) What are recent advances regarding similarity search on GPUs, in distributed settings, or in external memory?
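The linear-scan baseline mentioned above, which every specialized data structure must beat, can be sketched in a few lines; this is the exact (non-approximate) version under Euclidean distance:

```python
# Brute-force k-nearest-neighbor search: compute the distance from q to
# every point in S and keep the k smallest. Real systems vectorize or
# approximate this, but the semantics are the same.
import heapq
import math

def knn_linear_scan(S, q, k):
    """Return the k points of S nearest to q under Euclidean distance."""
    def dist(p):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    return heapq.nsmallest(k, S, key=dist)

S = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0), (0.5, 0.2)]
print(knn_linear_scan(S, (0.0, 0.0), 2))  # [(0.0, 0.0), (0.5, 0.2)]
```

The scan costs O(|S| * d) per query, which is exactly the bound the surveyed data structures try to beat.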

    Towards web-supported identification of top affiliations from scholarly papers

    Frequent successful publications by specific institutions are indicators for identifying outstanding centres of research. These institution data are present in scholarly papers as the authors' affiliations, often in very heterogeneous variants for the same institution across publications. Thus, matching is needed to identify the denoted real-world institutions and locations. We introduce an approximate string metric that handles acronyms and abbreviations. Our URL overlap similarity measure is based on comparing the result sets of web searches. Evaluations on affiliation strings of a conference show better results than soft TF-IDF, trigram, and Levenshtein. Incorporating the aligned affiliations, we present top institutions and countries for the last 10 years of SIGMOD.
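The URL overlap idea can be sketched as follows. In the real system each affiliation string would be issued as a web search query and the returned URL sets compared; here the search results are stubbed with hand-made sets, and the Jaccard coefficient is one plausible instantiation of the comparison, not necessarily the paper's exact metric (nor its acronym/abbreviation handling):

```python
def url_overlap_similarity(urls_a, urls_b):
    """Jaccard overlap of two result-URL sets (sketch of the idea)."""
    if not urls_a and not urls_b:
        return 0.0
    return len(urls_a & urls_b) / len(urls_a | urls_b)

# Hypothetical top-5 search results for two variant spellings of one
# institution; the URLs are made up for illustration.
results_v1 = {"mit.edu", "csail.mit.edu", "wikipedia.org/MIT", "news.mit.edu", "mit.edu/about"}
results_v2 = {"mit.edu", "csail.mit.edu", "wikipedia.org/MIT", "mass.gov", "usnews.com"}
print(round(url_overlap_similarity(results_v1, results_v2), 2))  # 0.43
```

Two heterogeneous variants of the same affiliation tend to retrieve overlapping URLs, which is what makes the signal useful where pure string metrics fail.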

    The Exocrine Pancreas: Non-Invasive Functional Evaluation by MRI for Early Diagnosis of Rejection after Transplantation

    In this work, the quantification of fluids on a clinical 1.0 Tesla whole-body MR scanner was implemented both in phantom experiments and in vivo in volunteers and patients. Fluid quantification is a well-known technique in NMR [Renou JP et al. 87; Schmidt, S. J. et al. 96], but those studies were limited to in vitro investigations. Spectroscopic MRI techniques would in principle allow comparable quantification, but their long examination times and limited spatial resolution make them unsuitable for clinical use. The present work was carried out in cooperation with the Klinik für Strahlendiagnostik and the Klinik für Innere Medizin, Schwerpunkt Gastroenterologie/Endokrinologie und Stoffwechsel, of the Klinikum of Philipps-Universität Marburg. The phantom studies showed a linear relationship between the signal intensity of fast (single-shot), strongly T2-weighted MR sequences and the amount of fluid present in the examined volume. Fluids can therefore not only be imaged but also quantified from the measured signal intensity. The phantom studies further showed that these measurements are reproducible and independent of the chosen slice thickness and pixel size. The influence of presaturation from preceding measurements can be eliminated if the interval between two measurements is at least 11 seconds. Both the animal experiments and the volunteer studies confirmed the linear relationship between signal intensity and fluid volume in the examined region. The volunteer studies were used to calibrate the measurements, so that a change in signal intensity could be converted into a fluid volume. The patient studies comprised three parts: 1. Diagnosis of chronic pancreatitis using MRH compared with endoscopic retrograde cholangiopancreatography. 2. Comparison of the MRH results with the results of the secretin-caerulein tube test. 3. Diagnosis of functional disorders of pancreas transplants. The patient studies showed that the MRH results correlate significantly with the results of the secretin-caerulein tube test. There were nevertheless some differences, which could be attributed to the different test conditions: the volume measured in the tube test was always higher than in MRH, mainly because MRH measured over only 10 minutes whereas the tube test lasted 60 minutes. Furthermore, the duodenum was not blocked by balloons during MRH, so fluid could be transported out of the examined volume. Overall, however, the patient studies demonstrated that MRH can diagnose advanced chronic pancreatitis, while problems remain in early stages. In particular, the introduction of an MRH score, composed of the secreted volume and the duration of secretion, improved diagnosis. In the future, a further improvement of the method's specificity should be possible with the aid of MR spectroscopy. The studies of patients after pancreas transplantation showed that MRH is well able to distinguish patients with a dysfunctional pancreas transplant from those with normal function. MRH also revealed differences between types of dysfunction: one patient with chronic rejection still secreted a small amount of pancreatic fluid, whereas both patients with necrotizing pancreatitis showed virtually no secretion.
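The calibration step described above amounts to fitting a line to (fluid volume, signal intensity) pairs and inverting it to convert a measured intensity change into an estimated volume. A minimal sketch with made-up numbers, not the thesis's calibration data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def intensity_to_volume(intensity, a, b):
    """Invert the calibration line: volume = (intensity - b) / a."""
    return (intensity - b) / a

# Hypothetical phantom measurements: fluid volume (ml) vs. mean signal
# intensity, chosen to lie on a perfect line for illustration.
volumes = [10, 20, 30, 40, 50]
intensities = [105, 205, 305, 405, 505]

a, b = fit_line(volumes, intensities)
print(intensity_to_volume(250, a, b))  # 24.5
```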

    Simple and Fast BlockQuicksort using Lomuto's Partitioning Scheme

    This paper presents simple variants of the BlockQuicksort algorithm described by Edelkamp and Weiss (ESA 2016). The simplification is achieved by using Lomuto's partitioning scheme instead of Hoare's crossing pointer technique to partition the input. To achieve a robust sorting algorithm that works well on many different input types, the paper introduces a novel two-pivot variant of Lomuto's partitioning scheme. A surprisingly simple twist to the generic two-pivot quicksort approach makes the algorithm robust. The paper provides an analysis of the theoretical properties of the proposed algorithms and compares them to their competitors. The analysis shows that Lomuto-based approaches incur a higher average sorting cost than the Hoare-based approach of BlockQuicksort. Moreover, the analysis is particularly useful to reason about pivot choices that suit the two-pivot approach. An extensive experimental study shows that, despite their worse theoretical behavior, the simpler variants perform as well as the original version of BlockQuicksort.
    Comment: Accepted at ALENEX 201
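For reference, Lomuto's classic single-pivot partitioning scheme, the building block these variants are based on, can be sketched as follows. This is the textbook scheme, not the paper's block-based or two-pivot variant:

```python
def lomuto_partition(a, lo, hi):
    """Partition a[lo..hi] around pivot a[hi]; return the pivot's final index."""
    pivot = a[hi]
    i = lo  # a[lo..i-1] holds the elements known to be < pivot
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]  # place the pivot between the two regions
    return i

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = lomuto_partition(a, lo, hi)
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)

data = [5, 2, 9, 1, 5, 6]
quicksort(data)
print(data)  # [1, 2, 5, 5, 6, 9]
```

Lomuto's scheme uses a single forward scan, which is what makes it simpler to combine with block-wise processing than Hoare's two crossing pointers.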

    Towards a Semantic Wiki Experience – Desktop Integration and Interactivity in WikSAR

    Common wiki systems such as MediaWiki lack semantic annotations. WikSAR (Semantic Authoring and Retrieval within a Wiki), a prototype of a semantic wiki, offers effortless semantic authoring. Instant gratification of users is achieved by context-aware means of navigation, interactive graph visualisation of the emerging ontology, as well as semantic retrieval possibilities. Embedding queries into wiki pages creates views (as dependent collections) on the information space. Desktop integration includes accessing dates (e.g. reminders) entered in the wiki via local calendar applications, maintaining bookmarks, and collecting web quotes within the wiki. Approaches to referencing documents on the local file system are sketched out, as well as an enhancement of the wiki interface to suggest appropriate semantic annotations to the user.

    How Good Is Multi-Pivot Quicksort?

    Multi-Pivot Quicksort refers to variants of classical quicksort where in the partitioning step k pivots are used to split the input into k + 1 segments. For many years, multi-pivot quicksort was regarded as impractical, but in 2009 a two-pivot approach by Yaroslavskiy, Bentley, and Bloch was chosen as the standard sorting algorithm in Sun's Java 7. In 2014 at ALENEX, Kushagra et al. introduced an even faster algorithm that uses three pivots. This paper studies what possible advantages multi-pivot quicksort might offer in general. The contributions are as follows: Natural comparison-optimal algorithms for multi-pivot quicksort are devised and analyzed. The analysis shows that the benefits of using multiple pivots with respect to the average comparison count are marginal, and that these strategies are inferior to simpler strategies such as the well-known median-of-k approach. A substantial part of the partitioning cost is caused by rearranging elements. A rigorous analysis of an algorithm for rearranging elements in the partitioning step is carried out, observing mainly how often array cells are accessed during partitioning. The algorithm behaves best if 3 to 5 pivots are used. Experiments show that this translates into good cache behavior and comes closest to predicting the observed running times of multi-pivot quicksort algorithms. Finally, it is studied how choosing pivots from a sample affects sorting cost. The study is theoretical in the sense that although the findings motivate design recommendations for multi-pivot quicksort algorithms that lead to running time improvements over known algorithms in an experimental setting, these improvements are small.
    Comment: Submitted to a journal; v2: Fixed statement of Gibbs' inequality; v3: Revised version, especially improving on the experiments in Section
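The two-pivot approach mentioned above can be sketched as follows. This is a simplified three-way dual-pivot quicksort in the spirit of Yaroslavskiy's scheme (elements are classified as < p, between p and q, or > q), not the tuned Java 7 implementation:

```python
def dual_pivot_quicksort(a, lo=0, hi=None):
    """Two-pivot quicksort: split a[lo..hi] into <p, [p..q], and >q segments."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]  # the two pivots, with p <= q
    lt, gt, i = lo + 1, hi - 1, lo + 1
    while i <= gt:
        if a[i] < p:            # element belongs in the left segment
            a[i], a[lt] = a[lt], a[i]
            lt += 1
            i += 1
        elif a[i] > q:          # element belongs in the right segment
            a[i], a[gt] = a[gt], a[i]
            gt -= 1             # re-examine the swapped-in element
        else:                   # element stays in the middle segment
            i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]  # move pivots to their final boundaries
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)
    dual_pivot_quicksort(a, lt + 1, gt - 1)
    dual_pivot_quicksort(a, gt + 1, hi)

data = [3, 7, 1, 9, 4, 1, 0]
dual_pivot_quicksort(data)
print(data)  # [0, 1, 1, 3, 4, 7, 9]
```

The paper's point is that the interesting cost of such schemes is not the comparison count but how the rearrangement accesses array cells, which this sketch makes easy to instrument.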