6,916 research outputs found
ANALYZING USER INTERACTION LOGS OF AN EDUCATIONAL VISUALIZATION SYSTEM TO UNDERSTAND HOW STUDENTS GENERATE INSIGHTS
Department of Computer Science and Engineering
Visual analytics systems have become popular in many domains. Recently, a visual analytics tool, VAiRoma, was designed in the educational domain to help students learn history. However, how users interact with such systems is still not well understood. In an educational domain, it is important to know how users gain insights; this may give us an opportunity to understand users' learning styles, so that we can design better visualization tools in the future. In this thesis, I analyze the interaction logs of an educational visualization system, VAiRoma, in order to explore how users generate insights via the system. The results show that users tried more explorative interactions at the initial stages of their insight generation path. In the middle of the path, users mostly read textual information. Toward the end, they attempted to demonstrate their understanding of what they had learnt by creating an annotation. The insight generation path also exhibits cyclic behavior: in 38% of cases, users cancelled the "create an annotation" action during the annotation creation process and went back to read textual information.
Creation, Enrichment and Application of Knowledge Graphs
The world is in constant change, and so is the knowledge about it. Knowledge-based systems - for example, online encyclopedias, search engines and virtual assistants - are thus faced with the constant challenge of collecting this knowledge and, beyond that, of understanding it and making it accessible to their users. Only if a knowledge-based system is capable of this understanding - that is, capable of more than just reading a collection of words and numbers without grasping their semantics - can it recognise relevant information and make it understandable to its users. The dynamics of the world play a unique role in this context: events of various kinds which are relevant to different communities are shaping the world, with examples ranging from the coronavirus pandemic to the matches of a local football team. Vital questions arise when dealing with such events: How do we decide which events are relevant, and for whom? How should these events be modelled so that knowledge-based systems can understand them? How is the acquired knowledge returned to the users of these systems?
A well-established concept for making knowledge understandable to knowledge-based systems is the knowledge graph, which contains facts about entities (persons, objects, locations, ...) in the form of a graph, represents relationships between these entities, and makes the facts understandable by means of ontologies. This thesis considers knowledge graphs from three different perspectives: (i) Creation of knowledge graphs: even though the Web offers a multitude of sources that provide knowledge about the events in the world, the creation of an event-centric knowledge graph requires recognising such knowledge, integrating it across sources and representing it. (ii) Knowledge graph enrichment: knowledge of the world seems to be infinite, and it seems impossible to grasp it entirely at any time. Therefore, methods that autonomously infer new knowledge and enrich the knowledge graphs are of particular interest. (iii) Knowledge graph interaction: even having all the knowledge of the world available has no value in itself; it must be made accessible to humans. Based on knowledge graphs, systems can share their knowledge with their users without demanding any conceptual understanding of knowledge graphs from them. For this to succeed, means for interacting with the knowledge are required that hide the knowledge graph below the surface.
In concrete terms, I present EventKG - a knowledge graph that represents the happenings in the world in 15 languages - as well as Tab2KG - a method for understanding tabular data and transforming it into a knowledge graph. For the enrichment of knowledge graphs without any background knowledge, I propose HapPenIng, which infers missing events from the descriptions of related events. I demonstrate means for interaction with knowledge graphs using the example of two web-based systems (EventKG+TL and EventKG+BT) that enable users to explore the happenings in the world as well as the most relevant events in the lives of well-known personalities.
From Consumers to Creators: Wikistoriography and the Consensus of Collaborative Learning in the Landscape of Web 2.0
This thesis reviews the significance of Wikipedia in an approach to internet historiography. Wikipedia incorporates Web 2.0 methods to create a new way to study and revise history through a consensus of multiple users and editors. The argument of the thesis is structured to address some of the qualms many academics have about Wikipedia, to examine how historiography functions in an internet-driven world, and finally to consider how Wikipedia fits into the puzzle of internet historiography. It concludes that Wikipedia, the largest user-based information site in the world, must be at the forefront of discussion surrounding internet historiography.
Promoting Awareness for the Cibachrome Association
We completed our project on behalf of the Cibachrome Association of Marly, Switzerland, to enhance their public awareness and outreach. Due to the technical nature of their materials, we focused on outreach methods that would benefit photographic curators, conservators and other interested members of the public. We created a revised, expanded website and a new Wiki article, using feedback from the Association's target demographics. This will help the Cibachrome Association effectively publicize its information and raise further public awareness.
Content Selection for Timeline Generation from Single History Articles
This thesis investigates the problem of content selection for timeline generation from single history articles. While the task of timeline generation has been addressed before, most previous approaches assume the existence of a large corpus of history articles from the same era. They exploit the fact that salient information is likely to be mentioned multiple times in such corpora. However, large resources of this kind are only available for historical events that happened in the most recent decades. In this thesis, I present approaches which can be used to create history timelines for any historical period, even for eras such as the Middle Ages, for which no large corpora of supplementary text exist.
The thesis first presents a system that selects relevant historical figures in a given article, a task which is substantially easier than full timeline generation.
I show that a supervised approach which uses linguistic, structural and semantic features outperforms a competitive baseline on this task.
Based on the observations made in this initial study, I then develop approaches for timeline generation. I find that an unsupervised approach that takes into account the article's subject area outperforms several supervised and unsupervised baselines.
A main focus of this thesis is the development of evaluation methodologies and resources, as no suitable corpora existed when work began.
For the initial experiment on important historical figures, I construct a corpus of existing timelines and textual articles, and devise a method for evaluating algorithms based on this resource.
For timeline generation, I present a comprehensive evaluation methodology which is based on the interpretation of the task as a special form of single-document summarisation. This methodology scores algorithms based on meaning units rather than surface similarity. Unlike previous semantic-units-based evaluation methods for summarisation, my evaluation method does not require any manual annotation of system timelines. Once an evaluation resource has been created, which involves only annotation of the input texts, new timeline generation algorithms can be tested at no cost. This crucial advantage should make my new evaluation methodology attractive for the evaluation of general single-document summaries beyond timelines.
I also present an evaluation resource which is based on this methodology. It was constructed using gold-standard timelines elicited from 30 human timeline writers, and has been made publicly available.
This thesis concentrates on the content selection stage of timeline generation, and leaves the surface realisation step for future work. However, my evaluation methodology is designed in such a way that it can in principle also quantify the degree to which surface realisation is successful.
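The meaning-units idea behind this evaluation methodology can be sketched as a coverage score: a system timeline is credited for each gold-standard meaning unit it expresses, regardless of surface wording. The unit identifiers below are invented for illustration; the actual methodology is considerably more involved.

```python
# Hedged sketch of meaning-unit-based scoring: the system timeline is
# scored by the fraction of gold meaning units it covers, rather than
# by string overlap. Unit names here are illustrative only.

def meaning_unit_recall(system_units, gold_units):
    """Fraction of gold meaning units that appear in the system timeline."""
    if not gold_units:
        return 0.0
    return len(set(system_units) & set(gold_units)) / len(gold_units)

gold = {"caesar_crosses_rubicon", "civil_war_begins", "caesar_dictator"}
system = {"caesar_crosses_rubicon", "caesar_dictator", "senate_flees"}

print(meaning_unit_recall(system, gold))  # 2 of 3 gold units are covered
```

Because the score is computed over unit identifiers assigned to the input texts, two timelines that express the same event in different words receive the same credit, which is exactly what surface-similarity metrics fail to do.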
Changing the Scholarly Sources Landscape with Geomorphology Undergraduate Students
Science is a core discipline in academia, yet the focus of most undergraduate technical writing is generally on the data and results, not the literature review. The Science, Technology, Engineering, and Math (STEM) librarian and a new geology professor at the University of Nebraska at Omaha (UNO) collaborated to develop an information literacy session for students in a geomorphology class. Here we outline the background of the campus STEM initiatives and the assignment, as well as the library instruction activity, learning outcomes, and assessment components. The activity improved student use of scholarly sources, and we provide suggested activity modifications for future teaching and assessment efforts.
Extracting Temporal Expressions from Unstructured Open Resources
AETAS is an end-to-end system with a service-oriented architecture (SOA) that retrieves plain-text data from web and blog news and represents and stores it in RDF, with a special focus on its temporal dimension. The system allows users to acquire, browse and query Linked Data obtained from unstructured sources.
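The core step of mapping temporal expressions in plain text to RDF-style statements can be sketched as follows. The regular expression and the `dct:date` vocabulary below are simplified assumptions for illustration, not the actual AETAS pipeline.

```python
import re

# Illustrative sketch: find ISO-format dates in plain text and emit one
# RDF-style (document, predicate, date) triple per match. Real systems
# handle far richer temporal expressions ("last Tuesday", "mid-2019", ...).
DATE = re.compile(r"\b(\d{4}-\d{2}-\d{2})\b")

def extract_temporal_triples(doc_uri, text):
    """Return (document, predicate, date) triples for each ISO date found."""
    return [(doc_uri, "dct:date", d) for d in DATE.findall(text)]

news = "The summit opened on 2023-05-11 and closed on 2023-05-13."
print(extract_temporal_triples("doc:news-42", news))  # one triple per date
```

Storing the output as RDF is what lets users later query the news by its temporal dimension, e.g. for all documents dated within a given interval.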
Wikipedia Conflict Representation in Articles of War: A critical discourse analysis of current, on-going, socio-political Wikipedia articles about war
With the help of a discourse-historical approach, a textual corpus composed of the talk pages of three controversial, socio-political Wikipedia articles about ongoing wars was analyzed in order to shed light on the way in which conflict is represented through the editing and discussion process. Additionally, a rational discourse was employed in order to unravel communication distortions within the editing process in an attempt to improve communication and consensus-seeking. Finally, semi-structured interviews of participating contributors within studied articles were used in order to better understand Wikipedian experience in a controversial collaboration scenario. Results unveiled a set of discursive practices in which Wikipedians participate, as well as the creation of a Wikipedian argumentation topoi framework useful for further Wikipedia-specific discourse analysis involving the content change-retain negotiation process.
Edit Filters on English Wikipedia
The present thesis offers an initial investigation of a previously unexplored quality control mechanism of Wikipedia: edit filters. It analyses how edit filters fit into the quality control system of English Wikipedia, why they were introduced, and what tasks they take over. Moreover, it discusses why rule-based systems like these still seem to be popular today, when more advanced machine learning methods are available. The findings indicate that edit filters were implemented to take care of obvious but persistent types of vandalism, disallowing these from the start so that (human) resources can be used more efficiently elsewhere (i.e. for judging less obvious cases). In addition to disallowing such vandalism, edit filters appear to be applied in ambiguous situations where an edit is disruptive but the motivation of the editor is not clear. In such cases, the filters take an "assume good faith" approach and seek, via warning messages, to guide the disrupting editor towards transforming their contribution into a constructive one. There is also a smaller number of filters taking care of miscellaneous maintenance tasks, above all tracking a certain bug or other behaviour for further investigation. Since the current work is just a first exploration of edit filters, a comprehensive list of open questions for future research is compiled at the end.
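The described division of filter actions (disallow obvious persistent vandalism, warn in ambiguous cases, log behaviour for later investigation) can be sketched as a small rule-based checker. The rules below are invented examples for illustration, not actual English Wikipedia edit filters.

```python
import re

# Hedged sketch of a rule-based edit filter: each filter pairs a pattern
# with an action. The three example rules mirror the action types the
# thesis describes; the patterns themselves are made up.
FILTERS = [
    (re.compile(r"(.)\1{20,}"), "disallow"),        # obvious vandalism: character floods
    (re.compile(r"\bbuy cheap\b", re.I), "warn"),   # ambiguous, "assume good faith": warn first
    (re.compile(r"<ref>\s*</ref>"), "log"),         # maintenance: track empty references
]

def check_edit(text):
    """Return the action of the first matching filter, or 'allow'."""
    for pattern, action in FILTERS:
        if pattern.search(text):
            return action
    return "allow"

print(check_edit("a" * 25))  # prints 'disallow'
```

A design of this kind is cheap to evaluate on every edit and each rule is human-readable, which is one plausible reason such systems remain in use alongside machine learning classifiers.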
- âŠ