2,106 research outputs found
Linked Data Quality Assessment and its Application to Societal Progress Measurement
In recent years, the Linked Data (LD) paradigm has emerged as a simple mechanism for employing the Web as a medium for data and knowledge integration, where both documents and data are linked. Moreover, the semantics and structure of the underlying data are kept intact, making this the Semantic Web. LD essentially entails a set of best practices for publishing and connecting structured data on the Web, which allows publishing and exchanging information in an interoperable and reusable fashion. Many different communities on the Internet, such as geographic, media, life sciences and government, have already adopted these LD principles. This is confirmed by the dramatically growing Linked Data Web, where currently more than 50 billion facts are represented.
With the emergence of the Web of Linked Data, several use cases become possible thanks to the rich and disparate data integrated into one global information space. Linked Data, in these cases, not only assists in building mashups by interlinking heterogeneous and dispersed data from multiple sources but also empowers the uncovering of meaningful and impactful relationships. These discoveries have paved the way for scientists to explore the existing data and uncover meaningful outcomes that they might not have been aware of previously.
In all these use cases utilizing LD, one crippling problem is the underlying data quality. Incomplete, inconsistent or inaccurate data gravely affects the end results, making them unreliable. Data quality is commonly conceived as fitness for use, be it for a certain application or use case; datasets that contain quality problems may still be useful for certain applications, depending on the use case at hand. Thus, LD consumption has to deal with the problem of getting the data into a state in which it can be exploited for real use cases. Insufficient data quality can be caused either by the LD publication process or can be intrinsic to the data source itself.
A key challenge is to assess the quality of datasets published on the Web and make this quality information explicit. Assessing data quality is particularly challenging for LD, as the underlying data stems from a set of multiple, autonomous and evolving data sources. Moreover, the dynamic nature of LD makes quality assessment crucial for measuring how accurately the data represents the real world. On the document Web, data quality can only be indirectly or vaguely defined, but there is a requirement for more concrete and measurable data quality metrics for LD. Such data quality metrics include correctness of facts w.r.t. the real world, adequacy of semantic representation, quality of interlinks, interoperability, timeliness, or consistency with regard to implicit information. Even though data quality is an important concept in LD, few methodologies have been proposed to assess the quality of these datasets.
Thus, in this thesis, we first unify 18 data quality dimensions and provide a total of 69 metrics for the assessment of LD. The first methodology includes the employment of LD experts for the assessment. This assessment is performed with the help of the TripleCheckMate tool, which was developed specifically to assist LD experts in assessing the quality of a dataset, in this case DBpedia. The second methodology is a semi-automatic process, in which the first phase involves the detection of common quality problems by the automatic creation of an extended schema for DBpedia. The second phase involves the manual verification of the generated schema axioms. Thereafter, we employ the wisdom of the crowd, i.e. workers on online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), to assess the quality of DBpedia. We then compare the two approaches (the previous assessment by LD experts and the assessment by MTurk workers in this study) in order to measure the feasibility of each type of user-driven data quality assessment methodology.
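To make the comparison of the two user-driven assessments concrete, the minimal sketch below shows how crowd judgements could be aggregated by majority vote and checked against expert labels; the CSV layout, column names and file names are illustrative assumptions, not the format actually used in the thesis.

    # Hypothetical aggregation of per-triple crowd judgements vs. expert labels.
    import csv
    from collections import Counter, defaultdict

    def majority_vote(labels):
        """Most frequent label (e.g. 'correct' / 'incorrect') for one triple."""
        return Counter(labels).most_common(1)[0][0]

    def agreement_with_experts(crowd_csv, expert_csv):
        crowd = defaultdict(list)
        with open(crowd_csv, newline="") as f:
            for row in csv.DictReader(f):           # assumed columns: triple_id, worker_id, label
                crowd[row["triple_id"]].append(row["label"])
        with open(expert_csv, newline="") as f:     # assumed columns: triple_id, label
            expert = {row["triple_id"]: row["label"] for row in csv.DictReader(f)}
        shared = [t for t in expert if t in crowd]
        hits = sum(majority_vote(crowd[t]) == expert[t] for t in shared)
        return hits / len(shared) if shared else 0.0

    # e.g. print(agreement_with_experts("mturk_judgements.csv", "expert_labels.csv"))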
Additionally, we evaluate another semi-automated methodology for LD quality assessment, which also involves human judgement. In this semi-automated methodology, selected metrics are formally defined and implemented as part of a tool, namely R2RLint. Users are provided not only with the results of the assessment but also with the specific entities that cause the errors, which helps them understand and fix the quality issues. Finally, we consider a domain-specific use case that consumes LD and depends on its data quality. In particular, we identify four LD sources, assess their quality using the R2RLint tool and then utilize them in building the Health Economic Research (HER) Observatory. We show the advantages of this semi-automated assessment over the other types of quality assessment methodologies discussed earlier. The Observatory aims at evaluating the impact of research development on the economic and healthcare performance of each country per year. We illustrate the usefulness of LD in this use case and the importance of quality assessment for any data analysis.
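As a flavour of what such a metric implementation might look like, the sketch below computes one simple quality score and, in the spirit described above, also returns the concrete entities causing the violations. It is not the R2RLint implementation; it merely illustrates the idea with rdflib and a made-up metric (resources lacking an rdf:type).

    # Illustrative metric: share of subjects that carry an rdf:type statement.
    from rdflib import Graph, RDF, URIRef

    def untyped_subject_metric(ttl_path):
        g = Graph()
        g.parse(ttl_path, format="turtle")
        subjects = {s for s in g.subjects() if isinstance(s, URIRef)}
        untyped = {s for s in subjects if (s, RDF.type, None) not in g}
        score = 1.0 - len(untyped) / len(subjects) if subjects else 1.0
        return score, sorted(untyped)       # score in [0, 1] plus the offending entities

    # score, offenders = untyped_subject_metric("dataset.ttl")
    # print(score, offenders[:10])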
Using formal concept analysis for checking the structure of an ontology in LOD: the example of DBpedia
Linked Open Data (LOD) constitute a large and growing collection of inter-domain data sets. LOD are represented as RDF graphs that allow interlinking with ontologies, facilitating data integration, knowledge engineering and, in a certain sense, knowledge discovery. However, ontologies associated with LOD are of varying quality and not necessarily adapted to all data sets under study. In this paper, we propose an original approach, based on Formal Concept Analysis (FCA), which builds an optimal lattice-based structure for classifying RDF resources w.r.t. their predicates. We introduce the notion of lattice annotation, which enables comparing our classification with an ontology schema, to confirm subsumption axioms or suggest new ones. We conducted experiments on the DBpedia data set and its domain ontologies, DBpedia Ontology and YAGO. Results show that our approach is well-founded and illustrates the ability of FCA to guide a possible structuring of LOD.
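For readers unfamiliar with FCA, the following naive sketch (exponential in the number of objects, so suitable only for illustration) shows the kind of formal context the approach builds (RDF resources as objects, the predicates they use as attributes) and how the resulting formal concepts group resources by shared predicates; the resources and predicates are invented DBpedia-style examples, not data from the paper.

    # Toy formal context: object -> set of attributes (predicates the resource uses).
    from itertools import combinations

    context = {
        "dbr:Berlin":   {"dbo:country", "dbo:populationTotal", "rdfs:label"},
        "dbr:Paris":    {"dbo:country", "dbo:populationTotal", "rdfs:label"},
        "dbr:Einstein": {"dbo:birthPlace", "dbo:field", "rdfs:label"},
    }

    def extent(attrs):              # objects having every attribute in attrs
        return {o for o, a in context.items() if attrs <= a}

    def intent(objs):               # attributes shared by every object in objs
        sets = [context[o] for o in objs]
        return set.intersection(*sets) if sets else set.union(*context.values())

    def formal_concepts():          # naive enumeration over all object subsets
        concepts = set()
        for r in range(len(context) + 1):
            for group in combinations(context, r):
                b = frozenset(intent(group))
                concepts.add((frozenset(extent(b)), b))
        return concepts

    for ext, itt in sorted(formal_concepts(), key=lambda c: len(c[0])):
        print(sorted(ext), "<->", sorted(itt))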
The Case of Wikidata
Since its launch in 2012, Wikidata has grown to become the largest open knowledge base (KB), containing more than 100 million data items and over 6 million registered users. Wikidata serves as the structured data backbone of Wikipedia, addressing data inconsistencies and adhering to the motto of "serving anyone anywhere in the world", a vision realized through the diversity of knowledge. Despite being a collaboratively contributed platform, the Wikidata community heavily relies on bots, automated accounts with batch and speedy editing rights, for a majority of edits. As Wikidata approaches its first decade, the question arises: how close is Wikidata to achieving its vision of becoming a global KB, and how diverse is it in serving the global population? This dissertation investigates the current status of Wikidata's diversity, the role of bot interventions on diversity, and how bots can be leveraged to improve diversity within the context of Wikidata.
The methodologies used in this study are a mapping study and content analysis, which led to the development of three datasets: 1) the Wikidata Research Articles Dataset, covering the literature on Wikidata from its first decade of existence, sourced from online databases to inspect its current status; 2) the Wikidata Requests-for-Permissions Dataset, based on the pages requesting bot rights on the Wikidata website, to explore bots from a community perspective; and 3) the Wikidata Revision History Dataset, compiled from the edit history of Wikidata to investigate bot editing behavior and its impact on diversity. All three datasets are freely available online.
The insights gained from the mapping study reveal the growing popularity of Wikidata in the research community and its various application areas, indicative of its progress toward the ultimate goal of reaching the global community. However, there is currently no research addressing the topic of diversity in Wikidata, which could shed light on its capacity to serve a diverse global population. To address this gap, this dissertation proposes a diversity measurement concept that defines diversity in a KB context in terms of variety, balance, and disparity, and that is capable of assessing diversity in a KB from two main angles: user and data. The application of this concept to the domains and classes of the Wikidata Revision History Dataset exposes an imbalanced content distribution across Wikidata domains, which indicates low data diversity in Wikidata domains.
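To give a rough idea of what such a measurement could look like, the sketch below computes two of the three components (variety and balance) over invented per-domain item counts; the thesis' exact formulas may differ, and the disparity component additionally requires a distance measure between domains, which is omitted here.

    # Illustrative variety and balance scores over made-up per-domain item counts.
    import math

    items_per_domain = {"science": 41_000_000, "culture": 9_000_000,
                        "geography": 12_000_000, "sports": 3_000_000}

    def variety(counts):
        return sum(1 for c in counts.values() if c > 0)     # number of domains represented

    def balance(counts):
        total = sum(counts.values())
        probs = [c / total for c in counts.values() if c > 0]
        entropy = -sum(p * math.log(p) for p in probs)
        return entropy / math.log(len(probs)) if len(probs) > 1 else 1.0  # evenness in [0, 1]

    print(variety(items_per_domain), round(balance(items_per_domain), 3))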
Further analysis discloses that bots have been active since the inception of Wikidata, and the community embraces their involvement in content editing tasks, often importing data from Wikipedia, which shows a low diversity of sources in bot edits. Bots and human users engage in similar editing tasks but exhibit distinct editing patterns. The findings of this thesis confirm that bots possess the potential to influence diversity within Wikidata by contributing substantial amounts of data to specific classes and domains, leading to an imbalance. However, this potential can also be harnessed to enhance coverage in classes with limited content and restore balance, thus improving diversity. Hence, this study proposes to enhance diversity through automation and demonstrates the practical implementation of the recommendations using a specific use case.
In essence, this research enhances our understanding of diversity in relation to a KB, elucidates the influence of automation on data diversity, and sheds light on diversity improvement within a KB context through the use of automation.
Mining Twitter for crisis management: realtime floods detection in the Arabian Peninsula
A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy. In recent years, large amounts of data have been made available on microblog platforms such as Twitter; however, it is difficult to filter and extract information and knowledge from such data because of its high volume and noise. On Twitter, the general public is able to report real-world events, such as floods, in real time, acting as social sensors. Consequently, it is beneficial to have a method that can detect flood events automatically in real time to help governmental authorities, such as crisis management authorities, detect the event and make decisions during its early stages.
This thesis proposes a real-time flood detection system that mines Arabic tweets using machine learning and data mining techniques. The proposed system comprises the following main components: data collection, pre-processing, flooding event extraction, location inference, location named entity linking, and flooding event visualisation. An effective method of flood detection from Arabic tweets is presented and evaluated using supervised learning techniques. Furthermore, this work presents a location named entity inference method based on the Learning to Search approach; the results show that the proposed method outperformed existing systems, with significantly higher accuracy in inferring flood locations from tweets written in colloquial Arabic. For location named entity linking, a method has been designed that utilises Google API services as a knowledge base to extract accurate geocode coordinates associated with the location named entities mentioned in tweets. The results show that the proposed location linking method locates 56.8% of tweets within 0-10 km of the actual location. Further analysis has shown that the accuracy in locating tweets in the correct city and region is 78.9% and 84.2%, respectively.
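A minimal sketch of the kind of distance-based evaluation reported above (the share of tweets whose predicted geocode falls within 10 km of the actual flood location) is shown below; the coordinate pairs would come from the geocoding step and a ground-truth set, and the thesis' own evaluation pipeline is not reproduced here.

    # Great-circle (haversine) distance and the share of predictions within a radius.
    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        r = 6371.0                               # mean Earth radius in km
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def share_within(predicted, actual, km=10.0):
        hits = sum(haversine_km(*p, *g) <= km for p, g in zip(predicted, actual))
        return hits / len(actual)

    # e.g. share_within([(24.71, 46.68)], [(24.77, 46.74)]) -> 1.0 (points roughly 9 km apart)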
Knowledge-Based Techniques for Scholarly Data Access: Towards Automatic Curation
Accessing up-to-date and quality scientific literature is a critical preliminary step in any research activity.
Identifying relevant scholarly literature for the purposes of a given task or application is, however, a complex and time-consuming activity.
Despite the large number of tools developed over the years to support scholars in their literature surveying activity, such as Google Scholar, Microsoft Academic Search, and others, the best way to access quality papers remains asking a domain expert who is actively involved in the field and knows research trends and directions.
State-of-the-art systems, in fact, either do not allow exploratory search activities, such as identifying the active research directions within a given topic, or do not offer proactive features, such as content recommendation, both of which are critical to researchers.
To overcome these limitations, we strongly advocate a paradigm shift in the development of scholarly data access tools: moving from traditional information retrieval and filtering tools towards automated agents able to make sense of the textual content of published papers and therefore monitor the state of the art.
Building such a system is, however, a complex task that implies tackling non-trivial problems in the fields of Natural Language Processing, Big Data Analysis, User Modelling, and Information Filtering.
In this work, we introduce the concept of an Automatic Curator System and present its fundamental components.
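As an illustration of the proactive, content-based side of such an agent, the sketch below ranks unseen abstracts by their similarity to a profile built from papers a researcher already follows; the titles are placeholders, and this is not the system described in the thesis, only a minimal content-recommendation baseline using scikit-learn.

    # Rank candidate papers against an averaged TF-IDF profile of already-read papers.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    read_papers = ["ontology learning from text corpora",
                   "entity linking for scholarly knowledge graphs"]
    candidates = ["neural topic models for scientific literature",
                  "a survey of image segmentation",
                  "knowledge graph embeddings for citation recommendation"]

    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(read_papers + candidates)
    profile = np.asarray(tfidf[:len(read_papers)].mean(axis=0))   # user interest profile
    scores = cosine_similarity(profile, tfidf[len(read_papers):])[0]
    for title, s in sorted(zip(candidates, scores), key=lambda x: -x[1]):
        print(f"{s:.2f}  {title}")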
Requirements and Use Cases; Report I on the sub-project Smart Content Enrichment
In this technical report, we present the results of the first milestone phase of the Corporate Smart Content sub-project "Smart Content Enrichment". We present analyses of the state of the art in the fields concerning the three work packages defined in the sub-project, which are aspect-oriented ontology development, complex entity recognition, and semantic event pattern mining. We compare the research approaches related to our three research subjects and briefly outline our future work plan.
- âŠ