Semantic discovery and reuse of business process patterns
Patterns currently play an important role in modern information systems (IS) development, and their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable solution for promoting the reusability of recurrent generalized models in the very early stages of development. As a statement of research in progress, this paper focuses on business process patterns and proposes an initial methodological framework for their discovery and reuse within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.
A treatise on Web 2.0 with a case study from the financial markets
There has been much hype in vocational and academic circles surrounding the emergence of web 2.0, or social media; however, relatively little work has been dedicated to substantiating the actual concept of web 2.0. Many have dismissed it as undeserving of this new title, since the term web 2.0 assumes a certain interpretation of web history, including enough progress in a certain direction to trigger a succession [i.e. web 1.0 → web 2.0]. Others have provided arguments in support of this development, and there has been a considerable amount of enthusiasm in the literature. Much research has been devoted to evaluating the current use of web 2.0 and analysing user-generated content, but an objective and thorough assessment of what web 2.0 really stands for has been to a large extent overlooked. More recently, the idea of collective intelligence facilitated via web 2.0, and its potential applications, has raised interest among researchers, yet a more unified approach and further work in the area of collective intelligence are needed.
This thesis identifies and critically evaluates a wider context for the web 2.0 environment and what caused it to emerge, providing a rich literature review on the topic, a review of existing taxonomies, a quantitative and qualitative evaluation of the concept itself, and an investigation of the collective intelligence potential that emerges from application usage. Finally, a framework for harnessing collective intelligence in a more systematic manner is proposed.
In addition to the presented results, novel methodologies are also introduced throughout this work. In order to provide interesting insight and to illustrate the analysis, a case study of the recent financial crisis is considered. Some interesting results relating to the crisis are revealed within user-generated content data, and relevant issues are discussed where appropriate.
Using Machine Learning to improve Internet Privacy
Internet privacy lacks transparency, choice, quantifiability, and accountability, especially as the deployment of machine learning technologies becomes mainstream. However, these technologies can be both privacy-invasive and privacy-protective. This dissertation advances the thesis that machine learning can be used to improve Internet privacy. Starting with a case study that shows how a social network's potential to learn the ethnicity and gender of its users from geotags can be estimated, various strands of machine learning technologies for furthering privacy are explored. While the quantification of privacy is the subject of well-known privacy metrics, such as k-anonymity or differential privacy, I discuss how some of those metrics can be leveraged in tandem with machine learning algorithms to quantify the privacy-invasiveness of data collection practices. Further, I demonstrate how the current notice-and-choice paradigm can be realized by automatic machine learning privacy policy analysis. The implemented system notifies users efficiently and accurately of applicable data practices. Furthermore, by analyzing software data flows, users are enabled to compare actual with described data practices, and regulators can enforce them at scale. The emerging cross-device tracking practices of ad networks, analytics companies, and others can likewise be supplemented by machine learning technologies to notify users of privacy practices across devices and give them the choice they are entitled to by law. Ultimately, cross-device tracking is a harbinger of the emerging Internet of Things, for which I envision intelligent personal assistants that help users navigate the increasing complexity of privacy notices and choices.
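To make the privacy metrics named in this abstract concrete, the following is a minimal sketch of how the k-anonymity level of a dataset can be computed with respect to a set of quasi-identifiers. The field names and toy records are invented for illustration and are not taken from the dissertation.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k-anonymity level of a dataset: the size of the
    smallest group of records sharing identical quasi-identifier values."""
    groups = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return min(groups.values())

# Toy dataset: ZIP code and age act as quasi-identifiers.
people = [
    {"zip": "13053", "age": 28, "condition": "heart disease"},
    {"zip": "13053", "age": 28, "condition": "viral infection"},
    {"zip": "13068", "age": 29, "condition": "viral infection"},
]

print(k_anonymity(people, ["zip", "age"]))  # → 1 (the third record is unique)
```

A dataset is k-anonymous when every record is indistinguishable from at least k-1 others on the quasi-identifiers; here the unique (zip, age) pair drives the level down to 1.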
Social informatics
5th International Conference, SocInfo 2013, Kyoto, Japan, November 25-27, 2013, Proceedings
Linked data wrapper curation: A platform perspective
Linked Data Wrappers (LDWs) turn Web APIs into RDF endpoints, leveraging the LOD cloud with current data. This potential is frequently undervalued, with LDWs regarded as mere by-products of larger endeavors, e.g. developing mashup applications. However, LDWs are mainly data-driven and not contaminated by application semantics, hence with an important potential for reuse. If LDWs could be decoupled from their breakout projects, this would increase the chances of LDWs becoming true RDF endpoints. But this vision is still under threat from LDW fragility upon API upgrades and the risk of unmaintained LDWs. LDW curation might help. Similar to dataset curation, LDW curation aims to clean up datasets but, in this case, the dataset is implicitly described by the LDW definition, and "stains" are not limited to those related to dataset quality but also include those related to the underlying API. This requires the existence of LDW platforms that leverage existing code repositories with additional functionalities catering for LDW definition, deployment, and curation. This dissertation contributes to this vision by: (1) identifying a set of requirements for LDW platforms; (2) instantiating these requirements in SYQL, a platform built upon Yahoo's YQL; (3) evaluating SYQL through a fully developed proof of concept; and (4) validating the extent to which this approach facilitates LDW curation.
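To illustrate what a Linked Data Wrapper does at its core, here is a minimal sketch that maps a JSON record, as it might be returned by a Web API, to RDF triples in N-Triples syntax. The URIs, field names, and record are hypothetical and are not taken from SYQL or YQL.

```python
def wrap_as_ntriples(api_record,
                     base_uri="http://example.org/resource/",
                     vocab="http://example.org/vocab#"):
    """Map one JSON record from a (hypothetical) Web API to RDF triples
    in N-Triples syntax, keyed by the record's 'id' field."""
    subject = f"<{base_uri}{api_record['id']}>"
    triples = []
    for key, value in api_record.items():
        if key == "id":
            continue  # the id is encoded in the subject URI, not as a triple
        triples.append(f'{subject} <{vocab}{key}> "{value}" .')
    return "\n".join(triples)

# A record as it might come back from a JSON Web API:
record = {"id": "42", "title": "Some dataset", "creator": "J. Doe"}
print(wrap_as_ntriples(record))
```

A real LDW would also fetch the record over HTTP and serve the triples at a SPARQL or Linked Data endpoint; the fragility discussed above arises because the key-to-predicate mapping silently breaks when the API changes its response schema.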
Representing archaeological uncertainty in cultural informatics
This thesis sets out to explore, describe, quantify, and visualise uncertainty in a cultural informatics context, with a focus on archaeological reconstructions. For quite some time, archaeologists and heritage experts have been criticising the often too-realistic appearance of three-dimensional reconstructions. They have been highlighting one of the unique features of archaeology: the information we have on our heritage will always be incomplete. This incompleteness should be reflected in digitised reconstructions of the past.
This criticism is the driving force behind this thesis. The research examines archaeological theory and the inferential process, and provides insight into computer visualisation. It describes how these two areas, archaeology and computer graphics, have formed a useful, but often tumultuous, relationship through the years.
By examining the uncertainty background of disciplines such as GIS, medicine, and law, the thesis postulates that archaeological visualisation, in order to mature, must move towards archaeological knowledge visualisation. Three sequential areas are proposed through this thesis for the initial exploration of archaeological uncertainty: identification, quantification, and modelling. The main contributions of the thesis lie in those three areas.
Firstly, through the innovative design, distribution, and analysis of a questionnaire, the thesis identifies the importance of uncertainty in archaeological interpretation and discovers potential preferences among different evidence types.
Secondly, the thesis uniquely analyses and evaluates, in relation to archaeological uncertainty, three different belief quantification models. The varying ways in which these mathematical models work are also evaluated through simulated experiments, and comparison of the results indicates significant convergence between the models.
Thirdly, a novel approach to the visualisation of archaeological uncertainty and evidence conflict is presented, influenced by information visualisation schemes. Lastly, suggestions for future semantic extensions to this research are presented through the design and development of new plugins for a search engine.
Annotations in Scholarly Editions and Research
The notion of annotation is associated in the Humanities and Information Sciences with different concepts that vary in coverage, application, and direction of impact, but also have conceptual parallels. This publication reflects on different practices and associated concepts of annotation, relates them to each other, and attempts to systematize their commonalities and divergences from an interdisciplinary perspective.
CLARIN
The book provides a comprehensive overview of the Common Language Resources and Technology Infrastructure – CLARIN – for the humanities. It covers a broad range of CLARIN language resources and services, its underlying technological infrastructure, the achievements of national consortia, and the challenges that CLARIN will tackle in the future. The book is published 10 years after the establishment of CLARIN as a European Research Infrastructure Consortium.
CLARIN. The infrastructure for language resources
CLARIN, the "Common Language Resources and Technology Infrastructure", has established itself as a major player in the field of research infrastructures for the humanities. This volume provides a comprehensive overview of the organization, its members, its goals and its functioning, as well as of the tools and resources hosted by the infrastructure. The many contributors representing various fields, from computer science to law to psychology, analyse a wide range of topics, such as the technology behind the CLARIN infrastructure, the use of CLARIN resources in diverse research projects, the achievements of selected national CLARIN consortia, and the challenges that CLARIN has faced and will face in the future.
The book will be published in 2022, 10 years after the establishment of CLARIN as a European Research Infrastructure Consortium by the European Commission (Decision 2012/136/EU).