Reason Maintenance - Conceptual Framework
This paper describes the conceptual framework for reason maintenance developed as part of WP2.
10041 Abstracts Collection – Perspectives Workshop: Digital Social Networks
From 24.01.2010 to 29.01.2010, the Dagstuhl Seminar 10041 “Perspectives Workshop: Digital Social Networks” was held in Schloss Dagstuhl – Leibniz Center for Informatics.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
Connected Information Management
Society is currently inundated with more information than ever, making efficient information management
a necessity. Alas, most current information management suffers from several
kinds of disconnectedness: applications partition data into segregated islands;
small notes do not fit into traditional application categories; navigating the data differs
for each kind of data; and data is available either on a particular computer or only online,
but rarely both. Connected information management (CoIM) is an approach to information
management that avoids these kinds of disconnectedness. The core idea of
CoIM is to keep all information in a central repository, with generic means of organization
such as tagging. The heterogeneity of the data is taken into account by offering
specialized editors.
The central repository eliminates the islands of application-specific data and is formally
grounded by a CoIM model. The foundation for structured data is an RDF repository.
The RDF editing meta-model (REMM) enables form-based editing of this data,
similar to database applications such as MS Access. Further kinds of data are supported
by extending RDF, as follows. Wiki text is stored as RDF and can both contain
structured text and be combined with structured data. Files are also supported by the
CoIM model and are kept externally. Notes can be quickly captured and annotated with
meta-data. Generic means for organization and navigation apply to all kinds of data.
Ubiquitous availability of data is ensured via two CoIM implementations, the web application
HYENA/Web and the desktop application HYENA/Eclipse. All data can be
synchronized between these applications. The applications were used to validate the
CoIM ideas.
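The central-repository idea can be illustrated with a minimal sketch: every kind of information is reduced to triples, so generic organization such as tagging applies uniformly to notes, files, and structured data alike. This is a toy illustration under assumed names, not HYENA's actual code.

```python
# Minimal sketch of a CoIM-style central repository: all information is
# stored as RDF-like (subject, predicate, object) triples, and generic
# organization (tagging) works the same way for every kind of data.
# Illustrative only; names and structure are assumptions, not HYENA's API.

class Repository:
    def __init__(self):
        self.triples = set()  # {(subject, predicate, object)}

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def tag(self, resource, tag):
        # A tag is just another triple, so tagging applies to wiki pages,
        # externally kept files, and quick notes alike.
        self.add(resource, "hasTag", tag)

    def tagged(self, tag):
        return {s for (s, p, o) in self.triples
                if p == "hasTag" and o == tag}

repo = Repository()
repo.add("note:42", "content", "Buy milk")        # a quickly captured note
repo.add("file:report.pdf", "storedAt", "/tmp")   # a file kept externally
repo.tag("note:42", "todo")
repo.tag("file:report.pdf", "todo")
print(sorted(repo.tagged("todo")))  # ['file:report.pdf', 'note:42']
```

Because heterogeneous items share one repository and one tagging mechanism, navigation no longer differs per kind of data.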
Constructive Reasoning for Semantic Wikis
One of the main design goals of social software, such as wikis, is to
support and facilitate interaction and collaboration. This dissertation
explores challenges that arise from extending social software with
advanced facilities such as reasoning and semantic annotations and
presents tools in the form of a conceptual model, structured tags, a rule
language, and a set of novel forward chaining and reason maintenance
methods for processing such rules that help to overcome the
challenges.
Wikis and semantic wikis have usually been developed in an ad hoc
manner, without much thought about the underlying concepts. A conceptual
model suitable for a semantic wiki that takes advanced features
such as annotations and reasoning into account is proposed. Moreover,
so-called structured tags are proposed as a semi-formal knowledge
representation step between informal and formal annotations.
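The idea of a semi-formal step between plain tags and formal annotations can be sketched as follows. The dissertation's actual structured-tag syntax is not given in this abstract; the colon-separated grouping notation below is purely a hypothetical stand-in.

```python
# Hypothetical illustration of "structured tags": a tag may carry grouping
# and value structure, sitting between an informal plain tag and a formal
# annotation. The colon-separated syntax here is assumed for illustration
# and may differ from the dissertation's actual notation.

def parse_tag(tag):
    """Parse 'trip:destination:Lisbon' into ('trip', 'destination', 'Lisbon');
    a plain tag like 'holiday' stays a 1-tuple."""
    return tuple(tag.split(":"))

tags = ["holiday", "trip:destination:Lisbon", "trip:year:2010"]
parsed = [parse_tag(t) for t in tags]

# Plain tags remain usable as-is, while structured ones could later be
# promoted to formal annotations (e.g. RDF triples) without re-entry.
structured = [t for t in parsed if len(t) > 1]
print(structured)  # [('trip', 'destination', 'Lisbon'), ('trip', 'year', '2010')]
```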
The focus of rule languages for the Semantic Web has been predominantly
on expert users and on the interplay of rule languages
and ontologies. KWRL, the KiWi Rule Language, is proposed as a
rule language for a semantic wiki. It is easily understandable for
users because it is aware of the conceptual model of a wiki and is
inconsistency-tolerant, and it can be evaluated efficiently because it
builds upon Datalog concepts.
The requirement for fast response times of interactive software
translates in our work to bottom-up evaluation (materialization) of
rules (views) ahead of time – that is when rules or data change, not
when they are queried. Materialized views have to be updated when
data or rules change. While incremental view maintenance was intensively
studied in the past and literature on the subject is abundant,
the existing methods have surprisingly many disadvantages: they
do not provide all the information desirable for explaining derived
information; they require the evaluation of possibly substantially larger
Datalog programs with negation; they recompute the whole extension
of a predicate even if only a small part of it is affected by a
change; and they require adaptation to handle general rule changes.
A particular contribution of this dissertation consists in a set of
forward chaining and reason maintenance methods with a simple declarative
description that are efficient and derive and maintain information
necessary for reason maintenance and explanation. The reasoning
methods and most of the reason maintenance methods are described
in terms of a set of extended immediate consequence operators the
properties of which are proven in the classical logic programming
framework. In contrast to existing methods, the reason maintenance methods in this dissertation work by evaluating the original Datalog
program – they do not introduce negation if it is not present in the input
program – and only the affected part of a predicate’s extension is
recomputed. Moreover, our methods directly handle changes in both
data and rules; a rule change does not need to be handled as a special
case.
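The bottom-up materialization ahead of query time that this work builds on can be sketched as a semi-naive fixpoint for the classic recursive program `path(X,Y) :- edge(X,Y). path(X,Z) :- path(X,Y), edge(Y,Z).` This is a simplified textbook sketch, not the dissertation's actual operators.

```python
# Sketch of semi-naive bottom-up evaluation (materialization) for
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- path(X, Y), edge(Y, Z).
# Each round joins only the facts new in the previous round (the "delta"),
# so no derivation is repeated. Simplified illustration, not the
# dissertation's extended immediate consequence operators.

def materialize_paths(edges):
    path = set(edges)    # first rule: every edge is a path
    delta = set(edges)   # facts newly derived in the previous round
    while delta:
        # second rule, semi-naive: join only the delta with edge
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = new - path   # keep only genuinely new facts
        path |= delta
    return path

edges = {(1, 2), (2, 3), (3, 4)}
print(sorted(materialize_paths(edges)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Because evaluation happens when data or rules change rather than at query time, interactive software can answer queries from the materialized view immediately.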
A framework of support graphs, a data structure inspired by justification
graphs of classical reason maintenance, is proposed. Support
graphs enable a unified description and a formal comparison of the
various reasoning and reason maintenance methods and define a notion
of a derivation such that the number of derivations of an atom is
always finite even in the recursive Datalog case.
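The role of supports in reason maintenance can be illustrated for the transitive-closure program `path(X,Y) :- edge(X,Y). path(X,Z) :- path(X,Y), edge(Y,Z).`: each derivation step records the facts that justify a derived fact, and deleting a base fact withdraws only the facts left without any support. This is a simplified illustration, not the dissertation's exact support-graph data structure.

```python
# Sketch of a support graph for
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- path(X, Y), edge(Y, Z).
# Every derived fact records its supports (the sets of facts justifying
# it). Because each path atom passes through the delta exactly once, the
# number of recorded derivations stays finite even in this recursive
# program. Illustrative only, not the dissertation's data structure.

def build_supports(edges):
    """Materialize path/2, recording a list of supports per atom.
    Base facts carry the marker "base"; derived supports are frozensets."""
    supports = {("edge", x, y): ["base"] for (x, y) in edges}
    for (x, y) in edges:
        supports.setdefault(("path", x, y), []).append(
            frozenset({("edge", x, y)}))
    delta = {("path", x, y) for (x, y) in edges}
    while delta:
        new = set()
        for (_, x, y) in delta:
            for (y2, z) in edges:
                if y == y2:
                    atom = ("path", x, z)
                    sup = frozenset({("path", x, y), ("edge", y, z)})
                    known = atom in supports
                    lst = supports.setdefault(atom, [])
                    if sup not in lst:
                        lst.append(sup)
                    if not known:
                        new.add(atom)
        delta = new
    return supports

def retract(supports, dead):
    """Delete `dead`, then withdraw every atom left without any support.
    Only the affected part of the extension is touched."""
    queue = [dead]
    while queue:
        atom = queue.pop()
        if atom not in supports:
            continue
        del supports[atom]
        for other in list(supports):
            kept = [s for s in supports[other]
                    if s == "base" or atom not in s]
            if kept:
                supports[other] = kept
            else:
                queue.append(other)

sup = build_supports({("a", "b"), ("b", "c")})
retract(sup, ("edge", "a", "b"))
print(sorted(sup))  # [('edge', 'b', 'c'), ('path', 'b', 'c')]
```

Deleting edge(a,b) withdraws path(a,b) and path(a,c) but leaves path(b,c) untouched, illustrating why only the affected part of a predicate's extension needs recomputation.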
A practical approach to implementing reasoning, reason maintenance,
and explanation in the KiWi semantic platform is also investigated. It
is shown how an implementation may benefit from using a graph
database instead of, or along with, a relational database.
Hybrid Hashtags: #YouKnowYoureAKiwiWhen Your Tweet Contains Māori and English
Twitter constitutes a rich resource for investigating language contact phenomena. In this paper, we report findings from the analysis of a large-scale diachronic corpus of over one million tweets containing loanwords from te reo Māori, the indigenous language of New Zealand, into (primarily New Zealand) English. Our analysis focuses on hashtags comprising mixed-language resources (which we term hybrid hashtags), bringing together descriptive linguistic tools (investigating the length, word class, and semantic domains of the hashtags) and quantitative methods (Random Forests and regression analysis). Our work has implications for language change and the study of loanwords (we argue that hybrid hashtags can be linked to loanword entrenchment) and for the study of language on social media (we challenge proposals of hashtags as “words” and show that hashtags have a dual discourse role: a micro-function within the immediate linguistic context in which they occur and a macro-function within the tweet as a whole).
How to achieve high customer satisfaction in Sabancı University Information Center
Sabancı University is a young private university, which started providing education in 1999 in Istanbul. A “Search Conference” had been organized to find out “what kind of a university the country needed,” and the university's structure was established on this understanding. At the first stage, the vision, the mission, and the design of the university were completed, the administrative infrastructure was founded, and the technology systems were selected. From the days of its foundation, the planning of the information services and facilities was one of the main issues of the project. The university, which aims to become a world university, became a member of the European Foundation for Quality Management (EFQM) during its foundation stage on account of its activities in that period.
A “Student and Staff Tendency Survey” implemented in 2001 indicated that the Information Center was a strong side of the university. The Center's statistics covering the period 1999-2007 also indicated that the targets set in the Center's strategic planning were achieved. In 2007, a user satisfaction survey was conducted to evaluate the conformity of the services and facilities, to identify strong and weak areas, opportunities, and threats through comparison and SWOT analysis, and to set up the 2007-2011 five-year strategic plan and operational activity plan. The survey indicated that 95% of the participants were satisfied with the Center in general. In addition, usage statistics for the years 1998-2009 indicated that use of the services and facilities of the Information Center increased from year to year. Moreover, the results of the surveys conducted after the orientation programs show that customer satisfaction is very high.
We believe the following are the reasons for this high user satisfaction: the Center has a user- and process-focused proactive management and a learning-organization structure, it operates a suggestion system, and it continuously benchmarks itself against its competitors and observes management and technological developments worldwide. This paper shares our practices and plans concerning the high user satisfaction rate, customer relationship management activities, and future planning.
Semantic Mashup with the Online IDE WikiNEXT
Demo session. The proposed demonstration queries DBpedia.org, retrieves the results, and uses them to populate wiki pages with semantic annotations using RDFa Lite. These annotations are persisted in an RDF store, and we will show how this data can be reused by other applications, e.g. for a semantic mashup that displays all collected metadata about cities on a single map page. The demonstration has been developed using WikiNEXT, a mix between a semantic wiki and a web-based IDE. The tool is available online and is open source; screencasts are available on YouTube (search for "WikiNext").
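The kind of DBpedia request such a mashup performs can be sketched as follows: build a SPARQL query for city metadata, then read coordinates out of a standard SPARQL 1.1 JSON results document. The HTTP call to the DBpedia endpoint is omitted so the sketch runs offline, and the sample response below is illustrative, not real DBpedia data or WikiNEXT's actual code.

```python
import json

# Sketch of a DBpedia-style SPARQL request for a city map mashup.
# The network call is omitted; SAMPLE_RESPONSE mimics the standard
# SPARQL 1.1 JSON results format and is illustrative data only.

QUERY = """
SELECT ?city ?lat ?long WHERE {
  ?city a dbo:City ;
        geo:lat ?lat ;
        geo:long ?long .
} LIMIT 10
"""

SAMPLE_RESPONSE = json.dumps({
    "results": {"bindings": [
        {"city": {"value": "http://dbpedia.org/resource/ExampleCity"},
         "lat": {"value": "48.1"},
         "long": {"value": "11.6"}},
    ]}
})

def city_coordinates(response_text):
    """Extract (city URI, lat, long) tuples for plotting on a map page."""
    bindings = json.loads(response_text)["results"]["bindings"]
    return [(b["city"]["value"],
             float(b["lat"]["value"]),
             float(b["long"]["value"]))
            for b in bindings]

print(city_coordinates(SAMPLE_RESPONSE))
```

Each extracted tuple can then be rendered as a map marker and the underlying triples persisted as RDFa annotations in the wiki page.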
- …