47,084 research outputs found
Reasoning about Social Semantic Web Applications using String Similarity and Frame Logic
Social semantic Web, or Web 3.0, applications have gained major attention from academia and industry in recent times. Such applications try to take advantage of user-supplied metadata, using ideas from the semantic Web initiative, in order to provide better services. An open problem is the formalization of such metadata, due to its complex and often inconsistent nature. A possible solution to these inconsistencies is string similarity metrics, which are explained and analyzed. A study of their performance and applicability in a frame logic environment is conducted on the case of agents reasoning about multiple domains in TaOPis, a social semantic Web application for self-organizing communities. Results show that the NYSIIS metric yields surprisingly good results on Croatian words and phrases.
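The abstract above compares string similarity metrics (the phonetic NYSIIS encoding among them) for matching inconsistent user-supplied metadata. As a minimal sketch of the general idea, a normalized Levenshtein similarity (a simpler metric of the same family, not the NYSIIS algorithm studied in the paper) can be written as:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Normalize edit distance into a 0..1 similarity score."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

# two inconsistent spellings of the same tag (hypothetical example)
print(similarity("semantic web", "semantik web"))  # high score, near 1.0
```

A score close to 1.0 lets an application treat two variant spellings as the same piece of metadata.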
A Survey of Volunteered Open Geo-Knowledge Bases in the Semantic Web
Over the past decade, rapid advances in web technologies, coupled with
innovative models of spatial data collection and consumption, have generated a
robust growth in geo-referenced information, resulting in spatial information
overload. Increasing 'geographic intelligence' in traditional text-based
information retrieval has become a prominent approach to respond to this issue
and to fulfill users' spatial information needs. Numerous efforts in the
Semantic Geospatial Web, Volunteered Geographic Information (VGI), and the
Linking Open Data initiative have converged in a constellation of open
knowledge bases, freely available online. In this article, we survey these open
knowledge bases, focusing on their geospatial dimension. Particular attention
is devoted to the crucial issue of the quality of geo-knowledge bases, as well
as of crowdsourced data. A new knowledge base, the OpenStreetMap Semantic
Network, is outlined as our contribution to this area. Research directions in
information integration and Geographic Information Retrieval (GIR) are then
reviewed, with a critical discussion of their current limitations and future
prospects.
Knowledge Representation Concepts for Automated SLA Management
Outsourcing of complex IT infrastructure to IT service providers has
increased substantially during the past years. IT service providers must be
able to fulfil their service-quality commitments based upon predefined Service
Level Agreements (SLAs) with the service customer. They need to manage, execute
and maintain thousands of SLAs for different customers and different types of
services, which needs new levels of flexibility and automation not available
with the current technology. The complexity of contractual logic in SLAs
requires new forms of knowledge representation to automatically draw inferences
and execute contractual agreements. A logic-based approach provides several
advantages including automated rule chaining allowing for compact knowledge
representation as well as flexibility to adapt to rapidly changing business
requirements. We suggest adequate logical formalisms for representation and
enforcement of SLA rules and describe a proof-of-concept implementation. The
article describes selected formalisms of the ContractLog KR and their adequacy
for automated SLA management and presents results of experiments to demonstrate
flexibility and scalability of the approach.

Comment: Paschke, A. and Bichler, M.: Knowledge Representation Concepts for Automated SLA Management, Int. Journal of Decision Support Systems (DSS), submitted 19th March 200
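The contractual logic this abstract describes can be pictured, in a drastically simplified form, as machine-checkable rules over measured service metrics. The following is a hypothetical toy model for illustration, not the ContractLog formalism from the paper:

```python
from dataclasses import dataclass

@dataclass
class SLARule:
    metric: str       # name of the measured service metric
    threshold: float  # minimum acceptable value
    penalty: float    # amount owed when the threshold is violated

def total_penalty(rules, measurements):
    """Sum penalties for every rule whose measured value falls below
    its threshold in the billing period (hypothetical evaluation)."""
    return sum(r.penalty for r in rules
               if measurements.get(r.metric, 0.0) < r.threshold)

rules = [SLARule("availability_pct", 99.5, 500.0)]
measured = {"availability_pct": 99.1}
print(total_penalty(rules, measured))  # 500.0
```

A declarative rule representation like this is what makes automated chaining and bulk evaluation over thousands of contracts feasible.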
Evaluating Maintainability Prejudices with a Large-Scale Study of Open-Source Projects
Exaggeration or context changes can turn maintainability experience into
prejudice. For example, JavaScript is often seen as the least elegant language
and hence of the lowest maintainability. Such prejudice should not guide decisions
without prior empirical validation. We formulated 10 hypotheses about
maintainability based on prejudices and test them in a large set of open-source
projects (6,897 GitHub repositories, 402 million lines, 5 programming
languages). We operationalize maintainability with five static analysis
metrics. We found that JavaScript code is not worse than other code, Java code
shows higher maintainability than C# code and C code has longer methods than
other code. The quality of interface documentation is better in Java code than
in other code. Code developed by teams is not of higher maintainability, and
large code bases are not of lower maintainability. Projects with high
maintainability are not more popular or more often forked. Overall, most
hypotheses are not supported by open-source data.

Comment: 20 pages
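The five static analysis metrics used in the study are not listed in this abstract; as an illustration of how one such metric ("long methods") can be operationalized, a crude average function length for Python source might look like:

```python
import ast

def avg_function_length(source: str) -> float:
    """Average span, in lines, of function definitions in a Python
    module: a rough proxy for the 'long method' metric."""
    tree = ast.parse(source)
    lengths = [node.end_lineno - node.lineno + 1
               for node in ast.walk(tree)
               if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))]
    return sum(lengths) / len(lengths) if lengths else 0.0

sample = "def f():\n    return 1\n\ndef g():\n    x = 1\n    return x\n"
print(avg_function_length(sample))  # 2.5
```

Aggregating such a measure over millions of lines is what lets a prejudice like "C code has longer methods" be tested empirically.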
Metrics for Measuring Data Quality - Foundations for an Economic Oriented Management of Data Quality
The article develops metrics for an economically oriented management of data quality, focusing on two data quality dimensions: consistency and timeliness. To derive adequate metrics, several requirements are stated (e.g. normalisation, cardinality, adaptivity, interpretability). The authors then discuss existing approaches for measuring data quality and illustrate their weaknesses. Based upon these considerations, new metrics are developed for the data quality dimensions consistency and timeliness. These metrics have been applied in practice, and the results are illustrated in the case of a major German mobile services provider.
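A common way to make the timeliness dimension measurable (one form found in the data quality literature, not necessarily the exact metric developed in the article) is an exponential decay over the age of a stored value:

```python
import math

def timeliness(age_years: float, decline_rate: float) -> float:
    """Probability that a stored attribute value is still up to date,
    assuming values become outdated at a constant annual rate."""
    return math.exp(-decline_rate * age_years)

# e.g. customer addresses with an assumed 20% annual decline rate
fresh = timeliness(0.0, 0.2)   # 1.0: a brand-new value
stale = timeliness(2.0, 0.2)   # roughly 0.67 after two years
```

Such a metric satisfies the requirements the abstract lists: it is normalised to [0, 1], interpretable as a probability, and adaptable through the decline rate.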
An Approach to Transform Public Administration into SOA-based Organizations
Nowadays, Service-Oriented Architectures (SOA) is widely spread in private organizations. However, when transferring this knowledge to Public Administration, it is realized that it has not been transformed in terms
of its legal nature into organizations capable to operate under the SOA paradigm. This fact prevents public
administration bodies from offering the efficient services they have been provided by different boards of
governments. A high-level framework to perform this transformation is proposed. Taking it as starting
point, an instance of a SOA Target Meta-Model can be obtained by means of an iterative and incremental
process based on the analysis of imperatives and focused on the particular business context of each local public administration. This paper briefly presents a practical experience consisting in applying this process
to a Spanish regional public administration.Junta de Andalucía TIC-578
Using TREC for cross-comparison between classic IR and ontology-based search models at a Web scale
The construction of standard datasets and benchmarks to evaluate ontology-based search approaches and to compare them against baseline IR models is a major open problem in the semantic technologies community. In this paper we propose a novel evaluation benchmark for ontology-based IR models based on an adaptation of the well-known Cranfield paradigm (Cleverdon, 1967) traditionally used by the IR community. The proposed benchmark comprises: 1) a text document collection, 2) a set of queries and their corresponding document relevance judgments, and 3) a set of ontologies and knowledge bases covering the query topics. The document collection and the set of queries and judgments are taken from one of the most widely used datasets in the IR community, the TREC Web track. As a use case example we apply the proposed benchmark to compare a real ontology-based search model (Fernandez et al., 2008) against the best IR systems of the TREC 9 and TREC 2001 competitions. A deep analysis of the strengths and weaknesses of this benchmark and a discussion of how it can be used to evaluate other ontology-based search systems are also included at the end of the paper.
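Comparisons against TREC relevance judgments of the kind described above typically reduce to rank-based scores. A minimal sketch of one such score, precision at k, using hypothetical document identifiers:

```python
def precision_at_k(ranked_ids, relevant_ids, k=10):
    """Fraction of the top-k retrieved documents judged relevant."""
    top = ranked_ids[:k]
    return sum(1 for doc in top if doc in relevant_ids) / k

ranking = ["d7", "d2", "d9", "d4"]   # one system's output (hypothetical)
qrels = {"d7", "d4"}                 # judged-relevant set (hypothetical)
print(precision_at_k(ranking, qrels, k=4))  # 0.5
```

Because the score depends only on rankings and judgments, classic IR systems and ontology-based models can be compared on exactly the same footing.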