Empirical analysis of impacts of instance-driven changes in ontologies
Changes in the characterization of instances in digital content are one of the rationales for changing or evolving the ontologies which support a domain. Such changes can impact one or more interrelated ontologies. Before implementing changes, their impact on the target ontology, other dependent ontologies, and dependent systems should be analysed. We investigate three concerns in determining the impacts of changes in ontologies: representation of changes to ensure minimum impact, impact determination, and integrity determination. Key elements of our solution are the operationalization of change operations to minimize impacts, a parameterization approach for the determination of impacts, a categorization scheme for identified impacts, and a prioritization technique for change operations based on the severity of impacts.
Semantic Stability in Social Tagging Streams
One potential disadvantage of social tagging systems is that due to the lack
of a centralized vocabulary, a crowd of users may never manage to reach a
consensus on the description of resources (e.g., books, users or songs) on the
Web. Yet, previous research has provided interesting evidence that the tag
distributions of resources may become semantically stable over time as more and
more users tag them. At the same time, previous work has raised an array of new
questions such as: (i) How can we assess the semantic stability of social
tagging systems in a robust and methodical way? (ii) Does semantic
stabilization of tags vary across different social tagging systems and
ultimately, (iii) what are the factors that can explain semantic stabilization
in such systems? In this work we tackle these questions by (i) presenting a
novel and robust method which overcomes a number of limitations in existing
methods, (ii) empirically investigating semantic stabilization processes in a
wide range of social tagging systems with distinct domains and properties and
(iii) detecting potential causes for semantic stabilization, specifically
imitation behavior, shared background knowledge and intrinsic properties of
natural language. Our results show that tagging streams which are generated by
a combination of imitation dynamics and shared background knowledge exhibit
faster and higher semantic stability than tagging streams which are generated
via imitation dynamics or natural language streams alone.
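The notion of semantic stability described above can be illustrated with a toy measure (not the paper's actual method): compare the top-ranked tags of a resource's tag distribution at an early point of its tagging stream with the top ranks later on. When the top ranks stop changing as more users tag the resource, the distribution has, in this simplified sense, stabilized. All names and data below are illustrative.

```python
from collections import Counter

def rank_overlap(stream_a, stream_b, k=3):
    """Fraction of shared tags among the top-k ranks of two tag streams."""
    top_a = {t for t, _ in Counter(stream_a).most_common(k)}
    top_b = {t for t, _ in Counter(stream_b).most_common(k)}
    return len(top_a & top_b) / k

# A hypothetical tagging stream for one resource: early prefix vs. full stream.
stream = ["web", "semantic", "web", "tags", "web", "semantic",
          "folksonomy", "web", "semantic", "tags", "web", "tags"]
early = stream[:6]
print(rank_overlap(early, stream))  # high overlap suggests stabilization
```

A more robust variant, as the abstract hints, would control for stream length and tag-frequency skew rather than comparing raw top-k ranks.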
DeltaImpactFinder: Assessing Semantic Merge Conflicts with Dependency Analysis
In software development, version control systems (VCS) provide branching and
merging support tools. Such tools are popular among developers to concurrently
change a code-base in separate lines and reconcile their changes automatically
afterwards. However, two changes that are each correct in isolation can
introduce bugs when merged together; we call this kind of bug a semantic merge
conflict.
Change impact analysis (CIA) aims at estimating the effects of a change in a
codebase. In this paper, we propose to detect semantic merge conflicts using
CIA. On a merge, DELTAIMPACTFINDER analyzes and compares the impact of a change
in its origin and destination branches. We call the difference between these
two impacts the delta-impact. If the delta-impact is empty, then there is no
indicator of a semantic merge conflict and the merge can continue
automatically. Otherwise, the delta-impact contains the sources of possible
conflicts.
Comment: International Workshop on Smalltalk Technologies 2015, Jul 2015, Brescia, Italy.
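The delta-impact idea can be sketched as a set difference over impact sets: compute the entities transitively affected by the same change in each branch, and compare. The function names and dependency maps below are illustrative, not DELTAIMPACTFINDER's actual API.

```python
def impact(changed, dependencies):
    """Transitive closure of entities affected by the changed entities.

    `dependencies` maps an entity to the set of entities that depend on it.
    """
    affected, frontier = set(changed), list(changed)
    while frontier:
        for dep in dependencies.get(frontier.pop(), ()):
            if dep not in affected:
                affected.add(dep)
                frontier.append(dep)
    return affected

def delta_impact(changed, origin_deps, destination_deps):
    """Entities whose impact differs between origin and destination branch."""
    return impact(changed, origin_deps) ^ impact(changed, destination_deps)

# The destination branch added a caller of `parse`, so the same change now
# impacts more code than in the origin: a potential semantic merge conflict.
origin = {"parse": {"compile"}}
destination = {"parse": {"compile", "lint"}}
print(delta_impact({"parse"}, origin, destination))  # {'lint'}
```

An empty delta-impact means both branches agree on what the change affects, so the merge can proceed automatically, matching the decision rule in the abstract.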
Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure
Big data research has attracted great attention in science, technology,
industry and society. It is developing with the evolving scientific paradigm,
the fourth industrial revolution, and the transformational innovation of
technologies. However, its nature and fundamental challenges are not yet well
understood, and a methodology of its own has not yet been formed. This paper explores
and answers the following questions: What is big data? What are the basic
methods for representing, managing and analyzing big data? What is the
relationship between big data and knowledge? Can we find a mapping from big
data into knowledge space? What kind of infrastructure is required to support
not only big data management and analysis but also knowledge discovery, sharing
and management? What is the relationship between big data and science paradigm?
What is the nature and fundamental challenge of big data computing? A
multi-dimensional perspective is presented toward a methodology of big data
computing.
Comment: 59 pages.
Conceptual modelling and the quality of ontologies: A comparison between object-role modelling and the object paradigm
Ontologies are key enablers for sharing precise and machine-understandable semantics among different applications and parties. Yet, for ontologies to meet these expectations, their quality must be of a good standard. The quality of an ontology is strongly based on the design method employed. This paper addresses the design problems related to the modelling of ontologies, with specific concentration on the issues related to the quality of the conceptualisations produced. The paper aims
to demonstrate the impact of the modelling paradigm adopted on the quality of ontological models and, consequently, the potential impact that such a decision can have in relation to the development of
software applications. To this aim, an ontology conceptualised using the Object Role Modelling (ORM) approach is re-engineered into one modelled on the basis of the Object Paradigm (OP). Next, the two ontologies are analytically compared using specified quality criteria. The comparison highlights that using the OP for ontology conceptualisation can yield more expressive, reusable, objective and temporal ontologies than those conceptualised on the basis of the ORM approach.
From Linked Data to Relevant Data -- Time is the Essence
The Semantic Web initiative puts emphasis not primarily on putting data on
the Web, but rather on creating links in a way that both humans and machines
can explore the Web of data. When such users access the Web, they leave a trail
as Web servers maintain a history of requests. Web usage mining approaches have
been studied since the beginning of the Web given the log's huge potential for
purposes such as resource annotation, personalization, forecasting etc.
However, the impact of any such efforts has not really gone beyond generating
statistics detailing who, when, and how Web pages maintained by a Web server
were visited.Comment: 1st International Workshop on Usage Analysis and the Web of Data
(USEWOD2011) in the 20th International World Wide Web Conference (WWW2011),
Hyderabad, India, March 28th, 201
Stealthy Deception Attacks Against SCADA Systems
SCADA protocols for Industrial Control Systems (ICS) are vulnerable to
network attacks such as session hijacking. Hence, research focuses on network
anomaly detection based on meta-data (message sizes, timing, command
sequence), or on the state values of the physical process. In this work we
present a class of semantic network-based attacks against SCADA systems that
are undetectable by the above mentioned anomaly detection. After hijacking the
communication channels between the Human Machine Interface (HMI) and
Programmable Logic Controllers (PLCs), our attacks cause the HMI to present a
fake view of the industrial process, deceiving the human operator into taking
manual actions. Our most advanced attack also manipulates the messages
generated by the operator's actions, reversing their semantic meaning while
causing the HMI to present a view that is consistent with the attempted human
actions. The attacks are totally stealthy because the message sizes and timing,
the command sequences, and the data values of the ICS's state all remain
legitimate.
We implemented and tested several attack scenarios in the test lab of our
local electric company, against a real HMI and real PLCs, separated by a
commercial-grade firewall. We developed a real-time security assessment tool,
that can simultaneously manipulate the communication to multiple PLCs and cause
the HMI to display a coherent system-wide fake view. Our tool is configured
with message-manipulating rules written in an ICS Attack Markup Language (IAML)
we designed, which may be of independent interest. Our semantic attacks all
successfully fooled the operator and brought the system to states of blackout
and possible equipment damage.
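The stealthiness property claimed above can be illustrated with a toy sketch: a manipulated protocol reply keeps the same length, header and command code as the genuine one, so a detector watching only meta-data (sizes, timing, command sequence) sees nothing unusual. The byte layout and function below are illustrative, not the paper's IAML or any real SCADA protocol.

```python
import struct

def fake_view(reply: bytes, benign_value: int) -> bytes:
    """Replace a trailing 2-byte register value, keeping length and header intact."""
    header = reply[:-2]
    return header + struct.pack(">H", benign_value)

genuine = b"\x01\x03\x02" + struct.pack(">H", 9000)  # alarming sensor reading
spoofed = fake_view(genuine, 120)                    # normal-looking reading

# A size/sequence-based detector observes identical meta-data:
assert len(spoofed) == len(genuine)
assert spoofed[:3] == genuine[:3]
```

Only the payload value differs, which is exactly the channel the abstract's semantic attacks exploit and that state-value-aware detection would need to cover.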