An introduction to Graph Data Management
A graph database is a database where the data structures for the schema
and/or instances are modeled as a (labeled)(directed) graph, or generalizations
of it, and where querying is expressed by graph-oriented operations and type
constructors. In this article we present the basic notions of graph databases,
give a historical overview of their main developments, and survey the main
current systems that implement them.
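The core notion above, a labeled directed graph queried through graph-oriented operations, can be sketched minimally as follows (the class and method names are illustrative, not taken from the article):

```python
# Minimal sketch of a labeled, directed graph with one graph-oriented
# query operation (illustrative names, not from the article).
class LabeledDigraph:
    def __init__(self):
        self.nodes = {}   # node id -> node label
        self.edges = []   # (source, edge label, target) triples

    def add_node(self, nid, label):
        self.nodes[nid] = label

    def add_edge(self, src, label, dst):
        self.edges.append((src, label, dst))

    def out_neighbors(self, nid, label=None):
        # Follow outgoing edges, optionally filtered by edge label.
        return [d for (s, l, d) in self.edges
                if s == nid and (label is None or l == label)]

g = LabeledDigraph()
g.add_node("alice", "Person")
g.add_node("bob", "Person")
g.add_edge("alice", "knows", "bob")
print(g.out_neighbors("alice", "knows"))  # ['bob']
```

Real graph database systems build richer operators (path traversal, pattern matching) on essentially this model.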
Social Learning Systems: The Design of Evolutionary, Highly Scalable, Socially Curated Knowledge Systems
In recent times, great strides have been made towards the advancement of automated reasoning and knowledge management applications, along with their associated methodologies. The introduction of the World Wide Web piqued academics’ interest in harnessing the power of linked, online documents for the purpose of developing machine learning corpora, providing dynamic knowledge bases for question answering systems, fueling automated entity extraction applications, and performing graph analytic evaluations, such as uncovering the inherent structural semantics of linked pages. Even more recently, substantial attention in the wider computer science and information systems disciplines has been focused on the evolving study of social computing phenomena, primarily those associated with the use, development, and analysis of online social networks (OSNs).
This work followed an independent effort to develop an evolutionary knowledge management system, and outlines a model for integrating the wisdom of the crowd into the process of collecting, analyzing, and curating data for dynamic knowledge systems. Throughout, we examine how relational data modeling, automated reasoning, crowdsourcing, and social curation techniques have been exploited to extend the utility of web-based, transactional knowledge management systems, creating a new breed of knowledge-based system in the process: the Social Learning System (SLS).
The key questions this work has explored by way of elucidating the SLS model include considerations for 1) how it is possible to unify Web and OSN mining techniques to conform to a versatile, structured, and computationally efficient ontological framework, and 2) how large-scale knowledge projects may incorporate tiered collaborative editing systems in an effort to elicit knowledge contributions and curation activities from a diverse, participatory audience.
Graph-based discovery of ontology change patterns
Ontologies can support a variety of purposes, ranging from capturing conceptual knowledge to the organisation of digital content and information. However, information systems are always subject to change and ontology change management can pose challenges. We investigate ontology change representation and discovery of change patterns.
Ontology changes are formalised as graph-based change logs. We use attributed graphs, which are typed over a generic graph with node and edge attribution. We analyse ontology change logs, represented as graphs, and identify frequent change sequences. Such sequences serve as a reference for discovering reusable, often domain-specific and usage-driven, change patterns. We describe the pattern discovery algorithms and measure their performance using experimental results.
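The idea of mining frequent change sequences from a change log can be illustrated with a simple support-counting sketch (operation names and thresholds here are hypothetical; the paper's actual algorithms work over attributed graphs, not flat lists):

```python
from collections import Counter

def frequent_change_sequences(logs, length=2, min_support=2):
    # Count contiguous subsequences of change operations across all
    # logs and keep those meeting the minimum support threshold.
    counts = Counter()
    for log in logs:
        for i in range(len(log) - length + 1):
            counts[tuple(log[i:i + length])] += 1
    return {seq: n for seq, n in counts.items() if n >= min_support}

logs = [
    ["add_class", "add_property", "rename_class"],
    ["add_class", "add_property", "delete_class"],
]
print(frequent_change_sequences(logs))
# {('add_class', 'add_property'): 2}
```

A sequence that recurs across logs, such as adding a class followed by adding a property, is a candidate for a reusable change pattern.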
Leachate treatment by conventional coagulation, electrocoagulation and two-stage coagulation (conventional coagulation and electrocoagulation)
Leachate is widely studied because it is highly polluted and difficult to treat. Leachate treatment commonly involves advanced, complicated, and costly processes. Conventional coagulation is widely used in wastewater treatment, but sludge production is its biggest constraint. Electrocoagulation is an alternative to the conventional method: it has the same application but produces less sludge and requires simple equipment. Thus, combining conventional coagulation and electrocoagulation can improve the efficiency of the coagulation process in leachate treatment. This article focuses on the efficiency of the single and combined treatments, as well as the improvement achieved by the combined treatment. Based on the review, the reduction in current density and coagulant dose was perceptible: reductions of as much as 50% in current density, treatment duration, and coagulant dose can be obtained with the combined treatment. The combined treatment thus reduces both cost and treatment duration. Hence, it offers an alternative technique for the removal of pollutants from landfill leachate.
Pattern Reification as the Basis for Description-Driven Systems
One of the main factors driving object-oriented software development for
information systems is the requirement for systems to be tolerant to change. To
address this issue in designing systems, this paper proposes a pattern-based,
object-oriented, description-driven system (DDS) architecture as an extension
to the standard UML four-layer meta-model. A DDS architecture is proposed in
which aspects of both static and dynamic systems behavior can be captured via
descriptive models and meta-models. The proposed architecture embodies four
main elements - firstly, the adoption of a multi-layered meta-modeling
architecture and reflective meta-level architecture, secondly the
identification of four data modeling relationships that can be made explicit
such that they can be modified dynamically, thirdly the identification of five
design patterns which have emerged from practice and have proved essential in
providing reusable building blocks for data management, and fourthly the
encoding of the structural properties of the five design patterns by means of
one fundamental pattern, the Graph pattern. A practical example of this
philosophy, the CRISTAL project, is used to demonstrate the use of
description-driven data objects to handle system evolution.
Comment: 20 pages, 10 figures
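The abstract's central claim, that the structural properties of more specific design patterns can be encoded by one fundamental Graph pattern, can be roughly illustrated as follows (the class and constraint here are hypothetical, not taken from the CRISTAL design):

```python
# Rough illustration: one generic Graph structure can encode more
# specific structural patterns as constrained graphs (hypothetical
# names, not from the CRISTAL design).
class Graph:
    def __init__(self):
        self.items = set()
        self.links = set()   # (parent item, child item) pairs

    def link(self, a, b):
        self.items.update((a, b))
        self.links.add((a, b))

    def is_tree(self):
        # A Composite-like structure is a Graph restricted so that
        # every item has at most one incoming link.
        targets = [b for (_, b) in self.links]
        return len(targets) == len(set(targets))

g = Graph()
g.link("assembly", "part_a")
g.link("assembly", "part_b")
print(g.is_tree())  # True
```

Encoding patterns this way lets a description-driven system store and modify their structure as ordinary data.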
The Influence of Impression Management in Sustainability Reports on Company’s Performance
This study investigates the effect of impression management on company performance. Impression management was measured by selectivity, distortion, and narcissism, and company performance was measured by ROA and Tobin’s Q. The study also used leverage and size as control variables. Multiple regression analysis was applied to all companies listed on the Indonesia Stock Exchange during 2011-2015. The results show that selectivity has a positive influence on financial and market performance, while distortion has a negative effect on both. Narcissism, however, has no significant influence on financial or market performance. Thus, impression management has a significant effect on company performance. The findings indicate that companies displaying more favorable graphs tend to improve their reported performance, and that companies with poor performance tend to carry greater risk and, as a result, tend to distort their graphs. This research can help stakeholders assess whether a company’s performance is good or bad. Further studies may increase the sample size by using annual reports as the basis for assessing impression management.
The essence of P2P: A reference architecture for overlay networks
The success of the P2P idea has created a huge diversity
of approaches, among which overlay networks, for example,
Gnutella, Kazaa, Chord, Pastry, Tapestry, P-Grid, or DKS,
have received specific attention from both developers and
researchers. A wide variety of algorithms, data structures,
and architectures have been proposed. The terminologies
and abstractions used, however, have become quite inconsistent, since the
P2P paradigm has attracted people from many different communities, e.g.,
networking, databases, distributed systems, graph theory, complexity
theory, biology, etc. In this paper we propose a reference model for
overlay networks which is capable of modeling different approaches in
this domain in a generic manner. It is intended to allow researchers and
users to assess the properties of concrete systems, to establish a common
vocabulary for scientific discussion, to facilitate the qualitative
comparison of the systems, and to serve as the basis for defining a
standardized API to make overlay networks interoperable.
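The kind of generic overlay behavior such a reference model must capture can be sketched with a minimal structured-overlay routing rule, here a Chord-style identifier ring (this sketch is illustrative and is not the paper's actual model or API):

```python
import hashlib

def ring_id(key, bits=16):
    # Hash a key onto a 2**bits identifier ring.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (1 << bits)

class Ring:
    # Minimal structured-overlay sketch: each key is routed to the
    # first node clockwise on the identifier ring (a Chord-style rule).
    def __init__(self, nodes):
        self.ids = sorted(ring_id(n) for n in nodes)
        self.by_id = {ring_id(n): n for n in nodes}

    def route(self, key):
        k = ring_id(key)
        for nid in self.ids:
            if nid >= k:
                return self.by_id[nid]
        return self.by_id[self.ids[0]]   # wrap around the ring

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.route("some-key"))
```

Unstructured overlays like Gnutella route very differently (by flooding), which is exactly the diversity a common reference model has to abstract over.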