A comparative study of the AHP and TOPSIS methods for implementing load shedding scheme in a pulp mill system
Advances in technology have encouraged the design and creation of useful equipment and devices that users can fully exploit in various applications. A pulp mill is a heavy industry that consumes a large amount of electricity in its production, so any equipment malfunction can cause major losses to the company. In particular, the breakdown of one generator can overload the remaining generators. In that situation, loads are shed in sequence until the remaining generators can supply the other loads; once the fault has been repaired, the load shedding scheme can be deactivated. Load shedding is therefore the best way to handle this condition: selected loads are shed to protect the generators from damage. Multi-Criteria Decision Making (MCDM) can be applied to determine the load shedding scheme in an electric power system. In this thesis, two methods, the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), are introduced and applied. A series of analyses is conducted, and the results show that TOPSIS is the better MCDM method for the load shedding scheme in the pulp mill system, since it achieves the higher percentage effectiveness of load shedding of the two methods. The results of applying AHP and TOPSIS to the pulp mill system are very promising.
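As a rough illustration of how a TOPSIS ranking of the kind used in the thesis works, the sketch below scores alternatives by their relative closeness to an ideal solution. The decision matrix, weights, and criteria directions are invented for illustration and are not taken from the thesis.

```python
from math import sqrt

def topsis(matrix, weights, benefit):
    # matrix: rows = alternatives, cols = criteria; benefit[j] True if higher is better.
    ncol = len(weights)
    # Vector normalization of each criterion column, then weighting.
    norms = [sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncol)]
    V = [[row[j] / norms[j] * weights[j] for j in range(ncol)] for row in matrix]
    # Ideal and anti-ideal points, depending on criterion direction.
    ideal = [max(r[j] for r in V) if benefit[j] else min(r[j] for r in V) for j in range(ncol)]
    anti  = [min(r[j] for r in V) if benefit[j] else max(r[j] for r in V) for j in range(ncol)]
    scores = []
    for row in V:
        d_pos = sqrt(sum((row[j] - ideal[j]) ** 2 for j in range(ncol)))  # distance to ideal
        d_neg = sqrt(sum((row[j] - anti[j]) ** 2 for j in range(ncol)))   # distance to anti-ideal
        scores.append(d_neg / (d_pos + d_neg))  # relative closeness in [0, 1]
    return scores

# Hypothetical decision matrix: three load shedding plans scored on three criteria
# (e.g. shed load in kW, lower is better; two benefit criteria, higher is better).
scores = topsis([[250, 16, 12], [200, 16, 8], [300, 32, 16]],
                weights=[0.5, 0.3, 0.2],
                benefit=[False, True, True])
print(scores)
```

The alternative with the largest closeness score is ranked best; the "percentage effectiveness" comparison in the thesis would be a separate evaluation on top of such a ranking.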
Folks in Folksonomies: Social Link Prediction from Shared Metadata
Web 2.0 applications have attracted a considerable amount of attention
because their open-ended nature allows users to create light-weight semantic
scaffolding to organize and share content. To date, the interplay of the social
and semantic components of social media has been only partially explored. Here
we focus on Flickr and Last.fm, two social media systems in which we can relate
the tagging activity of the users with an explicit representation of their
social network. We show that a substantial level of local lexical and topical
alignment is observable among users who lie close to each other in the social
network. We introduce a null model that preserves user activity while removing
local correlations, allowing us to disentangle the actual local alignment
between users from statistical effects due to the assortative mixing of user
activity and centrality in the social network. This analysis suggests that
users with similar topical interests are more likely to be friends, and
therefore semantic similarity measures among users based solely on their
annotation metadata should be predictive of social links. We test this
hypothesis on the Last.fm data set, confirming that the social network
constructed from semantic similarity captures actual friendship more accurately
than Last.fm's suggestions based on listening patterns.
Comment: http://portal.acm.org/citation.cfm?doid=1718487.171852
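A minimal example of an annotation-based semantic similarity of the kind the abstract proposes is cosine similarity over users' tag-frequency profiles; the users and tags below are hypothetical, and the paper's actual measures may differ.

```python
from collections import Counter
from math import sqrt

def tag_cosine(tags_u, tags_v):
    # Cosine similarity between two users' tag-frequency profiles.
    cu, cv = Counter(tags_u), Counter(tags_v)
    dot = sum(cu[t] * cv[t] for t in cu.keys() & cv.keys())
    nu = sqrt(sum(c * c for c in cu.values()))
    nv = sqrt(sum(c * c for c in cv.values()))
    return dot / (nu * nv) if nu and nv else 0.0

alice = ["jazz", "piano", "jazz", "live"]
bob   = ["jazz", "guitar", "live"]
print(tag_cosine(alice, bob))
```

Ranking user pairs by such a similarity and predicting the top pairs as friends is one simple way to turn annotation metadata into a social link predictor.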
A Survey of Volunteered Open Geo-Knowledge Bases in the Semantic Web
Over the past decade, rapid advances in web technologies, coupled with
innovative models of spatial data collection and consumption, have generated a
robust growth in geo-referenced information, resulting in spatial information
overload. Increasing 'geographic intelligence' in traditional text-based
information retrieval has become a prominent approach to respond to this issue
and to fulfill users' spatial information needs. Numerous efforts in the
Semantic Geospatial Web, Volunteered Geographic Information (VGI), and the
Linking Open Data initiative have converged in a constellation of open
knowledge bases, freely available online. In this article, we survey these open
knowledge bases, focusing on their geospatial dimension. Particular attention
is devoted to the crucial issue of the quality of geo-knowledge bases, as well
as of crowdsourced data. A new knowledge base, the OpenStreetMap Semantic
Network, is outlined as our contribution to this area. Research directions in
information integration and Geographic Information Retrieval (GIR) are then
reviewed, with a critical discussion of their current limitations and future
prospects.
MeLinDa: an interlinking framework for the web of data
The web of data consists of data published on the web in such a way that they
can be interpreted and connected together. It is thus critical to establish
links between these data, both for the web of data and for the semantic web
that it contributes to feed. We consider here the various techniques developed
for that purpose and analyze their commonalities and differences. We propose a
general framework and show how the diverse techniques fit in the framework.
From this framework we consider the relation between data interlinking and
ontology matching. Although they can be considered similar at a certain level
(both relate formal entities), they serve different purposes, yet they would
benefit from collaborating. We thus present a scheme under which data linking
tools can take advantage of ontology alignments.
Comment: N° RR-7691 (2011)
Normalized Information Distance
The normalized information distance is a universal distance measure for
objects of all kinds. It is based on Kolmogorov complexity and thus
uncomputable, but there are ways to utilize it. First, compression algorithms
can be used to approximate the Kolmogorov complexity if the objects have a
string representation. Second, for names and abstract concepts, page count
statistics from the World Wide Web can be used. These practical realizations of
the normalized information distance can then be applied to machine learning
tasks, especially clustering, to perform feature-free and parameter-free data
mining. This chapter discusses the theoretical foundations of the normalized
information distance and both practical realizations. It presents numerous
examples of successful real-world applications based on these distance
measures, ranging from bioinformatics to music clustering to machine
translation.
Comment: 33 pages, 12 figures, pdf; in: Normalized Information Distance, in: Information Theory and Statistical Learning, Eds. M. Dehmer, F. Emmert-Streib, Springer-Verlag, New York. To appear.
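The first practical realization mentioned above, approximating Kolmogorov complexity with a real compressor, is commonly written as the Normalized Compression Distance. A minimal sketch using zlib, with illustrative inputs:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized Compression Distance: a computable approximation of the
    # (uncomputable) normalized information distance, using zlib's
    # compressed lengths in place of Kolmogorov complexities.
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog" * 20
b = bytes(range(256)) * 4  # structurally unrelated byte string
print(ncd(a, a))  # small: a compresses well against itself
print(ncd(a, b))  # larger: little shared structure to exploit
```

Because it needs nothing but a compressor, the same distance works on DNA sequences, MIDI files, or text, which is what makes the clustering applications "feature-free".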
Augmented Session Similarity Based Framework for Measuring Web User Concern from Web Server Logs
In this paper, an augmented session similarity based framework is proposed to measure web user concern from web server logs. The framework considers the best usage similarity between two web sessions, based on accessed page relevance and the URL-based syntactic structure of the website within the session. It is implemented using the K-medoids clustering algorithm with independent and combined similarity measures, and cluster quality is evaluated by measuring average intra-cluster and inter-cluster distances. The experimental results show that the combined augmented session dissimilarity metric outperforms the independent augmented session dissimilarity measures in terms of cluster validity measures.
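The paper's actual measures are not reproduced here, but the idea of combining URL-based syntactic structure with a best-match similarity between sessions might be sketched as follows; both `url_syntactic_sim` and the symmetric averaging scheme are hypothetical choices made for illustration.

```python
def url_syntactic_sim(u, v):
    # Hypothetical URL similarity: fraction of shared leading path components.
    pu, pv = u.strip("/").split("/"), v.strip("/").split("/")
    shared = 0
    for a, b in zip(pu, pv):
        if a != b:
            break
        shared += 1
    return shared / max(len(pu), len(pv))

def session_sim(s1, s2):
    # Symmetric best-match similarity between two sessions (lists of URLs):
    # each page is matched to its most similar page in the other session.
    def one_way(src, dst):
        return sum(max(url_syntactic_sim(u, v) for v in dst) for u in src) / len(src)
    return (one_way(s1, s2) + one_way(s2, s1)) / 2

s1 = ["/products/printers/laser", "/products/printers/inkjet"]
s2 = ["/products/printers/laser", "/support/contact"]
print(session_sim(s1, s2))
```

A dissimilarity for K-medoids would then be `1 - session_sim(s1, s2)`, possibly combined with a page-relevance term as the abstract describes.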
Study on Distance Measures for Clustering of Web Documents based on DOM-Tree based Representation of Web Document Structure
Among the three broad areas of Web mining, Web Structure Mining is the method of discovering structure information from either the web hyperlink structure or the web page structure. In order to apply data mining techniques to web pages, a good and efficient representation of web pages is required, one that depicts their actual hierarchical structure. The work presented here aims to find an appropriate distance measure (also called a similarity measure) for strings that can be used for clustering web documents and for other data mining applications.
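A standard candidate for such a string distance measure is the Levenshtein edit distance, sketched below; whether it is appropriate for strings that encode DOM-tree structure is precisely the kind of question the study evaluates.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance: the minimum number of
    # single-character insertions, deletions, and substitutions turning a into b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

For clustering, the distance can be normalized by the longer string's length to keep values comparable across documents of different sizes.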
Web Data Extraction, Applications and Techniques: A Survey
Web Data Extraction is an important problem that has been studied by means of
different scientific tools and in a broad range of applications. Many
approaches to extracting data from the Web have been designed to solve specific
problems and operate in ad-hoc domains. Other approaches, instead, heavily
reuse techniques and algorithms developed in the field of Information
Extraction.
This survey aims at providing a structured and comprehensive overview of the
literature in the field of Web Data Extraction. We provide a simple
classification framework in which existing Web Data Extraction applications are
grouped into two main classes, namely applications at the Enterprise level and
at the Social Web level. At the Enterprise level, Web Data Extraction
techniques emerge as a key tool to perform data analysis in Business and
Competitive Intelligence systems as well as for business process
re-engineering. At the Social Web level, Web Data Extraction techniques make it
possible to gather the large amounts of structured data continuously generated
and disseminated by Web 2.0, Social Media and Online Social Network users,
offering unprecedented opportunities to analyze human behavior at a very large
scale. We also discuss the potential for cross-fertilization, i.e., the
possibility of re-using Web Data Extraction techniques originally designed for
one domain in other domains.
Comment: Knowledge-based System