Using Fuzzy Logic to Leverage HTML Markup for Web Page Representation
The selection of a suitable document representation approach plays a crucial role in the performance of a document clustering task. Being able to pick out representative words within a document can lead to substantial improvements in document clustering. In the case of web documents, the HTML markup that defines the layout of the content provides additional structural information that can be further exploited to identify representative words. In this paper we introduce a fuzzy term weighting approach that makes the most of the HTML structure for document clustering. We set forth and build on the hypothesis that a good representation can take advantage of how humans skim through documents to extract the most representative words. The authors of web pages use HTML tags to convey the most important message of a web page through page elements that attract the readers’ attention, such as page titles or emphasized elements. We define a set of criteria to exploit the information provided by these page elements, and introduce a fuzzy combination of these criteria that we evaluate within the context of a web page clustering task. Our proposed approach, called Abstract Fuzzy Combination of Criteria (AFCC), can adapt to datasets whose features are distributed differently, achieving good results compared to other similar fuzzy-logic-based approaches and TF-IDF across different datasets.
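The abstract does not spell out AFCC's actual criteria or aggregation, but the general idea of turning tag-derived evidence into a fuzzy term weight can be sketched as follows. The criteria, membership functions, and weights below are illustrative assumptions, not the paper's definition.

```python
# Sketch: combining HTML-derived criteria into a single fuzzy term weight.
# The criteria names, saturation points, and weights are illustrative
# assumptions, not the AFCC definition from the paper.

def membership(count: int, saturation: int = 3) -> float:
    """Map a raw tag count to a [0, 1] membership degree."""
    return min(count, saturation) / saturation

def fuzzy_term_weight(term_stats: dict) -> float:
    """Aggregate per-criterion memberships with a weighted average."""
    criteria = {
        "title": (membership(term_stats.get("title", 0)), 0.5),
        "emphasis": (membership(term_stats.get("em", 0)), 0.3),
        "body": (membership(term_stats.get("body", 0), saturation=10), 0.2),
    }
    return sum(mu * w for mu, w in criteria.values())

# A term appearing in the title and in emphasized elements scores higher
# than one appearing only in the body text.
w_title = fuzzy_term_weight({"title": 1, "em": 2, "body": 4})
w_body = fuzzy_term_weight({"body": 4})
```

The key design point mirrored here is that evidence from attention-drawing elements (titles, emphasis) saturates quickly and carries more weight than raw body frequency.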
An XML-based Multimedia Middleware for Mobile Online Auctions
Pervasive Internet services today promise to provide users with quick and convenient access to a variety of commercial applications. However, due to unsuitable architectures and poor performance, user acceptance is still low. To be a major success, mobile services have to provide device-adapted content and advanced value-added Web services. Innovative enabling technologies like XML and wireless communication may for the first time provide a facility to interact with online applications anytime, anywhere. We present a prototype implementing an efficient multimedia middleware approach towards ubiquitous value-added services, using an auction house as a sample application. Advanced multi-feature retrieval technologies are combined with enhanced content delivery to show the impact of modern enterprise information systems on today’s e-commerce applications.
Context Aware Computing for The Internet of Things: A Survey
As we move towards the Internet of Things (IoT), the number of sensors deployed around the world is growing at a rapid pace. Market research has shown significant growth in sensor deployments over the past decade and has predicted that the growth rate will increase further in the future. These sensors continuously generate enormous amounts of data. However, in order to add value to raw sensor data we need to understand it. The collection, modelling, reasoning, and distribution of context in relation to sensor data play a critical role in this challenge. Context-aware computing has proven successful in understanding sensor data. In this paper, we survey context awareness from an IoT perspective. We first present the necessary background by introducing the IoT paradigm and context-aware fundamentals. Then we provide an in-depth analysis of the context life cycle. Based on our own taxonomy, we evaluate a subset of fifty projects that represent the majority of research and commercial solutions proposed in the field of context-aware computing over the last decade (2001-2011). Finally, based on our evaluation, we highlight lessons to be learnt from the past and some possible directions for future research. The survey addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and the IoT. Our goal is not only to analyse, compare, and consolidate past research work but also to appreciate its findings and discuss its applicability towards the IoT.

Comment: IEEE Communications Surveys & Tutorials Journal, 201
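The context life cycle the survey analyses can be pictured as a four-phase pipeline. The phase names follow the common breakdown in the context-aware literature (acquisition, modelling, reasoning, dissemination); the sensors, attributes, and rules below are invented for illustration.

```python
# Sketch of a context life cycle pipeline: acquisition -> modelling ->
# reasoning -> dissemination. All names and thresholds are illustrative.

def acquire(sensor_readings):
    """Acquisition: collect raw values from sensors."""
    return list(sensor_readings)

def model(readings):
    """Modelling: attach meaning (attribute names, units) to raw values."""
    return {"temperature_c": readings[0], "occupancy": readings[1]}

def reason(context):
    """Reasoning: derive higher-level context from modelled data."""
    hot = context["temperature_c"] > 26
    occupied = context["occupancy"] > 0
    return {"needs_cooling": hot and occupied, **context}

def disseminate(context, subscribers):
    """Dissemination: push derived context to interested consumers."""
    for callback in subscribers:
        callback(context)

notifications = []
disseminate(reason(model(acquire([29.5, 3]))), [notifications.append])
```

The point of the cycle is that raw numbers only become actionable ("needs_cooling") after modelling and reasoning, and only become useful once distributed to consumers.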
Integration of e-business strategy for multi-lifecycle production systems
Internet use has grown exponentially in the last few years, becoming a global communication and business resource. Internet-based business, or e-Business, will truly affect every sector of the economy in ways that today we can only imagine, and the manufacturing sector will be at the forefront of this change. This doctoral dissertation provides a scientific framework and a set of novel decision support tools for evaluating, modeling, and optimizing the overall performance of e-Business integrated multi-lifecycle production systems. The characteristics of this framework include an environmental lifecycle study, environmental performance metrics, a hyper-network model of integrated e-supply chain networks, a fuzzy multi-objective optimization method, a discrete-event simulation approach, and a scalable enterprise environmental management system design. The dissertation research reveals that integrating an e-Business strategy into production systems can alter current industry practices along a pathway towards sustainability, enhancing resource productivity, improving cost efficiencies, and reducing lifecycle environmental impacts.
The following research challenges and scholarly accomplishments have been addressed in this dissertation: Identification and analysis of the environmental impacts of e-Business. A pioneering environmental lifecycle study on the impact of e-Business is conducted, and fuzzy decision theory is further applied to evaluate e-Business scenarios in order to overcome data uncertainty and information gaps; Understanding, evaluation, and development of environmental performance metrics. Major environmental performance metrics are compared and evaluated. A universal target-based performance metric, developed jointly with a team of industry and university researchers, is evaluated, implemented, and utilized in the methodology framework; A generic framework for the integrated e-supply chain network. The framework is based on the most recent research on large complex supply chain network models, but is extended to integrate demanufacturers, recyclers, and resellers as supply chain partners. Moreover, the e-Business information network is modeled as an overlaid hypernetwork layer for the supply chain; Fuzzy multi-objective optimization theory and discrete-event simulation methods. The solution methods deal with overall system parameter trade-offs, partner selection, and sustainable decision-making; An architecture design for a scalable enterprise environmental management system. This novel system is designed and deployed using knowledge-based ontology theory and XML techniques within an agent-based structure. An implementation model and system prototype are also provided.
The new methodology and framework have the potential to be widely used in the system analysis, design, and implementation of e-Business enabled engineering systems.
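The dissertation's fuzzy multi-objective optimization method is not detailed in the abstract, but a common formulation of such trade-off problems is Zimmermann-style max-min aggregation: each objective is mapped to a [0, 1] satisfaction degree and the chosen solution maximizes the minimum satisfaction. The candidate supply-chain configurations and membership bounds below are hypothetical.

```python
# Sketch of max-min fuzzy multi-objective selection over hypothetical
# supply-chain configurations with two objectives: cost and environmental
# impact (lower is better for both). Bounds and candidates are invented.

def linear_membership(value, best, worst):
    """Satisfaction degree: 1 at `best`, 0 at `worst`, linear between."""
    if best == worst:
        return 1.0
    mu = (worst - value) / (worst - best)
    return max(0.0, min(1.0, mu))

candidates = {
    "config_a": {"cost": 120.0, "impact": 40.0},
    "config_b": {"cost": 100.0, "impact": 70.0},
    "config_c": {"cost": 150.0, "impact": 20.0},
}

def overall_satisfaction(obj):
    # Each objective is mapped to [0, 1]; the score is the *minimum*
    # satisfaction, so no objective is sacrificed entirely.
    mu_cost = linear_membership(obj["cost"], best=90.0, worst=160.0)
    mu_impact = linear_membership(obj["impact"], best=10.0, worst=80.0)
    return min(mu_cost, mu_impact)

best = max(candidates, key=lambda name: overall_satisfaction(candidates[name]))
```

Here the balanced configuration wins over the cheapest and the cleanest ones, which is exactly the behavior one wants from a sustainability trade-off method.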
Escaping the Trap of too Precise Topic Queries
At the very center of digital mathematics libraries lie controlled vocabularies which qualify the {\it topic} of the documents. These topics are used when submitting a document to a digital mathematics library and when performing searches in a library. Searches are refined by the use of these topics, as they allow a precise classification of the area of mathematics a document addresses. However, there is a major risk that users employ too precise a topic to specify their queries: they may employ a topic that is only "close by" and thus fail to match the right resource. We call this the {\it topic trap}. Indeed, since 2009, this issue has appeared frequently on the i2geo.net platform, and other mathematics portals experience the same phenomenon. One approach to solving this issue is to introduce tolerance in the way queries are understood; in particular, fuzzy matches can be included, but this introduces noise which may prevent the user from understanding how the search engine works.

In this paper, we propose a way to escape the topic trap by employing navigation between related topics together with the count of search results for each topic. This supports the user in that a search for a close-by topic is one click away from a previous search. The approach has been realized in the i2geo search engine and is described in detail; the {\it related} relation is computed by textual analysis of the definitions of the concepts fetched from the Wikipedia encyclopedia.

Comment: 12 pages, Conference on Intelligent Computer Mathematics 2013 Bath, U
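The abstract says the {\it related} relation comes from textual analysis of concept definitions, without giving the exact method. One plausible reading is bag-of-words cosine similarity over the definition texts; the sketch below uses that, with made-up definitions, a made-up stopword list, and a made-up threshold, so it should be read as an illustration rather than the paper's algorithm.

```python
# Sketch: deriving "related" topics from cosine similarity of concept
# definition texts. Definitions, stopwords, and threshold are illustrative.
from collections import Counter
import math

definitions = {
    "derivative": "rate of change of a function with respect to a variable",
    "integral": "accumulation of quantities such as the area under a function",
    "prime number": "a natural number greater than one with no divisors",
}

STOPWORDS = {"a", "of", "to", "the", "with", "such", "as", "under", "than", "no"}

def tokens(text: str) -> Counter:
    return Counter(w for w in text.lower().split() if w not in STOPWORDS)

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def related_topics(topic: str, threshold: float = 0.1):
    base = tokens(definitions[topic])
    return [other for other in definitions
            if other != topic
            and cosine(base, tokens(definitions[other])) > threshold]
```

With these definitions, "derivative" and "integral" are related through the shared word "function", while "prime number" relates to neither, which is the kind of close-by neighborhood the one-click navigation would expose alongside per-topic result counts.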
Understanding emerging client-side web vulnerabilities using dynamic program analysis
Today's Web heavily relies on JavaScript as it is the main driving force behind the plethora of Web applications that we enjoy daily. The complexity and amount of this client-side code have been steadily increasing over the years. At the same time, new vulnerabilities keep being uncovered, for which we mostly rely on manual analysis by security experts. Unfortunately, such manual efforts do not scale to the problem space at hand. Therefore, in this thesis, we present techniques capable of finding vulnerabilities automatically and at scale that originate from malicious inputs to postMessage handlers, polluted prototypes, and client-side storage mechanisms. Our results highlight that the investigated vulnerabilities are prevalent even among the most popular sites, showing the need for automated systems that help developers uncover them in a timely manner. Using the insights gained during our empirical studies, we provide recommendations for developers and browser vendors to tackle the underlying problems in the future. Furthermore, we show that security mechanisms designed to mitigate such and similar issues cannot currently be deployed by first-party applications due to their reliance on third-party functionality. This leaves developers in a no-win situation, in which either functionality can be preserved or security enforced.
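The postMessage vulnerability class studied here comes down to a handler consuming attacker-controlled message data without validating the sender's origin. The sketch below illustrates that logic abstractly in Python; in the browser this lives in JavaScript `message` event handlers, and the handler names, origins, and sink are invented for illustration.

```python
# Abstract sketch of the postMessage-handler flaw: trusting message data
# without an origin check. Origins and the "sink" here are hypothetical;
# real sinks would be eval, innerHTML, or client-side storage writes.

TRUSTED_ORIGINS = {"https://example.com"}

def unsafe_handler(message):
    # Vulnerable pattern: data flows straight into a sensitive sink
    # without validating message["origin"].
    return f"executing: {message['data']}"

def safe_handler(message):
    # Mitigation: reject messages from unexpected origins before the
    # data reaches any sink.
    if message["origin"] not in TRUSTED_ORIGINS:
        return None
    return f"executing: {message['data']}"

attack = {"origin": "https://evil.example", "data": "steal_token()"}
```

Dynamic analysis at scale essentially automates the search for flows like `unsafe_handler`: attacker-controlled input reaching a sink with no origin or schema validation on the path.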
A Deep Search Architecture for Capturing Product Ontologies
This thesis describes a method to populate very large product ontologies quickly. We discuss a deep search architecture to text-mine online e-commerce marketplaces and build a taxonomy of products and their corresponding descriptions and parent categories. The goal is to automatically construct an open database of products aggregated from different online retailers. The database contains extensive metadata on each object, which can be queried and analyzed. Such a public database does not currently exist; instead, the information resides siloed within various organizations. In this thesis, we describe the tools, data structures, and software architectures that allowed aggregating, structuring, storing, and searching through several gigabytes of product ontologies and their associated metadata. We also describe solutions to some computational puzzles encountered in mining data at large scale. We implemented the product capture architecture and, using this implementation, built product ontologies corresponding to two major retailers: Wal-Mart and Target. The ontology data is analyzed to explore structural complexity and the similarities and differences between the retailers. A broad product ontology has several uses, from the comparison shopping applications that already exist to the situation-aware computing of tomorrow, where computers are aware of the objects in their surroundings and these objects interact to help humans in everyday tasks.
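The core data-structure problem described above is nesting crawled category paths into a taxonomy tree whose leaves collect products. A minimal sketch, with illustrative records rather than the thesis's actual schema:

```python
# Sketch: folding (category path, product) records mined from retailer
# sites into a taxonomy tree. Records and field names are illustrative.

def build_taxonomy(records):
    """Nest category paths into a tree; each node collects its products."""
    root = {"children": {}, "products": []}
    for path, product in records:
        node = root
        for category in path:
            node = node["children"].setdefault(
                category, {"children": {}, "products": []})
        node["products"].append(product)
    return root

records = [
    (["Electronics", "Audio"], "wireless headphones"),
    (["Electronics", "Audio"], "soundbar"),
    (["Electronics", "TV"], "55-inch LED TV"),
]
tree = build_taxonomy(records)
```

Because each retailer's breadcrumb paths fold into the same tree, records from, say, Wal-Mart and Target can be aggregated into one structure whose shape can then be compared across retailers.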