126 research outputs found

    Jahresbericht Forschung und Entwicklung 2005

    Get PDF
    Forschungsjahresbericht 2005 der Fachhochschule Konstanz (annual research report 2005 of the Konstanz University of Applied Sciences)

    Personalizing the web: A tool for empowering end-users to customize the web through browser-side modification

    Get PDF
    167 p. Web applications delegate the final rendering of their pages to the browser. This permits browser-based transcoding (a.k.a. Web Augmentation) that can ultimately be singularized for each browser installation. This creates an opportunity for Web consumers to customize their Web experiences. This vision requires adequate tooling that makes Web Augmentation affordable to laymen. We consider this a special class of End-User Development, integrating Web Augmentation paradigms. The dominant paradigms in End-User Development are scripting languages and visual languages. This thesis advocates a Google Chrome browser extension for Web Augmentation, realized through WebMakeup, a visual DSL programming tool that lets end-users customize their own websites. WebMakeup removes, moves, and adds web nodes from different web pages in order to reduce tab switching, scrolling, clicking, and cutting and pasting. Moreover, Web Augmentation extensions have difficulty finding web elements after a website update; as a consequence, the extensions stop working and users might abandon them. This is why two different locators have been implemented with the aim of improving web locator robustness.
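    To illustrate the fallback-locator idea this abstract closes with, here is a minimal sketch (not WebMakeup's actual locators; the selectors are invented for illustration). It uses Selenium's element-lookup API to try several independent strategies for the same node, so an augmentation can survive a page redesign that breaks one of them.

    # Minimal sketch of a robust web locator with fallback strategies.
    # Assumptions: Selenium WebDriver; the example selectors are made up.
    from selenium.common.exceptions import NoSuchElementException
    from selenium.webdriver.common.by import By

    def robust_find(driver, strategies):
        """Return the first element that any locator strategy can still find."""
        for by, selector in strategies:
            try:
                return driver.find_element(by, selector)
            except NoSuchElementException:
                continue  # this locator broke after a site update; try the next
        raise NoSuchElementException("all locators failed")

    # Two independent locators for the same node: if the id is renamed in a
    # redesign, the structural XPath may still resolve it.
    strategies = [
        (By.ID, "price-box"),
        (By.XPATH, "//div[contains(@class, 'price')]"),
    ]
    # element = robust_find(driver, strategies)  # requires a live WebDriver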

    Linked Data Entity Summarization

    Get PDF
    On the Web, the amount of structured and Linked Data about entities is constantly growing. Descriptions of single entities often include thousands of statements and it becomes difficult to comprehend the data, unless a selection of the most relevant facts is provided. This doctoral thesis addresses the problem of Linked Data entity summarization. The contributions involve two entity summarization approaches, a common API for entity summarization, and an approach for entity data fusion
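    To make the summarization task concrete, here is a minimal sketch of a common frequency baseline (not the thesis's two approaches or its API): keep the top-k statements of an entity, ranked by how rare their predicate is across the dataset, so distinctive facts surface first. The triples are toy data.

    # Minimal sketch: entity summarization as predicate-rarity ranking.
    from collections import Counter

    def summarize(entity_triples, all_triples, k=3):
        """Select the k statements whose predicates are globally rarest."""
        predicate_freq = Counter(p for _, p, _ in all_triples)
        ranked = sorted(entity_triples, key=lambda t: predicate_freq[t[1]])
        return ranked[:k]

    data = [
        ("Berlin", "type", "City"),
        ("Berlin", "capitalOf", "Germany"),
        ("Berlin", "label", "Berlin"),
        ("Hamburg", "type", "City"),
        ("Hamburg", "label", "Hamburg"),
    ]
    berlin = [t for t in data if t[0] == "Berlin"]
    print(summarize(berlin, data, k=2))
    # -> the rare predicate capitalOf outranks the common type/label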

    Identifying Electronic Information Resources in Economics: A Content Analysis

    Get PDF
    The purpose of this paper is to explore the availability of various types of E-resources in the field of Economics on select library websites of Universities in Delhi. It highlights both the common and different types of E-resources available on three library websites, with special reference to Economics. The paper is primarily based on a content analysis of the library websites/subject portals of the three selected Universities in Delhi, conducted to determine the availability of E-resources in the field of Economics.

    A System for Deduction-based Formal Verification of Workflow-oriented Software Models

    Full text link
    The work concerns formal verification of workflow-oriented software models using a deductive approach; the formal correctness of a model's behaviour is considered. Manually building logical specifications, expressed as a set of temporal logic formulas, is a significant obstacle for an inexperienced user applying the deductive approach. A system, and its architecture, for the deduction-based verification of workflow-oriented models is proposed. The inference process is based on the semantic tableaux method, which has some advantages over traditional deduction strategies. An algorithm for the automatic generation of logical specifications is proposed. The generation procedure is based on predefined workflow patterns for BPMN, the standard and dominant notation for modeling business processes. The main idea of the approach is to treat patterns, defined in terms of temporal logic, as (logical) primitives that enable the transformation of models into the temporal logic formulas constituting a logical specification. Automating the generation process is crucial for bridging the gap between the intuitiveness of deductive reasoning and the difficulty of applying it in practice when logical specifications are built manually. This approach goes some way towards supporting, and hopefully enhancing our understanding of, the deduction-based formal verification of workflow-oriented models. Comment: International Journal of Applied Mathematics and Computer Science
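    The pattern-to-formula idea lends itself to a short sketch: treat each workflow pattern as a parameterized temporal-logic template and compile a model (a list of pattern instances) into the formula set that forms its specification. This is an illustrative sketch, not the paper's generator; the pattern names and LTL templates are assumptions.

    # Minimal sketch: BPMN-style workflow patterns as LTL formula templates.
    LTL_PATTERNS = {
        # Sequence(a, b): whenever a completes, b eventually executes.
        "sequence": "G( {a} -> F {b} )",
        # Exclusive choice: exactly one of the two branches executes.
        "xor_split": "G( {a} -> F ( ({b} & !{c}) | (!{b} & {c}) ) )",
        # Parallel split: both branches eventually execute.
        "and_split": "G( {a} -> ( F {b} & F {c} ) )",
    }

    def instantiate(pattern, **tasks):
        """Fill a pattern template with concrete task names."""
        return LTL_PATTERNS[pattern].format(**tasks)

    def specification(model):
        """Compile a workflow model into its set of temporal logic formulas."""
        return [instantiate(p, **args) for p, args in model]

    model = [
        ("sequence", {"a": "receive_order", "b": "check_stock"}),
        ("xor_split", {"a": "check_stock", "b": "ship", "c": "reject"}),
    ]
    for formula in specification(model):
        print(formula)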

    Mobile Bookstore (m-Bookstore)

    Get PDF
    Mobile technologies and computing are evolving and expanding each day, demanding and creating a much more ubiquitous computing environment. This research project proposes the development and implementation of the Mobile Bookstore, a mobile solution for bookstore businesses. This report presents the final research and study of the development of the Mobile Bookstore as a solution to the problem statements given in the project proposal and in this report, which is the main objective of the study. The Mobile Bookstore addresses four problem statements: geographical constraints, advancing mobile technologies, ubiquitous demands in computing, and large bookstore information requests. These objectives help answer why this research was done and why a mobile bookstore is needed. With the mobile bookstore, companies can reach more customers, anywhere and everywhere, using mobile devices. This concept allows for a more ubiquitous business and computing environment. Major bookstores need to compete and stay on top by implementing the latest technologies to serve their customers, and mobile technology is one that should be taken advantage of. Browsing the large database of a bookstore can be time-consuming and difficult using expensive kiosks that come in limited numbers; a wireless environment allows those with mobile devices to browse the bookstore database with ease. This report details the basis for the research of this project, including the technologies, means, methods, and a study of recent research related to it. The result of this research project is the software solution, a system (the Mobile Bookstore) consisting of two modules: the outdoor WAP-based module and the indoor Wireless Network module.

    Cloud Services Brokerage for Mobile Ubiquitous Computing

    Get PDF
    Companies are increasingly adopting Mobile Cloud Computing (MCC) to efficiently deliver enterprise services to users (or consumers) on their personalized devices. MCC is the facilitation of mobile devices (e.g., smartphones, tablets, notebooks, and smart watches) to access virtualized services such as software applications, servers, storage, and network services over the Internet. With the advancement and diversity of the mobile landscape, there has been a growing trend in consumer attitude where a single user owns multiple mobile devices. This paradigm of supporting a single user or consumer accessing multiple services from n devices is referred to as Ubiquitous Cloud Computing (UCC) or Personal Cloud Computing. In the UCC era, consumers expect application and data consistency across their multiple devices, in real time. However, this expectation can be hindered by the intermittent loss of connectivity in wireless networks, user mobility, and peak load demands. Hence, this dissertation presents an architectural framework called Cloud Services Brokerage for Mobile Ubiquitous Cloud Computing (CSB-UCC), which ensures soft real-time and reliable consumption of services on users' multiple devices. The CSB-UCC acts as an application middleware broker that connects the n devices of users to multi-cloud services. The system determines the multi-cloud services based on the user's subscriptions, and the n devices are determined through device registration on the broker. Preliminary evaluations of the designed system show that the following are achieved: 1) high scalability, through a distributed architecture of the brokerage service; 2) soft real-time application synchronization for a consistent user experience, through an enhanced mobile-to-cloud proximity-based access technique; 3) reliable recovery from system failure, through transactional re-assignment of services to active nodes; and 4) a transparent audit trail, through access-level and context-centric provenance.
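    The broker's core bookkeeping (device registration, service subscription, fan-out of updates to all of a user's devices) can be sketched briefly. This is an illustrative sketch under assumed names, not the dissertation's CSB-UCC implementation.

    # Minimal sketch: a brokerage that registers a user's n devices and
    # fans each subscribed service update out to all of them.
    from dataclasses import dataclass, field

    @dataclass
    class Broker:
        devices: dict = field(default_factory=dict)        # user -> set of device ids
        subscriptions: dict = field(default_factory=dict)  # user -> set of services

        def register_device(self, user, device_id):
            self.devices.setdefault(user, set()).add(device_id)

        def subscribe(self, user, service):
            self.subscriptions.setdefault(user, set()).add(service)

        def push_update(self, user, service, payload):
            """Deliver an update to every registered device of the user,
            keeping application state consistent across devices."""
            if service not in self.subscriptions.get(user, set()):
                return []
            return [(d, service, payload) for d in self.devices.get(user, set())]

    broker = Broker()
    broker.register_device("alice", "phone")
    broker.register_device("alice", "tablet")
    broker.subscribe("alice", "calendar")
    print(broker.push_update("alice", "calendar", "meeting moved to 3pm"))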

    Vocabulary Evolution on the Semantic Web: From Changes to Evolution of Vocabularies and its Impact on the Data

    Get PDF
    The main objective of the Semantic Web is to give data on the web a well-defined meaning. Vocabularies are used for modeling data on the web; they provide a shared understanding of a domain and consist of a collection of types and properties, so-called terms. A vocabulary can import terms from other vocabularies, and data publishers use vocabulary terms for modeling data. Importing terms across vocabularies results in a Network of Linked vOcabularies (NeLO). Vocabularies are subject to change during their lifetime. When vocabularies change, the published data become a problem if they are not updated to reflect these changes. So far, there has been no study that analyzes vocabulary changes over time. Furthermore, it is unknown how data publishers react to such vocabulary changes. Ontology engineers and data publishers may not even be aware of changes to vocabulary terms, since they occur rather rarely. This work addresses the problem of vocabulary changes and their impact on other vocabularies and on the published data. We analyzed the changes of vocabularies and their reuse, selecting the most dominant vocabularies based on their use by data publishers. Additionally, we analyzed the changes of 994 vocabularies from the Linked Open Vocabularies directory. Furthermore, we analyzed various vocabularies to better understand by whom and how they are used in the modeled data, and how changes are adopted in the Linked Open Data cloud. We computed the state of the NeLO from the available versions of vocabularies over a period of 17 years. We analyzed static parameters of the NeLO such as its size, density, average degree, and the most important vocabularies at certain points in time. We further investigated how the NeLO changes over time, specifically measuring the impact of a change in one vocabulary on others, how the reuse of terms changes, and the importance of vocabulary changes. Our results show that the vocabularies are highly static, and that many of the changes occurred in annotation properties. Additionally, 16% of the existing terms are reused by other vocabularies, and some deprecated and deleted terms are still reused. Furthermore, most newly coined terms are adopted immediately. Our results also show that even though the change frequency of terms is rather low, changes can have a high impact on the data due to the large amount of data on the web. Moreover, due to the large number of vocabularies in the NeLO, and therefore the increase in available terms, the percentage of imported terms compared with the available ones has decreased over time. Additionally, based on the average number of exports for the vocabularies in the NeLO, some vocabularies have become more popular over time. Overall, understanding the evolution of vocabulary terms is important for ontology engineers and data publishers, to avoid wrong assumptions about the data published on the web. Furthermore, it may foster a better understanding of the impact of changes in vocabularies and how they are adopted, making it possible to learn from previous experience. Our results provide, for the first time, in-depth insights into the structure and evolution of the NeLO. Supported by proper tools exploiting this analysis, they may help ontology engineers to identify data modeling shortcomings and assess the dependencies implied by reusing a specific vocabulary.
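    The static network parameters the abstract mentions (size, density, average degree, most important vocabularies per snapshot) are straightforward to compute once the import network is modeled as a directed graph. Here is a minimal sketch with networkx (not the thesis's code; the example edges are invented, and "importance" is approximated by in-degree).

    # Minimal sketch: one snapshot of a toy vocabulary import network,
    # where an edge (a, b) means vocabulary a imports terms from b.
    import networkx as nx

    snapshot = nx.DiGraph()
    snapshot.add_edges_from([
        ("schema.org", "rdfs"),
        ("foaf", "rdfs"),
        ("foaf", "owl"),
        ("dcterms", "rdfs"),
    ])

    size = snapshot.number_of_nodes()
    density = nx.density(snapshot)
    avg_degree = sum(d for _, d in snapshot.degree()) / size

    # "Most important vocabulary": here, the one whose terms are imported
    # by the most other vocabularies (highest in-degree).
    most_imported = max(snapshot.nodes, key=lambda v: snapshot.in_degree(v))

    print(f"size={size}, density={density:.2f}, avg_degree={avg_degree:.2f}")
    print(f"most imported vocabulary: {most_imported}")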