924 research outputs found

    Realizing the Hydrogen Economy through Semantic Web Technologies

    The FUSION (Fuel Cell Understanding through Semantic Inferencing, Ontologies and Nanotechnology) project applies, extends, and combines Semantic Web technologies and image analysis techniques to develop a knowledge management system that optimizes the design of fuel cells.

    Cuypers: a semi-automatic hypermedia generation system

    The report describes the architecture of Cuypers, a system supporting second- and third-generation Web-based multimedia. First-generation Web content encodes information in handwritten (HTML) Web pages. Second-generation Web content generates HTML pages on demand, e.g. by filling in templates with content retrieved dynamically from a database or by transforming structured documents using style sheets (e.g. XSLT). Third-generation Web pages will make use of rich markup (e.g. XML) along with metadata (e.g. RDF) schemes to make the content not only machine readable but also machine processable --- a necessary prerequisite for the Semantic Web. While text-based content on the Web is already rapidly approaching the third generation, multimedia content is still trying to catch up with second-generation techniques. Multimedia document processing has a number of fundamentally different requirements from text processing which make it more difficult to incorporate within the document processing chain. In particular, multimedia transformation uses different document and presentation abstractions, its formatting rules cannot be based on text flow, it requires feedback from the formatting back-end, and it is hard to describe in the functional style of current style languages. We state the requirements for second-generation processing of multimedia and describe how these have been incorporated in our prototype multimedia document transformation environment, Cuypers. The system overcomes a number of the restrictions of text-flow-based tool sets by integrating a number of conceptually distinct processing steps in a single runtime execution environment. We describe the need for these different processing steps (semantic structure, communicative device, qualitative constraints, quantitative constraints, final-form presentation), discuss each in turn, and illustrate our approach by means of an example. We conclude by discussing the models and techniques required for the creation of third-generation multimedia content.
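
    The five processing steps named in the abstract suggest a staged transformation pipeline. The sketch below is only an illustrative reading of those steps, with invented function names and data shapes; it is not the actual Cuypers implementation described in the report.

```python
# Illustrative pipeline for the five conceptual Cuypers processing steps.
# All stage functions and data shapes are hypothetical, not the real system.

def semantic_structure(doc):
    # Group raw media items into semantically related units.
    return {"units": doc["items"]}

def communicative_device(structure):
    # Choose a presentation pattern (e.g. a grid or a slideshow) for the units.
    return {"device": "grid", **structure}

def qualitative_constraints(layout):
    # Express relative placement ("caption below image") as constraints.
    layout["constraints"] = [("caption", "below", "image")]
    return layout

def quantitative_constraints(layout, width=800, height=600):
    # Resolve relative constraints into concrete coordinates and sizes.
    layout["viewport"] = (width, height)
    return layout

def final_form(layout):
    # Emit the final-form presentation, e.g. SMIL or HTML markup.
    return f"<presentation device='{layout['device']}' size='{layout['viewport']}'/>"

def transform(doc):
    # Run the conceptually distinct steps inside one execution environment,
    # so later stages could feed information back to earlier ones if needed.
    return final_form(quantitative_constraints(qualitative_constraints(
        communicative_device(semantic_structure(doc)))))

print(transform({"items": ["image", "caption"]}))
```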

    Model driven design and data integration in semantic web information systems

    The Web is quickly evolving in many ways. It has evolved from a Web of documents into a Web of applications in which a growing number of designers offer new and interactive Web applications to people all over the world. However, application design and implementation remain complex, error-prone and laborious. In parallel, there is also an evolution from a Web of documents into a Web of `knowledge' as a growing number of data owners are sharing their data sources with a growing audience. This brings the potential for new applications of these data sources, including scenarios in which these datasets are reused and integrated with other existing and new data sources. However, the heterogeneity of these data sources in syntax, semantics and structure represents a great challenge for application designers. The Semantic Web is a collection of standards and technologies that offer solutions for at least the syntactic and some structural issues. It offers semantic freedom and flexibility, but this leaves the issue of semantic interoperability. In this thesis we present Hera-S, an evolution of the Model Driven Web Engineering (MDWE) method Hera. MDWEs allow designers to create data-centric applications using models instead of programming. Hera-S especially targets Semantic Web sources and provides a flexible method for designing personalized adaptive Web applications. Hera-S defines several models that together define the target Web application. Moreover, we implemented a framework called Hydragen, which is able to execute the Hera-S models to run the desired Web application. Hera-S' core is the Application Model (AM) in which the main logic of the application is defined, i.e. the groups of data elements that form logical units or subunits, the personalization conditions, and the relationships between the units. Hera-S also uses a so-called Domain Model (DM) that describes the content and its structure. However, this DM is not Hera-S specific; instead, any Semantic Web source representation can serve as the DM, as long as its content can be queried by the standardized Semantic Web query language SPARQL. The same holds for the User Model (UM). The UM can be used for personalization conditions, but also as a source of user-related content if necessary. In fact, the difference between DM and UM is conceptual, as their implementation within Hydragen is the same. Hera-S also defines a Presentation Model (PM) which defines presentation details of elements like order and style. To help designers with building their Web applications we have introduced a toolset, Hera Studio, which allows designers to build the different models graphically. Hera Studio also provides additional functionality like model checking and deployment of the models in Hydragen. Both Hera-S and its implementation Hydragen are designed to be flexible regarding the use of models. To achieve this, Hydragen is a stateless engine that queries the models for relevant information at every page request. This allows the models and data to be changed in the datastore at runtime. We show that one way to exploit this flexibility is by applying aspect-orientation to the AM. Aspect-orientation allows us to dynamically inject functionality that pervades the entire application. Another way to exploit Hera-S' flexibility is in reusing specialized components, e.g. for presentation generation. We present a configuration of Hydragen in which we replace our native presentation generation functionality with the AMACONT engine. AMACONT provides more extensive multi-level presentation generation and adaptation capabilities, as well as aspect-orientation and a form of semantics-based adaptation. Hera-S was designed to allow the (re-)use of any (Semantic) Web datasource. It even opens up the possibility of data integration at the back end, by using an extendible storage layer in our database of choice, Sesame. However, even though this is theoretically possible, it still leaves much of the actual data integration issue open. As this is a recurring issue in many domains and a broader challenge than Hera-S design alone, we decided to look at this issue in isolation. We present a framework called Relco which provides a language to express data transformation operations as well as a collection of techniques that can be used to (semi-)automatically find relationships between concepts in different ontologies. This is done with a combination of syntactic, semantic and collaboration techniques, which together provide strong clues for which concepts are most likely related. To prove the applicability of Relco we explore five application scenarios in different domains for which data integration is a central aspect. This includes a cultural heritage portal, Explorer, for which data from several datasources was integrated and made available via a map view, a timeline and a graph view. Explorer also allows users to provide metadata for objects via a tagging mechanism. Another application is SenSee: an electronic TV-guide and recommender. TV-guide data was integrated and enriched with semantically structured data from several sources. Recommendations are computed by exploiting the underlying semantic structure. ViTa was a project in which several techniques for tagging and searching educational videos were evaluated. This includes scenarios in which user tags are related to an ontology, or to other tags, using the Relco framework. The MobiLife project targeted the facilitation of a new generation of mobile applications that would use context-based personalization. This can be done using a context-based user profiling platform that can also be used for user model data exchange between mobile applications using technologies like Relco. The final application scenario is from the GRAPPLE project, which targeted the integration of adaptive technology into current learning management systems. A large part of this integration is achieved by using a user modeling component framework in which any application can store user model information, but which can also be used for the exchange of user model data.
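
    Because the Domain and User Models are ordinary RDF sources reachable through SPARQL, a Hera-S-style engine can fill a unit with content by issuing a plain SPARQL query at page-request time. The fragment below is only a hedged illustration using the rdflib library and invented class and predicate names; the thesis's actual query patterns may differ.

```python
# Hypothetical illustration: querying a Domain Model (any RDF source) with
# SPARQL, as a Hera-S-style engine would when filling a unit with content.
# The vocabulary (ex:Painting, ex:title, ex:artist) is invented for the example.
from rdflib import Graph

dm = Graph()
dm.parse(data="""
@prefix ex: <http://example.org/> .
ex:painting1 a ex:Painting ; ex:title "Night Watch" ; ex:artist "Rembrandt" .
""", format="turtle")

query = """
PREFIX ex: <http://example.org/>
SELECT ?title ?artist WHERE {
  ?p a ex:Painting ;
     ex:title ?title ;
     ex:artist ?artist .
}
"""

for title, artist in dm.query(query):
    # Each result row would populate one element of a unit defined in the AM.
    print(f"{title} by {artist}")
```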

    Service composition based on SIP peer-to-peer networks

    Today the telecommunication market is faced with customers requesting new telecommunication services, especially value-added services. The concept of Next Generation Networks (NGN) is seen as a solution to this, and it is finding its way into the telecommunication area. These customer expectations have emerged in the context of NGN and the associated migration of telecommunication networks from traditional circuit-switched towards packet-switched networks. One fundamental aspect of the NGN concept is to move the intelligence of services out of the switching plane onto separate Service Delivery Platforms, using SIP (Session Initiation Protocol) to provide the required signalling functionality. Driven by this migration towards NGN, SIP has emerged as the major signalling protocol for IP (Internet Protocol) based NGN. In contrast to ISDN (Integrated Services Digital Network) and IN (Intelligent Network), this leads to significantly lower dependencies between the network and its services and makes it much easier and faster to implement new services. In addition, concepts from IT (Information Technology), notably SOA (Service-Oriented Architecture), have strongly influenced the telecommunication sector, driven by the amalgamation of IT and telecommunications. The benefit of applying SOA to telecommunication services is the acceleration of service creation and delivery. The main features of SOA are that services are reusable, discoverable, combinable and independently accessible from any location. Together these features offer broader flexibility and efficiency for varying demands on services. This thesis proposes a novel framework for service provisioning and composition in SIP-based peer-to-peer networks applying the principles of SOA. One key contribution of the framework is an approach that enables the provisioning and composition of services using SIP. Based on this, the framework provides a flexible and fast way to request the creation of composite services. Furthermore, the framework makes it possible to request and combine multimodal value-added services, which means they are no longer limited to particular media types such as audio, video and text. The proposed framework has been validated by a prototype implementation.
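
    Since both provisioning and composition requests are carried over SIP, a composite-service request can be pictured as an ordinary SIP INVITE whose body names the component services to combine. The sketch below uses standard RFC 3261 message syntax, but the body format, addresses and broker role are invented for illustration and are not the framework's actual composition protocol.

```python
# Hedged sketch: a minimal SIP INVITE (RFC 3261 syntax) carrying a
# hypothetical plain-text body that lists the component services to compose.
# Addresses and the composition body format are invented for the example.
import uuid

def build_composition_invite(requester, broker, services):
    body = "\r\n".join(f"service: {s}" for s in services) + "\r\n"
    lines = [
        f"INVITE sip:{broker} SIP/2.0",
        "Via: SIP/2.0/UDP client.example.org;branch=z9hG4bK776asdhds",
        "Max-Forwards: 70",
        f"To: <sip:{broker}>",
        f"From: <sip:{requester}>;tag=1928301774",
        f"Call-ID: {uuid.uuid4()}@client.example.org",
        "CSeq: 1 INVITE",
        f"Contact: <sip:{requester}>",
        "Content-Type: text/plain",
        f"Content-Length: {len(body)}",
        "",           # blank line separates headers from the body
        body,
    ]
    return "\r\n".join(lines)

print(build_composition_invite("alice@example.org",
                               "composer@broker.example.org",
                               ["speech-to-text", "translation", "text-to-speech"]))
```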

    Networked experiments and scientific resource sharing in cooperative knowledge spaces

    This publication is freely accessible with the permission of the rights owner due to an Alliance licence and a national licence (funded by the DFG, German Research Foundation). Cooperative knowledge spaces create new potential for the experimental fields in the natural sciences and engineering because they enhance the accessibility of experimental setups through virtual laboratories and remote technology, opening them up for collaborative and distributed usage. A concept for extending existing virtual knowledge spaces to the needs of the technological disciplines (“ViCToR‐Spaces”: Virtual Cooperation in Teaching and Research for Mathematics, Natural Sciences and Engineering) is presented. The integration of networked virtual laboratories and remote experiments (the “NanoLab approach”), as well as an approach to community‐driven content sharing and content development within virtual knowledge spaces (NanoWiki), are described.

    Interim research assessment 2003-2005 - Computer Science

    This report primarily serves as a source of information for the 2007 Interim Research Assessment Committee for Computer Science at the three technical universities in the Netherlands. The report also provides information for others interested in our research activities.

    Science of Digital Libraries (SciDL)

    Our purpose is to ensure that people and institutions better manage information through digital libraries (DLs). Thus we address a fundamental human and social need, which is particularly urgent in the modern Information (and Knowledge) Age. Our goal is to significantly advance both the theory and state-of-the-art of DLs (and other advanced information systems), thoroughly validating our approach using highly visible testbeds. Our research objective is to leverage our formal, theory-based approach to the problems of defining, understanding, modeling, building, personalizing, and evaluating DLs. We will construct models and tools based on that theory so organizations and individuals can easily create and maintain fully functional DLs, whose components can interoperate with corresponding components of related DLs. This research should be highly meritorious intellectually. We bring together a team of senior researchers with expertise in information retrieval, human-computer interaction, scenario-based design, personalization, and componentized system development and expect to make important contributions in each of those areas. Of crucial import, however, is that we will integrate our prior research and experience to achieve breakthrough advances in the field of DLs, regarding theory, methodology, systems, and evaluation. We will extend the 5S theory, which has identified five key dimensions or constructs underlying effective DLs: Streams, Structures, Spaces, Scenarios, and Societies. We will use that theory to describe and develop metamodels, models, and systems, which can be tailored to disciplines and/or groups, as well as personalized. We will disseminate our findings as well as provide toolkits as open source software, encouraging wide use. We will validate our work using testbeds, ensuring broad impact. We will put powerful tools into the hands of digital librarians so they may easily plan and configure tailored systems, to support an extensible set of services, including publishing, discovery, searching, browsing, recommending, and access control, handling diverse types of collections, and varied genres and classes of digital objects. With these tools, end-users will be able to design personal DLs. Testbeds are crucial to validate scientific theories and will be thoroughly integrated into SciDL research and evaluation. We will focus on two application domains, which together should allow comprehensive validation and increase the significance of SciDL's impact on scholarly communities. One is education (through CITIDEL); the other is libraries (through DLA and OCKHAM). CITIDEL deals with content from publishers (e.g., ACM Digital Library), corporate research efforts (e.g., CiteSeer), volunteer initiatives (e.g., DBLP, based on the database and logic programming literature), CS departments (e.g., NCSTRL, mostly technical reports), educational initiatives (e.g., Computer Science Teaching Center), and universities (e.g., theses and dissertations). DLA is a unit of the Virginia Tech library that virtually publishes scholarly communication such as faculty-edited journals and rare and unique resources including image collections and finding aids from Special Collections. The OCKHAM initiative, calling for simplicity in the library world, emphasizes a three-part solution: lightweight protocols, component-based development, and open reference models. It provides a framework to research the deployment of the SciDL approach in libraries. Thus our choice of testbeds also will ensure that our research will have additional benefit to and impact on the fields of computing and library and information science, supporting transformations in how we learn and deal with information.
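
    As a rough illustration of the 5S constructs named above, the sketch below models a digital library as a composition of Streams, Structures, Spaces, Scenarios and Societies. The class, fields and sample values are invented for the example and do not reflect the formal 5S metamodel.

```python
# Hypothetical, simplified rendering of the 5S constructs; the real 5S
# framework defines these formally, this is only an illustrative sketch.
from dataclasses import dataclass, field

@dataclass
class DigitalLibrary:
    streams: list = field(default_factory=list)      # raw content: text, audio, video byte streams
    structures: list = field(default_factory=list)   # how content is organized: metadata, collections
    spaces: list = field(default_factory=list)       # e.g. vector spaces for retrieval, UI layouts
    scenarios: list = field(default_factory=list)    # services and workflows: search, browse, recommend
    societies: list = field(default_factory=list)    # actors: patrons, librarians, software agents

# Example instantiation loosely inspired by an educational testbed.
example_dl = DigitalLibrary(
    streams=["pdf", "video-lecture"],
    structures=["Dublin Core metadata", "course collections"],
    spaces=["tf-idf vector space"],
    scenarios=["search", "browse", "recommend"],
    societies=["students", "teachers", "harvesting agents"],
)
print(example_dl.scenarios)
```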

    Processing Structured Hypermedia: A Matter of Style

    With the introduction of the World Wide Web in the early nineties, hypermedia has become the uniform interface to the wide variety of information sources available over the Internet. The full potential of the Web, however, can only be realized by building on the strengths of its underlying research fields. This book describes the areas of hypertext, multimedia, electronic publishing and the World Wide Web and points out fundamental similarities and differences in approaches towards the processing of information. It gives an overview of the dominant models and tools developed in these fields and describes the key interrelationships and mutual incompatibilities. In addition to a formal specification of a selection of these models, the book discusses the impact of the models described on the software architectures that have been developed for processing hypermedia documents. Two example hypermedia architectures are described in more detail: the DejaVu object-oriented hypermedia framework, developed at the VU, and CWI's Berlage environment for time-based hypermedia document transformations.