Web service-based exploration of Earth Observation time-series data for analyzing environmental changes
The increasing amount of Earth observation (EO) data demands fundamental changes in order to properly handle the number of observations and their storage size. Due to open data strategies and the increasing size of data archives, a new market has developed to provide analysis- and application-ready data, services, and platforms. It is not only scientists and geospatial processing specialists who work with EO data; stakeholders, thematic experts, and software developers do too. There is thus a great demand for improving the discovery, access, and analysis of EO data in line with new possibilities of web-based infrastructures. With the aim of bridging the gap between users and EO data archives, various topics have been researched: 1) user requirements and their relation to web services and output formats; 2) technical requirements for the discovery and access of multi-source EO time-series data; and 3) management of EO time-series data focusing on application-ready data. Web services for EO data discovery and access, time-series data processing, and EO platforms have been reviewed and related to the requirements of users. The diversity of data providers and web services requires specific knowledge of systems and specifications. Although service specifications for the discovery of EO data exist, improvements are still necessary to meet the requirements of different user personas. For the processing of EO time-series data, various data formats and processing steps need to be handled. Still, there remains a gap between EO time-series data access and analysis tools, which needs to be addressed to simplify work with such data. Within this thesis, web services for the discovery, access, and analysis of EO time-series data have been described and evaluated based on different user requirements. Standardized web service specifications and output and data formats are proposed, introduced, and described to meet the needs of the different user personas.
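The discovery step the abstract describes — finding EO products that match an area of interest and a time window — can be sketched as follows. The catalogue records, product identifiers, and field names are invented for illustration; real discovery services (e.g. OGC CSW or OpenSearch endpoints) return far richer metadata.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical catalogue record; fields are a minimal subset of what a
# real EO discovery service would expose.
@dataclass
class EORecord:
    product_id: str
    bbox: tuple          # (min_lon, min_lat, max_lon, max_lat)
    acquired: date

def discover(records, aoi, start, end):
    """Return records whose footprint intersects the area of interest
    and whose acquisition date falls inside the time window."""
    def intersects(a, b):
        # Two boxes intersect unless one lies entirely to one side of the other.
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])
    return [r for r in records
            if intersects(r.bbox, aoi) and start <= r.acquired <= end]

catalogue = [
    EORecord("S2A_T32UQD_20200105", (11.0, 50.0, 12.0, 51.0), date(2020, 1, 5)),
    EORecord("S2A_T32UQD_20200312", (11.0, 50.0, 12.0, 51.0), date(2020, 3, 12)),
    EORecord("S2B_T33UVT_20200106", (15.0, 48.0, 16.0, 49.0), date(2020, 1, 6)),
]

hits = discover(catalogue, aoi=(11.5, 50.5, 11.8, 50.8),
                start=date(2020, 1, 1), end=date(2020, 1, 31))
print([r.product_id for r in hits])   # → ['S2A_T32UQD_20200105']
```

Spatio-temporal filtering of this kind is the common core behind the otherwise heterogeneous service interfaces the thesis reviews.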
An Extensible and Personalized Approach to QoS-enabled Semantic Web Service Discovery
We present a framework for the autonomous discovery and selection of Semantic Web services based on their QoS properties. The novelty of our approach is the wide use of semantic technologies for a customizable discovery, which enables both service users and providers to flexibly specify their matching models for QoS and the corresponding environmental conditions. In the presented approach, the discovery and ranking of services can be personalized via domain ontologies detailing the user's preferences and the provider's specification. The discovery component is modeled as an adaptive query processing system in which the basic steps of filtering, matchmaking, reputation-based QoS assessment, and ranking of services correspond to logical algebraic operators. This facilitates the introduction of different discovery algorithms and the automatic generation of appropriately parallelized matchmaking evaluations, enabling our solution to scale even under unpredictable arrival rates of user queries against high numbers of published service descriptions in the system.
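The operator pipeline described above — filtering, QoS matchmaking, and ranking composed as algebraic steps — can be sketched minimally as below. The service records, QoS attributes, and scoring weights are invented for illustration, and semantic matching is reduced to plain value comparison.

```python
# Toy service registry; real descriptions would be semantic, not flat dicts.
services = [
    {"name": "A", "category": "weather", "latency_ms": 120, "reputation": 0.9},
    {"name": "B", "category": "weather", "latency_ms": 40,  "reputation": 0.7},
    {"name": "C", "category": "finance", "latency_ms": 30,  "reputation": 0.95},
]

def filter_op(svcs, category):                      # functional filtering step
    return [s for s in svcs if s["category"] == category]

def matchmake_op(svcs, max_latency):                # QoS constraint matching
    return [s for s in svcs if s["latency_ms"] <= max_latency]

def rank_op(svcs, w_latency=0.5, w_reputation=0.5): # weighted ranking step
    def score(s):
        return w_reputation * s["reputation"] - w_latency * s["latency_ms"] / 1000
    return sorted(svcs, key=score, reverse=True)

# Operators compose like a query plan, so they can be reordered or parallelized.
result = rank_op(matchmake_op(filter_op(services, "weather"), max_latency=100))
print([s["name"] for s in result])   # → ['B']
```

Because each step is a pure function over a service list, an adaptive query processor can swap in alternative matchmaking algorithms or evaluate matchmaking branches in parallel, as the abstract suggests.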
Controlling for contamination in re-sequencing studies with a reproducible web-based phylogenetic approach
Polymorphism discovery is a routine application of next-generation sequencing technology where multiple samples are sent to a service provider for library preparation, subsequent sequencing, and bioinformatic analyses. The decreasing cost and advances in multiplexing approaches have made it possible to analyze hundreds of samples at a reasonable cost. However, because of the manual steps involved in the initial processing of samples and handling of sequencing equipment, cross-contamination remains a significant challenge. It is especially problematic in cases where polymorphism frequencies do not adhere to diploid expectations, for example in heterogeneous tumor samples, organellar genomes, and bacterial and viral sequencing. In these instances, low levels of contamination may be readily mistaken for polymorphisms, leading to false results. Here we describe practical steps designed to reliably detect contamination and uncover its origin, and also provide new, Galaxy-based, readily accessible computational tools and workflows for quality control. All results described in this report can be reproduced interactively on the web as described at http://usegalaxy.org/contamination
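The core intuition above — that in haploid or organellar data every site should sit near 0% or 100% alternate-allele frequency, so intermediate low frequencies are contamination candidates — can be sketched as a simple frequency screen. The thresholds, positions, and read counts below are invented for illustration and are not the paper's method.

```python
# Illustrative screen: flag sites whose alternate-allele frequency falls in a
# suspicious intermediate band (arbitrary thresholds, not from the paper).
def contamination_candidates(site_counts, low=0.02, high=0.30):
    """site_counts: {position: (ref_reads, alt_reads)}.
    Return (position, frequency) pairs inside the suspicious band."""
    flagged = []
    for pos, (ref, alt) in sorted(site_counts.items()):
        depth = ref + alt
        if depth == 0:
            continue                       # no coverage, nothing to test
        freq = alt / depth
        if low <= freq <= high:
            flagged.append((pos, round(freq, 3)))
    return flagged

counts = {101: (99, 1), 202: (85, 15), 303: (1, 99), 404: (90, 10)}
print(contamination_candidates(counts))    # → [(202, 0.15), (404, 0.1)]
```

Site 303 is a genuine near-fixed variant and passes; sites 202 and 404 show the intermediate frequencies that, in a haploid sample, warrant a follow-up such as the phylogenetic origin-tracing the paper describes.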
3PAC: Enforcing Access Policies for Web Services
Web services fail to deliver on the promise of ubiquitous deployment and seamless interoperability due to the lack of a uniform, standards-based approach to all aspects of security. In particular, the enforcement of access policies in a service-oriented architecture is not addressed adequately. We present a novel approach to the distribution and enforcement of credentials-based access policies for Web services (3PAC), which scales well and can be implemented in existing deployments.
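Credentials-based policy enforcement of the kind 3PAC targets can be sketched as a deny-by-default check of presented credentials against per-operation policies. The policy format, operation names, and attributes below are invented for illustration and do not reflect 3PAC's actual policy language.

```python
# Hypothetical per-operation policies: each attribute maps to allowed values.
policies = {
    "orders.create": {"role": {"buyer", "admin"}},
    "orders.delete": {"role": {"admin"}},
}

def enforce(operation, credentials):
    """Grant access only if every policy attribute is satisfied by the
    presented credentials; unknown operations are denied by default."""
    policy = policies.get(operation)
    if policy is None:
        return False
    return all(credentials.get(attr) in allowed
               for attr, allowed in policy.items())

print(enforce("orders.create", {"role": "buyer"}))   # → True
print(enforce("orders.delete", {"role": "buyer"}))   # → False
```

Deny-by-default is the key design choice here: an operation with no known policy, or a credential missing an attribute, never gains access.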
ServeNet: A Deep Neural Network for Web Services Classification
Automated service classification plays a crucial role in service discovery,
selection, and composition. Machine learning has been widely used for service
classification in recent years. However, the performance of conventional
machine learning methods highly depends on the quality of manual feature
engineering. In this paper, we present a novel deep neural network that
automatically abstracts low-level representations of both the service name and
the service description into high-level merged features, without feature
engineering or length limitations, and then predicts the service classification
across 50 service categories. To demonstrate the effectiveness of our approach,
we conduct a comprehensive experimental study comparing 10 machine learning
methods on 10,000 real-world web services. The results show that the proposed
deep neural network achieves higher classification accuracy and is more robust
than other machine learning methods.
Comment: Accepted by ICWS'2
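The paper's model is a deep network; as a deliberately simple stand-in, the sketch below shows the same task shape — classifying a service from its merged name and description text — using keyword overlap against per-category vocabularies. The categories, terms, and service examples are all invented.

```python
# Toy vocabularies; ServeNet learns such features rather than hand-coding them.
CATEGORY_TERMS = {
    "Weather":  {"weather", "forecast", "temperature"},
    "Payments": {"payment", "invoice", "checkout"},
}

def classify(name, description):
    """Merge name and description into one token set, then pick the
    category with the largest keyword overlap."""
    tokens = set((name + " " + description).lower().split())
    scores = {cat: len(tokens & terms) for cat, terms in CATEGORY_TERMS.items()}
    return max(scores, key=scores.get)

print(classify("AcmePay", "checkout and invoice payment API"))   # → Payments
print(classify("SkyCast", "seven day weather forecast"))         # → Weather
```

The contrast with this toy is exactly the paper's point: manual vocabularies are the "feature engineering" whose quality conventional methods depend on, which the deep network replaces with learned representations.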
The Web SSO Standard OpenID Connect: In-Depth Formal Security Analysis and Security Guidelines
Web-based single sign-on (SSO) services such as Google Sign-In and Log In
with Paypal are based on the OpenID Connect protocol. This protocol enables
so-called relying parties to delegate user authentication to so-called identity
providers. OpenID Connect is one of the newest and most widely deployed single
sign-on protocols on the web. Despite its importance, it has not received much
attention from security researchers so far, and in particular, has not
undergone any rigorous security analysis.
In this paper, we carry out the first in-depth security analysis of OpenID
Connect. To this end, we use a comprehensive generic model of the web to
develop a detailed formal model of OpenID Connect. Based on this model, we then
precisely formalize and prove central security properties for OpenID Connect,
including authentication, authorization, and session integrity properties.
In our modeling of OpenID Connect, we employ security measures in order to
avoid attacks on OpenID Connect that have been discovered previously and new
attack variants that we document for the first time in this paper. Based on
these security measures, we propose security guidelines for implementors of
OpenID Connect. Our formal analysis demonstrates that these guidelines are in
fact effective and sufficient.
Comment: An abridged version appears in CSF 2017. Parts of this work extend
the web model presented in arXiv:1411.7210, arXiv:1403.1866,
arXiv:1508.01719, and arXiv:1601.0122
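One concrete security measure behind the session-integrity properties discussed above is the relying party's validation of ID-token claims, including the nonce that binds a token to the browser session. The sketch below shows these claim checks only; it omits signature verification, which a real relying party must also perform, and all claim values are illustrative.

```python
import time

def validate_id_token(claims, expected_issuer, client_id, expected_nonce,
                      now=None):
    """Check the core ID-token claims an OpenID Connect relying party
    must verify: issuer, audience, nonce (session binding), and expiry."""
    now = now if now is not None else time.time()
    return (claims.get("iss") == expected_issuer
            and claims.get("aud") == client_id
            and claims.get("nonce") == expected_nonce   # binds token to session
            and claims.get("exp", 0) > now)

token = {"iss": "https://op.example", "aud": "client-1",
         "nonce": "n-42", "exp": 4102444800}            # far-future expiry

print(validate_id_token(token, "https://op.example", "client-1", "n-42"))   # → True
print(validate_id_token(token, "https://op.example", "client-1", "wrong"))  # → False
```

Skipping any one of these checks re-enables known attack classes — e.g. a missing nonce check permits replaying a token captured in another session — which is why the formal model treats them as necessary measures.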
A linked data-driven & service-oriented architecture for sharing educational resources
The two fundamental aims of managing educational resources are to make resources reusable and interoperable and to enable Web-scale sharing of resources across learning communities. Currently, a variety of approaches have been proposed to expose and manage educational resources and their metadata on the Web. These are usually based on heterogeneous metadata standards and schemas, such as IEEE LOM or ADL SCORM, and diverse repository interfaces such as OAI-PMH or SQI. Moreover, controlled vocabularies and available data sets are still rarely used, leaving unstructured text as the widespread means of describing resources. On the other hand, the Linked Data approach has proven to offer a set of successful principles with the potential to alleviate the aforementioned issues. In this paper, we introduce an architecture and prototype fundamentally based on (a) Linked Data principles and (b) service orientation to resolve the integration issues of sharing educational resources.
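The shift the abstract argues for — describing resources with controlled vocabularies and shared URIs instead of unstructured text — can be sketched as emitting Linked Data triples. The namespace below is the real Dublin Core terms vocabulary, but the resource URI and subject URI are invented examples.

```python
# Dublin Core terms namespace (a real, widely used vocabulary).
DC = "http://purl.org/dc/terms/"

def resource_triples(uri, title, subject_uri):
    """Describe an educational resource as (subject, predicate, object)
    triples; the subject term is a URI, not free text."""
    return [
        (uri, DC + "title", title),
        (uri, DC + "subject", subject_uri),   # controlled vocabulary, not a string
    ]

triples = resource_triples(
    "http://example.org/resource/42",                                  # invented
    "Introduction to RDF",
    "http://dbpedia.org/resource/Resource_Description_Framework",      # shared URI
)
for s, p, o in triples:
    print(s, p, o)
```

Because the subject is a dereferenceable URI shared across repositories, two learning communities using different metadata schemas can still discover that they describe the same topic — the interoperability the paper's architecture builds on.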
An Architecture for Integrated Intelligence in Urban Management using Cloud Computing
With the emergence of new methodologies and technologies it has now become
possible to manage large amounts of environmental sensing data and apply new
integrated computing models to acquire information intelligence. This paper
advocates the application of cloud capacity to support the information,
communication and decision making needs of a wide variety of stakeholders in
the complex business of the management of urban and regional development. The
complexity lies in the interactions and impacts embodied in the concept of the
urban-ecosystem at various governance levels. This highlights the need for more
effective integrated environmental management systems. This paper offers a
user-orientated approach based on requirements for an effective management of
the urban-ecosystem and the potential contributions that can be supported by
the cloud computing community. Furthermore, the commonality of the influence of
the drivers of change at the urban level offers the opportunity for the cloud
computing community to develop generic solutions that can serve the needs of
hundreds of cities from Europe and indeed globally.
Comment: 6 pages, 3 figures
Developing an open data portal for the ESA climate change initiative
We introduce the rationale for, and architecture of, the European Space Agency Climate Change Initiative (CCI) Open Data Portal (http://cci.esa.int/data/). The Open Data Portal hosts a set of richly diverse datasets – 13 “Essential Climate Variables” – from the CCI programme in a consistent and harmonised form, and provides a single point of access to the (>100 TB) data for broad dissemination to an international user community. These data have been produced by a range of different institutions and vary in both scientific and spatio-temporal characteristics. This heterogeneity of the data, together with the range of services to be supported, presented significant technical challenges.
An iterative development methodology was key to tackling these challenges: the system developed exploits a workflow which takes data that conforms to the CCI data specification, ingests it into a managed archive, and uses both manual and automatically generated metadata to support data discovery, browse, and delivery services. It utilises both Earth System Grid Federation (ESGF) data nodes and the Open Geospatial Consortium Catalogue Service for the Web (OGC-CSW) interface, serving data into both the ESGF and the Global Earth Observation System of Systems (GEOSS). A key part of the system is a new vocabulary server, populated with CCI-specific terms and relationships, which ties the OGC-CSW and ESGF search services together; it was developed as part of a dialogue between domain scientists and linked data specialists. These services have enabled the development of a unified user interface for graphical search and visualisation – the CCI Open Data Portal Web Presence.
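The vocabulary server's bridging role — translating one CCI term into the query facets each catalogue interface understands — can be sketched as a term-to-facet mapping. The term, backend names, and facet strings below are invented for illustration, not the portal's actual vocabulary.

```python
# Hypothetical vocabulary entries: one climate term maps to the facet syntax
# of each search backend the portal federates.
VOCAB = {
    "sea_surface_temperature": {"esgf": "variable=tos", "csw": "q=SST"},
}

def build_queries(term):
    """Translate a single vocabulary term into per-backend query facets,
    so one user search can fan out to both catalogue services."""
    return dict(VOCAB.get(term, {}))

print(build_queries("sea_surface_temperature"))
# → {'esgf': 'variable=tos', 'csw': 'q=SST'}
```

A shared mapping of this shape is what lets a single search box in the portal's web presence drive both the ESGF and OGC-CSW services, despite their different query languages.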