49 research outputs found

    Improving the Authentication Mechanism of Business to Consumer (B2C) Platform in a Cloud Computing Environment: Preliminary Findings

    The reliance of e-commerce infrastructure on the cloud computing environment has undoubtedly increased the security challenges facing web-based e-commerce portals. This has necessitated a built-in security feature, essentially to improve the authentication mechanism during the execution of dependent transactions. A comparative analysis of existing work on XML-based and non-XML signature-based security mechanisms for authentication in Business to Consumer (B2C) e-commerce showed the advantages of XML-based authentication, as well as its inherent weaknesses and limitations. It is against this background that this study, based on a review and meta-analysis of previous work, proposes an improved XML digital signature with the RSA algorithm, a novel algorithmic framework that strengthens the authentication of XML digital signatures in B2C e-commerce in a cloud-based environment. Our future work includes testing, validation, and simulation of the proposed authentication framework in Cisco's XML Management Interface with its inbuilt NETCONF feature. The evaluation will be conducted in conformity with international standards and guidelines, such as those of the W3C and NIST.
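
    The abstract does not spell out the improved scheme itself, so as a point of reference, here is a minimal sketch of the baseline mechanism it builds on: an enveloped XML digital signature with RSA, using the standard Java XML-DSig API (JSR 105). The file name order.xml and the freshly generated key pair are illustrative placeholders, not part of the paper's framework.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Collections;
import javax.xml.crypto.dsig.CanonicalizationMethod;
import javax.xml.crypto.dsig.DigestMethod;
import javax.xml.crypto.dsig.Reference;
import javax.xml.crypto.dsig.SignedInfo;
import javax.xml.crypto.dsig.Transform;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.dom.DOMSignContext;
import javax.xml.crypto.dsig.keyinfo.KeyInfo;
import javax.xml.crypto.dsig.keyinfo.KeyInfoFactory;
import javax.xml.crypto.dsig.spec.C14NMethodParameterSpec;
import javax.xml.crypto.dsig.spec.TransformParameterSpec;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;

public class SignTransaction {
    public static void main(String[] args) throws Exception {
        // Parse the B2C transaction document to be signed (file name is illustrative).
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse("order.xml");

        // RSA key pair; in a real deployment this comes from the merchant's keystore.
        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");

        // One Reference over the whole document (URI "") with an enveloped
        // transform, digested with SHA-256.
        Reference ref = fac.newReference("",
                fac.newDigestMethod(DigestMethod.SHA256, null),
                Collections.singletonList(
                        fac.newTransform(Transform.ENVELOPED, (TransformParameterSpec) null)),
                null, null);

        // SignedInfo: exclusive canonicalization, RSA with SHA-256.
        SignedInfo si = fac.newSignedInfo(
                fac.newCanonicalizationMethod(CanonicalizationMethod.EXCLUSIVE,
                        (C14NMethodParameterSpec) null),
                fac.newSignatureMethod("http://www.w3.org/2001/04/xmldsig-more#rsa-sha256", null),
                Collections.singletonList(ref));

        // Publish the RSA public key in <KeyInfo> so the receiver can verify.
        KeyInfoFactory kif = fac.getKeyInfoFactory();
        KeyInfo ki = kif.newKeyInfo(
                Collections.singletonList(kif.newKeyValue(kp.getPublic())));

        // Append the <Signature> element under the document root and sign.
        fac.newXMLSignature(si, ki)
           .sign(new DOMSignContext(kp.getPrivate(), doc.getDocumentElement()));

        // Serialize the signed document.
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(System.out));
    }
}
```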

    XML Signature Wrapping Still Considered Harmful: A Case Study on the Personal Health Record in Germany

    XML Signature Wrapping (XSW) has remained a relevant threat to web services for 15 years. Using the Personal Health Record (PHR), which is currently under development in Germany, we investigate a current SOAP-based web services system as a case study. In doing so, we highlight several deficiencies in defending against XSW. Using this real-world contemporary example as motivation, we introduce a guideline for more secure XML signature processing that gives practitioners easier access to the effective countermeasures identified in the current state of research. Comment: Accepted for IFIP SEC 2021
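
    As a concrete illustration of the kind of countermeasure such guidelines recommend, the sketch below (plain Java XML-DSig; the class name and the single-signature policy are our own assumptions, not the paper's prescribed API) validates a signature and then hands the application the exact node the verified Reference covered, instead of re-locating the payload with an Id or XPath lookup that a wrapping attack could redirect to a planted copy.

```java
import java.security.PublicKey;
import java.util.Iterator;
import javax.xml.crypto.NodeSetData;
import javax.xml.crypto.dsig.Reference;
import javax.xml.crypto.dsig.XMLSignature;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.dom.DOMValidateContext;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class XswAwareValidator {

    /** Validates the (single) signature and returns the exact element it covers. */
    public static Element validateAndGetPayload(Document doc, PublicKey key)
            throws Exception {
        NodeList sigs = doc.getElementsByTagNameNS(XMLSignature.XMLNS, "Signature");
        if (sigs.getLength() != 1)
            throw new SecurityException("expected exactly one Signature element");

        DOMValidateContext ctx = new DOMValidateContext(key, sigs.item(0));
        // If the Reference uses an Id-based URI ("#..."), the Id attribute must
        // first be registered via ctx.setIdAttributeNS(...).
        // Ask the implementation to remember what each Reference dereferenced to.
        ctx.setProperty("javax.xml.crypto.dsig.cacheReference", Boolean.TRUE);

        XMLSignature sig = XMLSignatureFactory.getInstance("DOM")
                .unmarshalXMLSignature(ctx);
        if (!sig.validate(ctx))
            throw new SecurityException("signature validation failed");

        // XSW countermeasure: do NOT look the payload up again by Id or XPath;
        // a wrapping attack plants a second element that such a lookup may find.
        // Instead, take the node set that the verified Reference actually hashed.
        Reference ref = (Reference) sig.getSignedInfo().getReferences().get(0);
        NodeSetData signed = (NodeSetData) ref.getDereferencedData();
        for (Iterator it = signed.iterator(); it.hasNext(); ) {
            Object n = it.next();
            if (n instanceof Element)
                return (Element) n;  // first element node covered by the signature
        }
        throw new SecurityException("signature does not cover an element");
    }
}
```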

    Processing Structured Hypermedia : A Matter of Style

    With the introduction of the World Wide Web in the early nineties, hypermedia has become the uniform interface to the wide variety of information sources available over the Internet. The full potential of the Web, however, can only be realized by building on the strengths of its underlying research fields. This book describes the areas of hypertext, multimedia, electronic publishing and the World Wide Web and points out fundamental similarities and differences in approaches towards the processing of information. It gives an overview of the dominant models and tools developed in these fields and describes the key interrelationships and mutual incompatibilities. In addition to a formal specification of a selection of these models, the book discusses the impact of the models described on the software architectures that have been developed for processing hypermedia documents. Two example hypermedia architectures are described in more detail: the DejaVu object-oriented hypermedia framework, developed at the VU, and CWI's Berlage environment for time-based hypermedia document transformations.

    Connected Information Management

    Society is currently inundated with more information than ever, making efficient management a necessity. Alas, most current information management suffers from several levels of disconnectedness: applications partition data into segregated islands; small notes don't fit into traditional application categories; navigating the data is different for each kind of data; and data is either available on a certain computer or only online, but rarely both. Connected information management (CoIM) is an approach to information management that avoids these kinds of disconnectedness. The core idea of CoIM is to keep all information in a central repository, with generic means of organization such as tagging. The heterogeneity of data is taken into account by offering specialized editors. The central repository eliminates the islands of application-specific data and is formally grounded by a CoIM model. The foundation for structured data is an RDF repository. The RDF editing meta-model (REMM) enables form-based editing of this data, similar to database applications such as MS Access. Further kinds of data are supported by extending RDF, as follows. Wiki text is stored as RDF and can both contain structured text and be combined with structured data. Files are also supported by the CoIM model and are kept externally. Notes can be quickly captured and annotated with meta-data. Generic means of organization and navigation apply to all kinds of data. Ubiquitous availability of data is ensured via two CoIM implementations, the web application HYENA/Web and the desktop application HYENA/Eclipse. All data can be synchronized between these applications. The applications were used to validate the CoIM ideas.
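
    To make the "everything in one RDF repository" idea concrete, here is a minimal sketch of how a quickly captured, tagged note could be stored as plain RDF triples. It uses Apache Jena and an invented coim: namespace purely for illustration; it is not the HYENA code base.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.vocabulary.DCTerms;

public class NoteAsRdf {
    public static void main(String[] args) {
        String ns = "http://example.org/coim/";  // illustrative namespace
        Model model = ModelFactory.createDefaultModel();
        model.setNsPrefix("coim", ns);
        Property tag = model.createProperty(ns, "tag");

        // A quickly captured note becomes an ordinary RDF resource, so the same
        // generic tagging and navigation machinery applies to it as to any data.
        model.createResource(ns + "note/42")
             .addProperty(DCTerms.title, "Call the library about the RDF book")
             .addProperty(DCTerms.created, "2009-04-01")
             .addProperty(tag, "todo")
             .addProperty(tag, "library");

        model.write(System.out, "TURTLE");
    }
}
```

    Because the note is just triples, a tag query or a form-based editor in the REMM style can treat it exactly like any other structured data in the repository.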

    Processing Structured Hypermedia - A Matter of Style

    Vliet, J.C. van [Promotor]; Eliens, A. [Copromotor]

    A Transaction Assurance Framework For Web Service

    Trust assurance for customers of online transactions is an important, but not well implemented, concept for the growth of confidence in electronic transactions. In an online world where customers do not personally know the companies they seek to do business with, there is real risk involved in providing an unknown service with personal information and payment details. The risks faced by a customer are compounded when multiple services are involved in a single transaction. This dissertation provides mechanisms that reduce the risks faced by a client in online transactions by giving him or her access to information about the services involved and the ability to control or prescribe how the transaction uses those services. The dissertation uses electronic transactions legislation to ground a trust assurance protocol and minimize the assumptions that have to be made. By basing the protocol on legislation, no information that is not already required by law is used in the protocol. A trust assurance protocol is presented so that the client can establish which services are involved in a transaction and begin to determine whether or not he or she is willing to conduct business with them. A trust model that calculates an assurance measure for services is developed so that the client can automatically establish a measure of trust for a service based on external perceptions of the service and his or her own personal experience. A simulation environment was created and used to monitor the services involved in a transaction, to evaluate the trust assurance protocol, and to gain experience with the trust calculation that the client computes. Vocabularies that simplify and standardize descriptions of personal information, business types, and the legal structure imposed on Web services offering goods or services online are presented to reduce the ambiguity involved in gathering information from different online sources. The vocabularies are also a cornerstone of the trust assurance protocol, providing the information necessary to compute the trust value of a Web service. Results of the trust assurance protocol are obtained and evaluated against the qualitative requirements of providing assurances to clients, and confirm that the protocol is feasible to deploy in terms of the overhead it places on a transaction. This dissertation finds that a trust assurance protocol is necessary to provide the client with information that he or she legally has access to, and that the trust model can provide a calculable measure of trust that the client can use to compare Web services.
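
    The abstract does not give the trust formula itself, but its description, external perceptions of the service blended with the client's own experience, suggests a weighted combination of reputation and history. A minimal sketch of such a measure, with an assumed [0,1] scale and weighting factor alpha that are our own illustrative choices rather than the dissertation's model, might look as follows.

```java
public class TrustModel {

    /** Fraction of weight given to the client's own experience (0..1). */
    private final double alpha;

    public TrustModel(double alpha) {
        if (alpha < 0 || alpha > 1)
            throw new IllegalArgumentException("alpha must be in [0,1]");
        this.alpha = alpha;
    }

    /**
     * @param externalRatings ratings of the service from other sources, each in [0,1]
     * @param ownSuccesses    the client's own successful transactions with the service
     * @param ownTotal        the client's own total transactions with the service
     * @return a trust score in [0,1]
     */
    public double trust(double[] externalRatings, int ownSuccesses, int ownTotal) {
        // External perception: mean of third-party ratings, neutral 0.5 if none.
        double external = 0;
        for (double r : externalRatings) external += r;
        external = externalRatings.length == 0 ? 0.5 : external / externalRatings.length;

        // With no personal history, fall back entirely on external perception.
        if (ownTotal == 0) return external;
        double personal = (double) ownSuccesses / ownTotal;
        return alpha * personal + (1 - alpha) * external;
    }
}
```

    A client comparing candidate services for a transaction would then simply compare scores, e.g. new TrustModel(0.7).trust(ratings, 8, 10) for each service.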

    Semantic Web methods for knowledge management [online]


    A detailed investigation of interoperability for web services

    The thesis presents a qualitative survey of web services' interoperability, offering a snapshot of development and trends at the end of 2005. It starts by examining the beginnings of web services in earlier distributed computing and middleware technologies, determining the distance from these approaches evident in current web-services architectures. It establishes a working definition of web services, examining the protocols that now seek to define it and the extent to which they contribute to its most crucial feature, interoperability. The thesis then considers the REST approach to web services as being in a class of its own, concluding that this approach to interoperable distributed computing is not only the simplest but also the most interoperable. It looks briefly at interoperability issues raised by technologies in the wider arena of Service-Oriented Architecture. The chapter on protocols is complemented by a chapter that validates the qualitative findings by examining web services in practice, implemented with a variety of toolkits and on different platforms. Included in the study is a preliminary examination of JAX-WS, the replacement for JAX-RPC, which was still under development at the time. Although the main language of implementation is Java, the study includes services in C# and PHP and one implementation of a client using a Firefox extension. The study concludes that different forms of web service may co-exist with earlier middleware technologies. While remaining aware that there are still pitfalls that might yet derail the movement towards greater interoperability, the conclusion sounds an optimistic note that recent cooperation between different vendors may yet result in a solution that achieves interoperability through core web-service standards.
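
    The REST conclusion is easy to illustrate: a RESTful invocation is nothing more than an HTTP verb applied to a resource URI, so any HTTP client on any platform can issue it. The sketch below uses the modern java.net.http client (Java 11+, so postdating the thesis) against an invented endpoint; a comparable SOAP call would require an envelope, a WSDL, and generated stubs.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestCall {
    public static void main(String[] args) throws Exception {
        // GET a resource representation; the endpoint is illustrative.
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("https://example.org/orders/42"))
                           .GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode());
        System.out.println(resp.body());
    }
}
```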