106 research outputs found
High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)
Computing plays an essential role in all aspects of high energy physics. As
computational technology evolves rapidly in new directions, and data throughput
and volume continue to follow a steep trend-line, it is important for the HEP
community to develop an effective response to a series of expected challenges.
In order to help shape the desired response, the HEP Forum for Computational
Excellence (HEP-FCE) initiated a roadmap planning activity with two key
overlapping drivers -- 1) software effectiveness, and 2) infrastructure and
expertise advancement. The HEP-FCE formed three working groups, 1) Applications
Software, 2) Software Libraries and Tools, and 3) Systems (including systems
software), to provide an overview of the current status of HEP computing and to
present findings and opportunities for the desired HEP computational roadmap.
The final versions of the reports are combined in this document, and are
presented along with introductory material. Comment: 72 pages
Blockchain-Driven Secure and Transparent Audit Logs
In enterprise business applications, large volumes of data are generated daily, encoding business logic and transactions. These applications are governed by various compliance requirements, making it essential to provide audit logs that store, track, and attribute data changes. In traditional audit log systems, logs are collected and stored in a centralized medium, making them prone to various attacks and manipulations, including physical access and remote vulnerability exploitation, which eventually allow unauthorized data modification and threaten the guarantees of audit logs. Moreover, given their centralized nature, such systems suffer from a single point of failure. To harden the security of audit logs in enterprise business applications, in this work we explore the design space of blockchain-driven secure and transparent audit logs. We highlight the possibility of ensuring stronger security and functional properties through a generic blockchain system for audit logs, realize this generic design through BlockAudit, which addresses both security and functional requirements, optimize BlockAudit through a multi-layered design in BlockTrail, and explore the design space further by assessing the functional and security properties of consensus algorithms through comprehensive evaluations. The first component of this work is BlockAudit, a design blueprint that enumerates structural, functional, and security requirements for blockchain-based audit logs. BlockAudit uses a consensus-driven approach to replicate audit logs across multiple application peers to prevent a single point of failure. BlockAudit also uses the Practical Byzantine Fault Tolerance (PBFT) protocol to achieve consensus over the state of the audit log data. We evaluate the performance of BlockAudit using event-driven simulations abstracted from IBM Hyperledger. Through this performance evaluation, we pinpoint the need for high scalability and high throughput.
We achieve those requirements by exploring various design optimizations to the flat structure of BlockAudit, inspired by real-world application characteristics. Namely, enterprise business applications often operate across non-overlapping geographical hierarchies, including cities, counties, states, and federations. Leveraging that, we applied a similar transformation to BlockAudit to fragment the flat blockchain system into layers of codependent hierarchies capable of processing transactions in parallel. Our hierarchical design, called BlockTrail, substantially reduced the storage and search complexity of the blockchain while increasing the throughput and scalability of the audit log system. We prototyped BlockTrail on a custom-built blockchain simulator and analyzed its performance under varying transaction and network sizes, demonstrating its advantages over BlockAudit. A recurring limitation in both BlockAudit and BlockTrail is the use of the PBFT consensus protocol, which has high complexity and limited scalability. Moreover, the performance of our proposed designs was evaluated only in computer simulations, which sidestepped the complexities of a real-world blockchain system. To address those shortcomings, we created a generic cloud-based blockchain testbed capable of executing five well-known consensus algorithms: Proof-of-Work, Proof-of-Stake, Proof-of-Elapsed-Time, Clique, and PBFT. For each consensus protocol, we instrumented our auditing system with various benchmarks to measure latency, throughput, and scalability, highlighting the trade-offs between the different protocols.
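The layered decomposition described above can be illustrated with a short sketch. The layer names, routing rule, and classes below are invented simplifications for illustration, not BlockTrail's actual implementation: each geographic unit keeps its own append-only chain, so transactions in different units can be appended in parallel and an audit query only scans the relevant unit's chain instead of one flat global log.

```python
# Illustrative sketch of hierarchical audit-log routing in the spirit of
# BlockTrail. All names and structures here are hypothetical.

class Chain:
    """A single append-only log for one geographic unit."""
    def __init__(self):
        self.blocks = []

    def append(self, tx):
        self.blocks.append(tx)

    def find(self, predicate):
        # Search is confined to this unit's blocks, not a flat global chain.
        return [tx for tx in self.blocks if predicate(tx)]

class HierarchicalLedger:
    def __init__(self):
        # One chain per (layer, unit), e.g. ("city", "Orlando").
        self.chains = {}

    def submit(self, tx):
        key = (tx["layer"], tx["unit"])
        self.chains.setdefault(key, Chain()).append(tx)

    def audit(self, layer, unit, predicate=lambda tx: True):
        chain = self.chains.get((layer, unit))
        return chain.find(predicate) if chain else []

ledger = HierarchicalLedger()
ledger.submit({"layer": "city", "unit": "Orlando", "op": "UPDATE", "record": 17})
ledger.submit({"layer": "city", "unit": "Miami", "op": "DELETE", "record": 3})
ledger.submit({"layer": "state", "unit": "FL", "op": "INSERT", "record": 99})

# Only Orlando's chain is scanned for this audit query.
print(len(ledger.audit("city", "Orlando")))
```

Because each chain is independent, consensus and storage costs scale with the size of a unit's own log rather than with the whole federation, which is the intuition behind the reported gains in throughput and search complexity.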
An object query language for multimedia federations
The Fischlar system provides a large centralised repository of multimedia files. As expansion is difficult in centralised systems and as different user groups have a requirement to define their own schemas, the EGTV (Efficient Global Transactions for Video) project was established to examine how the distribution of this database could be managed. The federated database approach is advocated, where the global schema is designed in a top-down approach, while all multimedia and textual data is stored in object-oriented (O-O) and object-relational (O-R) compliant databases.
This thesis investigates queries and updates on large multimedia collections organised in the database federation. The goal of this research is to provide a generic query language capable of interrogating global and local multimedia database schemas. Therefore, a new query language, EQL, is defined to facilitate the querying of object-oriented and object-relational database schemas in a database- and platform-independent manner, and it acts as a canonical language for database federations. A new canonical language was required as the existing query language standards (SQL:1999 and OQL) are generally incompatible and translation between them is not trivial. EQL is supported by a formally defined object algebra and specified semantics for query evaluation.
The ability to capture and store metadata of multiple database schemas is essential when constructing and querying a federated schema. Therefore we also present a new platform-independent metamodel for specifying multimedia schemas stored in both object-oriented and object-relational databases. This metadata information is later used for the construction of global schemas and during the evaluation of local and global queries.
Another important feature of any federated system is the ability to unambiguously define database schemas. The schema definition language for an EGTV database federation must be capable of specifying both object-oriented and object-relational schemas in a database-independent format. As XML represents a standard for encoding and distributing data across various platforms, a language based upon XML has been developed as a part of our research. The ODLx (Object Definition Language XML) language specifies a set of XML-based structures for defining complex database schemas capable of representing different multimedia types. The language is fully integrated with the EGTV metamodel, through which ODLx schemas can be mapped to O-O and O-R databases.
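The general idea of an XML-based, database-independent schema description can be sketched with Python's standard library. The element and attribute names below are invented for illustration and do not reproduce the actual ODLx syntax: a class with a multimedia-typed attribute is captured in a neutral structure and serialised to XML, which either an O-O or an O-R mapping layer could then consume.

```python
# Sketch: a platform-independent class description serialised to XML.
# Element names are hypothetical, not the real ODLx vocabulary.
import xml.etree.ElementTree as ET

schema = {
    "class": "VideoClip",
    "attributes": [
        {"name": "title", "type": "string"},
        {"name": "duration", "type": "integer"},
        {"name": "content", "type": "video"},  # multimedia-typed attribute
    ],
}

root = ET.Element("classDef", name=schema["class"])
for attr in schema["attributes"]:
    ET.SubElement(root, "attribute", name=attr["name"], type=attr["type"])

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

Because the description names only abstract types, the same document can be mapped to an O-O class or an O-R typed table without change, which is the property the thesis attributes to ODLx.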
Delivering an Olympic Games
The technology involved in the distribution of the results during an Olympic
Games is extremely complex. More than 900 servers, 1,000 network devices,
9,500 computers and 3,500 technologists are necessary to make it happen. Would
it be possible to implement a solution with fewer resources using cutting-edge technology?
The following study answers this question by designing two scalable
and high performance web-based applications to manage tournaments offering a
REST interface to stakeholders. The first solution architecture is based on Java
using frameworks such as Spring and Hibernate. The second solution architecture
uses JavaScript with frameworks such as NodeJS, AngularJS, Mongoose
and Express.
Long Range Financing Strategy for the CGIAR: Final Report of the Working Group
Executive summary of the longer-term strategy prepared by the Conservation Company under the direction of a Finance Committee working group whose members are listed in an annex. The report summarized here presents an operational plan for an enhanced Future Harvest organization. It addresses CGIAR public awareness, resource mobilization, and financing, and builds on work presented at ICW 1999 and MTM 2000. This summary document was circulated as background to the report of the Synthesis Group to ICW 2000. The full report was also distributed to members at ICW 2000 and is contained in a separate record, under the title 'Long Range Financing Strategy for the CGIAR'.
Evaluating Cloud Migration Options for Relational Databases
Migrating the database layer remains a key challenge when moving a software system to a new cloud provider. The database is often very large, poorly documented, and used to store business-critical information. Most cloud providers offer a variety of services for hosting databases, and the most suitable choice depends on the database size, workload, performance requirements, cost, and future business plans. Current approaches do not support this decision-making process, leading to errors and inaccurate comparisons between database migration options. The heterogeneity of databases and clouds means organisations often have to develop their own ad hoc process to compare the suitability of cloud services for their system. This is time-consuming, error-prone, and costly.
This thesis contributes to addressing these issues by introducing a three-phase methodology for evaluating cloud database migration options. The first phase defines the planning activities, such as considering downtime tolerance, existing infrastructure, and information sources. The second phase is a novel method for modelling the structure and the workload of the database being migrated. This addresses database heterogeneity by using a multi-dialect SQL grammar and annotated text-to-model transformations. The final phase consumes the models from the second phase and uses discrete-event simulation to predict migration cost, data transfer duration, and cloud running costs. This involved extending the existing CloudSim framework to simulate the data transfer to a new cloud database.
An extensive evaluation was performed to assess the effectiveness of each phase of the methodology and of the tools developed to automate their main steps. The modelling phase was applied to 15 real-world systems; compared to the leading approach, it showed substantial improvements in performance, model completeness, extensibility, and SQL support. The complete methodology was applied to four migrations of two real-world systems. The results showed that the methodology provided significantly improved accuracy over existing approaches.
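The kind of estimate the simulation phase produces can be approximated with back-of-the-envelope arithmetic. The sketch below is not the thesis's CloudSim extension; the function, its parameters, and all figures (database size, bandwidth, prices) are invented placeholders, and a real discrete-event simulation would additionally model contention, retries, and workload replay.

```python
# Rough estimate of transfer duration and costs for a relational database
# migration. All names and numbers are illustrative placeholders.

def migration_estimate(db_size_gb, bandwidth_mbps,
                       egress_per_gb, instance_per_hour):
    # Transfer time: database size in megabits divided by link speed.
    transfer_seconds = (db_size_gb * 8 * 1024) / bandwidth_mbps
    transfer_hours = transfer_seconds / 3600
    # One-off migration cost: egress charges for the data moved.
    migration_cost = db_size_gb * egress_per_gb
    # Steady-state cost: target instance running for a 30-day month.
    monthly_running_cost = instance_per_hour * 24 * 30
    return transfer_hours, migration_cost, monthly_running_cost

hours, move_cost, run_cost = migration_estimate(
    db_size_gb=500,         # hypothetical database size
    bandwidth_mbps=200,     # hypothetical sustained uplink
    egress_per_gb=0.09,     # hypothetical egress price
    instance_per_hour=0.40, # hypothetical instance price
)
print(f"transfer ~{hours:.1f} h, move ${move_cost:.2f}, monthly ${run_cost:.2f}")
```

Even this crude model shows why transfer duration (and hence downtime tolerance, considered in the planning phase) and running cost pull the decision in different directions for different cloud services.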
A methodology for developing scientific software applications in science gateways
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Distributed Computing Infrastructures (DCIs) have emerged as a viable and affordable solution to the computing needs of communities of practice that need to improve system performance or enhance the availability of their scientific applications. According to the literature, ease of access and several other issues relating to interoperability among different resources are the biggest challenges surrounding the use of these infrastructures. The traditional method of using a Command Line Interface (CLI) to access these resources is difficult and can make the learning curve quite steep. This can result in low uptake of DCIs, as it prevents potential users of the infrastructures from adopting the technology. Science Gateways have emerged as a viable option for realising high-level, scientific domain-specific user interfaces that hide the details of the underlying infrastructures and expose only the science-specific aspects of the applications to be executed in the various DCIs. A Science Gateway is a digital interface to advanced technologies that provides support for science and engineering research and education. The focus of this study is therefore to propose and implement a Methodology for dEveloping Scientific Software Applications in science GatEways (MESSAGE). This is achieved by testing an approach considered appropriate for developing applications in Science Gateways. In the course of this study, several Science Gateway functionalities obtained from the review of the literature, which may be utilised to provide services for different communities of practice, are highlighted. To implement the identified functionalities, this study utilises the proposed methodology.
To achieve this purpose, this research adopts the Catania Science Gateway Framework (CSGF) and the Future Gateway approach to implement the methods and ideas described in the proposed methodology, as well as the essential services of Science Gateways discussed throughout the thesis. In addition, three different sets of scientific software applications are utilised for the implementation of the proposed methodology. While the first application primarily serves as the case study for implementing the methodology discussed in this thesis, a second application is used to evaluate the entire process. Furthermore, several other real-life scientific applications, developed using two distinctly different Science Gateway frameworks, are also utilised for the purpose of evaluation. Subsequently, a revised MESSAGE methodology for developing scientific software applications in Science Gateways is discussed in a later chapter of this thesis. Following the implementation of both scientific software applications, which use portlets to execute single experiments, a study was also conducted to investigate ways in which Science Gateways may be utilised to execute multiple experiments in a distributed environment. Finally, similar to making different scientific software applications accessible and available worldwide to the communities that need them, the processes involved in making their associated research outputs (such as data, software, and results) easily accessible and readily available are also discussed. The main contribution of this thesis is the MESSAGE methodology for developing scientific software applications in Science Gateways. Other contributions made in different aspects of this research include a framework of the essential services required in generic Science Gateways and an approach to developing and executing multiple experiments (via Science Gateway interfaces) within a distributed environment.
To a lesser extent, this study also utilises the Open Access Document Repository (OADR) (and other related technologies) to demonstrate the accessibility and availability of research outputs associated with specific scientific software applications, thereby introducing the concept (and thus laying the foundation) of Open Science research.
Collaborative electronic purchasing within an SME consortium
The main function of purchasing is to assure the supply of required goods and services. Large organisations have both the finances and the knowledge to implement optimised purchasing resources, typically using information and communications technology (ICT) to improve efficiency. In contrast, within individual small and medium-sized enterprises, electronic purchasing is conducted predominantly through suppliers' sales web sites.