106 research outputs found

    High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers -- 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document, and are presented along with introductory material. Comment: 72 pages.

    Blockchain-Driven Secure and Transparent Audit Logs

    In enterprise business applications, large volumes of data are generated daily, encoding business logic and transactions. Those applications are governed by various compliance requirements, making it essential to provide audit logs to store, track, and attribute data changes. In traditional audit log systems, logs are collected and stored in a centralized medium, making them prone to various forms of attacks and manipulations, including physical access and remote vulnerability exploitation attacks, eventually allowing unauthorized data modification and threatening the guarantees of audit logs. Moreover, given their centralized nature, such systems are characterized by a single point of failure. To harden the security of audit logs in enterprise business applications, in this work we explore the design space of blockchain-driven secure and transparent audit logs. We highlight the possibility of ensuring stronger security and functional properties through a generic blockchain system for audit logs, realize this generic design through BlockAudit, which addresses both security and functional requirements, optimize BlockAudit through a multi-layered design in BlockTrail, and explore the design space further by assessing the functional and security properties of the consensus algorithms through comprehensive evaluations.

    The first component of this work is BlockAudit, a design blueprint that enumerates structural, functional, and security requirements for blockchain-based audit logs. BlockAudit uses a consensus-driven approach to replicate audit logs across multiple application peers to prevent a single point of failure. BlockAudit also uses the Practical Byzantine Fault Tolerance (PBFT) protocol to achieve consensus over the state of the audit log data. We evaluate the performance of BlockAudit using event-driven simulations, abstracted from IBM Hyperledger. Through the performance evaluation of BlockAudit, we pinpoint a need for high scalability and high throughput. We achieve those requirements by exploring various design optimizations to the flat structure of BlockAudit, inspired by real-world application characteristics. Namely, enterprise business applications often operate across non-overlapping geographical hierarchies, including cities, counties, states, and federations. Leveraging that, we applied a similar transformation to BlockAudit to fragment the flat blockchain system into layers of codependent hierarchies capable of processing transactions in parallel. Our hierarchical design, called BlockTrail, reduces the storage and search complexity for blockchains substantially while increasing the throughput and scalability of the audit log system. We prototyped BlockTrail on a custom-built blockchain simulator and analyzed its performance under varying transaction and network sizes, demonstrating its advantages over BlockAudit.

    A recurring limitation in both BlockAudit and BlockTrail is the use of the PBFT consensus protocol, which has high complexity and limited scalability. Moreover, the performance of our proposed designs was only evaluated in computer simulations, which sidestepped the complexities of a real-world blockchain system. To address those shortcomings, we created a generic cloud-based blockchain testbed capable of executing five well-known consensus algorithms: Proof-of-Work, Proof-of-Stake, Proof-of-Elapsed-Time, Clique, and PBFT. For each consensus protocol, we instrumented our auditing system with various benchmarks to measure latency, throughput, and scalability, highlighting the trade-offs among the different protocols.
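
    A minimal sketch of the tamper-evidence idea that such blockchain-backed audit logs build on: each entry carries the hash of its predecessor, so any in-place modification invalidates the rest of the chain. The AuditEntry fields, class names, and example actions below are illustrative assumptions rather than BlockAudit's actual data model, and the sketch omits the consensus and replication layers entirely.

```python
# Hash-chained audit log: a toy illustration of tamper evidence.
# Field names and structure are assumptions for illustration only.
import hashlib
import json
from dataclasses import dataclass
from typing import List


@dataclass
class AuditEntry:
    actor: str        # who made the change
    action: str       # e.g. "UPDATE invoice 42"
    prev_hash: str    # hash of the previous entry, chaining the log
    entry_hash: str = ""

    def compute_hash(self) -> str:
        payload = json.dumps(
            {"actor": self.actor, "action": self.action, "prev": self.prev_hash},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()


class AuditLog:
    def __init__(self) -> None:
        self.entries: List[AuditEntry] = []

    def append(self, actor: str, action: str) -> AuditEntry:
        prev = self.entries[-1].entry_hash if self.entries else "0" * 64
        entry = AuditEntry(actor, action, prev)
        entry.entry_hash = entry.compute_hash()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any in-place modification breaks verification."""
        prev = "0" * 64
        for e in self.entries:
            if e.prev_hash != prev or e.entry_hash != e.compute_hash():
                return False
            prev = e.entry_hash
        return True


if __name__ == "__main__":
    log = AuditLog()
    log.append("alice", "UPDATE invoice 42: amount 100 -> 120")
    log.append("bob", "DELETE purchase-order 7")
    print(log.verify())   # True: chain is intact
    log.entries[0].action = "UPDATE invoice 42: amount 100 -> 999"
    print(log.verify())   # False: tampering detected
```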

    An object query language for multimedia federations

    The Fischlar system provides a large centralised repository of multimedia files. As expansion is difficult in centralised systems, and as different user groups have a requirement to define their own schemas, the EGTV (Efficient Global Transactions for Video) project was established to examine how the distribution of this database could be managed. The federated database approach is advocated, where the global schema is designed in a top-down approach, while all multimedia and textual data is stored in object-oriented (O-O) and object-relational (O-R) compliant databases. This thesis investigates queries and updates on large multimedia collections organised in the database federation. The goal of this research is to provide a generic query language capable of interrogating global and local multimedia database schemas. Therefore, a new query language, EQL, is defined to facilitate the querying of object-oriented and object-relational database schemas in a database- and platform-independent manner, and acts as a canonical language for database federations. A new canonical language was required as the existing query language standards (SQL:1999 and OQL) are generally incompatible and translation between them is not trivial. EQL is supported with a formally defined object algebra and specified semantics for query evaluation. The ability to capture and store metadata of multiple database schemas is essential when constructing and querying a federated schema. Therefore we also present a new platform-independent metamodel for specifying multimedia schemas stored in both object-oriented and object-relational databases. This metadata information is later used for the construction of a global schema, and during the evaluation of local and global queries. Another important feature of any federated system is the ability to unambiguously define database schemas. The schema definition language for an EGTV database federation must be capable of specifying both object-oriented and object-relational schemas in a database-independent format. As XML represents a standard for encoding and distributing data across various platforms, a language based upon XML has been developed as part of our research. The ODLx (Object Definition Language XML) language specifies a set of XML-based structures for defining complex database schemas capable of representing different multimedia types. The language is fully integrated with the EGTV metamodel, through which ODLx schemas can be mapped to O-O and O-R databases.
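
    A toy sketch of the canonical-language idea described above: a single query representation is built once and then rendered for different back-end dialects. The AST shape, attribute names, and rendering rules below are illustrative assumptions only and do not reproduce EQL's actual grammar, object algebra, or the ODLx metamodel.

```python
# Canonical query representation rendered to two dialects (toy example).
# The "Select" structure and rendering rules are assumptions for illustration.
from dataclasses import dataclass
from typing import List


@dataclass
class Select:
    attributes: List[str]   # projected attributes
    source: str             # class or table name
    predicate: str          # simple comparison, e.g. "duration > 90"


def to_sql1999(q: Select) -> str:
    """Render for an object-relational (SQL:1999-style) back end."""
    return f"SELECT {', '.join(q.attributes)} FROM {q.source} WHERE {q.predicate}"


def to_oql(q: Select) -> str:
    """Render for an object-oriented (OQL-style) back end."""
    attrs = ", ".join(f"v.{a}" for a in q.attributes)
    return f"SELECT {attrs} FROM v IN {q.source} WHERE v.{q.predicate}"


if __name__ == "__main__":
    q = Select(["title", "duration"], "Video", "duration > 90")
    print(to_sql1999(q))  # SELECT title, duration FROM Video WHERE duration > 90
    print(to_oql(q))      # SELECT v.title, v.duration FROM v IN Video WHERE v.duration > 90
```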

    Delivering an Olympic Games

    The technology involved in the distribution of the results during an Olympic Games is extremely complex. More than 900 servers, 1,000 network devices, 9,500 computers and 3,500 technologists are necessary to make it happen. Would it be possible to implement a solution with fewer resources using cutting-edge technology? The following study answers this question by designing two scalable and high-performance web-based applications to manage tournaments, offering a REST interface to stakeholders. The first solution architecture is based on Java, using frameworks such as Spring and Hibernate. The second solution architecture uses JavaScript, with frameworks such as NodeJS, AngularJS, Mongoose and Express.
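
    As a rough illustration of the kind of read-only REST resource such a results service exposes, the sketch below serves ranked results for an event over HTTP. The study's actual solutions are built with Spring/Hibernate and Node.js/Express; this standard-library version, the /results/<event-id> route, and the sample data are assumptions made purely for illustration.

```python
# Minimal read-only results endpoint using only the Python standard library.
# Route layout and sample data are hypothetical, not the study's design.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory store standing in for the tournament database.
RESULTS = {
    "100m-final": [
        {"rank": 1, "athlete": "A. Runner", "time": "9.81"},
        {"rank": 2, "athlete": "B. Sprinter", "time": "9.89"},
    ]
}


class ResultsHandler(BaseHTTPRequestHandler):
    def do_GET(self) -> None:
        # Expect paths of the form /results/<event-id>.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "results" and parts[1] in RESULTS:
            body = json.dumps(RESULTS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    # GET http://localhost:8000/results/100m-final returns the ranked results.
    HTTPServer(("localhost", 8000), ResultsHandler).serve_forever()
```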

    Long Range Financing Strategy for the CGIAR: Final Report of the Working Group

    Executive summary of the longer-term strategy prepared by the Conservation Company under the direction of a Finance Committee working group whose members are listed in an annex. The report summarized here presents an operational plan for an enhanced Future Harvest organization. It addresses CGIAR public awareness, resource mobilization, and financing, and builds on work presented at ICW 1999 and MTM 2000. This summary document was circulated as background to the report of the Synthesis Group to ICW 2000. The full report was also distributed to members at ICW 2000 and is contained in a separate record, under the title 'Long Range Financing Strategy for the CGIAR'.

    Sweden


    Evaluating Cloud Migration Options for Relational Databases

    Migrating the database layer remains a key challenge when moving a software system to a new cloud provider. The database is often very large, poorly documented, and used to store business-critical information. Most cloud providers offer a variety of services for hosting databases, and the most suitable choice depends on the database size, workload, performance requirements, cost, and future business plans. Current approaches do not support this decision-making process, leading to errors and inaccurate comparisons between database migration options. The heterogeneity of databases and clouds means organisations often have to develop their own ad-hoc process to compare the suitability of cloud services for their system. This is time-consuming, error-prone, and costly. This thesis contributes to addressing these issues by introducing a three-phase methodology for evaluating cloud database migration options. The first phase defines the planning activities, such as considering downtime tolerance, existing infrastructure, and information sources. The second phase is a novel method for modelling the structure and the workload of the database being migrated. This addresses database heterogeneity by using a multi-dialect SQL grammar and annotated text-to-model transformations. The final phase consumes the models from the second phase and uses discrete-event simulation to predict migration cost, data transfer duration, and cloud running costs. This involved extending the existing CloudSim framework to simulate the data transfer to a new cloud database. An extensive evaluation was performed to assess the effectiveness of each phase of the methodology and of the tools developed to automate their main steps. The modelling phase was applied to 15 real-world systems, and compared to the leading approach there was a substantial improvement in performance, model completeness, extensibility, and SQL support. The complete methodology was applied to four migrations of two real-world systems. The results showed that the methodology provided significantly improved accuracy over existing approaches.
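
    A back-of-the-envelope sketch of the quantities the final phase predicts: data-transfer duration, one-off migration cost, and ongoing cloud running cost. The thesis derives these with a discrete-event simulation built on an extended CloudSim; the closed-form estimates and every parameter value below are illustrative assumptions, not the methodology's actual models or any provider's real prices.

```python
# Simplified migration estimates (transfer time, transfer cost, running cost).
# All parameters and formulas are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class MigrationScenario:
    db_size_gb: float              # size of the database to transfer
    bandwidth_mbps: float          # sustained uplink bandwidth to the cloud
    transfer_cost_per_gb: float    # assumed per-GB data transfer charge
    instance_cost_per_hour: float  # assumed managed-database instance price


def transfer_hours(s: MigrationScenario) -> float:
    # Convert GB to megabits, divide by bandwidth (megabits per second).
    seconds = (s.db_size_gb * 8 * 1024) / s.bandwidth_mbps
    return seconds / 3600


def migration_cost(s: MigrationScenario) -> float:
    return s.db_size_gb * s.transfer_cost_per_gb


def monthly_running_cost(s: MigrationScenario) -> float:
    return s.instance_cost_per_hour * 24 * 30


if __name__ == "__main__":
    scenario = MigrationScenario(
        db_size_gb=500, bandwidth_mbps=200,
        transfer_cost_per_gb=0.09, instance_cost_per_hour=0.75,
    )
    print(f"transfer: {transfer_hours(scenario):.1f} h")                 # ~5.7 h
    print(f"one-off migration cost: ${migration_cost(scenario):.2f}")    # $45.00
    print(f"running cost: ${monthly_running_cost(scenario):.2f}/month")  # $540.00
```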

    StrainInfo : from microbial information to microbiological knowledge


    Collaborative electronic purchasing within an SME consortium

    The main function of purchasing is to assure the supply of required goods and services. Large organisations have both the finances and the knowledge to implement optimised purchasing resources, typically using information and communications technology (ICT) to improve efficiency. In contrast, within individual small and medium-sized enterprises, electronic purchasing is conducted predominantly through suppliers' sales websites.