
    First performance measurements with the Analysis Grand Challenge

    The IRIS-HEP Analysis Grand Challenge (AGC) is designed to be a realistic environment for investigating how analysis methods scale to the demands of the HL-LHC. The analysis task is based on publicly available Open Data and allows for comparing the usability and performance of different approaches and implementations. It includes all relevant workflow aspects from data delivery to statistical inference. The reference implementation for the AGC analysis task is heavily based on tools from the HEP Python ecosystem. It makes use of novel pieces of cyberinfrastructure and modern analysis facilities in order to address the data processing challenges of the HL-LHC. This contribution compares multiple different analysis implementations and studies their performance. Differences between the implementations include the use of multiple data delivery mechanisms and caching setups for the analysis facilities under investigation.
    Comment: Submitted as proceedings for the 21st International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2022) to Journal of Physics: Conference Series

    Benchmarking Ethereum Smart Contract Static Analysis Tools

    This project benchmarks existing Ethereum smart contract static analysis tools. The goal is to support the proliferation of tools that allow developers to screen their Ethereum smart contracts for security vulnerabilities, and to determine which tool or tool suite would be most appropriate for bulk scanning of the entire Ethereum decentralized finance (DeFi) space. This is achieved by comparing the relative performance of several static analysis tools on curated smart contracts. Each tool analyzes a list of smart contracts with known vulnerabilities of various categories dispersed throughout. The output of each static analysis tool is then examined in several key ways. First, the general runtime of the tool is measured for each input smart contract, broken down into metrics such as time per line of code, time per kilobyte of file size, and time versus code complexity. Second, the number of vulnerabilities detected by each tool is taken into account. Each tool can detect different types of vulnerabilities, with substantial overlap between tools. The capabilities of the tools are evaluated and scored based on the total number of vulnerabilities found, as well as how many distinct types of vulnerabilities can be found. Finally, the general accuracy of each tool is compared: the numbers of false positives and false negatives for each vulnerability category and tool are displayed and compared. Together, these benchmarking categories are combined into an overall usability score for each tool. This usability score is used to determine which tool or set of tools could be used to screen individual smart contracts, as well as to bulk scan the entire DeFi space.
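The abstract above describes combining runtime, detection coverage, and accuracy metrics into a single usability score. The sketch below illustrates one way such a composite score could be computed; the field names, weights, and normalization are illustrative assumptions, not the paper's actual formula.

```python
from dataclasses import dataclass

@dataclass
class ToolResult:
    """Benchmark measurements for one static analysis tool (hypothetical schema)."""
    runtime_per_loc: float   # seconds per line of code (lower is better)
    vulns_found: int         # total known vulnerabilities detected
    vuln_types_found: int    # distinct vulnerability categories detected
    false_positives: int
    false_negatives: int

def usability_score(r: ToolResult, total_vulns: int, total_types: int) -> float:
    """Combine coverage, breadth, accuracy, and speed into a score in [0, 1].

    Weights are assumed for illustration only.
    """
    coverage = r.vulns_found / total_vulns
    breadth = r.vuln_types_found / total_types
    # Each false positive or false negative dilutes the accuracy term.
    accuracy = r.vulns_found / (r.vulns_found + r.false_positives + r.false_negatives)
    speed = 1.0 / (1.0 + r.runtime_per_loc)  # maps any runtime into (0, 1]
    return 0.35 * coverage + 0.25 * breadth + 0.25 * accuracy + 0.15 * speed

# Example: a tool finding 40 of 50 known vulnerabilities across 8 of 10 categories.
score = usability_score(ToolResult(0.02, 40, 8, 5, 10), total_vulns=50, total_types=10)
```

A weighted sum like this makes the trade-off explicit: a fast but inaccurate scanner and a slow but thorough one can be ranked on one axis once weights are agreed on.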

    Cloud based testing of business applications and web services

    This paper deals with testing applications based on the principles of cloud computing. It aims to describe the options for testing business software in clouds (cloud testing). It identifies the needs for cloud testing tools, including multi-layer testing, service level agreement (SLA) based testing, large-scale simulation, and on-demand test environments. In a cloud-based model, ICT services are distributed and accessed over networks such as an intranet or the internet; large data centers deliver resources on demand as a service, eliminating the need for investment in specific hardware, software, or data center infrastructure. Businesses can apply these new technologies in the context of intellectual capital management to lower costs and increase competitiveness and earnings. Based on a comparison of testing tools and techniques, the paper further investigates future trends in the research and development of cloud-based testing tools. It is also worth noting that this comparison and classification of testing tools covers a new area and has not been done before.

    Systematic evaluation of design choices for software development tools

    Most design and evaluation of software tools is based on the intuition and experience of the designers. Software tool designers consider themselves typical users of the tools that they build and tend to subjectively evaluate their products rather than objectively evaluate them using established usability methods. This subjective approach is inadequate if the quality of software tools is to improve, and the use of more systematic methods is advocated. This paper summarises a sequence of studies that show how user interface design choices for software development tools can be evaluated using established usability engineering techniques. The techniques used included guideline review, predictive modelling and experimental studies with users

    Two roads, one destination: A journey of discovery


    Software Challenges For HL-LHC Data Analysis

    The high energy physics community is discussing where investment is needed to prepare software for the HL-LHC and its unprecedented challenges. The ROOT project has been one of the central software players in high energy physics for decades. From its experience and expectations, the ROOT team has distilled a comprehensive set of areas that should see research and development in the context of data analysis software, to make the best use of the HL-LHC's physics potential. This work shows what these areas could be, why the ROOT team believes investing in them is needed, what gains are expected, and where related work is ongoing. It can serve as an indication for future research proposals and collaborations

    Using a task-based approach in evaluating the usability of BoBIs in an e-book environment

    This paper reports on a usability evaluation of BoBIs (Back-of-the-Book Indexes) as searching and browsing tools in an e-book environment. The study employed a task-based approach and a within-subject design. The retrieval performance of a BoBI was compared with that of a ToC and a Full-Text Search tool in terms of their respective effectiveness and efficiency for finding information in e-books. The results demonstrated that a BoBI was significantly more efficient (faster) and more useful than a ToC or Full-Text Search tool for finding information in an e-book environment

    Heuristic usability evaluation on games: a modular approach

    Heuristic evaluation is the preferred method to assess usability in games when experts conduct the evaluation. Many heuristic guidelines have been proposed to address the specificities of games, but they focus only on particular subsets of games or platforms. In fact, to date the most widely used guideline for evaluating game usability is still Nielsen's proposal, which is focused on generic software. As a result, most evaluations do not cover important aspects of games such as mobility, multiplayer interactions, enjoyability, and playability. To promote the use of new heuristics adapted to different game and platform aspects, we propose a modular approach based on the classification of existing game heuristics using metadata, together with a tool, MUSE (Meta-heUristics uSability Evaluation tool) for games, which rebuilds heuristic guidelines from a metadata selection in order to obtain a customized list for each real evaluation case. Using these rebuilt heuristic guidelines enables explicit attention to a wide range of usability aspects in games and better detection of usability issues. We preliminarily evaluate MUSE with an analysis of two different games, using both Nielsen's heuristics and the customized heuristic lists generated by our tool.
    Unión Europea PI055-15/E0
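The abstract above describes rebuilding a customized heuristic list from a metadata selection. The sketch below shows the general idea of such metadata-driven filtering; the schema, tags, and example heuristics are assumptions for illustration and do not reflect MUSE's actual catalog or implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Heuristic:
    """One usability heuristic tagged with metadata (hypothetical schema)."""
    text: str
    tags: set = field(default_factory=set)  # e.g. {"mobile", "multiplayer"}

# Illustrative catalog; real guidelines would hold many more entries.
CATALOG = [
    Heuristic("Controls should be customizable", {"generic"}),
    Heuristic("Sessions must survive interruptions (calls, notifications)", {"mobile"}),
    Heuristic("Matchmaking should pair players of similar skill", {"multiplayer"}),
    Heuristic("Touch targets must be large enough for fingers", {"mobile"}),
]

def build_guideline(selected_tags: set) -> list:
    """Rebuild a customized heuristic list from a metadata selection.

    Generic heuristics are always kept; others are kept only when their
    tags intersect the evaluator's selection.
    """
    return [h.text for h in CATALOG
            if h.tags & selected_tags or "generic" in h.tags]

# An evaluator preparing to review a mobile single-player game:
mobile_list = build_guideline({"mobile"})
```

Filtering by tag intersection keeps the catalog itself flat and extensible: adding a new platform only requires tagging heuristics, not restructuring the guideline.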