
    The 14th Overture Workshop: Towards Analytical Tool Chains

    This report contains the proceedings of the 14th Overture Workshop, organized in connection with the Formal Methods 2016 symposium. It includes nine papers describing technological progress relating to the Overture/VDM tool support and its connections with other tools such as Crescendo, Symphony, INTO-CPS, TASTE, and ViennaTalk.

    Safety Applications and Measurement Tools for Connected Vehicles

    The abstract is provided in the attachment.

    Data-Intensive Computing for Bioinformatics Using Virtualization Technologies and HPC Infrastructures

    Bioinformatics applications often involve many computational components and massive data sets, which are difficult to deploy on a single computing machine. In this thesis, we designed a data-intensive computing platform for bioinformatics applications using virtualization technologies and high-performance computing (HPC) infrastructures, based on a multi-tier architecture that seamlessly integrates the web user interface (presentation tier), scientific workflow (logic tier), and computing infrastructure (data/computing tier). We demonstrated our platform on two bioinformatics projects. First, we redesigned and deployed the Cotton Marker Database (CMD) (http://www.cottonmarker.org), a centralized web portal in the cotton research community, using a Xen-based virtualization solution. To achieve high performance and scalability for the CMD web tools, we hosted CMD's large protein databases and computationally intensive applications on the Palmetto HPC cluster at Clemson University. Biologists can easily utilize both bioinformatics applications and HPC resources through the CMD website without a background in computer science. Second, we developed a web tool, the Glycan Array QSAR Tool (http://bci.clemson.edu/tools/glycan_array), to analyze glycan array data; its user interface was built on top of the Drupal Content Management System (CMS), and its computational part was implemented using the MATLAB Compiler Runtime (MCR) module. Our new bioinformatics computing platform enables the rapid deployment of data-intensive bioinformatics applications on HPC and virtualization environments with a user-friendly web interface, bridging the gap between biological scientists and cyberinfrastructure.
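
    The multi-tier design lends itself to a thin logic tier that turns web requests into scheduler jobs. Below is a minimal, hypothetical sketch of such a bridge using a PBS-style submission; the queue settings, file paths, and the blastp invocation are illustrative placeholders, not details taken from the thesis.

```python
# Hypothetical logic-tier bridge between a web front end and an HPC scheduler.
# All paths, resource requests, and the BLAST invocation are placeholders.
import subprocess
import tempfile

PBS_TEMPLATE = """#!/bin/bash
#PBS -N {job_name}
#PBS -l select=1:ncpus={ncpus}
#PBS -l walltime=02:00:00
cd $PBS_O_WORKDIR
blastp -query {query_fasta} -db {protein_db} -out {out_file} -num_threads {ncpus}
"""

def submit_blast_job(query_fasta, protein_db, out_file, ncpus=8):
    """Render a PBS job script and submit it with qsub, returning the scheduler's job id."""
    script = PBS_TEMPLATE.format(job_name="cmd_blast", ncpus=ncpus,
                                 query_fasta=query_fasta,
                                 protein_db=protein_db, out_file=out_file)
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as fh:
        fh.write(script)
        script_path = fh.name
    # qsub prints the job id on success; the web tier can later poll the scheduler with it.
    result = subprocess.run(["qsub", script_path], capture_output=True, text=True, check=True)
    return result.stdout.strip()
```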

    Quantum machine learning at record speed: Many-body distribution functionals as compact representations

    The feature vector mapping used to represent chemical systems is a key factor governing the superior data efficiency of kernel-based quantum machine learning (QML) models applicable throughout chemical compound space. Unfortunately, the most accurate representations require a high-dimensional feature mapping, thereby imposing a considerable computational burden on model training and use. We introduce compact yet accurate, linearly scaling QML representations based on atomic Gaussian many-body distribution functionals (MBDF) and their derivatives. Weighted density functions (DF) of MBDF values are used as global representations which are constant in size, i.e. invariant with respect to the number of atoms. We report predictive performance and training data efficiency close to the state of the art for two diverse datasets of organic molecules, QM9 and QMugs. Generalization capability has been investigated for atomization energies, HOMO-LUMO eigenvalues and gap, internal energies at 0 K, zero-point vibrational energies, dipole moment norm, static isotropic polarizability, and heat capacity as encoded in QM9. MBDF-based QM9 performance lowers the optimal Pareto front, spanned between sampling and training cost, to compute node minutes, effectively sampling chemical compound space with chemical accuracy at a speed of 37 molecules per core second.
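
    As a rough illustration of why constant-size global representations keep kernel models cheap, the sketch below smears placeholder per-atom values onto a fixed grid (a stand-in for the weighted density functions of MBDF values, not the actual functionals) and trains a Gaussian kernel ridge regression on the resulting fixed-length vectors.

```python
# Minimal sketch: fixed-size density-style features plus kernel ridge regression.
# The per-atom values are placeholders, not the actual MBDF functionals.
import numpy as np

def density_feature(per_atom_values, grid=np.linspace(-5.0, 5.0, 128), width=0.2):
    """Smear scalar per-atom values onto a fixed grid with Gaussians (constant-size output)."""
    diff = grid[None, :] - np.asarray(per_atom_values)[:, None]
    return np.exp(-0.5 * (diff / width) ** 2).sum(axis=0)

def gaussian_kernel(X, Y, sigma=10.0):
    """Gaussian kernel between rows of X and Y; cost is independent of molecule size."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_krr(X_train, y_train, sigma=10.0, lam=1e-8):
    """Solve (K + lam*I) alpha = y for the regression weights."""
    K = gaussian_kernel(X_train, X_train, sigma)
    return np.linalg.solve(K + lam * np.eye(len(K)), y_train)

def predict_krr(X_train, alpha, X_test, sigma=10.0):
    return gaussian_kernel(X_test, X_train, sigma) @ alpha
```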

    The Optimal Implementation of On-Line Optimization for Chemical and Refinery Processes.

    On-line optimization is an effective approach for improving process operation and economics and for source reduction in chemical and refinery processes. On-line optimization involves three steps: data validation, parameter estimation, and economic optimization. This research evaluated statistical algorithms for gross error detection, data reconciliation, and parameter estimation, and developed an open-form steady-state process model for the Monsanto-designed sulfuric acid process of IMC Agrico Company. The plant model was used to demonstrate the improved economics and reduced emissions from on-line optimization and to test the methodology of on-line optimization. Also, a modified compensation strategy was proposed to improve the misrectification of data reconciliation algorithms, and it was compared with the measurement test method. In addition, two ways to conduct on-line optimization were studied: one required two separate optimization problems to update parameters, and the other combined data validation and parameter estimation into one optimization problem. Two-step estimation demonstrated better estimation accuracy than one-step estimation for the sulfuric acid process, while one-step estimation required less computation time. The measurement test method, Tjoa-Biegler's contaminated Gaussian distribution method, and the robust method were evaluated theoretically and numerically to compare their performance. Results from these evaluations were used to recommend the best way to conduct on-line optimization. The optimal procedure is to conduct combined gross error detection and data reconciliation to detect and rectify gross errors in plant data from the DCS using Tjoa-Biegler's method or the robust method. This step generates a set of measurements containing only random errors, which is used for simultaneous data reconciliation and parameter estimation using the least squares method (the normal distribution). Updated parameters are used in the plant model for economic optimization, which generates optimal set points for the DCS. Applying this procedure to the Monsanto sulfuric acid plant yielded a 3% profit increase over current operating conditions and a 10% emission reduction, which is consistent with other reported applications. Also, this optimal procedure for conducting on-line optimization has been incorporated into an interactive on-line optimization program with a window interface developed in Visual Basic, using GAMS to solve the nonlinear optimization problems. This program is to be made available through the EPA Technology Tool Program.
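
    To make the data reconciliation step concrete, the following sketch shows the textbook weighted least-squares reconciliation for linear balance constraints, together with a measurement-test check for gross errors. The plant model in the thesis is nonlinear and solved in GAMS, so this closed form is only illustrative.

```python
# Illustrative weighted least-squares data reconciliation for linear balances A @ x = 0,
# plus the measurement test on the resulting adjustments. Not the thesis's GAMS model.
import numpy as np

def reconcile(x_meas, A, cov):
    """Adjust measurements minimally (in the covariance metric) so that A @ x_hat = 0."""
    r = A @ x_meas                      # constraint residuals of the raw data
    S = A @ cov @ A.T                   # covariance of the residuals
    return x_meas - cov @ A.T @ np.linalg.solve(S, r)

def measurement_test(x_meas, x_hat, A, cov, threshold=1.96):
    """Flag measurements whose adjustment is large relative to its standard deviation."""
    d = x_meas - x_hat
    S = A @ cov @ A.T
    V_d = cov @ A.T @ np.linalg.solve(S, A @ cov)   # covariance of the adjustments
    return np.abs(d) / np.sqrt(np.diag(V_d)) > threshold

# Example: a splitter where stream 1 = stream 2 + stream 3.
A = np.array([[1.0, -1.0, -1.0]])
cov = np.diag([0.1, 0.1, 0.1])
x = np.array([100.0, 60.0, 38.0])       # raw flows do not close the balance
x_hat = reconcile(x, A, cov)            # reconciled flows satisfy it exactly
print(x_hat, measurement_test(x, x_hat, A, cov))
```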

    Remote Attestation for Constrained Relying Parties

    In today's interconnected world, which contains a massive and rapidly growing number of devices, it is important to have security measures that detect unexpected or unwanted behavior of those devices. Remote attestation -- a procedure for evaluating the software and hardware properties of a remote entity -- is one of those measures. Remote attestation has long been used in Mobile Device Management solutions to assess the security of computers and smartphones. The rise of the Internet of Things (IoT) introduced a new research direction for attestation, one involving IoT devices. The current trend in academic research on attestation involves a powerful entity, called the "verifier", attesting and appraising a less powerful entity, called the "attester". However, academic works have not considered the opposite scenario, where a resource-constrained device needs to evaluate the security of more powerful devices. In addition, these works lack the notion of a "relying party" -- the entity that receives the attestation results computed by the verifier to determine the trustworthiness of the attester. There are many scenarios where a resource-constrained device might want to evaluate the trustworthiness of a more powerful device. For example, a sensor or wearable may need to assess the state of a smartphone before sending data to it, or a network router may allow only trusted devices to connect to the network. The aim of this thesis is to design an attestation procedure suitable for constrained relying parties. The attestation procedure is developed by analyzing possible attestation result formats found in industry, benchmarking the suitable formats, proposing and formally analyzing an attestation protocol for constrained relying parties, and implementing a prototype of a constrained relying party.
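
    A rough sketch of the relying party's side of such a procedure is shown below. The claim names, the JSON payload, and the symmetric-MAC check are illustrative assumptions; the thesis works with standardized attestation result formats and a formally analyzed protocol rather than this simplified scheme.

```python
# Hedged sketch of a constrained relying party: check that an attestation result came
# from a trusted verifier and that its verdict is acceptable before trusting the attester.
# Claim names and the shared-key MAC are placeholders, not the thesis's protocol.
import hmac, hashlib, json

VERIFIER_KEY = b"shared-secret-with-verifier"   # assumption: key pre-provisioned on the device

def verify_attestation_result(payload_bytes, tag_hex):
    """Return the parsed attestation result if the MAC verifies, otherwise None."""
    expected = hmac.new(VERIFIER_KEY, payload_bytes, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag_hex):
        return None
    return json.loads(payload_bytes)

def attester_is_trustworthy(result, max_age_s, now_s):
    """Relying-party policy: the result must be fresh and carry an affirming verdict."""
    return (result is not None
            and now_s - result.get("issued_at", 0) <= max_age_s
            and result.get("verdict") == "affirming")
```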

    Deployment of Deep Neural Networks on Dedicated Hardware Accelerators

    Deep Neural Networks (DNNs) have established themselves as powerful tools for a wide range of complex tasks, for example computer vision or natural language processing. DNNs are notoriously demanding on compute resources, and as a result, dedicated hardware accelerators are developed for all use cases. Different accelerators provide solutions ranging from hyper-scale cloud environments for the training of DNNs to inference devices in embedded systems. They implement intrinsics for complex operations directly in hardware; a common example is an intrinsic for matrix multiplication. However, there exists a gap between the ecosystems of applications for deep learning practitioners and hardware accelerators. How DNNs can efficiently utilize the specialized hardware intrinsics is still mainly defined by human hardware and software experts. Methods to automatically utilize hardware intrinsics in DNN operators are a subject of active research. Existing literature often works with transformation-driven approaches, which aim to establish a sequence of program rewrites and data-layout transformations such that the hardware intrinsic can be used to compute the operator. However, the complexity of this task has not yet been explored, especially for less frequently used operators like Capsule Routing. Not only is the implementation of DNN operators with intrinsics challenging, their optimization on the target device is also difficult. Hardware-in-the-loop tools are often used for this problem: they use latency measurements of implementation candidates to find the fastest one. However, specialized accelerators can have memory and programming limitations, so that not every arithmetically correct implementation is a valid program for the accelerator. These invalid implementations can lead to unnecessarily long optimization times. This work investigates the complexity of transformation-driven processes to automatically embed hardware intrinsics into DNN operators. It is explored with a custom, graph-based intermediate representation (IR). While operators like Fully Connected Layers can be handled with reasonable effort, increasing operator complexity or advanced data-layout transformations can lead to scaling issues. Building on these insights, this work proposes a novel method to embed hardware intrinsics into DNN operators, based on a dataflow analysis. The dataflow embedding method allows exploring how intrinsics and operators match without explicit transformations. From the results, it can derive the data layout and program structure necessary to compute the operator with the intrinsic. A prototype implementation for a dedicated hardware accelerator demonstrates state-of-the-art performance for a wide range of convolutions, while being agnostic to the data layout. For some operators in the benchmark, the presented method can also generate alternative implementation strategies to improve hardware utilization, resulting in a geometric-mean speed-up of ×2.813 while reducing the memory footprint. Lastly, by curating the initial set of possible implementations for the hardware-in-the-loop optimization, the median time-to-solution is reduced by a factor of ×2.40. At the same time, the possibility of prolonged searches due to a bad initial set of implementations is reduced, improving the optimization's robustness by ×2.35.
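
    For context on the transformation-driven baseline discussed above, the sketch below lowers a convolution to matrix multiplication via im2col and tiles it onto a mock fixed-size matmul intrinsic. The tile shape and the NumPy stand-in for the intrinsic are assumptions, and this is not the dataflow-based embedding method the thesis proposes.

```python
# Classic transformation-driven lowering: im2col turns a convolution into a GEMM,
# which is then tiled onto a fixed-size matmul "intrinsic". Purely illustrative.
import numpy as np

TM = TN = TK = 16  # hypothetical intrinsic tile shape

def matmul_intrinsic(acc, a_tile, b_tile):
    """Stand-in for a hardware intrinsic: acc[TM,TN] += a_tile[TM,TK] @ b_tile[TK,TN]."""
    acc += a_tile @ b_tile
    return acc

def conv2d_via_intrinsic(x, w):
    """x: [H, W, Cin], w: [KH, KW, Cin, Cout] -> [H-KH+1, W-KW+1, Cout] (valid padding)."""
    H, W, Cin = x.shape
    KH, KW, _, Cout = w.shape
    OH, OW = H - KH + 1, W - KW + 1
    # im2col: each output pixel becomes one row of the unfolded patch matrix.
    patches = np.stack([x[i:i+KH, j:j+KW, :].ravel()
                        for i in range(OH) for j in range(OW)])
    w_mat = w.reshape(KH * KW * Cin, Cout)
    # Pad both GEMM operands up to multiples of the intrinsic tile shape.
    M = -(-patches.shape[0] // TM) * TM
    K = -(-patches.shape[1] // TK) * TK
    N = -(-Cout // TN) * TN
    A = np.zeros((M, K)); A[:patches.shape[0], :patches.shape[1]] = patches
    B = np.zeros((K, N)); B[:w_mat.shape[0], :Cout] = w_mat
    C = np.zeros((M, N))
    for i in range(0, M, TM):
        for j in range(0, N, TN):
            for k in range(0, K, TK):
                C[i:i+TM, j:j+TN] = matmul_intrinsic(C[i:i+TM, j:j+TN],
                                                     A[i:i+TM, k:k+TK],
                                                     B[k:k+TK, j:j+TN])
    return C[:OH * OW, :Cout].reshape(OH, OW, Cout)
```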