
    HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show that hybrid environments are the natural path to getting the best of on-premise and cloud resources: steady (and sensitive) workloads can run on on-premise resources, while peak demand can leverage remote resources in a pay-as-you-go manner. Nevertheless, many questions remain open in HPC cloud, ranging from how to extract the best performance from an unknown underlying platform to which services are essential to make its usage easier. Moreover, the discussion on the right pricing and contractual models to fit both small and large users is relevant for the sustainability of HPC clouds. This paper presents a survey and taxonomy of efforts in HPC cloud and a vision of what we believe lies ahead, including a set of research challenges that, once tackled, can help advance businesses and scientific discoveries. This becomes particularly relevant due to the fast-growing wave of new HPC applications coming from big data and artificial intelligence. Comment: 29 pages, 5 figures, published in ACM Computing Surveys (CSUR).
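
    As an illustration of the cost-benefit question the survey raises, the sketch below compares owning a peak-sized cluster against bursting peak demand to a pay-as-you-go cloud. All prices, node counts, and utilization figures are assumptions for illustration, not values from the paper.

# Hypothetical break-even sketch for hybrid HPC: when is it cheaper to burst
# peak demand to a pay-as-you-go cloud than to size the on-premise cluster
# for the peak? All numbers below are illustrative assumptions.

def on_premise_hourly_cost(capex_per_node, lifetime_hours, opex_per_node_hour):
    """Amortized cost of owning one node for one hour."""
    return capex_per_node / lifetime_hours + opex_per_node_hour

def hybrid_cost(base_nodes, peak_nodes, peak_hours, total_hours,
                on_prem_rate, cloud_rate):
    """Own base_nodes permanently; rent the extra peak_nodes from the cloud
    only during peak_hours."""
    owned = base_nodes * on_prem_rate * total_hours
    burst = peak_nodes * cloud_rate * peak_hours
    return owned + burst

def all_on_prem_cost(base_nodes, peak_nodes, total_hours, on_prem_rate):
    """Size the on-premise cluster for the peak and pay for it all year."""
    return (base_nodes + peak_nodes) * on_prem_rate * total_hours

if __name__ == "__main__":
    on_prem = on_premise_hourly_cost(capex_per_node=15_000,
                                     lifetime_hours=4 * 8760,
                                     opex_per_node_hour=0.15)   # ~$0.58 per node-hour
    cloud = 1.20                                                # $/node-hour, assumed
    total_hours, peak_hours = 8760, 500                         # hours per year
    print("hybrid      :", hybrid_cost(64, 64, peak_hours, total_hours, on_prem, cloud))
    print("all on-prem :", all_on_prem_cost(64, 64, total_hours, on_prem))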

    Development of an HL7® FHIR®-Based Infrastructure for Clinical Interoperability

    Over the years, healthcare business knowledge, requirements, and the number of patients seeking medical attention have grown tremendously, to the point where sensitive cases require input from multiple healthcare institutions in order to track the patient's medical history and make the most adequate decisions for each situation. Technology and digital information play a great role in addressing these problems and improving healthcare provision. However, due to the immense number of organizations and systems in this business, sharing a patient's clinical information can be a major problem if the systems are not capable of understanding the data sent to each other. Ensuring interoperability between systems is crucial to guarantee the continuous flow of a patient's clinical history and to improve health professionals' work. As a company working in the field of healthcare, ALERT's main goal is to help organizations improve their health business and to help prolong life, by providing technology that benefits health professionals' work management and shares the necessary information with other organizations. Thus, the company seeks to constantly improve its product suite, ALERT®, by meeting the requirements of organizations worldwide and assuring interoperability based on the health standards existing in the market. To that end, the company wants to add to the ALERT suite the latest standard, Fast Healthcare Interoperability Resources (FHIR®), published by the standards developing organization Health Level Seven International (HL7), which brings great technological innovations for improving interoperability and is also considered a suitable standard for mobile applications thanks to its capabilities and ease of implementation. This thesis presents a development and architectural approach to apply FHIR features in the product suite, along with the problem and solution analysis, including the evaluation of suitable frameworks for the implementation phase. Considering the experiments' results, the implemented FHIR services improved the product's performance, and thanks to the standard's specification, the implementation of its core features proved to be simple and straightforward while respecting the key criteria for the developed services.
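
    FHIR exposes clinical data as resources over a RESTful API, e.g. GET [base]/Patient/[id] returning JSON. The sketch below shows such a read in Python; the base URL and patient id are placeholder assumptions, not ALERT endpoints or anything described in the thesis.

# Minimal sketch of a FHIR RESTful read: GET [base]/Patient/[id].
# The base URL and id below are placeholders, not ALERT endpoints.
import requests

FHIR_BASE = "https://example.org/fhir"   # hypothetical FHIR R4 server

def read_patient(patient_id: str) -> dict:
    """Fetch a Patient resource as JSON (FHIR's standard wire format)."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    patient = read_patient("example")
    # Every FHIR resource carries its resourceType; names are structured HumanName elements.
    print(patient.get("resourceType"), patient.get("name"))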

    Moving Towards Analog Functional Safety

    Over the past century, the exponential growth of the semiconductor industry has led to the creation of tiny and complex integrated circuits, e.g., sensors, actuators, and smart power systems. Innovative techniques are needed to ensure the correct functionality of the analog devices that are ubiquitous in every smart system. The ISO 26262 standard on functional safety in the automotive context specifies that fault injection is necessary to validate all electronic devices. For decades, efforts to standardize fault modeling, injection, and simulation focused mainly on digital circuits and disregarded analog ones. An initial attempt is being made with the IEEE P2427 draft standard, which has started to give this field a structured and formal organization. In this context, this thesis proposes new fault models, injection, and abstraction methodologies for analog circuits to advance this application field. The faults proposed by the IEEE P2427 draft standard are first evaluated to understand the fault behaviors they produce during simulation. Moreover, a novel approach is presented for modeling realistic stuck-on/off defects based on oxide defects. These new defect models are required because digital stuck-at fault models, in which a transistor is frozen in the on-state or off-state, may not apply well to analog circuits, where even a slight variation can create deviations of several orders of magnitude. Then, to validate the proposed defect models, a novel predictive fault grouping based on faulty AC matrices is applied to group faults with equivalent behaviors. The proposed fault-grouping method is computationally cheap because it avoids performing DC or transient simulations with injected faults and limits itself to faulty AC simulations. Building on the AC simulation results, two methods that group faults with the same frequency response are presented: the first is an AC-based grouping that exploits the potential of S-parameter ports, while the second is a circle-based grouping that applies circle fitting to the extracted AC matrices. Next, an open-source framework is presented for fault injection and manipulation. This framework relies on shared semantics for reading, writing, and manipulating transistor-level designs; its ultimate goal is to read an input design written in one syntax and write the same design out in another syntax. As a use case for the proposed framework, a process of analog fault injection is discussed. This activity requires adding, removing, or replacing nodes, components, or even entire sub-circuits. The framework is entirely written in C++, and its APIs are also interfaced with Python. The entire framework is open-source and available on GitHub. The last part of the thesis presents abstraction methodologies that can abstract transistor-level models into Verilog-AMS models, and Verilog-AMS piecewise and nonlinear models into C++. These abstracted models can be integrated into heterogeneous systems, where the simulation of heterogeneous components embedded in a Virtual Platform (VP) needs to be fast and accurate.
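
    As a hedged illustration of the circle-based grouping idea (the general technique only, not the thesis's exact formulation), the sketch below fits a circle to complex AC-response samples with the algebraic (Kasa) least-squares method; two faulty responses whose fitted circles nearly coincide could then be placed in the same group.

# Hedged sketch: algebraic (Kasa) least-squares circle fit to an AC response
# sampled in the complex plane, plus a naive "same circle => same group" test.
# This illustrates the general idea only, not the exact method of the thesis.
import numpy as np

def fit_circle(z: np.ndarray):
    """Fit x^2 + y^2 + D*x + E*y + F = 0 to complex samples z; return (center, radius)."""
    x, y = z.real, z.imag
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    center = complex(-D / 2, -E / 2)
    radius = np.sqrt(abs(center) ** 2 - F)
    return center, radius

def same_group(z1: np.ndarray, z2: np.ndarray, tol: float = 1e-3) -> bool:
    """Group two faulty responses if their fitted circles (nearly) coincide."""
    c1, r1 = fit_circle(z1)
    c2, r2 = fit_circle(z2)
    return abs(c1 - c2) < tol and abs(r1 - r2) < tol

if __name__ == "__main__":
    # Synthetic first-order response 1/(1 + j*w*tau): its Nyquist plot lies on a circle.
    w = np.logspace(1, 6, 200)
    h = 1.0 / (1.0 + 1j * w * 1e-4)
    print(fit_circle(h))            # center near 0.5+0j, radius near 0.5
    print(same_group(h, h.copy()))  # identical responses -> grouped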

    A Geospatial Based Decision Framework for Extending MARSSIM Regulatory Principles into the Subsurface

    The Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) is a regulatory guidance document for evaluating compliance of radiologically contaminated soils and buildings (USNRC, 2000). Compliance is determined by comparing radiological measurements to established limits using a combination of hypothesis testing and scanning measurements. Scanning allows investigators to identify localized pockets of contamination missed during sampling and to assess radiological exposure at different spatial scales. Scale is important in radiological dose assessment because regulatory limits can vary with the size of the contaminated area, and sites are often evaluated at more than one scale (USNRC, 2000). Unfortunately, scanning is not possible in the subsurface, and direct application of MARSSIM breaks down. This dissertation develops a subsurface decision framework called the Geospatial Extension to MARSSIM (GEM) to provide multi-scale subsurface decision support in the absence of scanning technologies. Based on geostatistical simulations of radiological activity, the GEM recasts the decision rule as a multi-scale, geospatial decision rule called the regulatory limit rule (RLR). The RLR requires simultaneous compliance with all scales and depths of interest at every location throughout the site. The RLR is accompanied by a compliance test called the stochastic conceptual site model (SCSM). For sites that fail compliance, a remedial design strategy called the Multi-scale Remedial Design Model (MrDM) is developed that spatially indicates the volumes requiring remedial action. The MrDM is accompanied by a sample design strategy known as the Multi-scale Remedial Sample Design Model (MrsDM), which refines this remedial action volume through careful placement of new sample locations. Finally, a new sample design called “check and cover” is presented that can support early sampling efforts by directly using prior knowledge about where contamination may exist. This dissertation demonstrates how these tools are used within an environmental investigation and situates the GEM within existing regulatory methods, with an emphasis on the Environmental Protection Agency’s Triad method, which recognizes and encourages the use of advanced decision methods. The GEM is implemented within the Spatial Analysis and Decision Assistance (SADA) software and applied to a hypothetical radiologically contaminated site.
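
    In the spirit of the regulatory limit rule, the simplified sketch below flags grid cells where too many geostatistical realizations exceed the limit once activity is averaged over a decision scale. The uniform grid, the single averaging scale, the limit, and the 5% probability threshold are illustrative assumptions, not values or rules taken from the dissertation.

# Simplified sketch of a multi-realization, scale-aware compliance check:
# a cell fails if too many simulated realizations exceed the regulatory limit
# after activity is averaged over the decision scale. All numbers are assumed.
import numpy as np
from scipy.ndimage import uniform_filter

def block_average(field: np.ndarray, scale: int) -> np.ndarray:
    """Moving-window mean over a (scale x scale) neighborhood (edges truncated)."""
    return uniform_filter(field, size=scale, mode="nearest")

def exceedance_probability(realizations: np.ndarray, scale: int, limit: float) -> np.ndarray:
    """Fraction of realizations whose scale-averaged activity exceeds the limit."""
    averaged = np.stack([block_average(r, scale) for r in realizations])
    return (averaged > limit).mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sims = rng.lognormal(mean=0.0, sigma=0.5, size=(100, 50, 50))  # 100 realizations
    p_exceed = exceedance_probability(sims, scale=5, limit=2.0)
    non_compliant = p_exceed > 0.05        # fail where exceedance risk > 5%
    print("cells needing remedial attention:", int(non_compliant.sum()))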

    MakerFluidics: low cost microfluidics for synthetic biology

    Recent advancements in multilayer, multicellular, genetic logic circuits often rely on manual intervention throughout the computation cycle and on orthogonal signals for each chemical “wire”. These constraints can prevent genetic circuits from scaling. Microfluidic devices can be used to mitigate these constraints. However, continuous-flow microfluidics are largely designed through artisanal processes involving hand-drawn features and visually performed design rule checks: processes that are likewise inextensible. Additionally, continuous-flow microfluidic routing is only a consideration during chip design and, once built, the routing structure becomes “frozen in silicon”, or for many microfluidic chips “frozen in polydimethylsiloxane (PDMS)”; any change to fluid routing often requires an entirely new device and control infrastructure. The cost of fabricating and controlling a new device is high in terms of time and money; attempts to reduce one cost measure are generally paid for through increases in the other. This work has three main thrusts: to create a microfluidic fabrication framework, called MakerFluidics, that lowers the barrier to entry for designing and fabricating microfluidics in a manner amenable to automation; to prove this methodology can design, fabricate, and control complex and novel microfluidic devices; and to demonstrate that the methodology can be used to solve biologically relevant problems. Utilizing accessible technologies, rapid prototyping, and scalable design practices, the MakerFluidics framework has demonstrated its ability to design, fabricate, and control novel, complex, and scalable microfluidic devices. This was proven through the development of a reconfigurable, continuous-flow routing fabric driven by a modular, scalable primitive called a transposer. In addition to creating complex microfluidic networks, MakerFluidics was deployed in support of cutting-edge, application-focused research at the Charles Stark Draper Laboratory. Informed by a design-of-experiments approach using the parametric rapid prototyping capabilities made possible by MakerFluidics, a plastic blood-bacteria separation device was optimized, demonstrating that the new device geometry can separate bacteria from blood while operating at a 275% greater flow rate and with an 82% lower power requirement for equivalent separation performance compared to the state of the art. Ultimately, MakerFluidics demonstrated the ability to design, fabricate, and control complex and practical microfluidic devices while lowering the barrier to entry to continuous-flow microfluidics, thus democratizing cutting-edge technology beyond a handful of well-resourced and specialized labs.
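
    The routing fabric is driven by a transposer primitive that either passes two streams straight through or swaps them. The toy model below captures that 2x2 switching behavior; the state names and interface are assumptions made for illustration, not the MakerFluidics design.

# Toy model of a 2x2 transposer-style switching element: in BAR state the two
# inlets pass straight through, in CROSS state they are swapped. The naming and
# behavior are illustrative assumptions, not the MakerFluidics implementation.
from enum import Enum

class State(Enum):
    BAR = "bar"      # in0 -> out0, in1 -> out1
    CROSS = "cross"  # in0 -> out1, in1 -> out0

class Transposer:
    def __init__(self, state: State = State.BAR):
        self.state = state

    def toggle(self) -> None:
        """Flip the valve configuration between BAR and CROSS."""
        self.state = State.CROSS if self.state is State.BAR else State.BAR

    def route(self, in0, in1):
        """Return (out0, out1) for the current valve configuration."""
        return (in0, in1) if self.state is State.BAR else (in1, in0)

if __name__ == "__main__":
    t = Transposer()
    print(t.route("sample", "buffer"))   # ('sample', 'buffer')
    t.toggle()
    print(t.route("sample", "buffer"))   # ('buffer', 'sample')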

    Ontologies and Methods for Interoperability of Engineering Analysis Models (eams) in an E-Design Environment

    M.S. thesis, University of Massachusetts Amherst, September 2007; author: Neelima Kanuri, B.S., Birla Institute of Technology and Science, Pilani, India; directed by Professor Ian Grosse. Interoperability is the ability of two or more systems to exchange and reuse information efficiently. This thesis presents new techniques for interoperating engineering tools using ontologies as the basis for representing, visualizing, reasoning about, and securely exchanging abstract engineering knowledge between software systems. The specific engineering domain that is the primary focus of this report is the modeling knowledge associated with the development of engineering analysis models (EAMs). This abstract modeling knowledge has been used to support the integration of analysis and optimization tools in iSIGHT-FD, a commercial engineering environment. ANSYS, a commercial FEA tool, has been wrapped as an analysis service available inside iSIGHT-FD. An engineering analysis modeling (EAM) ontology has been developed and instantiated to form a knowledge base for representing analysis modeling knowledge. The instances of the knowledge base are the analysis models of real-world applications. To illustrate how abstract modeling knowledge can be exploited for useful purposes, a cantilever I-beam design optimization problem has been used as a proof-of-concept test bed. Two distinct finite element models of the I-beam are available to analyze a given beam design: a beam-element finite element model with potentially lower accuracy but significantly reduced computational cost, and a high-fidelity, high-cost shell-element finite element model. The goal is to obtain an optimized I-beam design at minimum computational expense. An intelligent knowledge-base tool was developed and implemented in FiPER. This tool reasons about the modeling knowledge to intelligently shift between the beam and shell element models during an optimization process, selecting the best analysis model for a given optimization design state. In addition to improved interoperability and design optimization, methods are developed and presented that demonstrate the ability to operate on ontological knowledge bases to perform important engineering tasks. One such method is an automatic technical report generation method that converts the modeling knowledge associated with an analysis model into a flat technical report. The second is a secure knowledge sharing method that allocates permissions to portions of knowledge to control knowledge access and sharing. Acting together, the two methods enable recipient-specific, fine-grained, controlled knowledge viewing and sharing in an engineering workflow integration environment such as iSIGHT-FD. These methods play a very efficient role in reducing the large-scale inefficiencies existing in current product design and development cycles due to poor knowledge sharing and reuse between people and software engineering tools. This work is a significant advance in both the understanding and the application of knowledge integration in a distributed engineering design framework.
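
    As a hedged sketch of the kind of rule-based model switching the abstract describes (the error estimate, thresholds, and decision rule are invented for illustration and are not the thesis's knowledge-base rules), the function below picks between the cheap beam-element model and the costly shell-element model for the next optimization step.

# Hedged sketch of rule-based analysis-model selection during optimization:
# use the cheap beam-element model while its estimated error is acceptable,
# switch to the costly shell-element model near convergence. The error
# estimate and thresholds are illustrative assumptions.

def select_model(design_change: float, beam_error_estimate: float,
                 error_tolerance: float = 0.05, converging_below: float = 0.01) -> str:
    """Pick which finite element model to run for the next optimization step."""
    nearly_converged = design_change < converging_below
    beam_good_enough = beam_error_estimate <= error_tolerance
    if nearly_converged or not beam_good_enough:
        return "shell_element_model"   # high fidelity, high cost
    return "beam_element_model"        # lower fidelity, cheap

if __name__ == "__main__":
    # Early in the search: large design moves, acceptable beam-model error -> beam model.
    print(select_model(design_change=0.20, beam_error_estimate=0.03))
    # Near the optimum: small moves demand the high-fidelity shell model.
    print(select_model(design_change=0.005, beam_error_estimate=0.03))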

    Stochastic scheduling and workload allocation : QoS support and profitable brokering in computing grids

    Abstract: The Grid can be seen as a collection of services, each of which performs some functionality. Users of the Grid seek to use combinations of these services to perform the overall task they need to achieve. In general this can be seen as a set of services with a workflow document describing how these services should be combined. The user may also have certain constraints on the workflow operations, such as execution time or cost to the user, specified in the form of a Quality of Service (QoS) document. The users submit their workflow to a brokering service along with the QoS document. The brokering service's task is to map any given workflow to a subset of the Grid services, taking the QoS and the state of the Grid (service availability and performance) into account. We propose an approach for generating constraint equations describing the workflow, the QoS requirements, and the state of the Grid. This set of equations may be solved using Mixed-Integer Linear Programming (MILP), which is the traditional method. We further develop a novel two-stage stochastic MILP which is capable of dealing with the volatile nature of the Grid and adapting the selection of the services during the lifetime of the workflow. We present experimental results comparing our approaches, showing that the two-stage stochastic programming approach performs consistently better than other traditional approaches. Next we address workload allocation techniques for Grid workflows in a multi-cluster Grid. We model individual clusters as M/M/k queues and obtain a numerical solution for missed deadlines (failures) of tasks of Grid workflows; we also present an efficient algorithm for obtaining workload allocations of clusters. We then model individual cluster resources as G/G/1 queues and solve an optimisation problem that minimises QoS requirement violation, provides QoS guarantees, and outperforms reservation-based scheduling algorithms. Both approaches are evaluated through an experimental simulation, and the results confirm that the proposed workload allocation strategies combined with traditional scheduling algorithms perform considerably better in terms of satisfying QoS requirements of Grid workflows than scheduling algorithms that do not employ such workload allocation techniques. Finally, we develop a novel method for Grid brokers that aims at maximising profit whilst satisfying end-user needs with a sufficient guarantee in a volatile utility Grid. We develop a two-stage stochastic MILP which is capable of dealing with the volatile nature of the Grid and obtaining cost bounds that ensure that end-user cost is minimised or satisfied and the broker's profit is maximised with sufficient guarantee. These bounds help brokers know beforehand whether the budget limits of end-users can be satisfied and, if not, obtain appropriate future leases from service providers. Experimental results confirm the efficacy of our approach.
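
    The deadline-miss analysis rests on modelling each cluster as an M/M/k queue. The sketch below uses the textbook Erlang C result to estimate the probability that a task waits and the probability that its queueing delay exceeds a deadline; it illustrates the kind of estimate involved, and the arrival rate, service rate, cluster size, and deadline are assumed example values, not figures from the thesis.

# Textbook M/M/k (Erlang C) sketch of a deadline-miss style estimate:
# probability an arriving task waits, and probability its queueing delay
# exceeds a deadline. All rates and the deadline are assumed values.
import math

def erlang_c(k: int, lam: float, mu: float) -> float:
    """P(wait > 0) for an M/M/k queue with arrival rate lam and service rate mu."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / k                       # per-server utilization, must be < 1
    assert rho < 1, "queue is unstable"
    tail = (a ** k) / (math.factorial(k) * (1 - rho))
    head = sum(a ** n / math.factorial(n) for n in range(k))
    return tail / (head + tail)

def p_wait_exceeds(k: int, lam: float, mu: float, deadline: float) -> float:
    """P(queueing delay > deadline): Erlang C times the exponential waiting-time tail."""
    return erlang_c(k, lam, mu) * math.exp(-(k * mu - lam) * deadline)

if __name__ == "__main__":
    # Assumed example: 8 nodes, 6 tasks/min arriving, each served in ~1 min on average.
    k, lam, mu = 8, 6.0, 1.0
    print("P(task waits)              :", round(erlang_c(k, lam, mu), 4))
    print("P(waits longer than 2 min) :", round(p_wait_exceeds(k, lam, mu, 2.0), 4))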